---
abstract: |
We introduce in this paper an optimal first-order method that allows an easy and cheap evaluation of the local Lipschitz constant of the objective’s gradient. This constant should ideally be chosen at every iteration as small as possible, while remaining large enough to provide an indispensable upper bound on the value of the objective function. In previously existing variants of optimal first-order methods, this upper bound inequality was constructed from points computed during the current iteration. It was thus not possible to select the optimal value for this Lipschitz constant at the beginning of the iteration.
In our variant, the upper bound inequality is constructed from points available before the current iteration, offering us the possibility to set the Lipschitz constant to its optimal value at once. This procedure, although efficient in practice, presents a higher worst-case complexity than standard optimal first-order methods. We propose an alternative strategy that retains the practical efficiency of this procedure, while having an optimal worst-case complexity. We show how our generic scheme can be adapted for smoothing techniques, and perform numerical experiments on large-scale eigenvalue minimization problems. As compared with standard optimal first-order methods, our scheme allows us to divide computation times by two to three orders of magnitude for the largest problems we considered. **Keywords:** Convex Optimization, First-Order Methods, Eigenvalue Optimization.
author:
- 'Michel Baes[^1] , Michael Bürgisser[^2]'
bibliography:
- 'Jordan.bib'
title: 'An acceleration procedure for optimal first-order methods'
---
Introduction
============
With a few notable exceptions [@Gonzio_billions], first-order methods constitute the main family of algorithms able to deal with very large-scale convex optimization problems [@Pena_Poker; @nesterov:coreDP10/2010; @Richtarik:block_coordinate; @nesterov:coreDP16/2012]. Among them, optimal first-order methods play a distinguished role: they are practically as cheap as a first-order method can be, with a complexity per iteration growing as a moderate polynomial of the problem’s size, while the worst-case number of iterations they require is provably optimal for *smooth* instances [@Nemirovski_Yudin_book; @nesterov_first_accel]. Their scope of applicability is restricted to optimization problems with a differentiable convex objective function $f$, whose gradient is globally Lipschitz continuous. Nevertheless, Nesterov introduced a systematic procedure, applicable to many nonsmooth convex functions, for building a smooth approximation to which one can apply an optimal first-order minimization algorithm [@nesterov:coreDP12/2003]. His construction can easily be specified to realize the optimal compromise between the smoothness of the substitute objective function and how accurately it approximates the original objective. Smoothing techniques dramatically extended the scope of optimal first-order methods, and many variants of the original scheme developed in [@nesterov_first_accel] have been studied since then (see e.g. [@Tseng:proximal; @Fista; @d'Aspremont:Approx_gradients; @Monteiro_restart]).
Critically, optimal first-order methods need an estimate of the corresponding Lipschitz constant with respect to an appropriate norm. Originally, this bound is used to build an approximation of the epigraph of the objective function. The larger the bound, the worse this approximation, and the more steps the method is likely to take. Several strategies have been proposed to update this bound at every step [@nesterov_composite; @Candes:first-order]. These strategies exploit the fact that the Lipschitz constant is used at a particular iteration only to satisfy a single inequality, rather than as a global property. If this inequality is satisfied, it suffices to reduce the constant, redo the iteration with the new value, and recheck the inequality, until it is no longer satisfied. If the inequality is not satisfied, we simply multiply the constant by an appropriate factor and redo the iteration until the inequality holds. This strategy yields a significant increase in practical efficiency. However, the cost of a single iteration has to be multiplied by a number ranging between two and possibly a few dozen.
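To fix ideas, the following Python/NumPy sketch (ours; the function and parameter names are illustrative and not taken from the cited references) shows the re-estimation loop described above: the local constant is decreased while the single inequality still holds, and increased again, redoing the trial step each time, once it fails.

```python
import numpy as np

def backtrack_L(f, grad_f, step, x, L0, factor=2.0, L_min=1e-12):
    """Backtracking re-estimation of a local Lipschitz constant.
    `step(x, g, L)` performs the part of one iteration that depends on L and
    returns the trial point; the quadratic upper bound is then re-checked."""
    g = grad_f(x)

    def bound_holds(y, L):
        d = y - x
        return f(y) <= f(x) + g @ d + 0.5 * L * (d @ d)

    L, y = L0, step(x, g, L0)
    # Shrink L as long as the upper-bound inequality remains satisfied.
    while L / factor >= L_min:
        y_try = step(x, g, L / factor)
        if bound_holds(y_try, L / factor):
            L, y = L / factor, y_try
        else:
            break
    # Grow L (redoing the iteration) until the inequality holds again.
    while not bound_holds(y, L):
        L *= factor
        y = step(x, g, L)
    return L, y
```

Each trial in either loop repeats the $L$-dependent part of the iteration, which is precisely the cost multiplication (a factor between two and possibly a few dozen) mentioned above.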
We show in this paper how a slight modification of these methods allows us to choose inexpensively the smallest possible approximation that guarantees the global convergence of the method. In particular, we avoid redoing several times the work needed for one iteration. The practical effect of such a procedure is appreciable, and is documented at the end of this paper. On the theoretical side, we show that our re-evaluation of the Lipschitz constant, if applied systematically, gives an algorithm which requires at worst ${\mathcal{O}}((LD)/\epsilon)$ iterations, where $L$ is the global Lipschitz constant of the objective’s gradient, $D$ measures the diameter of the feasible set, and $\epsilon>0$ is the desired absolute accuracy on the objective’s value. In comparison with the vanilla optimal first-order method, which has a complexity of ${\mathcal{O}}(\sqrt{(LD)/\epsilon})$ iterations, this algorithm is clearly worse. We propose a mixed strategy that presents simultaneously the practical efficiency of our systematic method for very large-scale problems and, up to a constant that we can take as close to 1 as desired, the theoretical efficiency of optimal methods.
When applied to smoothing techniques, our mixed strategy suggests a different choice of the smoothness parameter than the standard one. This fact should not be too surprising: as our method is precisely designed to fit appropriate local estimates of the gradient’s Lipschitz constant, it allows us to be slightly sloppier in our request for global smoothness.
In order to validate our general scheme, we consider a well-known application of smoothing techniques to the problem of minimizing the largest eigenvalue of a convex combination of given symmetric matrices [@nesterov:coreDP73/2004]. This problem has many applications, and a large variety of methods have been devised to solve it [@Helmberg_bundle; @Arora_Kale_07], some of which are adaptations of optimal first-order methods [@Nem_Jud_Lan_Shap:stochastic; @Buergisser_FastExp; @d'Aspremont:stochastic_SDP]. To the best of our knowledge, these methods improve the complexity of each step of optimal first-order methods, but do not attempt to decrease the number of these steps. Compared with the original smoothing techniques, our method allows us to *divide* by hundreds the practical number of iterations for large-scale instances, that is, when we have 100 matrices of size larger than 200. Our methods even allowed us to deal with a problem involving matrices of dimension 12,800 with about 10% non-zero entries within 9 hours, while standard optimal first-order methods would have taken more than one year if they were to perform all the iterations predicted by the worst-case analysis. It appears in practice that the standard optimal method needs about two thirds of these iterations: about eight months would be needed to solve that problem.
The paper is organized as follows. We outline our method in Section 2. First, we analyze its complexity for smooth convex problems and particularize our result to the two variants mentioned above. Then, we describe how the algorithm and its variants can be particularized to smoothed problems. In Section 3, we apply these methods to the eigenvalue minimization problem and present some numerical experiments. We have relegated the rather technical proof of the main theorem of the paper to the appendix.
An accelerated optimal first-order method {#sec:OFM}
=========================================
In this section, we introduce an accelerated version of Nesterov’s optimal first-order method that is presented in [@nesterov:coreDP12/2003] and discuss its application in smoothing techniques.
General algorithm
-----------------
We start by considering the following optimization problem: $$\begin{aligned}
\label{eq:opt_problem}f^*=\min_{x\in Q}f(x),\end{aligned}$$ where $Q$ is a closed and convex subset of ${\mathbb R}^n$ and $f:{\mathbb R}^n\rightarrow{\mathbb R}$ is a function, which is supposed to attain its minimum on the set $Q$. In addition, we assume that $f$ is convex and differentiable with a Lipschitz continuous gradient on $Q$.
We consider ${\mathbb R}^n$ with the standard Euclidean scalar product, which is denoted by ${\left\langle}\cdot,\cdot{\right\rangle}$. The space ${\mathbb R}^n$ is equipped with a norm ${\left\|}\cdot{\right\|}_{{\mathbb R}^n}$, which may differ from the norm that is induced by the scalar product. We write ${\left\|}\cdot{\right\|}_{{\mathbb R}^n,*}$ for the dual norm to ${\left\|}\cdot{\right\|}_{{\mathbb R}^n}$: $${\left\|}u{\right\|}_{{\mathbb R}^n,*}:=\max_{x\in {\mathbb R}^n}\left\{ {\left\langle}u, x{\right\rangle}:{\left\|}x{\right\|}_{{\mathbb R}^n} =1\right\},\qquad u\in{\mathbb R}^n.$$
As $f$ has a Lipschitz continuous gradient on $Q$, there exists a constant $L=L(Q)>0$ which satisfies the inequality: $$\begin{aligned}
\label{eq:lipschitz_condition}
{\left\|}\nabla f(x)-\nabla f(y){\right\|}_{{\mathbb R}^n,*}\leq L{\left\|}x-y{\right\|}_{{\mathbb R}^n}\qquad \forall\ x,y\in Q.\end{aligned}$$ Nesterov developed a first-order method (see Equations (5.6) in [@nesterov:coreDP12/2003]) that allows us to compute approximate solutions to Problem (\[eq:opt\_problem\]). This optimal first-order method has a convergence rate of $\mathcal{O}\left(L/T^2\right)$, which outperforms the rate of convergence of common subgradient methods by two orders of magnitude. We quickly recall that common subgradient methods converge with the order ${\mathcal{O}}(1/T^{0.5})$; see for instance [@Nemirovski_Yudin_book].
At every step of Nesterov’s optimal first-order method, the Lipschitz constant $L$ is used to update the iterates; see [@nesterov:coreDP12/2003] for the details. However, the constant $L$ is a global parameter of the function $f$, as $L$ needs to satisfy Condition $(\ref{eq:lipschitz_condition})$ on the whole set $Q$. In this subsection, we introduce a refined version of Nesterov’s optimal first-order method, where we replace the global parameter $L$ by local estimates.
This algorithm requires the following basic notions. We say that $d_Q:Q\rightarrow {\mathbb R}_{\geq 0}$ is a *[distance-generating function]{} for the set $Q$* if it complies with the following requirements:
1. $d_Q$ is continuous on $Q$;
2. $d_Q$ is strongly convex with modulus $1$ on $Q$: $$d_Q(\lambda x+[1-\lambda ]y)+\frac{\lambda [1-\lambda]}{2}{\left\|}x- y{\right\|}_{{\mathbb R}^n}^2\leq \lambda d_Q(x)+[1-\lambda]d_Q(y)\qquad\forall\ x,y\in Q;$$
3. given the set $Q^o(d_Q):=\left\{x \in Q:\partial d_Q(x)\neq \emptyset\right\}$, the subdifferential $\partial d_Q$ gives rise to a continuous selection $d'_Q$ on the set $Q^o$. If there is no possibility for confusion, we write $Q^o$ instead of $Q^o(d_Q)$.
Let $d_Q$ be a [distance-generating function]{} for the set $Q$ and choose $z\in Q^o$. We write $$V_z^{d_Q}(x)=d_Q(x)-d_Q(z)-\left\langle d_Q'(z),x-z\right\rangle\in{\mathbb R}_{\geq 0}$$ for the *Bregman distance of $x\in Q$ with respect to $z\in Q^o$*. Nesterov’s optimal first-order method and its accelerated version that we present in this paper utilize a *prox-mapping*, that is, a mapping of the form: $$\begin{aligned}
\label{eq:prox_mapping}
{\textup{Prox}}_{Q,z}^{d_Q}:{\mathbb R}^n\rightarrow Q^o:s\mapsto\arg\min_{x\in Q}\left\{{\left\langle}s,x-z {\right\rangle}+ V_z^{d_Q}(x)\right\},\qquad z\in Q^o.\end{aligned}$$ If there is no possibility for confusion, we abbreviate $V_z^{d_Q}$ and ${\textup{Prox}}_{Q,z}^{d_Q}$ into $V_z$ and ${\textup{Prox}}_{Q,z}$, respectively. Given $s\in {\mathbb R}^n$ and $z\in Q^o$, the value ${\textup{Prox}}_{Q,z}(s)$ can be rewritten as $${\textup{Prox}}_{Q,z}(s)=\arg\min_{x\in Q}\left\{{\left\langle}s-d_Q'(z),x {\right\rangle}+ d_Q(x)\right\}.$$ It can be easily verified that this optimization problem has indeed a unique minimizer (Note that the objective function $x\mapsto{\left\langle}s-d_Q'(z),x {\right\rangle}+ d_Q(x)$ is continuous and strongly convex. It remains to apply Lemma $6$ from [@nesterov:PDS].) and that this minimizer belongs to $Q^o$. For the remainder of this paper, we assume that this minimizer can be computed easily (Ideally, we can write it in a closed form.). The unique element $$c(d_Q):=\arg\min_{x\in Q}\left\{d_Q(x)\right\}\in Q^o$$ is called the $d_Q$-center (Note that $c(d_Q)={\textup{Prox}}_{Q,z}(d_Q'(z))$ for any $z\in Q^o$.). Without loss of generality, we may assume that $d_Q$ vanishes at the point $c(d_Q)$. Then, Lemma 6 in [@nesterov:PDS] can be used to justify the following inequality: $$\begin{aligned}
\label{eq:lower_bound_dgf}
d_Q(x)\geq \frac{1}{2}{\left\|}x-c(d_Q){\right\|}_{{\mathbb R}^n}^2\qquad\forall\ x\in Q. \end{aligned}$$
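For concreteness, with the entropic distance-generating function on the probability simplex (the choice used later in Section 3), the prox-mapping (\[eq:prox\_mapping\]) admits a closed form: the minimizer is a multiplicative update of $z$. The following sketch is ours and assumes NumPy; strictly positive iterates keep the logarithm well defined.

```python
import numpy as np

def entropy_prox(z, s):
    """Prox-mapping for d_Q(x) = ln(m) + sum_j x_j ln x_j on the simplex:
    argmin_x { <s - d_Q'(z), x> + d_Q(x) }, whose solution is
    x_j proportional to z_j * exp(-s_j)."""
    w = np.log(z) - s
    w -= w.max()                 # shift for numerical stability
    p = np.exp(w)
    return p / p.sum()

# The d_Q-center is the uniform vector, where the entropy term is minimal.
m = 5
center = np.full(m, 1.0 / m)
print(entropy_prox(center, np.arange(m, dtype=float)))
```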
We discuss now the analytical complexity of the accelerated optimal first-order method displayed in Algorithm \[alg:opt\_first\_order\]. We choose $T\in{\mathbb N}_0:={\mathbb N}\cup\left\{ 0\right\}$ and assume that the sequences $\left(x_t\right)_{t=0}^{T+1}$, $\left(u_t\right)_{t=0}^{T+1}$, $\left(z_t\right)_{t=0}^T$, $\left(\hat{x}_t\right)_{t= 1}^{T+1}$, $\left(\gamma_t\right)_{t=0}^{T+1}$, $\left({\Gamma}_t\right)_{t= 0}^{T+1}$, $\left(\tau_t\right)_{t=0}^{T}$, and $\left(L_t\right)_{t=0}^T$ are generated by Algorithm \[alg:opt\_first\_order\]. Given $0\leq t\leq T$, we say that *Inequality $(\mathcal{I}_t)$ holds* if $${\Gamma}_tf(u_t)+\sum_{k=0}^{t-1}\left(L_{k+1}-L_k\right)\left(d_Q(z_{k+1})-\frac{1}{2}\left\|z_k-\hat{x}_{k+1}\right\|_{{\mathbb R}^n}^2 \right)\leq \psi_t, \tag{$\mathcal{I}_t$}$$ where $$\psi_t:=\min_{x\in Q}\left\lbrace
\sum_{k=0}^{t}\gamma_k\left(f(x_k)+\langle \nabla f(x_k), x-x_k\rangle \right) + L_td_Q(x) \right\rbrace.$$
Choose $T\in{\mathbb N}_0$. Choose $\left(\gamma_t\right)_{t=0}^{T+1}$ with $\gamma_0\in (0,1]$, $\gamma_t\geq 0$, and $\gamma_t^2\leq {\Gamma}_t:=\sum_{k=0}^t\gamma_k$ for any $0\leq t \leq T+1$. Set $L_0=L$ and $x_0=c(d_Q)$. Compute $u_0:=\arg\min_{x\in Q}\left\lbrace
\gamma_0 \left(f(x_0)+\langle \nabla f(x_0), x-x_0\rangle \right)+ L_0d_Q(x)\right\rbrace$. Set $z_0=u_0$, $\tau_0=\gamma_1/{\Gamma}_1$, and $x_1=\tau_0z_0+(1-\tau_0)u_0=z_0$. Define $\hat{x}_1:={\textup{Prox}}_{Q,z_0}\left(\gamma_{1}\nabla f(x_1)/L_0\right)$. Set $u_1=\tau_0\hat{x}_1+(1-\tau_0)u_0.$ Choose $0<L_t\leq L$ such that: $$\begin{aligned}
\label{eq:adapted_L_cond_opt}
f(u_t)\leq f(x_t)+\left\langle \nabla f(x_t),u_t-x_t\right\rangle+\frac{L_t}{2}\left\| u_t-x_t\right\|_{{\mathbb R}^n}^2.\end{aligned}$$ Set $z_t=\arg\min_{x\in Q}\left\lbrace\sum_{k=0}^{t}\gamma_k\left(f(x_k)+\langle \nabla f(x_k), x-x_k\rangle \right)+L_td_Q(x) \right\rbrace$. Set $\tau_t=\gamma_{t+1}/{\Gamma}_{t+1}$ and $x_{t+1}=\tau_t z_t+(1-\tau_t)u_t$. Compute $\hat{x}_{t+1}:={\textup{Prox}}_{Q,z_t}\left(\gamma_{t+1}\nabla f(x_{t+1})/L_{t}\right)$. Set $u_{t+1}=\tau_t\hat{x}_{t+1}+(1-\tau_t)u_t.$
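The following Python sketch (ours) instantiates Algorithm \[alg:opt\_first\_order\] on the probability simplex with the entropic distance-generating function; for a general set $Q$, the two inner minimizations must be replaced by the corresponding prox computations. The array `gamma` holds $\gamma_0,\ldots,\gamma_{T+1}$, and `choose_L(x_t, u_t, g_t)` is any user-supplied rule returning an $L_t\in(0,L]$ that satisfies (\[eq:adapted\_L\_cond\_opt\]), for instance the aggressive rule of Alternative 1 below.

```python
import numpy as np

def accelerated_method(grad_f, L, m, T, gamma, choose_L):
    """Sketch of the accelerated optimal first-order method on the simplex
    with d_Q(x) = ln(m) + sum_j x_j ln x_j (all helper names are ours)."""
    def simplex_argmin(s, Lt):
        # argmin_x {<s, x> + Lt*d_Q(x)} over the simplex: x_j ~ exp(-s_j/Lt)
        w = -s / Lt
        w -= w.max()
        p = np.exp(w)
        return p / p.sum()

    def prox(z, s):
        # prox-mapping for the entropic d_Q: x_j ~ z_j * exp(-s_j)
        w = np.log(z) - s
        w -= w.max()
        p = np.exp(w)
        return p / p.sum()

    Gamma = np.cumsum(gamma)                       # Gamma_t = sum_{k<=t} gamma_k
    x = np.full(m, 1.0 / m)                        # x_0 = c(d_Q)
    g = grad_f(x)
    s = gamma[0] * g                               # weighted sum of gradients
    Lt = L                                         # L_0 = L
    u = simplex_argmin(s, Lt)                      # u_0
    z = u                                          # z_0
    tau = gamma[1] / Gamma[1]
    x = tau * z + (1 - tau) * u                    # x_1 (= z_0)
    g = grad_f(x)
    xhat = prox(z, gamma[1] * g / Lt)
    u = tau * xhat + (1 - tau) * u                 # u_1
    for t in range(1, T + 1):
        Lt = choose_L(x, u, g)                     # any L_t fulfilling the bound at (x_t, u_t)
        s = s + gamma[t] * g
        z = simplex_argmin(s, Lt)                  # z_t
        tau = gamma[t + 1] / Gamma[t + 1]
        x = tau * z + (1 - tau) * u                # x_{t+1}
        g = grad_f(x)
        xhat = prox(z, gamma[t + 1] * g / Lt)
        u = tau * xhat + (1 - tau) * u             # u_{t+1}
    return u
```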
As the proof of the following result is rather long and technical, we give it in Appendix \[sec:proof\_opt\_first\_order\].
\[thm:I\_t\] Inequality $(\mathcal{I}_t)$ holds for any $0\leq t\leq T$.
For the remainder of this subsection, we refer to $x^*\in Q$ as an optimal solution to the optimization problem $f^*=\min_{x\in Q}f(x)$.
\[thm:convergence\_OFM\] For any $T\in{\mathbb N}_0$, we have: $$f(u_T)-f^*\leq\frac{1}{{\Gamma}_T}\left[L_T d_Q(x^*)+\sum_{t=0}^{T-1}\left(L_t-L_{t+1}\right)\left(d_Q(z_{t+1})-\frac{1}{2}\left\|z_t-\hat{x}_{t+1}\right\|_{{\mathbb R}^n}^2 \right)\right].$$
**Proof:** Let $0\leq t\leq T$. The convexity of the function $f$ and the definition of ${\Gamma}_t$ imply $$\begin{aligned}
\psi_t&:=&\min_{x\in Q}\left\lbrace
L_t d_Q(x)+\sum_{k=0}^{t}\gamma_k\left(f(x_k)+\langle \nabla f(x_k), x-x_k\rangle \right) \right\rbrace\cr
&\leq& L_td_Q(x^*)+\sum_{k=0}^{t}\gamma_k\left(f(x_k)+\langle \nabla f(x_k), x^*-x_k\rangle \right) \cr
&\leq& L_td_Q(x^*)+\sum_{k=0}^{t}\gamma_kf(x^*)\cr
&=&L_t d_Q(x^*)+{\Gamma}_t f(x^*).\end{aligned}$$ It remains to combine this inequality with Theorem \[thm:I\_t\].
${}$
------------------------------------------------------------------------
Nesterov [@nesterov:coreDP12/2003] suggests choosing the sequence $\left(\gamma_t\right)_{t=0}^{T+1}$ as $$\begin{aligned}
\label{eq:choice_gamma_t_s}
\gamma_t:=\frac{t+1}{2}\qquad \forall\ 0\leq t\leq T+1.\end{aligned}$$ Lemma 2 of [@nesterov:coreDP12/2003] shows that we have the following equations for this choice of the sequence $\left(\gamma_t\right)_{t=0}^{T+1}$: $$\tau_t=\frac{2}{t+3}\qquad\forall\ 0\leq t\leq T$$ and $${\Gamma}_t=\frac{(t+1)(t+2)}{4},\qquad \gamma_t^2\leq {\Gamma}_t\qquad \forall\ 0\leq t\leq T+1.$$ As an immediate consequence of Theorem \[thm:convergence\_OFM\], we obtain the following result for our accelerated optimal first-order method.
Let us choose the sequence $(\gamma_t)_{t=0}^{T+1}$ in Algorithm \[alg:opt\_first\_order\] as described in (\[eq:choice\_gamma\_t\_s\]). Then, we have for any $T\in{\mathbb N}_0$: $$\begin{aligned}
\label{eq:conv_ofom}
f(u_T)-f^*\leq \frac{4L_Td_Q(x^*)}{(T+1)(T+2)}+\sum_{t=0}^{T-1}\frac{4\left(L_t-L_{t+1}\right)}{(T+1)(T+2)}\left(d_Q(z_{t+1})-\frac{1}{2}\left\|z_t-\hat{x}_{t+1}\right\|_{{\mathbb R}^n}^2 \right).\end{aligned}$$
${}$
------------------------------------------------------------------------
There exist different strategies for updating the sequence $\left(L_t\right)_{t=0}^T$ in Algorithm \[alg:opt\_first\_order\]. When $L_t=L$ for every $t$ (which corresponds to $\alpha:=0$ in the hybrid setting of Alternative 2 below), we recover the complexity results of Nesterov’s optimal first-order method (see e.g. Subsection 5.3 of [@nesterov:coreDP12/2003]), for which Inequality (\[eq:conv\_ofom\]) can be rewritten as $f(u_T)-f^*\leq\frac{4Ld_Q(x^*)}{(T+1)(T+2)}$.
**Alternative 1: (most aggressive adaptive setting)** Fix $0<\kappa\ll 1$ and let $1\leq t\leq T$. The most aggressive choice for the constant $L_t$ corresponds to $$\begin{aligned}
\label{eq:L_most_aggressive}
L_t:=\max\left\{\bar L_t, \kappa L\right\}\in [\kappa L,L],\qquad \bar L_t:= \frac{2\left[f(u_t)-f(x_t)-\left\langle \nabla f(x_t),u_t-x_t\right\rangle\right]}{\left\| u_t-x_t\right\|_{{\mathbb R}^n}^2}\leq L.\end{aligned}$$ The computation of the constant $L_t$ requires the quantities $u_t$, $x_t$, and $\nabla f(x_t)$. In sharp contrast with the methods proposed so far [@nesterov_composite; @Candes:first-order], all these quantities are known from the previous step $t-1$, implying that the constant $L_t$ can be determined immediately.
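A sketch (ours) of this rule, again assuming NumPy; `norm` should be the norm $\|\cdot\|_{{\mathbb R}^n}$ used by the method, and the final clamp to $[\kappa L, L]$ simply mirrors the interval stated above.

```python
import numpy as np

def most_aggressive_L(f_u, f_x, g_x, u, x, L, kappa=1e-12, norm=np.linalg.norm):
    """Most aggressive admissible constant: the smallest L_t for which the
    upper-bound inequality holds at (x_t, u_t), floored at kappa*L. All the
    inputs are already available from step t-1."""
    d = u - x
    nd = norm(d)
    if nd == 0.0:                 # degenerate case: any constant is admissible
        return kappa * L
    L_bar = 2.0 * (f_u - f_x - g_x @ d) / nd ** 2
    return min(max(L_bar, kappa * L), L)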
Independent of the choice of the $L_t$’s, we can always derive the following trivial convergence result for Algorithm \[alg:opt\_first\_order\] from Inequality (\[eq:conv\_ofom\]): $$\begin{aligned}
f(u_T)-f^*\leq \frac{4L\sup_{x\in Q}d_Q(x)}{(T+1)(T+2)}+\frac{20 LT \sup_{x\in Q}d_Q(x)}{(T+1)(T+2)}\leq \frac{20 L \sup_{x\in Q}d_Q(x)}{T+2},\end{aligned}$$ as $L_Td_Q(x^*)\leq L \sup_{x\in Q}d_Q(x)$ and $$\begin{aligned}
\sum_{t=0}^{T-1}\left(L_t-L_{t+1}\right)\left(d_Q(z_{t+1})-\frac{1}{2}\left\|z_t-\hat{x}_{t+1}\right\|_{{\mathbb R}^n}^2\right)&\leq& \sum_{t=0}^{T-1}\left|L_t-L_{t+1}\right|\left(d_Q(z_{t+1})+\frac{1}{2}\left\|z_t-\hat{x}_{t+1}\right\|_{{\mathbb R}^n}^2\right)\cr
&\leq& \sum_{t=0}^{T-1}L \left(\sup_{x\in Q}d_Q(x)+4\sup_{x\in Q}d_Q(x)\right)\cr
&=&5 L T \sup_{x\in Q}d_Q(x).\end{aligned}$$ Note that the last inequality holds due to (\[eq:lower\_bound\_dgf\]).
Thus, Algorithm \[alg:opt\_first\_order\] equipped with the most aggressive update strategy, which is described in (\[eq:L\_most\_aggressive\]), needs at most $$T=\left\lceil 20 L\sup_{x\in Q}d_Q(x)/\epsilon -2\right\rceil$$ iterations to find a feasible $\epsilon$-solution, provided that $\sup_{x\in Q}d_Q(x)$ is finite.
**Alternative 2: (hybrid setting)** Finally, we can combine the non-adaptive setting ($L_t=L$ for every $t$) with the aggressive setting presented above. We choose a number $\alpha\geq 0$ and denote by $1\leq t\leq T$ the current iteration. As long as $$\begin{aligned}
\label{eq:switch_back_rule}
\sum_{k=0}^{\bar t-1}\left(L_k-L_{k+1}\right)\left(d_Q(z_{k+1})-\frac{1}{2}\left\|z_k-\hat{x}_{k+1}\right\|_{{\mathbb R}^n}^2 \right)\leq \alpha L d_Q(x^*)\qquad \forall\ 1\leq \bar t\leq t,\end{aligned}$$ we use the update strategy that is described in (\[eq:L\_most\_aggressive\]). When Condition (\[eq:switch\_back\_rule\]) is not satisfied for the first time, we set $L_{\bar t}:=L$ for any $\bar t\geq t$ and recompute the point $z_t$.
With the just specified setting, Inequality (\[eq:conv\_ofom\]) results in the bound $$f(u_T)-f^*\leq \frac{4(1+\alpha) L d_Q(x^*)}{(T+1)(T+2)}.$$ That is, we need to perform at most $$T=\left\lceil 2\sqrt{(1+\alpha) Ld_Q(x^*)/\epsilon}-1\right\rceil$$ iterations of Algorithm \[alg:opt\_first\_order\] to find a point $x\in Q$ with $f(x)-f^*\leq \epsilon$, where $\epsilon >0$. This complexity result deviates by a factor of $(1+\alpha)^{0.5}$ from the efficiency estimate of the non-adaptive method. With $\alpha=5(T+1)\sup_{x\in Q}d(x)-1$, the setting coincides with Alternative 1.
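A possible bookkeeping for this switch-back rule is sketched below (ours, in Python). Since $d_Q(x^*)$ is unknown in practice, the budget uses an upper bound $D\geq d_Q(x^*)$, e.g. $\sup_{x\in Q}d_Q(x)$, which is also what the ratios monitored in the numerical section correspond to.

```python
class HybridL:
    """Track the accumulated correction term of the hybrid setting and decide
    when to fall back to the global constant L."""

    def __init__(self, L, D, alpha):
        self.L = L
        self.budget = alpha * L * D   # allowed total correction
        self.acc = 0.0                # sum of (L_k - L_{k+1})(d_Q(z_{k+1}) - 0.5*||z_k - xhat_{k+1}||^2)
        self.switched = False

    def accept(self, L_prev, L_new, dQ_z_new, dist_sq):
        """Call after z_t has been computed with the aggressive candidate L_new;
        dQ_z_new = d_Q(z_t) and dist_sq = ||z_{t-1} - xhat_t||^2. Returns the
        constant to keep; if it differs from L_new, z_t must be recomputed."""
        if self.switched:
            return self.L
        term = (L_prev - L_new) * (dQ_z_new - 0.5 * dist_sq)
        if self.acc + term <= self.budget:
            self.acc += term
            return L_new
        self.switched = True          # budget exceeded for the first time
        return self.L
```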
The accelerated optimal first-order method in smoothing techniques {#sec:ST}
------------------------------------------------------------------
Smoothing techniques [@nesterov:coreDP12/2003] constitute a two-stage procedure that can be applied to non-smooth optimization problems with a very particular structure. In a first step, a smooth approximation of the non-smooth objective function is formed, so that Nesterov’s optimal first-order method can be applied afterwards. In this section, we study the effects of replacing Nesterov’s original optimal first-order method by its accelerated version in smoothing techniques.
We assume that the sets $Q_1\subset{\mathbb R}^n$ and $Q_2\subset{\mathbb R}^m$ are both compact and convex. In addition, we endow the spaces ${\mathbb R}^n$ and ${\mathbb R}^m$ with two (maybe different) norms. We denote by ${\left\|}\cdot{\right\|}_{{\mathbb R}^n}$ and ${\left\|}\cdot{\right\|}_{{\mathbb R}^m}$ the norm of the spaces ${\mathbb R}^n$ and ${\mathbb R}^m$, respectively. Nesterov considers convex optimization problems of the form: $$\begin{aligned}
\label{eq:problem_min_max}
\min_{x\in Q_1}\max_{y\in Q_2} \phi(x,y),\qquad \phi(x,y):= f_1(x) + {\left\langle}\mathcal{A}(x),y{\right\rangle}- f_2(y),\end{aligned}$$ where $f_1:{\mathbb R}^n\rightarrow {\mathbb R}$, $f_2:{\mathbb R}^m\rightarrow{\mathbb R}$ are smooth and convex, and $\mathcal{A}:{\mathbb R}^n\rightarrow{\mathbb R}^m$ is a linear operator. With a slight abuse of notation, we write ${\left\langle}\cdot,\cdot{\right\rangle}$ for the Euclidean scalar product in both spaces ${\mathbb R}^n$ and ${\mathbb R}^m$.
According to the standard MiniMax Theorem in Convex Analysis (see Corollary 37.3.2 in [@Rockafellar:70]), we have, due to the compactness and convexity of the sets $Q_1$ and $Q_2$, the following pair of primal-dual convex optimization problems: $$\min_{x\in Q_1}\left\{\overline{\phi}(x):=\max_{y\in Q_2}\phi(x,y)\right\}=\max_{y\in Q_2}\left\{\underline{\phi}(y):=\min_{x\in Q_1}\phi(x,y)\right\}.$$ The operator $\mathcal{A}$ comes with an adjoint operator $\mathcal{A}^*:{\mathbb R}^m\rightarrow{\mathbb R}^n$, which is defined by the relation: $${\left\langle}\mathcal{A}(x),y{\right\rangle}= {\left\langle}x,\mathcal{A}^*(y){\right\rangle}\qquad \forall\ (x,y)\in {\mathbb R}^n\times{\mathbb R}^m.$$ The analysis of Nesterov’s smoothing techniques requires a norm of the operator $\mathcal{A}$. This norm is constructed as follows: $${\left\|}\mathcal{A}{\right\|}_{{\mathbb R}^n,{\mathbb R}^m}:=\max_{x\in{\mathbb R}^n,y\in{\mathbb R}^m}\left\{{\left\langle}{\mathcal{A}}(x),y{\right\rangle}:{\left\|}x{\right\|}_{{\mathbb R}^n}=1,\ {\left\|}y{\right\|}_{{\mathbb R}^m}=1\right\}.$$
We are ready to form a smooth approximation of $\overline{\phi}$ to which we can apply Algorithm \[alg:opt\_first\_order\]. We choose a [distance-generating function]{} $d_{Q_2}:Q_2\rightarrow{\mathbb R}_{\geq 0}$ for the set $Q_2$ and consider the auxiliary function $$\overline{\phi}_\mu:{\mathbb R}^n\rightarrow{\mathbb R}:x\mapsto \max_{y\in Q_2}\left\{f_1(x)+{\left\langle}\mathcal{A}(x),y{\right\rangle}-f_2(y)-\mu d_{Q_2}(y)\right\},$$ where $\mu>0$ is a positive smoothness parameter. This function defines a uniform approximation of $\overline{\phi}$, as $$\begin{aligned}
\label{eq:smooth_approximation_bounds}
\overline{\phi}_\mu(x)\leq \overline{\phi}(x)\leq \overline{\phi}_\mu(x)+\mu \max_{z\in Q_2}d_{Q_2}(z)\qquad \forall\ x\in Q_1; \end{aligned}$$ see Inequality (2.7) in [@nesterov:coreDP12/2003]. The function $y\mapsto {\left\langle}\mathcal{A}(x),y{\right\rangle}-f_2(y)-\mu d_{Q_2}(y)$ is strongly concave for any $x\in Q_1$, as the [distance-generating function]{} $d_{Q_2}$ is strongly convex by its definition. Hence, the function $y\mapsto {\left\langle}\mathcal{A}(x),y{\right\rangle}-f_2(y)-\mu d_{Q_2}(y)$ has a unique maximizer on $Q_2$. We denote this maximizer by $y_*(x)$.
Nesterov showed that $\overline{\phi}_\mu$ is differentiable with a Lipschitz continuous gradient. We write $M>0$ for the Lipschitz constant of the gradient of $f_1$.
\[thm:gradient\] The function $\overline{\phi}_\mu$ is well-defined, continuously differentiable, and convex on ${\mathbb R}^n$. The gradient of $\overline{\phi}_\mu$ takes the form $$\nabla \overline{\phi}_\mu(x)=\nabla f_1(x)+{\mathcal{A}}^*(y_*(x)),$$ and is Lipschitz continuous with the constant $L_\mu:=M+{\left\|}{\mathcal{A}}{\right\|}_{{\mathbb R}^n,{\mathbb R}^m}^2/\mu$.
As an immediate consequence, we can apply Algorithm \[alg:opt\_first\_order\] to the problem: $$\begin{aligned}
\label{eq:smooth_problem}
\min_{x\in Q_1}\overline{\phi}_\mu(x). \end{aligned}$$
Choose $T\in{\mathbb N}_0$. Choose a smoothness parameter $\mu>0$ and a [distance-generating function]{} $d_{Q_1}:Q_1\rightarrow{\mathbb R}$ for the set $Q_1$. Set $L_0=L_\mu=M+{\left\|}{\mathcal{A}}{\right\|}_{{\mathbb R}^n,{\mathbb R}^m}^2/\mu$ and $x_0=c(d_{Q_1})$. Set $u_0=\arg\min_{x\in Q_1}\left\lbrace
\frac{1}{2} \left(\overline{\phi}_\mu(x_0)+\langle \nabla \overline{\phi}_\mu(x_0), x-x_0\rangle \right)+ L_0d_{Q_1}(x)\right\rbrace$. Set $z_0=u_0$, $\tau_0=\frac{2}{3}$, and $x_1=\tau_0z_0+(1-\tau_0)u_0=z_0$. Define $\hat{x}_1:={\textup{Prox}}_{Q_1,z_0}\left(\nabla \overline{\phi}_\mu(x_1)/L_0\right)$. Set $u_1=\tau_0\hat{x}_1+(1-\tau_0)u_0.$ Choose $0<L_t\leq L_\mu$ such that: $$\begin{aligned}
\overline{\phi}_\mu(u_t)\leq \overline{\phi}_\mu(x_t)+\left\langle \nabla \overline{\phi}_\mu(x_t),u_t-x_t\right\rangle+\frac{L_t}{2}\left\| u_t-x_t\right\|_{{\mathbb R}^n}^2.\end{aligned}$$ Set $$z_t=\arg\min_{x\in Q_1}\left\lbrace
\sum_{k=0}^{t}\frac{k+1}{2}\left(\overline{\phi}_\mu(x_k)+\langle \nabla \overline{\phi}_\mu(x_k), x-x_k\rangle \right)+L_td_{Q_1}(x) \right\rbrace.$$ Set $\tau_t=\frac{2}{t+3}$ and $x_{t+1}=\tau_t z_t+(1-\tau_t)u_t$. Compute $\hat{x}_{t+1}:={\textup{Prox}}_{Q_1,z_t}\left(\frac{t+2}{2}\nabla \overline{\phi}_\mu(x_{t+1})/L_{t}\right)$. Set $u_{t+1}=\tau_t\hat{x}_{t+1}+(1-\tau_t)u_t.$
Algorithm \[alg:smoothing\_techniques\] corresponds to Algorithm \[alg:opt\_first\_order\] when we apply this method with step-sizes as described in (\[eq:choice\_gamma\_t\_s\]) to Problem (\[eq:smooth\_problem\]). A slight adaptation of the proof of Theorem 3 in [@nesterov:coreDP12/2003] yields the following result, for which we need the definitions: $$D_1:=\max_{x\in Q_1}d_{Q_1}(x)\qquad \text{and}\qquad D_2:=\max_{y\in Q_2}d_{Q_2}(y).$$
\[thm:convergence\_smoothing\] Fix $T\in{\mathbb N}_0$ and assume that the sequences $\left(x_t\right)_{t= 0}^{T+1}$, $\left(u_t\right)_{t= 0}^{T+1}$, $\left(z_t\right)_{t= 0}^T$, $\left(\hat{x}_t\right)_{t= 1}^{T+1}$, and $\left(L_t\right)_{t= 0}^T$ are generated by Algorithm \[alg:smoothing\_techniques\] with the smoothness parameter $\mu>0$. For $$\bar{x}:=u_T\in Q_1\qquad\text{and}\qquad \bar{y}:=\sum_{t=0}^T\frac{2(t+1)}{(T+1)(T+2)}y_*(x_t)\in Q_2,$$ we have: $$\begin{aligned}
\label{eq:bound_st} \overline{\phi}(\bar{x})-\underline{\phi}(\bar{y})\leq \frac{4\left(D_1{\left\|}{\mathcal{A}}{\right\|}_{{\mathbb R}^n,{\mathbb R}^m}^2/\mu+D_1M-\chi_T\right) }{(T+1)^2}+\mu D_2,\end{aligned}$$ where $$\chi_T:=\sum_{t=0}^{T-1}\left(L_{t+1}-L_t\right)\left(d_{Q_1}(z_{t+1})-\frac{1}{2}\left\|z_t-\hat{x}_{t+1}\right\|_{{\mathbb R}^n}^2 \right).$$
For the remainder of this section, we use the notation of Algorithm \[alg:opt\_first\_order\] and Theorem \[thm:convergence\_smoothing\].
**Proof:** In accordance with Theorem \[thm:I\_t\] and with the step-size choice (\[eq:choice\_gamma\_t\_s\]), we have the inequality: $$\begin{aligned}
\label{eq:intermediate_smoothing_techniques_1}
\overline{\phi}_\mu(\bar{x})= \overline{\phi}_\mu(u_T)
\leq \frac{4(L_TD_1-\chi_T)}{(T+1)(T+2)}+\min_{x\in Q_1}\frac{2\beta_T(x)}{(T+1)(T+2)},\end{aligned}$$ where $$\beta_T(x):= \sum_{t=0}^T(t+1)\left(\overline{\phi}_\mu(x_t)+{\left\langle}\nabla \overline{\phi}_\mu(x_t),x-x_t{\right\rangle}\right)\qquad\forall\ x\in Q_1.$$ Let $x\in Q_1$. Using Theorem \[thm:gradient\] and the convexity of $f_1$ and $f_2$, we can write: $$\begin{aligned}
\beta_T(x)&\leq &\sum_{t=0}^T(t+1)\left(f_1(x)+{\left\langle}{\mathcal{A}}(x),y_*(x_t){\right\rangle}-f_2(y_*(x_t))-\mu d_{Q_2}(y_*(x_t)) \right)\cr
&\leq& \sum_{t=0}^T(t+1)\left(f_1(x)+{\left\langle}{\mathcal{A}}(x),y_*(x_t){\right\rangle}-f_2(y_*(x_t))\right)\cr
&\leq&\frac{(T+1)(T+2)}{2}\left( f_1(x) +{\left\langle}{\mathcal{A}}(x),\bar{y}{\right\rangle}-f_2(\bar y)\right).\end{aligned}$$ The above inequality implies: $$\begin{aligned}
\label{eq:intermediate_smoothing_techniques_2}
\min_{x\in Q_1}\beta_T(x)\leq \frac{(T+1)(T+2)}{2} \underline{\phi}(\bar{y}). \end{aligned}$$ Recall that we have $L_T\leq L_\mu={\left\|}\mathcal{A}{\right\|}_{{\mathbb R}^n,{\mathbb R}^m}^2/\mu+M$ by construction. We use Inequalities (\[eq:intermediate\_smoothing\_techniques\_1\]), (\[eq:intermediate\_smoothing\_techniques\_2\]), and (\[eq:smooth\_approximation\_bounds\]) to justify the following inequalities: $$\begin{aligned}
\label{eq:aux}
\frac{4(D_1{\left\|}{\mathcal{A}}{\right\|}_{{\mathbb R}^n,{\mathbb R}^m}^2/\mu+D_1M-\chi_T)}{(T+1)^2}\geq \frac{4(L_TD_1-\chi_T)}{(T+1)(T+2)}
\geq\overline{\phi}_\mu(\bar{x})-\underline{\phi}(\bar{y})\geq\overline{\phi}(\bar{x})-\underline{\phi}(\bar{y})-\mu D_2.\end{aligned}$$
${}$
------------------------------------------------------------------------
We conclude this section by discussing different strategies for choosing the sequence $(L_t)_{t=0}^T$ and the smoothness parameter $\mu$.
**Alternative 1: (most aggressive adaptive setting)** We can always give the following upper bound for the quantity $(-\chi_T)$ in Theorem \[thm:convergence\_smoothing\]: $$-\chi_T\leq 5 L_\mu D_1 T =5 D_1T\left({\left\|}{\mathcal{A}}{\right\|}_{{\mathbb R}^n,{\mathbb R}^m}^2/\mu+M\right),$$ which allows us to reformulate (\[eq:bound\_st\]) as $$\overline{\phi}(\bar{x})-\underline{\phi}(\bar{y})\leq \frac{20D_1\left({\left\|}{\mathcal{A}}{\right\|}_{{\mathbb R}^n,{\mathbb R}^m}^2/\mu +M\right)}{T+1}+\mu D_2.$$ Minimizing the right-hand side of the above inequality with respect to $\mu$, that is, setting $\mu$ to $$\mu_2^*:=2{\left\|}{\mathcal{A}}{\right\|}_{{\mathbb R}^n,{\mathbb R}^m}\sqrt{\frac{5 D_1}{(T+1)D_2}},$$ we obtain: $$\overline{\phi}(\bar{x})-\underline{\phi}(\bar{y})\leq 4 {\left\|}{\mathcal{A}}{\right\|}_{{\mathbb R}^n,{\mathbb R}^m}\sqrt{\frac{5 D_1D_2}{T+1}}+\frac{20D_1M}{T+1}.$$ As this bound is independent of the choice of the $L_t$’s, it is valid also for the most aggressive setting, that is, for $$\begin{aligned}
\label{eq:L_most_aggressive_st}
L_t:=\max\left\{\bar L_t, \kappa L_\mu\right\}\in [\kappa L_\mu,L_\mu],\qquad \bar L_t:= \frac{2\left[\overline{\phi}_\mu(u_t)-\overline{\phi}_\mu(x_t)-\left\langle \nabla \overline{\phi}_\mu(x_t),u_t-x_t\right\rangle\right]}{\left\| u_t-x_t\right\|_{{\mathbb R}^n}^2}\leq L_\mu,\end{aligned}$$ where $1\leq t\leq T$ and $0<\kappa\ll 1$ is fixed.
**Alternative 2: (hybrid setting)** Let $\alpha\geq 0$. We follow the setting described in (\[eq:L\_most\_aggressive\_st\]) for all $1\leq t\leq T$ as long as $$-\chi_{\bar t}\leq \alpha D_1\left({\left\|}{\mathcal{A}}{\right\|}_{{\mathbb R}^n,{\mathbb R}^m}^2/\mu+M\right)$$ is satisfied for any $1\leq\bar t\leq t$. When this condition fails for the first time, say for $t=t'$, we set $L_t$ to $L_\mu$ for any $t\geq t'$ and recompute the point $z_{t'}$. In this hybrid setting, Inequality (\[eq:bound\_st\]) yields the bound $$\overline{\phi}(\bar{x})-\underline{\phi}(\bar{y})\leq \frac{4(1+\alpha)D_1 \left({\left\|}{\mathcal{A}}{\right\|}_{{\mathbb R}^n,{\mathbb R}^m}^2/\mu+M\right)}{(T+1)^2}+\mu D_2.$$ We choose $\mu$ such that the right-hand side of the above inequality is minimized, that is, we fix $\mu$ to $$\mu_3^*:=\frac{2{\left\|}{\mathcal{A}}{\right\|}_{{\mathbb R}^n,{\mathbb R}^m}}{T+1}\sqrt{\frac{(1+\alpha)D_1}{D_2}},$$ and end up with the following bound: $$\overline{\phi}(\bar{x})-\underline{\phi}(\bar{y})\leq \frac{4{\left\|}{\mathcal{A}}{\right\|}_{{\mathbb R}^n,{\mathbb R}^m}\sqrt{(1+\alpha)D_1D_2}}{T+1}+\frac{4(1+\alpha)D_1M}{(T+1)^2}.$$ Note that Alternative 2 coincides with Alternative 1 if $\alpha = 5(T+1)-1$.
An application in large-scale eigenvalue optimization
=====================================================
In this section, we study the practical behavior of accelerated smoothing techniques. We apply them to the problem of finding a convex combination of given symmetric matrices such that the maximal eigenvalue of the resulting matrix is minimal.
Problem description
-------------------
Let $$\Delta_m:=\left\{ x\in{\mathbb R}^m_{\geq 0}:\sum_{j=1}^m x_j=1\right\}\subset{\mathbb R}^m$$ be the $(m-1)$-dimensional probability simplex. Denoting by $\mathcal{S}_n$ the space of symmetric real $(n\times n)$-matrices, we write $Y\succeq 0$ if $Y\in \mathcal{S}_n$ is positive semidefinite and $\textup{Tr}(Y):=\sum_{i=1}^n Y_{ii}$ for the trace of $Y$. We refer to $$\Delta_n^M:=\left\{ Y\succeq 0: \textup{Tr}(Y)=1\right\}\subset\mathcal{S}_n$$ as the simplex in matrix form. Finally, we denote by $$\lambda_n(Y)\geq \ldots\geq \lambda_1(Y)$$ the eigenvalues of the symmetric matrix $Y\succeq 0$ and assume that they are ordered decreasingly. Throughout this section, we consider the following problem: $$\begin{aligned}
\label{eq:min_max_eig}
\min_{x\in\Delta_m}\lambda_n\left(\sum_{j=1}^m x_jA_j\right)= \min_{x\in\Delta_m}\left\{\overline{\phi}(x):=\max_{Y\in\Delta_n^M}\sum_{j=1}^m x_j{\left\langle}A_j, Y{\right\rangle}_F\right\}, \end{aligned}$$ where $A_1,\ldots,A_m\in\mathcal{S}_n$ and ${\left\langle}\cdot,\cdot{\right\rangle}_F$ denotes the Frobenius scalar product.
Applying accelerated smoothing techniques
-----------------------------------------
### Smoothing the objective function
We equip $\mathcal{S}_n$ with the induced $1$-norm, that is, with ${\left\|}Y{\right\|}_{(1)}:=\sum_{i=1}^n|\lambda_i(Y)|$, where $Y\in\mathcal{S}_n$. The dual norm corresponds to the induced $\infty$-norm, that is, to the norm ${\left\|}W{\right\|}_{(\infty)}:=\max_{1\leq i\leq n}|\lambda_i(W)|$ with $W\in\mathcal{S}_n$. We choose $${d_{\Delta_n^M}}(Y):=\ln(n)+\sum_{i=1}^n\lambda_i(Y)\ln(\lambda_i(Y)),\qquad Y\in\Delta_n^M,$$ as [distance-generating function]{} for the set $\Delta_n^M$, for which we have ${d_{\Delta_n^M}}(Y)\leq \ln(n)$ for any $Y\in\Delta_n^M$; see for instance [@nesterov:coreDP73/2004] for a proof that ${d_{\Delta_n^M}}$ is a [distance-generating function]{} for $\Delta_n^M$. We obtain the following smooth objective function as an approximation to $\overline{\phi}$: $$\overline{\phi}_\mu(x):=\max_{Y\in\Delta_n^M}\left\{\sum_{j=1}^m x_j{\left\langle}A_j, Y{\right\rangle}_F-\mu{d_{\Delta_n^M}}(Y) \right\}=\mu\ln\left(\sum_{i=1}^n\exp\left[\lambda_i\left(\frac{\sum_{j=1}^mx_jA_j}{\mu}\right)\right]\right)-\mu \ln(n),$$ where $x\in\Delta_m$ and $\mu>0$ denotes the smoothness parameter. The approximation quality depends on the smoothness parameter: $$\overline{\phi}_\mu(x)\leq \overline{\phi}(x)\leq \overline{\phi}_\mu(x) +\mu\ln(n)\qquad\forall\ x\in\Delta_m.$$ Finally, the gradient of $\overline{\phi}_\mu$ is given by $$\left[\nabla \overline{\phi}_\mu(x)\right]_j={\left\langle}A_j, Y_*(x){\right\rangle}_F\qquad\forall\ 1\leq j\leq m,$$ where $x\in\Delta_m$ and $Y_*(x)$ denotes the unique maximizer of $Y\mapsto \sum_{j=1}^m x_j{\left\langle}A_j, Y{\right\rangle}_F-\mu {d_{\Delta_n^M}}(Y)$ over $\Delta_n^M$. Theorem \[thm:gradient\] implies that the gradient is Lipschitz continuous with a Lipschitz constant of $L_\mu:=\max_{1\leq j\leq m}{\left\|}A_j{\right\|}_{(\infty)}^2/\mu$.
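For illustration, the smoothed objective and its gradient can be evaluated from a single eigen-decomposition of $\sum_j x_jA_j$, since $Y_*(x)$ is the matrix "softmax" $\exp(S/\mu)/\textup{Tr}\,\exp(S/\mu)$ with $S=\sum_j x_jA_j$. The following sketch is ours (dense NumPy arrays assumed; the authors' experiments instead use Matlab's `expm()`).

```python
import numpy as np

def smoothed_phi_and_grad(x, A, mu):
    """phi_mu(x) and its gradient for the eigenvalue problem; A is a list of
    symmetric (n x n) arrays A_1, ..., A_m, x lies in the simplex, mu > 0."""
    S = sum(xj * Aj for xj, Aj in zip(x, A))
    lam, U = np.linalg.eigh(S)
    n = S.shape[0]
    w = lam / mu
    shift = w.max()                              # log-sum-exp shift
    e = np.exp(w - shift)
    phi = mu * (np.log(e.sum()) + shift) - mu * np.log(n)
    Y_star = (U * (e / e.sum())) @ U.T           # = U diag(softmax) U^T
    grad = np.array([np.tensordot(Aj, Y_star) for Aj in A])   # <A_j, Y_*>_F
    return phi, grad, Y_star
```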
### Applying the accelerated optimal first-order method with hybrid setting
Let the space ${\mathbb R}^m$ be equipped with the $1$-norm. We use $${d_{\Delta_m}}(x):=\ln(m)+\sum_{j=1}^mx_j\ln(x_j),\qquad x\in\Delta_m,$$ as [distance-generating function]{} for the set $\Delta_m$. Note that ${d_{\Delta_m}}(x)\leq \ln(m)$ for any $x\in\Delta_m$.
We run Algorithm \[alg:smoothing\_techniques\] with the hybrid setting that is described in Alternative 2 in Section \[sec:ST\]. Let us fix the accuracy $\epsilon>0$ and the parameter $\alpha\geq 0$ that defines when to switch back to the non-adaptive setting. The smoothness parameter is set as follows: $$\mu:=\frac{\epsilon}{2\ln(n)}.$$ Note that the smoothness parameter does not depend on $\alpha$. According to Theorem \[thm:convergence\_smoothing\], we need to perform at most $$\begin{aligned}
\label{eq:worst_case_T}
T=\left\lceil\frac{4\max_{1\leq j\leq m}{\left\|}A_j{\right\|}_{(\infty)}\sqrt{(1+\alpha)\ln(m)\ln(n)}}{\epsilon}-1\right\rceil \end{aligned}$$ iterations of Algorithm \[alg:smoothing\_techniques\] in order to find a tuple $(\bar{x},\bar{Y})\in\Delta_m\times\Delta_n^M$ such that $$\begin{aligned}
\label{eq:duality_gap}\max_{Y\in\Delta_n^M}\sum_{j=1}^m \bar x_j{\left\langle}A_j, Y{\right\rangle}_F - \min_{x\in\Delta_m}\sum_{j=1}^m x_j{\left\langle}A_j,\bar Y{\right\rangle}_F\leq \epsilon.\end{aligned}$$
Numerical results
-----------------
We consider randomly generated instances of Problem (\[eq:min\_max\_eig\]), where we fix $m$ to $100$ and where the symmetric $(n\times n)$-matrices $A_1,\ldots,A_m$ have a joint sparsity structure, each of them with about $n^2/10$ non-zero entries. We approximate the parameter $${\mathcal{L}}:=\max_{1\leq j\leq m}{\left\|}A_j{\right\|}_{(\infty)}$$ by applying the power method to the matrices $A_j$ and taking the maximum of the computed values, which we denote by ${\mathcal{L}}'$. We solve the randomly generated instances of Problem (\[eq:min\_max\_eig\]) up to a relative accuracy of $\epsilon{\mathcal{L}}'$ with $\epsilon:=0.002$.
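A minimal sketch (ours) of this estimation step; the power method applied to a symmetric matrix generically converges to an eigenvector of the eigenvalue of largest magnitude, whose Rayleigh quotient then approximates $\|A_j\|_{(\infty)}$.

```python
import numpy as np

def spectral_norm_estimate(A, iters=100, rng=None):
    """Power-method estimate of ||A||_(inf) = max_i |lambda_i(A)| for a
    symmetric matrix A (dense or sparse, supporting the matvec @)."""
    rng = np.random.default_rng() if rng is None else rng
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = A @ v
        nw = np.linalg.norm(w)
        if nw == 0.0:
            return 0.0
        v = w / nw
    return abs(v @ (A @ v))          # Rayleigh quotient at the last iterate

# L' is then the maximum of these estimates over A_1, ..., A_m:
# L_prime = max(spectral_norm_estimate(Aj) for Aj in A)
```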
All numerical results that we present in this section are averaged over ten runs and obtained on a computer with 24 processors, each of them with 2.67 GHz, and with 96 GB of RAM. The methods are implemented in Matlab (version R2012a). Matrix exponentials are computed through the Matlab built-in function `expm()`.
### Comparing the practical behavior of different methods
In Table \[table:different\_methods\], we present numerical results for the following two methods:
- Original smoothing techniques: This implementation corresponds to Algorithm \[alg:smoothing\_techniques\] with constant $L_t=L_\mu$ for any $0\leq t\leq T$. That is, we set $\alpha=0$ in Alternative 2 in Section \[sec:ST\].
- Accelerated smoothing techniques: We equip Algorithm \[alg:smoothing\_techniques\] with the hybrid setting described in Alternative 2 in Section \[sec:ST\], where we choose $\alpha:=3$ and $\kappa:=10^{-12}$. With this setting, we need to perform twice as many iterations as with original smoothing techniques with respect to the worst-case bounds; see (\[eq:worst\_case\_T\]).
For both methods, we check the duality gap (\[eq:duality\_gap\]) at every $100$-th iteration. Additionally, for the latter method, we also verify this condition at each of the first hundred iterations. The maximal eigenvalue that corresponds to the first term in (\[eq:duality\_gap\]) is computed through the Matlab built-in functions $\texttt{max()}$ and $\texttt{eig()}$.
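For reference, the quantity being checked can be written down explicitly: the first term of (\[eq:duality\_gap\]) is the largest eigenvalue of $\sum_j\bar x_jA_j$, and the minimum over the simplex in the second term is attained at a vertex, i.e. it equals $\min_j{\left\langle}A_j,\bar Y{\right\rangle}_F$. The sketch below is ours (NumPy), whereas the experiments use the Matlab built-ins mentioned above.

```python
import numpy as np

def duality_gap(x_bar, Y_bar, A):
    """Left-hand side of the stopping criterion: lambda_max(sum_j x_j A_j)
    minus the smallest Frobenius inner product <A_j, Y_bar>_F over j."""
    S = sum(xj * Aj for xj, Aj in zip(x_bar, A))
    primal = np.linalg.eigvalsh(S)[-1]                 # largest eigenvalue
    dual = min(np.tensordot(Aj, Y_bar) for Aj in A)
    return primal - dual
```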
**Average CPU time \[sec\]**

| $n$ | $100$ | $200$ | $400$ | $800$ |
|:---|---:|---:|---:|---:|
| Original smoothing techniques | $139$ | $366$ | $1'406$ | $5'961$ |
| Accelerated smoothing techniques | $116$ | $3$ | $9$ | $32$ |
| Acceleration | $16.55\%$ | $99.18\%$ | $99.36\%$ | $99.46\%$ |

**Average \# of iterations that are required in practice**

| $n$ | $100$ | $200$ | $400$ | $800$ |
|:---|---:|---:|---:|---:|
| Original smoothing techniques | $6'180$ | $6'690$ | $7'150$ | $7'520$ |
| Accelerated smoothing techniques | $4'918$ | $18$ | $14$ | $13$ |
| Reduction | $20.42\%$ | $99.73\%$ | $99.80\%$ | $99.83\%$ |

**Average \# of iterations that are required in theory**

| $n$ | $100$ | $200$ | $400$ | $800$ |
|:---|---:|---:|---:|---:|
| Original smoothing techniques | $9'210$ | $9'879$ | $10'505$ | $11'096$ |
| Accelerated smoothing techniques | $18'420$ | $19'758$ | $21'011$ | $22'193$ |
| Reduction | $-100.00\%$ | $-100.00\%$ | $-100.01\%$ | $-100.01\%$ |
We observe that accelerated smoothing techniques require significantly less CPU time and fewer iterations in practice than original smoothing techniques; see Table \[table:different\_methods\]. For problems involving matrices of size $200\times 200$ up to size $800\times 800$, we can reduce the number of iterations in practice and the CPU time by more than $99\%$. Interestingly, the number of iterations required by accelerated smoothing techniques in practice even decreases as the matrix size $n$ grows.
Note that there is a striking gap between the average CPU time and the number of iterations required in practice by accelerated smoothing techniques for the instances of size $100\times 100$ and for the instances of size $200\times 200$. In Figure \[fig:beta\], we plot the values $$\begin{aligned}
\label{eq:beta}\beta_t:=\frac{-\chi_t}{\ln(m)L_0}= \frac{-\sum_{t'=0}^{t-1}\left(L_{t'+1}-L_{t'}\right)\left({d_{\Delta_m}}(z_{t'+1})-\frac{1}{2}\left\|z_{t'}-\hat{x}_{t'+1}\right\|_{1}^2 \right)}{D_1L_0}\qquad \forall\ t\geq 1.\end{aligned}$$ In contrast to the cases $n=200$, $n=400$, and $n=800$, where these values remain small (that is, below $0.25$), we observe considerably larger values $\beta_t$ for $n=100$. However, the values remain below $3$, as we switch back to the non-adaptive setting as soon as $\beta_t$ would exceed $3$. This behavior is in full accordance with the gap mentioned at the beginning of this paragraph. The non-smooth patterns at the end of the plots in Figure \[fig:beta\] are due to the averaging over the different runs (a different number of iterations may be needed in different runs).
![Ratios $\beta_t$; see (\[eq:beta\]) for the definition of these ratios.[]{data-label="fig:beta"}](beta_100.eps "fig:"){width="0.48\linewidth"} ![Ratios $\beta_t$; see (\[eq:beta\]) for the definition of these ratios.[]{data-label="fig:beta"}](beta_200.eps "fig:"){width="0.48\linewidth"} ![Ratios $\beta_t$; see (\[eq:beta\]) for the definition of these ratios.[]{data-label="fig:beta"}](beta_400.eps "fig:"){width="0.48\linewidth"} ![Ratios $\beta_t$; see (\[eq:beta\]) for the definition of these ratios.[]{data-label="fig:beta"}](beta_800.eps "fig:"){width="0.48\linewidth"}
### Solving problems of very large scale
In Table \[table:large\_scale\], we show numerical results for accelerated smoothing techniques (with $\alpha=3$, $\kappa=10^{-12}$, and the same duality gap checking procedure as above) when applied to randomly generated instances of (\[eq:min\_max\_eig\]) that are of very large scale. Using accelerated smoothing techniques, we are able to approximately solve instances of (\[eq:min\_max\_eig\]) involving matrices of size $12'800\times 12'800$ in about $8$ hours and $40$ minutes on average. Clearly, this performance would be out of reach for original smoothing techniques.
**Accelerated smoothing techniques applied to large-scale instances of (\[eq:min\_max\_eig\])**

| $n$ | $1'600$ | $3'200$ | $6'400$ | $12'800$ |
|:---|---:|---:|---:|---:|
| CPU time \[sec\] | $158$ | $791$ | $4'566$ | $31'240$ |
| Average \# of iterations that are required in practice | $13$ | $13$ | $13$ | $13$ |
| Average \# of iterations that are required in theory | $23'315$ | $24'386$ | $25'411$ | $26'397$ |
Acknowledgments {#acknowledgments .unnumbered}
===============
We gratefully thank Yurii Nesterov and Hans-Jakob Lüthi for many helpful discussions. This research is partially funded by the Swiss National Fund.
Proof of Theorem \[thm:I\_t\] {#sec:proof_opt_first_order}
=============================
Choose $T\in{\mathbb N}_0$ and let the sequences $\left(x_t\right)_{t=0}^{T+1}$, $\left(u_t\right)_{t=0}^{T+1}$, $\left(z_t\right)_{t=0}^T$, $\left(\hat{x}_t\right)_{t= 1}^{T+1}$, $\left(\gamma_t\right)_{t=0}^{T+1}$, $\left({\Gamma}_t\right)_{t= 0}^{T+1}$, $\left(\tau_t\right)_{t=0}^T$, and $\left(L_t\right)_{t=0}^T$ be generated by Algorithm \[alg:opt\_first\_order\]. Recall that Inequality $(\mathcal{I}_t)$ holds for $0\leq t\leq T$ if $${\Gamma}_tf(u_t)+\sum_{k=0}^{t-1}\left(L_{k+1}-L_k\right)\left(d_Q(z_{k+1})-\frac{1}{2}\left\|z_k-\hat{x}_{k+1}\right\|_{{\mathbb R}^n}^2 \right)\leq \psi_t, \tag{$\mathcal{I}_t$}$$ where $$\psi_t:=\min_{x\in Q}\left\lbrace
\sum_{k=0}^{t}\gamma_k\left(f(x_k)+\langle \nabla f(x_k), x-x_k\rangle \right) + L_td_Q(x) \right\rbrace.$$ By its definition (see Algorithm \[alg:opt\_first\_order\]), the element $z_t\in Q$ is the minimizer to the above optimization problem, which allows us to rewrite $\psi_t$ as: $$\psi_t=\sum_{k=0}^{t}\gamma_k\left(f(x_k)+\langle \nabla f(x_k), z_t-x_k\rangle \right)+L_td_Q(z_t).$$
We show by induction that Inequality $(\mathcal{I}_t)$ holds for any $0\leq t\leq T$.
\[lem:basic\_step\] Inequality $(\mathcal{I}_0)$ holds, that is, we have $\gamma_0f(u_0)\leq \psi_0$.
**Proof:** We apply the definition of $u_0$ (see Algorithm \[alg:opt\_first\_order\]), Inequality (\[eq:lower\_bound\_dgf\]), the condition on $\gamma_0$ saying that $\gamma_0\in(0,1]$, and Theorem 2.1.5 in [@nesterovintrodlectures] in order to justify the following relations: $$\begin{aligned}
\psi_0&:=&\min_{x\in Q}\left\lbrace
\gamma_0\left(f(x_0)+\langle \nabla f(x_0), x-x_0\rangle \right) +L_0d_Q(x)\right\rbrace\cr
&=&\gamma_0\left(f(x_0)+\langle \nabla f(x_0), u_0-x_0\rangle\right) +L_0d_Q(u_0) \cr
&\geq&\gamma_0\left(f(x_0)+\langle \nabla f(x_0), u_0-x_0\rangle \right) +\frac{L_0}{2}\left\|u_0-x_0\right\|_{{\mathbb R}^n}^2 \cr
&\geq&\gamma_0\left( f(x_0)+\langle \nabla f(x_0), u_0-x_0\rangle+\frac{L_0}{2}\left\|u_0-x_0\right\|_{{\mathbb R}^n}^2 \right)\cr
&\geq& \gamma_0f(u_0).\end{aligned}$$
${}$
------------------------------------------------------------------------
Let us verify the inductive step.
\[lem:inductive\_step\] Let $0\leq t\leq T-1$. If Inequality $(\mathcal{I}_t)$ holds, then $(\mathcal{I}_{t+1})$ holds as well.
**Proof:** Let $0\leq t\leq T-1$ and assume that $(\mathcal{I}_t)$ holds. We make the following two definitions: $$\begin{aligned}
\chi_t&:=&\sum_{k=0}^{t-1}\left(L_{k+1}-L_k\right)\left(d_Q(z_{k+1})-\frac{1}{2}\left\|z_k-\hat{x}_{k+1}\right\|_{{\mathbb R}^n}^2 \right)\in{\mathbb R},\cr
s_{t}&:=&\sum_{k=0}^t\gamma_k\nabla f(x_k)\in {\mathbb R}^n.
\end{aligned}$$ In addition, we define the linear function: $$\begin{aligned}
l_t:Q\rightarrow {\mathbb R}:x\mapsto l_t(x)=\sum_{k=0}^t\gamma_k\left( f(x_k)+\left\langle \nabla f(x_k),x-x_k\right\rangle\right).\end{aligned}$$ Choose $x\in Q$. The definition of $z_t$ implies: $$\begin{aligned}
\label{eq:z_k_minimizer}
0\leq \left\langle L_t\nabla d_Q(z_t)+\sum_{k=0}^t \gamma_k\nabla f(x_k), x-z_t\right\rangle= \left\langle L_t\nabla d_Q(z_t)+s_t, x-z_t\right\rangle.\end{aligned}$$ As the Inequality $(\mathcal{I}_t)$ holds and as the function $f$ is convex, we have: $$\begin{aligned}
\psi_t &\geq& {\Gamma}_t f(u_t)+\chi_t \geq {\Gamma}_t \left(f(x_{t+1})+\left\langle \nabla f(x_{t+1}),u_t-x_{t+1}\right\rangle \right)+\chi_t.\end{aligned}$$ This implies: $$\begin{aligned}
\hspace{-1cm}\psi_t +\gamma_{t+1}\left(f(x_{t+1})+\left\langle \nabla f(x_{t+1}),x-x_{t+1}\right\rangle\right) \geq {\Gamma}_{t+1}f(x_{t+1})+\gamma_{t+1}\left\langle \nabla f(x_{t+1}),x-z_t\right\rangle+\chi_t,\end{aligned}$$ where we use the relations ${\Gamma}_{t+1}={\Gamma}_t+\gamma_{t+1}$ and $$\begin{aligned}
{\Gamma}_t(u_t-x_{t+1})+\gamma_{t+1}(x-x_{t+1})&=&{\Gamma}_tu_t-{\Gamma}_{t+1}x_{t+1}+\gamma_{t+1}x\cr
&=&{\Gamma}_tu_t-{\Gamma}_{t+1}\left(\tau_tz_t+(1-\tau_t)u_t\right)+\gamma_{t+1}x\cr
&=&{\Gamma}_tu_t-{\Gamma}_{t+1}\left(\frac{\gamma_{t+1}}{{\Gamma}_{t+1}}z_t+\frac{{\Gamma}_t}{{\Gamma}_{t+1}}u_t\right)+\gamma_{t+1}x\cr
&=&\gamma_{t+1}(x-z_t).\end{aligned}$$ Combining the above inequality with the fact that $\psi_t=L_td_Q(z_t)+l_t(z_t)$ and with (\[eq:z\_k\_minimizer\]), we observe: $$\begin{aligned}
&&\hspace{-1cm} L_td_Q(x)+l_{t+1}(x)\cr&=&L_td_Q(x)+l_t(x)+\gamma_{t+1}\left(f(x_{t+1})+\left\langle \nabla f(x_{t+1}),x-x_{t+1}\right\rangle\right)\cr
&=&L_t V_{z_t}(x)+\psi_t+\left\langle L_t\nabla d_Q(z_t)+s_t,x-z_t\right\rangle+\gamma_{t+1}\left(\left\langle \nabla f(x_{t+1}),x-x_{t+1}\right\rangle+f(x_{t+1})\right)\cr
&\geq& L_t V_{z_t}(x)+\psi_t+\gamma_{t+1}\left(f(x_{t+1})+\left\langle \nabla f(x_{t+1}),x-x_{t+1}\right\rangle\right)\cr
&\geq& L_t V_{z_t}(x)+{\Gamma}_{t+1}f(x_{t+1})+\gamma_{t+1}\left\langle \nabla f(x_{t+1}),x-z_t\right\rangle+\chi_t.\end{aligned}$$ With $\vartheta_t^{(1)}:= \left(L_{t+1}-L_t\right)d_Q(z_{t+1})$, we thus get: $$\begin{aligned}
\psi_{t+1}&:=&\min_{x\in Q}\left\lbrace
L_{t+1}d_Q(x)+l_{t+1}(x) \right\rbrace\cr
&=&L_{t+1}d_Q(z_{t+1})+l_{t+1}(z_{t+1})\cr
&=&\vartheta_t^{(1)}+L_td_Q(z_{t+1})+l_{t+1}(z_{t+1})\cr
&\geq&\vartheta_t^{(1)}+\min_{x\in Q}\left\{L_td_Q(x)+l_{t+1}(x)\right\}\cr
&\geq&\vartheta_t^{(1)}+\min_{x\in Q}\left\{L_t V_{z_t}(x)+{\Gamma}_{t+1}f(x_{t+1})+\gamma_{t+1}\left\langle \nabla f(x_{t+1}),x-z_t\right\rangle+\chi_t\right\}.\end{aligned}$$ Let $\vartheta_t^{(2)}:=\frac{1}{2}\left(L_t-L_{t+1}\right)\left\|z_t-\hat{x}_{t+1}\right\|_{{\mathbb R}^n}^2$. Using the construction rule for $\hat{x}_{t+1}$ and the fact that the inequality $V_z(x)\geq {\left\|}x-z{\right\|}_{{\mathbb R}^n}^2/2$ holds for any $x\in Q$ and $z\in Q^o$ (this relation follows from the strong convexity of $d_Q$), we obtain: $$\begin{aligned}
\psi_{t+1}&\geq&\vartheta_t^{(1)}+L_t V_{z_t}(\hat{x}_{t+1})+{\Gamma}_{t+1}f(x_{t+1})+\gamma_{t+1}\left\langle \nabla f(x_{t+1}),\hat{x}_{t+1}-z_t\right\rangle+\chi_t\cr
&\geq&\vartheta_t^{(1)}+\frac{L_t}{2}\left\|z_t-\hat{x}_{t+1}\right\|_{{\mathbb R}^n}^2+{\Gamma}_{t+1}f(x_{t+1})+\gamma_{t+1}\left\langle \nabla f(x_{t+1}),\hat{x}_{t+1}-z_t\right\rangle+\chi_t\cr
&=&\vartheta_t^{(1)}+\vartheta_t^{(2)} + \frac{L_{t+1}}{2}\left\|z_t-\hat{x}_{t+1}\right\|_{{\mathbb R}^n}^2+{\Gamma}_{t+1}f(x_{t+1})+\gamma_{t+1}\left\langle \nabla f(x_{t+1}),\hat{x}_{t+1}-z_{t}\right\rangle+\chi_t\cr
&=&\vartheta_t^{(1)}+\vartheta_t^{(2)}+\chi_t + {\Gamma}_{t+1}\left( \frac{L_{t+1}}{2{\Gamma}_{t+1}}\left\|z_t-\hat{x}_{t+1}\right\|_{{\mathbb R}^n}^2+f(x_{t+1})+\tau_t\left\langle \nabla f(x_{t+1}),\hat{x}_{t+1}-z_t\right\rangle\right).\end{aligned}$$ As $\tau_t^2\leq {\Gamma}_{t+1}^{-1}$ and as $x_{t+1}-\tau_tz_t=(1-\tau_t)u_t=u_{t+1}-\tau_t\hat{x}_{t+1}$, this inequality yields: $$\begin{aligned}
\psi_{t+1}&\geq&\vartheta_t^{(1)}+\vartheta_t^{(2)} +\chi_t + {\Gamma}_{t+1}\left( \frac{L_{t+1}\tau_t^2}{2}\left\|z_t-\hat{x}_{t+1}\right\|_{{\mathbb R}^n}^2+f(x_{t+1})+\tau_t\left\langle \nabla f(x_{t+1}),\hat{x}_{t+1}-z_t\right\rangle\right)\cr
&=&\vartheta_t^{(1)}+\vartheta_t^{(2)}+\chi_t + {\Gamma}_{t+1}\left( \frac{L_{t+1}}{2}\left\|u_{t+1}-x_{t+1}\right\|_{{\mathbb R}^n}^2+f(x_{t+1})+\left\langle \nabla f(x_{t+1}),u_{t+1}-x_{t+1}\right\rangle\right).\end{aligned}$$ It remains to apply (\[eq:adapted\_L\_cond\_opt\]): $$\begin{aligned}
\psi_{t+1}&\geq&\vartheta_t^{(1)}+\vartheta_t^{(2)} + {\Gamma}_{t+1}f(u_{t+1})+\chi_t\cr
&=&\sum_{k=0}^{t}\left(L_{k+1}-L_k\right)\left(d_Q(z_{k+1})-\frac{1}{2}\left\|z_k-\hat{x}_{k+1}\right\|_{{\mathbb R}^n}^2 \right)+ {\Gamma}_{t+1}f(u_{t+1}).\end{aligned}$$
${}$
------------------------------------------------------------------------
[^1]: Corresponding author. Institute for Operations Research, ETH Zürich, Rämistrasse 101, 8092 Zürich, Switzerland, michel.baes@ifor.math.ethz.ch.
[^2]: Institute for Operations Research, ETH Zürich, Rämistrasse 101, 8092 Zürich, Switzerland, michael.buergisser@ifor.math.ethz.ch.
---
abstract: 'We study, via combinatorial enumeration, the probability of $k$-hop connection between two nodes in a wireless multi-hop network. This addresses the difficulty of providing an exact formula for the scaling of hop counts with Euclidean distance without first making a sort of mean field approximation, which in this case assumes all nodes in the network have uncorrelated degrees. We therefore study the mean and variance of the number of $k$-hop paths between two vertices $x,y$ in the random connection model, which is a random geometric graph where nodes connect probabilistically rather than deterministically according to a critical connection range. In the example case where Rayleigh fading is modelled, the variance of the number of three hop paths is in fact composed of four separate decaying exponentials, one of which is the mean, which decays slowest as $\norm{x-y}\to\infty$. These terms each correspond to one of exactly four distinct sub-structures which can form when pairs of paths intersect in a specific way, for example at exactly one node. Using a sum of factorial moments, this relates to the path existence probability. We also discuss a potential application of our results in bounding the broadcast time.'
author:
- 'Alexander P. Kartun-Giles, Sunwoo Kim, [^1]'
---
Introduction
============
We want to know what bounds we can put on the distribution of the graph distance, measured in hops, between two transceivers in a random geometric network, given we know their Euclidean separation. We might also ask, if we know the graph distance, what bounds can we put on the Euclidean distance? Routing tables already provide the shortest hop count to any other node, so this information alone is theoretically able to provide a quick, low power Euclidean distance estimate. These statistics can then be used to locate nodes via multilateration, by first placing anchors with known co-ordinates at central points in the network. They can also bound packet delivery delay, or bound the runtime of broadcast over an unknown topology [@mao2010; @diaz2016].
This question of low power localisation is important in the industrial application of wireless communications. But, in its own right, the question of relating graph to Euclidean distance is fundamental to the application of *stochastic geometry* more generally, which is commonly used to calculate the macroscopic interference profile of e.g. an ultra-dense network, in terms of the microscopic details of the transmitter and receiver positions.
It is known that, in the connectivity regime of the random geometric graph, as the system size goes to infinity, there exists a path of the shortest possible length between any pair of vertices with probability one. Relaxing the limit, and only requiring the graph to have a ‘giant’ connected component, the graph distance is only ever a constant times the Euclidean distance (scaled by the connection range), i.e. paths do not wander excessively from a theoretical straight line [@diaz2016].
But the probability of $k$-hop connection, which is a natural question in mesh network theory, is not well understood. At least we can say, in light of the above, that for the smallest possible $k$, a path of that length will exist with probability one if the graph is connected with probability one. But since this regime requires expected vertex degrees to go to infinity, we naturally ask what can be said in sparser networks.
Routes into this problem are via mean field models, which abstract the network as essentially non-spatial, wherein two nearby vertices have uncorrelated degree distributions. In fact, nearby vertices often share the same neighbours [@ta2007; @mao2011]. Corrections to this can be made via *combinatorial enumeration*, and we detail this novel idea in the rest of this article. We count the *number* of $k$-hop paths which join two distant vertices, and ask what proportion of the time this number is zero. Using the following relation between the distribution of $\sigma_{k}(\norm{x-y})$, which is the number of $k$-hop paths between $x$ and $y$ at Euclidean distance $\norm{x-y}$ in some random geometric graph, and its factorial moments: $$\begin{aligned}
P(\sigma_{k}=t) = \frac{1}{t!}\sum_{i \geq 0} \frac{(-1)^{i}}{i!}\mathbb{E}\sigma_{k} (\sigma_{k}-1) \dots (\sigma_{k}-t-i+1),\end{aligned}$$ we are able to deduce the probability of at least one $k$-hop path $$\begin{aligned}
P(\sigma_{k}>0) = 1 - \sum_{i \geq 0}\frac{(-1)^{i}}{i!}\mathbb{E}\left[\left(\sigma_{k}\right)_{i}\right]\end{aligned}$$ where $\left(\sigma_{k}\right)_{i}$ is the descending factorial. The partial sums alternately upper and lower bound this path-existence probability. So what are these factorial moments? In this article, we are able to deduce, in the case of the random connection model where Euclidean points are linked probabilistically according to some fading model, the first two. This requires closed forms for the mean and variance of $\sigma_{k}$ in terms of the connection function, and other system parameters such as node density. The remaining moments should be accessible via a recursion relation, which we intend to detail in later work; this is currently out of reach.
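To make the role of these partial sums concrete, the following minimal sketch (our own illustration, not code accompanying this article) forms the truncated alternating sums from numerical values of the factorial moments. The Poisson illustration is chosen because the two-hop path count is Poisson, so its factorial moments are known exactly.

```python
# Minimal sketch: truncated alternating sums of factorial moments, which
# alternately upper- and lower-bound the path-existence probability.
from math import factorial, exp

def existence_bounds(factorial_moments):
    # factorial_moments[i] approximates E[(sigma_k)_i]; the 0th moment is 1.
    partial, bounds = 0.0, []
    for i, m_i in enumerate(factorial_moments):
        partial += ((-1) ** i) * m_i / factorial(i)
        bounds.append(1.0 - partial)   # 1 minus a truncation of P(sigma_k = 0)
    return bounds

# Illustration with a Poisson count of mean 0.5, for which E[(sigma)_i] = 0.5**i
# and P(sigma > 0) = 1 - exp(-0.5) ~ 0.393; the successive bounds
# 0, 0.5, 0.375, 0.396, ... close in on this value from alternating sides.
print(existence_bounds([0.5 ** i for i in range(6)]), 1 - exp(-0.5))
```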
The rest of this paper is structured as follows. In Section \[sec:rcm\] we introduce the random connection model, and summarise our results concerning these moments. We then discuss the background to this problem, and related work in both engineering and applied mathematics. In the two sections which follow, we then derive the mean and variance for the non-trivial case $k=3$, and the mean also for general $k \in \mathbb{N}$, showing how the variance is a sum of four terms. We also provide numerical corroboration, including an approximation to the probability that there exist zero $k$-hop paths for each $k \in \mathbb{N}$ in terms of a sum of factorial moments. We finally conclude in Section \[sec:conclusion\].
Summary of main results {#sec:rcm}
=======================
The *Random Connection Model* is a graph $\mathcal{G}_{H}=(\mathcal{Y},E)$ formed on a random subset $\mathcal{Y}$ of $\mathbb{R}^{d}$ by adding an edge between distinct pairs of $\mathcal{Y}$ with probability $H(\norm{x-y})$, where $H: \mathbb{R}^{+} \to \left[0,1\right]$ is called the *connection function*, and $\norm{x-y}$ is Euclidean distance. Often $\mathcal{Y}$ is a Poisson point process of intensity $\rho \mathrm{d}x$, with $\mathrm{d}x$ Lebesgue measure on $\mathbb{R}^{d}$. By *k-hop path* we mean a non-repeating sequence of $k$ adjacent edges joining two different vertices $x,y$ in the vertex set of $\mathcal{G}_{H}$. Since we only add edges between distinct pairs of $\mathcal{Y}$, vertices do not connect to themselves in what follows. This forbids paths of two hops becoming three hops simply by connecting vertices to themselves at some point along the path. See e.g. Fig. \[fig:allpaths\], which shows an example case for $k=3$. We also consider the practically important case of Rayleigh fading [@sklar1997] where, with $\beta > 0$ a parameter and $\eta > 0$ the path loss exponent, the connection function, with $\norm{x-y}>0$, is given by $$\begin{aligned}
\label{e:1}
H(\norm{x-y}) = \exp\left(-\beta\norm{x-y}^{\eta}\right)\end{aligned}$$ and is otherwise zero. This choice is discussed in e.g. Section 2.3 of [@giles2016]. Note that we refer to *nodes* when discussing actual communication devices in a wireless network, and *vertices* when discussing their associated graphs directly.
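For readers who wish to experiment numerically, the following small simulation sketch (our own code, with an arbitrary finite observation box and parameter names of our choosing) samples a Poisson point process and joins each distinct pair of points independently with the Rayleigh fading connection probability of Eq. \[e:1\].

```python
# Sketch: sample the random connection model on a finite box [0, L]^2 with
# intensity rho and connection function H(r) = exp(-beta * r**eta).
import numpy as np

rng = np.random.default_rng(0)

def sample_rcm(rho=1.0, L=10.0, beta=1.0, eta=2.0):
    n = rng.poisson(rho * L * L)                # Poisson number of points in the box
    pts = rng.uniform(0.0, L, size=(n, 2))      # their positions
    r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    p = np.exp(-beta * r ** eta)                # pairwise connection probabilities
    tri = np.triu(rng.random((n, n)) < p, 1)    # one independent trial per pair, no self-loops
    adj = tri | tri.T                           # symmetric adjacency matrix
    return pts, adj

pts, adj = sample_rcm()
print(pts.shape[0], "nodes,", int(adj.sum()) // 2, "edges")
```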
Using the usual definition, a path, which consists of a sequence of *hops*, is a *trail* in which all vertices are distinct. Note that a trail is a walk in which all edges are distinct, and a walk is an alternating series of vertices and edges. So in these results we have no loops or self-loops, and all the paths are distinguishable from each other even if only on a single edge.
We now detail our main results.
\[p:khopexpectationgeneral\] Take a general connection function $H: \mathbb{R}^{+} \to [0,1]$. Define a new Poisson point process $\mathcal{Y}^{\star}$ which is $\mathcal{Y}$ conditioned on containing two specific points $x,y \in \mathbb{R}^{d}$ at Euclidean distance $\norm{x-y}$. Consider those two vertices $x,y$ in the vertex set of the random geometric graph $\mathcal{G}_{H}=(\mathcal{Y}^{\star},E)$, and set $x=z_0,y=z_k$. Then, in $\mathcal{G}_{H}$, the expected number of $k$-hop paths starting at $x$ and terminating at $y$ is $$\begin{aligned}
\label{e:khopexpectationgeneral}
\mathbb{E}\sigma_{k} = \rho^{k-1}\int_{\mathbb{R}^{dk-d}}\mathrm{d}z_{1} \dots\mathrm{d}z_{k-1}\prod_{i=0}^{k-1}H\left(\norm{z_{i}-z_{i+1}}\right).\end{aligned}$$
\[p:kexpectation\] Take $H$ from Eq. \[e:1\], and $d,\eta=2$ so that we consider free space propagation in two dimensions. Define a new Poisson point process $\mathcal{Y}^{\star}$ as in Theorem \[p:khopexpectationgeneral\], and the random geometric graph $\mathcal{G}_{H}=(\mathcal{Y}^{\star},E)$. Then, in $\mathcal{G}_{H}$, the expected number of $k$-hop paths starting at $x$ and terminating at $y$ is $$\begin{aligned}
\label{e:khopexpectation}
\mathbb{E}\sigma_{k} = \frac{1}{k}\left(\frac{\rho \pi}{\beta}\right)^{k-1}\exp\left(\frac{-\beta \norm{x-y}^{2}}{k}\right).\end{aligned}$$
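The closed form is trivial to evaluate; a hedged helper (ours, not the authors' code) is given below and is reused conceptually in the numerical checks later in the paper.

```python
# Evaluate Eq. [e:khopexpectation]: expected number of k-hop paths between two
# points at distance `dist`, for the Rayleigh connection function with d = eta = 2.
import math

def mean_k_hop_paths(k, dist, rho=1.0, beta=1.0):
    return (1.0 / k) * (rho * math.pi / beta) ** (k - 1) * math.exp(-beta * dist ** 2 / k)

print(mean_k_hop_paths(2, 1.0), mean_k_hop_paths(3, 1.0))
```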
\[p:threehopvariance\] Take $H$ from Eq. \[e:1\], and $d,\eta=2$ so that we consider free space propagation in two dimensions. Define a new Poisson point process $\mathcal{Y}^{\star}$ as in Theorem \[p:khopexpectationgeneral\], and the random geometric graph $\mathcal{G}_{H}=(\mathcal{Y}^{\star},E)$. Then, in this graph, the variance of the number of three-hop paths starting at $x$ and terminating at $y$ is $$\begin{gathered}
\label{e:threevar}
\mathrm{Var}\left(\sigma_{3}\right) = \mathbb{E}\sigma_{3} + \frac{\pi^3\rho^3}{\beta^3}\left(\frac{1}{4}\exp{\left( \frac{-\beta\norm{x-y}^2}{2}\right)}\right.\\\left. +\frac{1}{6}\exp{\left( \frac{-3\beta\norm{x-y}^2}{4}\right)}\right)
+\frac{\pi^2\rho^2}{8\beta^2}\exp{\left( -\beta\norm{x-y}^2\right)}.\end{gathered}$$
\[t:moments\] In $\mathcal{G}_{H}$, as discussed, the probability that a three-hop path exists between nodes at Euclidean separation $\norm{x-y}$ satisfies $$\begin{aligned}
P(\sigma_{3}>0) \geq 1 - \left[ 2\mathbb{E}\sigma_{3} - (\mathbb{E}\sigma_{3})^2 - \mathrm{Var}(\sigma_{3}) \right]\end{aligned}$$ and these statistics are known in closed form according to our results above in terms of the point process density and model parameters, given the nodes are at a specific Euclidean separation.
![Example of the random connection model bounded inside a rectangle. All three hop paths between the two nodes with thick borders are highlighted. The Euclidean separation between $x$ and $y$ is 3 units taking $\beta=1$ and an expected $\rho=1$ node per unit area, $\sigma_{3}=5$, and $\mathbb{E}\sigma_{3}=2.36$ and $\mathrm{Var}(\sigma_{3})=9.95$, according to Eqs. \[e:khopexpectation\] and \[e:threevar\].[]{data-label="fig:allpaths"}](tall5)
Background and Related Work {#sec:introduction}
===========================
We detail the historical background, and review what is known about this problem up to now. We also review the non-traditional random connection model, and highlight the applications of this theory in the developing field of mesh networks.
Historical Background in Wireless Communications
------------------------------------------------
This problem has been studied since the 1989 letter of S. A. G. Chandler [@chandler1989], in which approximations are given to the probability that a path of $k$ hops will exist between $x$ and $y$ at $\norm{x-y}$ in a random geometric graph on a homogeneous Poisson point process. Similar attempts, concerning the slightly different problem of deducing the probability that two randomly selected stations are able to connect in one or two hops, with numerical corroboration, are presented in Bettstetter and Eberspacher [@bettstetter2003]. More generally, given deterministic connection radius $r_{0}$, mean-field models are presented in Ta, Mao and Anderson [@ta2007]. They find a recursive formula for the probability of connection in $k$-hops or fewer. The contribution is of practical interest as an upper bound, though it becomes inaccurate for large $k$ as the assumptions become unrealistic. It also weakens as the point process intensity gets large.
Later, Mao, Zhang and Anderson consider the problem in the case of probabilistic connection [@mao2010]. Again, an upper bound is presented, which is accurate for $k<3$. Again, for dense networks and/or large $k$, the bound is inaccurate. The concern, raised by the authors, is that a mean field approximation, which simply ignores the increased likelihood of nodes sharing neighbours when they are close, does not yield much of an advance on this problem. It is beyond the scope of this article to quantify the effects this would bring about on network performance.
Similar Advances in Applied Probability Theory
----------------------------------------------
From a pure mathematical viewpoint, most of the work related to this problem has been focused on studying upper bounds on the graph distance in terms of the Euclidean distance. As a key early result, Ellis, Martin and Yan showed that there exists some large constant $K$ such that for every $r_{0} \geq r_{c}$, and writing $d_{\text{Graph}}(x,y)$ for graph distance and $d_{\text{Euclidean}}(x,y)$ for Euclidean distance, for every pair of vertices $$\begin{aligned}
\label{e:linearbound}
d_{Graph}(x,y) \leq K d_{Euclidean}(x,y) / r_{0}.\end{aligned}$$ This was extended by Bradonjic et al. for the supercritical percolation range of $r_{0}$, given $d_{Euclidean}(x,y) = \Omega(\log^{7/2} n / r_{0}^2)$, i.e. given the Euclidean distance is sufficiently large [@bradonjic2010]. Friedrich, Sauerwald and Stauffer improved this to $d_{Euclidean}(x,y) = \Omega(\log n/r_{0})$. They also proved that if $r_{0}(n) = o(r_{c})$, with $r_{c}$ the critical radius for asymptotic connectivity with probability one, asymptotically almost surely there exist pairs of vertices with $d_{Euclidean}(x,y) \leq 3r_{0}(n)$ and $d_{Graph}(x,y) = \Omega(\log n / r_{0}^{2})$, i.e. the linear bound of Eq. \[e:linearbound\] does not hold in the subconnectivity regime.
Most recently, the article of Díaz, Mitsche, Perarnau and Pérez-Giménez [@diaz2016] presents a rigorous proof of the fact that, in the connectivity regime, a path of the shortest possible length, given the finite communication radius $r_{0}$, exists between a distant pair with probability one. This is equivalent to $K = 1 + o(1)$ asymptotically almost surely in Eq. \[e:linearbound\], given $r_{0} = \omega(r_{c})$.
The Random Connection Model
---------------------------
In the random subgraph of the complete graph on $n$ nodes obtained by including each of its edges independently with probability $p \sim \log n/n$, the probability that this graph is disconnected but free of isolated nodes tends to zero [@bollobas2001; @penrose2013]. The *random connection model* considers a random subgraph of the complete graph, this time on a collection of nodes in a $d$-dimensional metric space [@iyer2015; @mao2010; @mao2011; @mao2013]. The edges are added independently, with probability given by $H: \mathbb{R}^{+} \to \left[0,1\right]$, where in the infinite space case $\int_{\mathbb{R}^{d}}H(\norm{x})\mathrm{d}x<\infty$, so that the expected vertex degree is finite whenever the density is finite. When the space is bounded, the graph is known as a *soft random geometric graph* [@penrose2016; @cef2012; @giles2016; @penrosebook].
In the language of theoretical probability theory, the connectivity threshold [@penrose1997; @mao2011; @mao2013; @iyer2015] goes as follows. Take a Poisson point process $\mathcal{Y} \subset [0,1]^{d}$ of intensity $\lambda(n)\mathrm{d}x$, $\mathrm{d}x$ Lebesgue measure on $\mathbb{R}^{d}$, and $\left(\lambda\left(n\right)\right)_{n\in\mathbb{N}}$ an increasing $(0,\infty)$-valued sequence which goes to $\infty$ with $n$. Take the measurable function $H: \mathbb{R}^{+} \to [0,1]$ to be the probability that two nodes are joined by an edge. Then, as $\lambda\left(n\right) \to \infty$ along this sequence, in any limit where the expected number of isolated nodes converges to a positive constant $\alpha < \infty$, i.e. $$\label{e:degree}
\lambda \int_{\left[0,\sqrt{n}\right]^{2}}\exp\left(-\lambda\int_{\left[0,\sqrt{n}\right]^{2}} H\left(\norm{x-y}\right)\mathrm{d}y\right)\mathrm{d}x \to \alpha,$$ their number converges to a Poisson distribution with mean $\alpha$, see Theorem 3.1 in Penrose’s recent paper [@penrose2016]. The connection probability then follows, as before, from the probability that the graph is free of isolated vertices, given some conditions on the rate of growth of $H$ with $n$, see e.g. [@penrose2016; @cef2012] for the case of random connection in a confined geometry, or non-convex geometry [@giles2016], or for the random connection model [@mao2011]. Put simply, in any dense limit $\lambda(n) \to \infty$, the vertex degrees diverge, making isolation rare, and so vertices are isolated (have degree zero) approximately independently of each other; the isolated vertices therefore form, asymptotically, a homogeneous Poisson point process in space. Once this holds, all that is required for connectivity is that the expected number of isolated vertices $\alpha \to 0$. This is sufficient because large clusters merge, which is the main part of the proof.
The recent consensus is that spatial dependence between the node degrees, which form a Markov random field [@clifford1990] in the deterministic, finite range case, appears to preclude exact description of a map between distances, given by some norm, and the space of distributions describing the probability of $k$-hop connection between two nodes of known displacement, see e.g. Section 1 of [@ta2007]. We believe, however, that the number $\sigma_{k}(\norm{x-y})$ of $k$-hop paths may have a probability generating function similar to a $q$-series common in other combinatorial enumeration problems [@goulden2004]. In fact, the first author has demonstrated that this is indeed the case under deterministic connection in one dimension, proving that $\mathbb{E}q^{\sigma_{k}}$ is a random $q$-multinomial coefficient [@giles2017]. This is also studied in a single dimension recently in vehicular networks [@knight2017].
Impacts in Wireless Communications
----------------------------------
Bounds on the distribution of the number of hops between two points in space, for example, have been a recent focus of many researchers interested in the statistics of the number of hops to e.g. a sink in a wireless sensor network, or gateway-enabled small cell in an ultra-dense deployment of non-enabled smaller cells [@dulman2006; @mao2011; @ta2007; @ge2016], since it relates to data capacity in e.g. multihop communication with infrastructure support [@ng2010; @zemlianov2005], route discovery [@bettstetter2003; @perevalov2005] and localisation [@nguyen2015]. We now detail other important examples of ongoing research where new insight on this problem will prove useful.
Firstly, consider the problem of broadcast. Broadcasting information from one node to eventually all other nodes in a network is a classic problem at the interface of applied mathematics and wireless communications. The task is to take a message available at one node, and by passing it away from that node, and then from its neighbours, make the message available to all the nodes in either the least time, or using the least energy, or using the fewest number of transmissions. If the nodes form the state space of a Markov chain, with links weighted with transmission probabilities, rapid mixing of this chain implies fast broadcast [@elsasser2007].
When the network considered is random and embedded in space, the problem is this: given a graph $\mathcal{G}_{H}$, a source node is selected. This source node has a message to be delivered to all the other nodes in the network. Nodes not within range of the source must receive the message indirectly, via multiple hops. The model most often considered in the literature on broadcasting algorithms is synchronous. All nodes have clocks whose ticks measure time steps, or *rounds*. A broadcasting operation is a sequence $(T_{i})_{i \leq T}$, where each element $T_{i}$ is the set of nodes that act as transmitters in round $i$. The execution time is the number of rounds required before all nodes hear the message, which is the length $T$ of the broadcast sequence. See the review of Peleg for the case where collision avoidance is also considered [@peleg2007].
Let $T_{p}$ be the number of rounds required for the message to be broadcast to all nodes with probability $p$. Then, since every broadcasting algorithm requires at least $\max \{ \log_{2} n, \text{diam}\left(\mathcal{G}\right) \}$ rounds, see [@elsasser2007 Section 2], the algorithm is called *asymptotically optimal* if $T_{1-1/n} = \mathcal{O}\left(\log n + \text{diam}\left(\mathcal{G}\right)\right)$. This relates the broadcast time on a *random* graph, measured in rounds, to its diameter, i.e. the maximum over all pairs of nodes of the length in hops of the geodesic (shortest) path joining them. The running times are in fact often bounded in other ways by the diameter. The diameter for sufficiently distant nodes in e.g. connected graphs, is never more than a constant times the Euclidean distance, and for sufficiently dense graphs, is precisely the ceiling of the Euclidean distance [@diaz2016]. For more general limits, however, the diameter remains important, and so by providing a relation between graph distance and Euclidean distance, in a finite domain the broadcast time can be adequately bounded by our results.
Also, consider distance estimation. Hop counts between nodes in geometric networks can give a quick and energy efficient estimate of Euclidean distance. These estimates are made more accurate once this relation is well determined. This is often done numerically [@nguyen2015]. Given the graph distance, demonstrating how bounds on the Euclidean distance can work to lower power consumption in sensor network localisation is an important open problem. Using the multiplicity of paths can also assist inter-point distance estimates, though this research is ongoing, since one needs a path statistic with exceptionally low variance for this task to avoid errors. Either way, knowing the length in hops of geodesics given the Euclidean separation is essential.
Finally, we highlight packet delivery delay statistics and density estimation. Knowing the delay in packet transfer over a single hop is a classic task in ad hoc networks. With knowledge of the total number of hops, such as its expectation, one can provide the statistics of packet delivery delays given the relation of graph to Euclidean distance. This can help bound delivery delay in e.g. the growing field of delay tolerant networking, by adjusting network parameters accordingly.
The expected number of $k$-hop paths {#sec:exp}
====================================
Consider a point process $\mathcal{X}$ on some space $\mathcal{V}$. If it is assumed that $x \in \mathcal{V}$ and $x \in \mathcal{X}$, what is true of the remaining points $\mathcal{X} \setminus \{x\}$? The Poisson point process has the property that when fixing a point, the remaining points are still a point process, and of the original intensity. This is *Slivnyak’s theorem*, and it characterises the Poisson process, see e.g. Proposition 5 in [@jagers1973]. The relevance of the following lemma [@penrose2016; @penrose20162] from stochastic geometry [@haenggi2009; @elsawy2017] should now be apparent.
\[l:mecke\] Let $t \in \mathbb{N}$. For any measurable real valued function $f$ defined on the product of $(\mathbb{R}^{d})^{t} \times \mathcal{G}$, where $\mathcal{G}$ is the space of all graphs on finite subsets of $[0,1]^{d}$, given a connection function $H$, the following relation holds $$\begin{gathered}
\label{e:meckeformula}
\mathbb{E}\sum_{X_{1},\dots,X_{t} \in \mathcal{Y}}^{\neq}f\left(X_{1},\dots,X_{t},\mathcal{G}_{H}\left(\mathcal{Y}\setminus \{X_{1},\dots,X_{t}\}\right)\right) \\ = n^{t}\int_{\left[0,1\right]^{d}}\mathrm{d}x_{1}\dots\int_{\left[0,1\right]^{d}}\mathrm{d}x_{t}\mathbb{E}f\left(x_1,\dots,x_{t},\mathcal{G}_{H}\left(\mathcal{Y}\right)\right)\end{gathered}$$ where $\mathcal{Y}\subset[0,1]^{d}$, $\mathbb{E}\left|\mathcal{Y}\right|=n$, and $\sum^{\neq}$ means the sum over all *ordered* $t$-tuples of distinct points in $\mathcal{Y}$.
To clarify, note that $(a,b)$ and $(b,a)$ are distinct *ordered* 2-tuples, but the same *unordered* 2-tuple.
In the case $t=2$ with $$\label{e:indicator1}
f\left(u,v,\mathcal{G}_{H}\left(\mathcal{Y}\right)\right) =: \mathbf{1}\{u \leftrightarrow v\}$$ a Bernoulli variate with parameter $H(\norm{u-v})$, then $$\label{e:meckeintegrand1}
\mathbb{E}f\left(u,v,\mathcal{G}_{H}\left(\mathcal{Y}\right)\right) = H\left(\norm{u-v}\right)$$ where the expectation is over all graphs $\mathcal{G}_{H}\left(\mathcal{Y}\right)$. These indicator functions are important for dealing with the existence of edges between points of $\mathcal{Y}$. We note this example for clarity; it is not itself required for what follows. The proof of Lemma \[l:mecke\] is obtained by conditioning on the number of points of $\mathcal{Y}$. Firstly, $$\begin{gathered}
\mathbb{E}\sum_{X_{1},\dots,X_{m} \in \mathcal{Y}}^{\neq}f\left(X_{1},\dots,X_{m},\mathcal{G}_{H}\left(\mathcal{Y}\setminus \{X_{1},\dots,X_{m}\}\right)\right) \nonumber \\
=\sum_{t=m}^{\infty}\left(\frac{e^{-n}n^{t}}{t!}\right)\left(t\right)_{m}\int_{[0,1]^{d}}\mathrm{d}x_{1}\dots \\ \dots \int_{[0,1]^{d}}\mathrm{d}x_{t}f\left(x_1 \dots x_{m},\mathcal{G}_{H}\left(\{x_{m+1}, \dots , x_{t}\}\right)\right)\end{gathered}$$ where $(n)_{k}=n(n-1)\dots(n-k+1)$ is the descending factorial. Bring the $m$-dimensional integral over positions of vertices in the $m$-tuple outside the sum, $$\begin{gathered}
n^{m}\int_{[0,1]^{d}}\mathrm{d}x_{1} \dots \int_{[0,1]^{d}}\mathrm{d}x_{m} \sum_{t=m}^{\infty}\left(\frac{e^{-n}n^{t-m}}{(t-m)!}\right) \\ \times \int_{[0,1]^{d}}\mathrm{d}y_{1}\dots \\ \dots\int_{[0,1]^{d}}\mathrm{d}y_{t-m}f\left(x_1 \dots x_{m},\mathcal{G}_{H}\left(\{y_{1}, \dots , y_{t-m}\}\right)\right), \nonumber\end{gathered}$$ and change variables to $r=t-m$, so that $$\begin{gathered}
n^{m}\int_{[0,1]^{d}}\mathrm{d}x_{1} \dots \int_{[0,1]^{d}}\mathrm{d}x_{m} \sum_{r=0}^{\infty}\left(\frac{e^{-n}n^{r}}{r!}\right)\int_{[0,1]^{d}}\mathrm{d}y_{1}\dots \\ \dots \int_{[0,1]^{d}}\mathrm{d}y_{r}f\left(x_1 \dots x_{m},\mathcal{G}_{H}\left(\{y_{1}, \dots , y_{r}\}\right)\right). \\ = n^{m}\int_{\left[0,1\right]^{d}}\mathrm{d}x_{1}\dots\int_{\left[0,1\right]^{d}}\mathrm{d}x_{m}\mathbb{E}f\left(x_1,\dots,x_{m},\mathcal{G}_{H}\left(\mathcal{Y}\right)\right) \nonumber\end{gathered}$$ as required.
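As a quick sanity check of the lemma (our own sketch, not part of the proof), the $t=2$ case with the edge indicator of Eq. \[e:indicator1\] says that the expected number of ordered connected pairs equals $n^{2}\int\int H(\norm{x_{1}-x_{2}})\mathrm{d}x_{1}\mathrm{d}x_{2}$; both sides can be estimated numerically.

```python
# Numerical check of the Mecke formula for t = 2 on [0,1]^2 (our own sketch).
import numpy as np

rng = np.random.default_rng(1)
beta, n = 2.0, 50.0                                # H(r) = exp(-beta r^2), mean point count n
H = lambda r: np.exp(-beta * r ** 2)

def ordered_connected_pairs():
    m = rng.poisson(n)
    pts = rng.uniform(size=(m, 2))
    r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    edges = np.triu(rng.random((m, m)) < H(r), 1)  # one Bernoulli trial per unordered pair
    return 2 * edges.sum()                         # ordered pairs = twice the edge count

lhs = np.mean([ordered_connected_pairs() for _ in range(2000)])

u, v = rng.uniform(size=(200000, 2)), rng.uniform(size=(200000, 2))
rhs = n ** 2 * H(np.linalg.norm(u - v, axis=1)).mean()
print(lhs, rhs)                                    # the two estimates agree up to Monte Carlo error
```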
We now provide a general formula for the expected number of $k$-hop paths between two points $x,y \in \mathbb{R}^{d}$.
Define a new Poisson point process $\mathcal{Y}^{\star}$ conditioned on containing two specific points $x,y \in \mathbb{R}^{d}$ at Euclidean distance $\norm{x-y}$ and set $x=z_0,y=z_k$. In a similar manner to Eq. \[e:indicator1\], define the *path-existence function* $g$ to be the following product $$\label{e:g}
g\left(z_1,\dots,z_{k-1},\mathcal{G}_{H}\left(\mathcal{Y}^{\star}\right)\right) = \prod_{i=0}^{k-1}\mathbf{1}\{z_{i} \leftrightarrow z_{i+1} \}$$ where the indicator is defined in Eq. \[e:indicator1\]. The expected value of this function is then just the product of the connection probabilities $H$ of the inter-point distance along the sequence $z_0,\dots,z_{k}$, i.e. $$\label{e:pathfunction}
\mathbb{E}g(z_1,\dots,z_{k-1},\mathcal{G}_{H}\left(\mathcal{Y}^{\star}\right)) = \prod_{i=0}^{k-1}H\left(\norm{z_{i}-z_{i+1}}\right)$$ From the Mecke formula $$\begin{gathered}
\label{e:khopmecke}
\mathbb{E}\sum_{X_1,\dots,X_{k-1} \in \mathcal{Y}^{\star}}^{\neq} g\left(X_1,\dots,X_{k-1},\mathcal{G}_{H}\left(\mathcal{Y}\setminus \{X_{1},\dots,X_{k-1}\}\right)\right) \\ = \rho^{k-1}\int_{\mathbb{R}^{dk-d}} \mathbb{E}g\left(z_1,\dots,z_{k-1}\right)\mathrm{d}z_1 \dots \mathrm{d}z_{k-1}\end{gathered}$$ and with Eq. \[e:pathfunction\] replacing the integrand on the right hand side, the proposition follows.
We now expand on the practically important situation where vertices connect with probability given by Eq. \[e:1\].
Using Eq. \[e:khopmecke\] with $H$ taken from Eq. \[e:1\], and in the case $d,\eta=2$, we have $$\begin{gathered}
\mathbb{E}\sum_{X_1,\dots,X_{k-1} \in \mathcal{Y}^{\star}}^{\neq} g\left(X_1,\dots,X_{k-1},\mathcal{G}_{H}\left(\mathcal{Y}^{\star} \setminus X_1,\dots,X_{k-1}\right)\right)\\=
\rho^{k-1}\int_{-\infty}^{\infty}\dots\int_{-\infty}^{\infty} \mathrm{d}z_{1_{x}}\mathrm{d}z_{1_{y}} \dots \mathrm{d}z_{\left(k-1\right)_{x}}\mathrm{d}z_{\left(k-1\right)_{y}} \\ \times \exp\left({-\beta\left(z_{1_{x}}^{2}+\dots+\left(\norm{x-y}-z_{\left(k-1\right)_{x}}\right)^{2}+z_{\left(k-1\right)_{y}}^{2}\right)}\right) \nonumber\end{gathered}$$ which, due to the addition of terms in the exponent, separates into a sequence of Gaussian integrals that can be performed one after another, each in $d=2$ variables.
For example, in the case of $k=2$ hops, we have $$\begin{gathered}
\mathbb{E}\sum_{X_1 \in \mathcal{Y}^{\star}} g\left(X_1,\mathcal{G}_{H}\left(\mathcal{Y}^{\star} \setminus X_1\right)\right) = \rho \int_{-\infty}^{\infty} \mathrm{d}z_{1_{x}} \int_{-\infty}^{\infty} \mathrm{d}z_{1_{y}} \\ \times \exp{\left(-\beta \left( z_{1_{x}}^{2} + z_{1_{y}}^{2} + \left(||x-y|| - z_{1_{x}}\right)^{2} + z_{1_{y}}^{2} \right) \right)} \nonumber\end{gathered}$$ which, by expanding the exponent, becomes $$\begin{aligned}
&& \rho \int_{-\infty}^{\infty}\exp{\left(-2\beta z_{1_{y}}^{2} \right)}\mathrm{d}z_{1_{y}} \int_{-\infty}^{\infty} \exp{\left(-\beta \left( z_{1_{x}}^{2} + \left(||x-y|| - z_{1_{x}}\right)^{2} \right) \right)}\mathrm{d}z_{1_{x}} \nonumber\\ &=& \frac{ \rho \pi}{8 \beta} \exp{\left(\frac{-\beta \norm{x-y}^2}{2} \right)}\left[\text{Erf}\left(\frac{\left(2z_{1_{x}}-\norm{x-y}\right)\sqrt{\beta}}{\sqrt{2}}\right)\right]^{\infty}_{-\infty} \left[\text{Erf}\left(\sqrt{2 \beta} z_{1_{y}}\right)\right]^{\infty}_{-\infty}\nonumber \\ &=& \frac{\rho\pi}{2 \beta} \exp{\left(\frac{-\beta \norm{x-y}^2}{2} \right)} \nonumber\end{aligned}$$ where the second integral on the right hand side of the second line can be performed by completing the square, and then substituting to obtain a Gaussian integral (which integrates to an error function). Due to the limits, the integrals evaluate in turn exactly for each $k$. Comparing these results, determined one by one, demonstrates the general form of Eq. \[e:khopexpectation\].
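The $k=2$ computation can also be checked by direct numerical quadrature; the following sketch (ours, with the infinite range truncated to a generous finite box) compares the two-dimensional integral against $\rho\pi/(2\beta)\exp(-\beta\norm{x-y}^{2}/2)$.

```python
# Quadrature check of the k = 2 case; requires scipy.
import math
from scipy.integrate import dblquad

rho, beta, d = 1.0, 1.0, 1.0
integrand = lambda zy, zx: math.exp(-beta * (zx**2 + zy**2 + (d - zx)**2 + zy**2))
numeric, _ = dblquad(integrand, -10, 10, lambda zx: -10, lambda zx: 10)
closed_form = (math.pi / (2 * beta)) * math.exp(-beta * d**2 / 2)
print(rho * numeric, rho * closed_form)   # both ~0.953 for these parameters
```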
The variance for $k=3$ {#sec:var}
======================
In this section we consider the variance of the number of paths of three sequential edges.
A similar technique to the one implemented here is used to derive the asymptotic variance of the number of edges in the random geometric graph $\mathcal{G}_{H}$ defined in Section \[sec:rcm\], see e.g. Section 2 of [@penrose20162].
The proof now follows. Consider $\mathbb{E}\sigma_{3}^{2}\left(\norm{x-y}\right)$. This is the expected number of ordered *pairs* of three hop paths between the fixed vertices $x$ and $y$. There are three non-overlapping contributions, $$\begin{aligned}
\label{e:threeterms}
\sigma_{3}^{2} = \Sigma_{0} + \Sigma_{1} + \Sigma_{2},\end{aligned}$$ where for $i=0,1,2$ the integer $\Sigma_{i}$ denotes the number of ordered pairs of three hop paths with $i$ vertices in common. Taking $g$ from Eq. \[e:g\], we can quickly evaluate the term $\Sigma_{0}$, which is the following sum over ordered 4-tuples of points in $\mathcal{Y}^{\star}$, $$\begin{aligned}
\Sigma_{0} = \sum_{V,W,X,Y \in \mathcal{Y}^{\star}}^{\neq}g\left(V,W\right)g\left(X,Y\right).\end{aligned}$$ The Mecke formula implies that $$\begin{aligned}
\label{e:sigmazero}
\mathbb{E}\Sigma_{0} = \rho^{4}\int_{\mathbb{R}^{8}}\mathbb{E}\left(g\left(z_1,z_2\right)g\left(z_3,z_4\right)\right)\mathrm{d}z_1\mathrm{d}z_2\mathrm{d}z_3\mathrm{d}z_4,\end{aligned}$$ and since, according to Eq. \[e:khopmecke\], we have $$\begin{aligned}
\mathbb{E}\sigma_{3}=\rho^{2}\int_{\mathbb{R}^{4}}\mathbb{E}g\left(z_1,z_2\right)\mathrm{d}z_1\mathrm{d}z_2\end{aligned}$$ then $\mathbb{E}\Sigma_{0}=\left(\mathbb{E}\sigma_{3}\right)^2$, which cancels with a term in the definition of the variance $\mathrm{Var}(\sigma_{3}) = \mathbb{E}(\sigma_{3}^2)-\left(\mathbb{E}(\sigma_{3})\right)^{2}$, such that we have the following simpler expression for the variance, based on Eq. \[e:threeterms\] and Eq. \[e:sigmazero\], $$\begin{aligned}
\label{e:vardecomposition}
\mathrm{Var}(\sigma_{3})= \mathbb{E}\Sigma_1 + \mathbb{E}\Sigma_2.\end{aligned}$$ Now, $\mathrm{Var}(\sigma_{3})$ will follow from a careful evaluation of $\Sigma_1$ and $\Sigma_2$. The first of these, $\Sigma_{1}$, can be broken down into two separate contributions, denoted $\Sigma_{1(1)}$ and $\Sigma_{1(2)}$.
Dealing first with $\Sigma_{1(1)}$, notice the left panel of Fig. \[fig:paths1\], which shows an intersecting pair of paths in $\mathcal{G}_{H}\left(\mathcal{Y}^{\star}\right)$ which share a single vertex $U$ which is itself connected by an edge to $y$. Many triples of points in $\mathcal{Y}^{\star} \setminus \{x,y\}$ display this property.
We want a sum of indicator functions which counts the number of pairs of three-hop paths which intersect at a single vertex, in precisely the manner of the left panel of Fig. \[fig:paths1\]. In the following double sum, the function $g(A,B)$ indicates that vertices $A$ and $B$ are on a three-hop path $x \leftrightarrow A \leftrightarrow B \leftrightarrow y$. Look with care at the limits of the sum. They are set up to only count pairs of paths (2-tuples) which intersect this very specific way: $$\begin{aligned}
\label{e:s1}
\Sigma_{1(1)} = \sum_{U \in \mathcal{Y}^{\star}} \sum_{W,Z \in \mathcal{Y}^{\star} \setminus \{U\} }^{\neq}g\left(U,W\right)g\left(U,Z\right),\end{aligned}$$ Via the Mecke formula, and with $U$ the position vector of the shared vertex, this can be written as an integral: $$\begin{gathered}
\label{e:s1int}
\rho\int_{\mathbb{R}^{d}}H\left(\norm{x-U}\right) \mathbb{E}\left[\left(\sum_{X \in \mathcal{Y}^{\star} \setminus \{U\} }\mathbf{1}\{U \leftrightarrow X\}\mathbf{1}\{X \leftrightarrow y\}\right)_{2}\right]\mathrm{d}U \\ + \rho\int_{\mathbb{R}^{d}}H\left(\norm{y-U}\right) \mathbb{E}\left[\left(\sum_{X \in \mathcal{Y}^{\star} \setminus \{U\} }\mathbf{1}\{U \leftrightarrow X\}\mathbf{1}\{X \leftrightarrow x\}\right)_{2}\right]\mathrm{d}U\end{gathered}$$ with $(a)_{2}=a(a-1)$. This descending factorial counts ordered pairs of distinct elements of a set with cardinality $a$. If there are $9$ two-hop paths, there are $9 \times 8=72$ pairs of paths which are not paths paired with themselves. Also, there are two terms in Eq. \[e:s1int\] because we can exchange $x$ and $y$, and get another structure on a single triple of points which must be counted as part of $\Sigma_{1}$. For the first term, the two three-hop paths hop first in unison from $x$ to $U$, then diverge to meet again finally at $y$. For the second, they diverge initially from each other, then unite at $U$ to hop in unison to $y$.
![image](s11) ![image](s123) ![image](s212)
The descending factorials in the two terms in Eq. \[e:s1int\] are just $(\sigma_{2}\left(\norm{U-y}\right))_{2}$ and $(\sigma_{2}\left(\norm{U-x}\right))_{2}$ respectively, i.e. with $m$ standing for either $y$ or $x$. Remembering that for $\Pi \sim \mathrm{Po}(\lambda)$ (i.e. distributed as a Poisson variate) we have $\mathbb{E}(\Pi)_{2}=\mathbb{E}\left(\Pi^2\right)-\mathbb{E}\left(\Pi\right)=\lambda^{2}$, it follows that $$\begin{aligned}
\label{e:squareofexpectation}
\mathbb{E}\left[\left(\sum_{X \in \mathcal{Y}^{\star} \setminus \{U\} }\mathbf{1}\{U \leftrightarrow X\}\mathbf{1}\{X \leftrightarrow m\}\right)_{2}\right] = \left(\mathbb{E}\sigma_2\left(\norm{U-m}\right)\right)^{2}.\end{aligned}$$ since the term inside the bracket on the left hand side is the number of two hop paths between $U$ and $m$, which is Poisson with expectation $\mathbb{E}\sigma_2\left(\norm{U-m}\right)$.
Eq. \[e:squareofexpectation\] is simply the expected number of ordered pairs of distinct two-hop paths from $U$ to $y$ (or from $U$ to $x$), so we now have $$\begin{gathered}
\label{e:s1gen}
\mathbb{E}\Sigma_{1(1)}=\rho^{3}\int_{\mathbb{R}^{d}}H\left(\norm{x-U}\right) \left(\int_{\mathbb{R}^{d}}H\left(\norm{U-z}\right)H\left(\norm{z-y}\right)\mathrm{d}z\right)^{2}\mathrm{d}U \\ + \rho^{3}\int_{\mathbb{R}^{d}}H\left(\norm{y-U}\right) \left(\int_{\mathbb{R}^{d}}H\left(\norm{U-z}\right)H\left(\norm{z-x}\right)\mathrm{d}z\right)^{2}\mathrm{d}U\end{gathered}$$ and for the case of Rayleigh fading taking $d,\eta=2$, Eq. \[e:s1gen\] evaluates to $$\begin{aligned}
\label{e:s1rayleigh}
\mathbb{E}\Sigma_{1(1)} = \frac{\pi^3\rho^3}{4\beta^3}\exp{\left( \frac{-\beta\norm{x-y}^2}{2}\right)},\end{aligned}$$ which appears as the second term in Eq. \[e:threevar\].
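Eq. \[e:s1rayleigh\] can be corroborated numerically. In the sketch below (our own check), the inner integral over $z$ in Eq. \[e:s1gen\] is just the two-hop integral computed above divided by $\rho$, equal to $(\pi/(2\beta))\exp(-\beta\norm{U-y}^{2}/2)$, and only the outer integral over $U$ is performed by quadrature on a truncated domain.

```python
# Numerical corroboration of Eq. [e:s1gen] against Eq. [e:s1rayleigh]; requires scipy.
import math
from scipy.integrate import dblquad

rho, beta, d = 1.0, 1.0, 1.0
x, y = (0.0, 0.0), (d, 0.0)

H = lambda r2: math.exp(-beta * r2)                                     # H as a function of squared distance
two_hop = lambda r2: (math.pi / (2 * beta)) * math.exp(-beta * r2 / 2)  # closed-form inner integral over z

def outer(uy, ux, a, b):
    # H(|a - U|) times the squared two-hop kernel from U to b
    return H((ux - a[0])**2 + (uy - a[1])**2) * two_hop((ux - b[0])**2 + (uy - b[1])**2)**2

term_xy, _ = dblquad(lambda uy, ux: outer(uy, ux, x, y), -10, 11, lambda ux: -10, lambda ux: 10)
term_yx, _ = dblquad(lambda uy, ux: outer(uy, ux, y, x), -10, 11, lambda ux: -10, lambda ux: 10)
numeric = rho**3 * (term_xy + term_yx)
closed_form = (math.pi**3 * rho**3 / (4 * beta**3)) * math.exp(-beta * d**2 / 2)
print(numeric, closed_form)   # should agree to several digits
```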
Now consider $\Sigma_{1(2)}$. This is designed to count pairs of paths which share a single vertex, but in a different way to $\Sigma_{1(1)}$. This new sort of intersection structure is depicted in the middle panel of Fig. \[fig:paths1\]. Consider with care, and in relation to this middle panel, the following sum over triples of points: $$\label{e:s12}
\sum_{U \in \mathcal{Y}^{\star}} \sum_{Z \in \mathcal{Y}^{\star} \setminus \{W\} }\mathbf{1}\{x \leftrightarrow Z\}\mathbf{1}\{Z \leftrightarrow U\}\mathbf{1}\{U \leftrightarrow y\}\sum_{W \in \mathcal{Y}^{\star} \setminus \{Z\}}\mathbf{1}\{x \leftrightarrow U\}\mathbf{1}\{U \leftrightarrow W\}\mathbf{1}\{W \leftrightarrow y\}. \nonumber$$ This should count the contribution $\Sigma_{1(2)}$.
![image](firstgraphic)
In a similar manner to the evaluation of $\Sigma_{1(1)}$, the two inner sums are in fact just counting the number of two hop paths between $x$ and $U$, and also $U$ and $y$, then pairing them with each other, this time including self pairs so there is no descending factorial. The Mecke formula gives the expectation as the following integral: $$\begin{gathered}
\label{e:s12int}
\mathbb{E}\Sigma_{1(2)}=\rho\int_{\mathbb{R}^{d}}H\left(\norm{x-U}\right)H\left(\norm{U-y}\right) \\ \times \mathbb{E}\left[\sum_{Z \in \mathcal{Y}^{\star} \setminus \{W\} }\mathbf{1}\{x \leftrightarrow Z\}\mathbf{1}\{Z \leftrightarrow U\}\sum_{W \in \mathcal{Y}^{\star} \setminus \{Z\}}\mathbf{1}\{U \leftrightarrow W\}\mathbf{1}\{W \leftrightarrow y\}\right]\mathrm{d}U \\ = \rho\int_{\mathbb{R}^{d}}H\left(\norm{x-U}\right)H\left(\norm{U-y}\right) \\ \times \mathbb{E}\left[\sum_{Z \in \mathcal{Y}^{\star} \setminus \{W\} }\mathbf{1}\{x \leftrightarrow Z\}\mathbf{1}\{Z \leftrightarrow U\}\right]\mathbb{E}\left[\sum_{W \in \mathcal{Y}^{\star} \setminus \{Z\}}\mathbf{1}\{U \leftrightarrow W\}\mathbf{1}\{W \leftrightarrow y\}\right]\mathrm{d}U\end{gathered}$$ since the two sums are independent.
\label{e:s1int2} \rho\int_{\mathbb{R}^{d}}H\left(\norm{x-U}\right)H\left(\norm{U-y}\right) \mathbb{E}\left(\sigma_{2}\left(\norm{x-U}\right)\right)\mathbb{E}\left(\sigma_{2}\left(\norm{U-y}\right)\right)\mathrm{d}U.\end{aligned}$$ $\Sigma_{1(2)}$ in terms of a general connection function is therefore $$\begin{gathered}
\label{e:s12gen}
\mathbb{E}\Sigma_{1(2)}=\rho^{3}\int_{\mathbb{R}^{d}}H\left(\norm{x-U}\right) H\left(\norm{U-y}\right)\\ \times \left(\int_{\mathbb{R}^{d}}H\left(\norm{x-z}\right)H\left(\norm{z-U}\right)\mathrm{d}z\right) \\ \times \left(\int_{\mathbb{R}^{d}}H\left(\norm{U-z}\right)H\left(\norm{z-y}\right)\mathrm{d}z\right)\mathrm{d}U\end{gathered}$$ and for the case of Rayleigh fading taking $d,\eta=2$, the third term on the right hand side of Eq. \[e:threevar\] is $$\begin{aligned}
\label{e:s12rayleigh}
\mathbb{E}\Sigma_{1(2)} = \frac{\pi^{3}\rho^{3}}{6\beta^{3}}\exp{\left( \frac{-3\beta\norm{x-y}^2}{4}\right)}\end{aligned}$$ by integrating the product of exponentials.
There are two more terms, $\Sigma_{2(1)}$ and $\Sigma_{2(2)}$. These both correspond to pairs of paths which share two vertices. Firstly, $\Sigma_{2(1)}$ refers to pairs of paths which share two vertices and all their edges, and so $$\begin{aligned}
\label{e:s21rayleigh}
\mathbb{E}\Sigma_{2(1)}=\mathbb{E}\sigma_{3}\end{aligned}$$ since there is a pair of paths for each path, specifically the self-pair. Secondly, $\Sigma_{2(2)}$ refers to pairs which share all their vertices, but not all their edges. This pairing is depicted in the right panel of Fig \[fig:paths1\]. For this term, we use the following sum, in a similar manner to Eqs. \[e:s1int\] and \[e:s12\] $$\begin{aligned}
\label{e:s22}
\Sigma_{2(2)} = \sum_{Z,W \in \mathcal{Y}^{\star}}^{\neq}\mathbf{1}\{x \leftrightarrow Z\}\mathbf{1}\{Z \leftrightarrow W\}\mathbf{1}\{W \leftrightarrow y\}\mathbf{1}\{x \leftrightarrow W\}\mathbf{1}\{W \leftrightarrow Z\}\mathbf{1}\{Z \leftrightarrow y\}.\end{aligned}$$ Now, the shared edge indicator is $\mathbf{1}\{Z \leftrightarrow W\}$, which appears twice, and so, in a similar manner to the other terms where $W$ and $Z$ are the $d$-dimensional position vectors of the nodes as well as their labels, $$\begin{gathered}
\label{e:s22int}
\mathbb{E}\Sigma_{2(2)} = \rho^{2}\int_{\mathbb{R}^{d}}H\left(\norm{x-Z}\right)H\left(\norm{Z-W}\right) \\ H\left(\norm{W-y}\right)H\left(\norm{x-W}\right)H\left(\norm{Z-y}\right)\mathrm{d}W\mathrm{d}Z.\end{gathered}$$ Only once all five links form do these pairs appear, so they are rare, and $\mathbb{E}\Sigma_{2(2)}$ is relatively small. Note that extensive counts of pairs of paths with this property can be an indication of proximity, a point we expand upon in Section \[sec:discussion\]. Finally, therefore, the last term in Eq. \[e:threevar\] is $$\begin{aligned}
\label{e:s22rayleigh}
\mathbb{E}\Sigma_{2(2)} = \frac{\pi^2\rho^2}{8\beta^2}\exp{\left(-\beta\norm{x-y}^2\right)}\end{aligned}$$ via evaluating Eq. \[e:s22int\], and via Eq. \[e:vardecomposition\] and then Eqs. \[e:s1rayleigh\], \[e:s12rayleigh\], \[e:s21rayleigh\] and \[e:s22rayleigh\], the theorem follows.
We numerically corroborate these formulas via Monte Carlo simulations, the results of which are presented in the top-left panel of Fig. \[fig:exp1\].
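A sketch of the kind of Monte Carlo experiment involved is given below (our own code, with an arbitrary truncation box): two vertices $x,y$ are planted at distance $\norm{x-y}$, the Poisson process and the Rayleigh-fading edges are sampled, the three-hop paths through ordered pairs of distinct Poisson points are counted, and the sample mean and variance of this count are compared with Eqs. \[e:khopexpectation\] and \[e:threevar\].

```python
# Monte Carlo check of the mean and variance of sigma_3 (our own sketch;
# runs in a few seconds).
import numpy as np

rng = np.random.default_rng(2)
rho, beta, d, L, reps = 1.0, 1.0, 1.0, 10.0, 10000   # truncation box [0, L]^2

def count_three_hop_paths():
    x, y = np.array([L/2 - d/2, L/2]), np.array([L/2 + d/2, L/2])
    pts = rng.uniform(0.0, L, size=(rng.poisson(rho * L * L), 2))
    ex = rng.random(len(pts)) < np.exp(-beta * np.sum((pts - x) ** 2, axis=1))   # x <-> a
    ey = rng.random(len(pts)) < np.exp(-beta * np.sum((pts - y) ** 2, axis=1))   # b <-> y
    r2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
    tri = np.triu(rng.random(r2.shape) < np.exp(-beta * r2), 1)                  # a <-> b, one trial per pair
    emid = tri | tri.T
    return int(np.sum(ex[:, None] & emid & ey[None, :]))                         # ordered (a, b), a != b

counts = np.array([count_three_hop_paths() for _ in range(reps)])
mean_theory = (1 / 3) * (rho * np.pi / beta) ** 2 * np.exp(-beta * d ** 2 / 3)
var_theory = (mean_theory
              + (np.pi ** 3 * rho ** 3 / beta ** 3) * (np.exp(-beta * d ** 2 / 2) / 4
                                                       + np.exp(-3 * beta * d ** 2 / 4) / 6)
              + (np.pi ** 2 * rho ** 2 / (8 * beta ** 2)) * np.exp(-beta * d ** 2))
print(counts.mean(), mean_theory)   # ~2.36 for these parameters
print(counts.var(), var_theory)     # ~9.95 for these parameters
```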
Discussion {#sec:discussion}
==========
![*Top*: Both plots take $\beta,\norm{x-y}=1$, though the lower curve in the lower figure takes $\beta=3/2$. We approximate the numerically obtained path existence at various point process densities (solid curves) with sums of factorial moments of $\sigma_{k}$, also numerically obtained (dots), for order $3,4$ and $5$. *Bottom*: Numerically obtained path existence for three hops. We approximate with the sum of 80 factorial moments in each case (dots). We have the analytical points for orders $0,1$ i.e. in terms up to the variance. When the paths are rare, the approximations work better with only a few moments available. These are, as far as we are aware, the most accurate approximations available in the literature.[]{data-label="fig:moments"}](twohopplot "fig:")\
![*Top*: Both plots take $\beta,\norm{x-y}=1$, though the lower curve in the lower figure takes $\beta=3/2$. We approximate the numerically obtained path existence at various point process densities (solid curves) with sums of factorial moments of $\sigma_{k}$, also numerically obtained (dots), for order $3,4$ and $5$. *Bottom*: Numerically obtained path existence for three hops. We approximate with the sum of 80 factorial moments in each case (dots). We have the analytical points for orders $0,1$ i.e. in terms up to the variance. When the paths are rare, the approximations work better with only a few moments available. These are, as far as we are aware, the most accurate approximations available in the literature.[]{data-label="fig:moments"}](prob5 "fig:")
The main point of discussion here is how to relate the moments of the path count $\sigma_{k}$ to the point probabilities, such as the probability exactly zero paths exist. This is in fact the classic moment problem of mathematical analysis initiated by Thomas J. Stieltjes in $1894$. From the theoretical point of view, the problem has been widely studied and solved many years ago. Various famous mathematicians have contributed, including P. Chebyshev, A. Markov, J. Shohat, M. Frechet, H. Hamburger, M. Riesz, F. Hausdorff and M. Krein [@gavriliadis2009].
We use a relation between the factorial moments of the path count distribution and the point probabilities, as stated in the introduction: $$\begin{aligned}
P(\sigma_{k}=t) = \frac{1}{t!}\sum_{i \geq 0} \frac{(-1)^{i}}{i!}\mathbb{E}\sigma_{k} (\sigma_{k}-1) \dots (\sigma_{k}-t-i+1),\end{aligned}$$ from which we can deduce the probability of at least one $k$-hop path $$\begin{aligned}
P(\sigma_{k}>0) = 1 - \sum_{i \geq 0}\frac{(-1)^{i}}{i!}\mathbb{E}\left[\left(\sigma_{k}\right)_{i}\right]\end{aligned}$$ where $\left(\sigma_{k}\right)_{i}$ is the descending factorial. The partial sums alternately upper and lower bound this path-existence probability. We show this in Fig. \[fig:moments\], both for the case $k=2$ and the first non-trivial case $k=3$, since $\sigma_{k}$ is no longer Poisson for $k>2$. Adding more moments increases the order of the approximation to our desired probability $P(\sigma_{k}>0)$.
We are unable to deduce similar formulas for $k>3$ at this point, but the theory is precisely the same, and does not appear to present any immediate problems. The higher factorial moments, which require knowledge of $\mathbb{E}\sigma_{k}^{3}$ and beyond, are similarly accessible. What is not yet understood is how to form a recursion relation on the moments, which should reveal a “general theory” of their form. As such, it may be possible to give as much information as required about $P(\sigma_{k}>0)$ for all $k$. We leave the development of this to a later paper, but highlight that this would give the best possible bounds on the diameter of a random geometric graph in terms of a general connection problem. It remains a key open problem to describe analytically the $r$th factorial moment of $\sigma_{k}$.
We also have the scaling of the mean and variance of the path count distribution in terms of the node density $\rho$ and the path length $k$. With $\beta,\norm{x-y}$ fixed, the expected number of paths is $\mathcal{O}(\rho^{k-1})$, while the variance appears to be $\mathcal{O}(\rho^{k})$. We have only verified this for $k=3$. We numerically obtain the probability mass of $\sigma_{3}$ in the top-right panel of Fig. \[fig:exp1\] by generating $10^{5}$ random graphs and counting all three-hop paths between two extra vertices added at a fixed distance $\norm{x-y}$, taking $\rho=2$ for $\beta=0.7,0.5$ and $0.3$. Plotted for comparison is the mass of a Poisson distribution with mean given by Eq. \[e:khopexpectation\], which is a spatially independent test case.
It is beyond the scope of this article to analyse the implications this new approximation method will have on e.g. low power localisation in wireless networks, broadcasting, or other areas of application, as discussed in the introduction. We defer this to a later study.
Conclusion {#sec:conclusion}
==========
In a random geometric graph known as the random connection model, we derived the mean of the number of $k$-hop paths between two nodes $x,y$ at displacement $\norm{x-y}$ for general $k$, and the variance for $k \leq 3$. We also provided closed forms for the practically important example case where Rayleigh fading statistics are observed. This shows how the variance of the number of paths is in fact composed of four terms, no matter what connection function is used. This provides an approximation to the probability that a $k$-hop path exists between distant vertices, and provides a technique, via summing factorial moments, for formulating an accurate approximation in general, which is a sort of correction to a mean field model. This works toward addressing a recent problem of Mao and Anderson [@mao2010]. Our results can for example be applied in the industrially important field of connectivity-based localisation, where internode distances are estimated without ranging with ultra-wideband sensors, as well as in mathematical problems related to bounding the broadcast time over unknown topologies.
Acknowledgements {#acknowledgements .unnumbered}
================
This work is supported by Samsung Research Funding and Incubation Center of Samsung Electronics under Project Number SRFC-IT-1601-09. The first author also acknowledges support from the EPSRC Institutional Sponsorship Grant 2015 *Random Walks on Random Geometric Networks*. Both authors wish to thank Mathew Penrose, Kostas Koufos, David Simmons, Leo Laughlin, Georgie Knight, Orestis Georgiou, Carl Dettmann, Justin Coon and Jon Keating for helpful discussions.
G. Mao, Z. Zhang, and B. Anderson, “[Probability of k-Hop Connection under Random Connection Model]{},” *Communications Letters, IEEE*, vol. 14, no. 11, pp. 1023–1025, 2010.
J. Díaz, D. Mitsche, G. Perarnau, and X. Pérez-Giménez, “On the relation between graph distance and euclidean distance in random geometric graphs,” *Advances in Applied Probability*, vol. 48, no. 3, p. 848–864, 2016.
X. Ta, G. Mao, and B. Anderson, “[On the Probability of k-hop Connection in Wireless Sensor Networks]{},” *IEEE Communications Letters*, vol. 11, no. 9, 2007.
G. Mao and B. D. Anderson, “[On the Asymptotic Connectivity of Random Networks under the Random Connection Model]{},” *[INFOCOM, Shanghai, China]{}*, p. 631, 2011.
B. Sklar, “Rayleigh fading channels in mobile digital communication systems. Part I: Characterization,” *IEEE Communications Magazine*, vol. 35, no. 7, 1997.
A. P. Giles, O. Georgiou, and C. P. Dettmann, “[Connectivity of Soft Random Geometric Graphs over Annuli]{},” *The Journal of Statistical Physics*, vol. 162, no. 4, p. 1068–1083, 2016.
S. Chandler, “[Calculation of Number of Relay Hops Required in Randomly Located Radio Network]{},” *Electronics Letters*, vol. 25, no. 24, pp. 1669–1671, 1989.
C. Bettstetter and J. Eberspacher, “Hop distances in homogeneous ad hoc networks,” in *The 57th IEEE Semiannual Vehicular Technology Conference (VTC), 2003*, vol. 4, 2003, pp. 2286–2290.
M. Bradonjić, R. Elsässer, T. Friedrich, T. Sauerwald, and A. Stauffer, *Efficient Broadcast on Random Geometric Graphs*, 2010.
B. Bollobás, *Random Graphs*, 2nd ed., ser. Cambridge Studies in Advanced Mathematics. Cambridge University Press, 2001.
M. Penrose, “[Random graphs: The Erdős-Rényi model]{},” in *Talk at Oberwolfach Workshop on Stochastic Analysis for Poisson Point Processes, February 2013, available at the homepage of the author*, 2013.
S. K. Iyer, “[Connecting the Random Connection Model]{},” *preprint arXiv:1510.05440*, 2015.
G. Mao and B. D. O. Anderson, “Connectivity of large wireless networks under a general connection model,” *IEEE Transactions on Information Theory*, vol. 59, no. 3, pp. 1761–1772, 2013.
M. D. Penrose, “[Connectivity of Soft Random Geometric Graphs]{},” *The Annals of Applied Probability*, vol. 26, no. 2, pp. 986–1028, 2016.
J. Coon, C. Dettmann, and O. Georgiou, “[Full Connectivity: Corners, Edges and Faces]{},” *The Journal of Statistical Physics*, vol. 147, no. 4, pp. 758–778, 2012.
M. D. Penrose, *Random Geometric Graphs*. Oxford University Press, 2003.
——, “[The Longest Edge of the Random Minimal Spanning Tree]{},” *The Annals of Applied Probability*, vol. 7, no. 2, pp. 340–361, 1997.
P. Clifford, “[Markov Random Fields in Statistics]{},” in *[Disorder in Physical Systems: A Volume in Honour of John M. Hammersley]{}*, G. Grimmett and D. Welsh, Eds. Cambridge University Press, 1990.
I. P. Goulden and D. M. Jackson, *Combinatorial Enumeration*. Dover Publications, 2004.
A. P. Kartun-Giles, G. Knight, O. Georgiou, and C. P. Dettmann, “The electrical resistance of a one-dimensional random geometric graph,” *In preparation*, 2017.
G. Knight, A. P. Kartun-Giles, O. Georgiou, and C. P. Dettmann, “Counting geodesic paths in 1-d vanets,” *IEEE Wireless Communications Letters*, vol. 6, no. 1, pp. 110–113, 2017.
S. Dulman, M. Rossi, P. Havinga, and M. Zorzi, “On the hop count statistics for randomly deployed wireless sensor networks,” *Int. J. Sen. Netw.*, vol. 1, no. 1/2, pp. 89–102, Sep. 2006.
X. Ge, S. Tu, G. Mao, C. X. Wang, and T. Han, “[5G Ultra-Dense Cellular Networks]{},” *IEEE Wireless Communications*, vol. 23, no. 1, pp. 72–79, 2016.
S. C. Ng and G. Mao, “Analysis of k-hop connectivity probability in 2-d wireless networks with infrastructure support,” in *2010 IEEE Global Telecommunications Conference GLOBECOM 2010*, 2010, pp. 1–5.
A. Zemlianov and G. de Veciana, “Capacity of ad hoc wireless networks with infrastructure support,” *IEEE Journal on Selected Areas in Communications*, vol. 23, no. 3, pp. 657–667, 2005.
E. Perevalov, R. Blum, A. Nigara, and X. Chen, “Route discovery and capacity of ad hoc networks,” in *GLOBECOM ’05. IEEE Global Telecommunications Conference, 2005*, vol. 5, 2005, 6 pp.
C. Nguyen, O. Georgiou, and Y. Doi, “[Maximum Likelihood Based Multihop Localization in Wireless Sensor Networks]{},” in *Proceedings. IEEE ICC, London, UK*, 2015.
R. Elsässer and T. Sauerwald, *Broadcasting vs. Mixing and Information Dissemination on Cayley Graphs*, 2007, pp. 163–174.
D. Peleg, “Time-efficient broadcasting in radio networks: A review,” in *International Conference on Distributed Computing and Internet Technology*, 2007, pp. 1–18.
P. Jagers, “On palm probabilities,” *Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete*, vol. 26, no. 1, pp. 17–32, 1973.
M. D. Penrose, “Lectures on random geometric graphs,” in *Random Graphs, Geometry and Asymptotic Structure*, N. Fountoulakis and D. Hefetz, Eds. Cambridge University Press, 2016, pp. 67–101.
M. Haenggi, J. Andrews, F. Baccelli, O. Dousse, and M. Franceschetti, “Stochastic geometry and random graphs for the analysis and design of wireless networks,” *IEEE Journal on Selected Areas in Communications*, vol. 27, no. 7, pp. 1029–1046, 2009.
H. ElSawy, A. Sultan-Salem, M. S. Alouini, and M. Z. Win, “Modeling and analysis of cellular networks using stochastic geometry: A tutorial,” *IEEE Communications Surveys Tutorials*, vol. 19, no. 1, pp. 167–203, 2017.
P. Gavriliadis and G. Athanassoulis, “Moment information for probability distributions, without solving the moment problem, ii: Main-mass, tails and shape approximation,” *Journal of Computational and Applied Mathematics*, vol. 229, no. 1, pp. 7 – 15, 2009. \[Online\]. Available: <http://www.sciencedirect.com/science/article/pii/S037704270800513X>
[^1]: Alexander P. Kartun-Giles and Sunwoo Kim are with the Department of Electronic Engineering, Hanyang University, Seoul, South Korea.
---
abstract: 'Cultural activity is an inherent aspect of urban life and the success of a modern city is largely determined by its capacity to offer generous cultural entertainment to its citizens. To this end, the optimal allocation of cultural establishments and related resources across urban regions becomes of vital importance, as it can reduce financial costs in terms of planning and improve quality of life in the city, more generally. In this paper, we make use of a large longitudinal dataset of user location check-ins from the online social network WeChat to develop a data-driven framework for cultural planning in the city of Beijing. We exploit rich spatio-temporal representations on user activity at cultural venues and use a novel extended version of the traditional latent Dirichlet allocation model that incorporates temporal information to identify latent patterns of urban cultural interactions. Using the characteristic typologies of mobile user cultural activities emitted by the model, we determine the levels of demand for different types of cultural resources across urban areas. We then compare those with the corresponding levels of supply as driven by the presence and spatial reach of cultural venues in local areas to obtain high resolution maps that indicate urban regions with lack of cultural resources, and thus give suggestions for further urban cultural planning and investment optimisation.'
author:
- Xiao Zhou
- Anastasios Noulas
- Cecilia Mascolo
- Zhongxiang Zhao
title: Discovering Latent Patterns of Urban Cultural Interactions in WeChat for Modern City Planning
---
Introduction
============
The opportunity to enjoy cultural and entertainment activities is an essential element of urban life. Cultural spending is a considerable fraction of the annual budget of a city, as it is a key catalyst of social life and an important quality of life indicator in urban environments. Typically, in megacities like Beijing, London or New York, the financial resources allocated to support cultural events and related urban development (e.g. museum construction and maintenance) are of the order of tens of millions of dollars annually [@London]. Furthermore, in today’s inter-connected world, the opportunity to experience a diverse set of cultural activities is amplified; citizens equipped with mobile devices can utilise a wide range of mobile applications and services that inform them on on-going social and cultural events, as well as on the best areas in the city to explore culture. In this setting, urban culture explorers are also mobile users who emit digital traces of where and when they are participating in cultural events. The new window onto the cultural life of a city, opened by the availability of novel sources of mobile user data, paves the way to the development of new monitoring technologies of urban cultural life. This can power evidence-based cultural policy design and optimise urban planning decisions. As an example, by tracking the interactions of mobile users with cultural venues across space and time, fine-grained indicators of areas in the city where there is a lack, or an excessive supply, of cultural resources can be devised.
In this paper, we exploit a large set of spatio-temporal footprints of users in the online social network WeChat to obtain patterns of urban cultural interactions in the city of Beijing. We first devise a latent Dirichlet allocation (LDA) [@blei2003latent] based method that takes as input check-ins at venues over time to identify clusters of mobile users with common cultural profile patterns. Having identified the geographic spread of check-in activity of each user cluster, we then estimate the primary locations of users in a cluster, in terms of home or work, and evaluate the degree of accessibility to cultural venue resources for a user. Next, we empirically demonstrate how the supply-demand balance of cultural services in a city can be highly skewed and we pinpoint in cartographic terms the areas in the urban territory where supply could improve through appropriate investment. In more detail, we make the following contributions:
- **Cultural patterns extraction from check-in data:** We obtain raw representations of time-stamped check-in data at WeChat venues and use those as input to a novel extended version of the standard LDA model that takes into account temporal information (TLDA) on when venues of certain cultural categories are visited. We evaluate the performance of the model with a novel metric of coherence between top cultural venue categories observed in a cluster of users and the time periods of activity that are characteristic to each pattern. Using the metric we optimise the TLDA model and identify the presence of six latent cultural patterns in Beijing, each of which bears characteristic spatio-temporal manifestations of user activity at cultural venues. The description of the TLDA model and the corresponding data representations are described in Section \[sec:patterns\].
- **Determining cultural demand patterns of users spatially:** Having obtained the latent cultural patterns through the TLDA model, we then exploit the frequencies of user check-ins across space to determine the *levels of demand* of cultural activities in different areas of the city. In this context, we present POPTICS, a user-personalised version of the OPTICS algorithm [@ankerst1999optics], used here to identify clusters of user activity hotspots. These primary locations of user activity are the means to quantifying demand levels for cultural resources spatially. Overall, the output of this process corresponds to a set of heat maps depicting the intensity levels of user activity geographically for each of the cultural pattern emitted by the TLDA model. We consider such intensity levels to reflect user driven demand of cultural resources geographically and present our results in Section \[sec:demand\].
- **Identification of areas that lack cultural offering:** In addition to obtaining spatial descriptions of the demand levels for each cultural pattern observed in the city, we determine the *supply levels* of cultural resources using the spatial distribution of cultural venues and users’ check-ins belonging to each TLDA pattern as input. For each region in the city, we obtain a demand-supply ratio (DSR), high values of which are indicative of an area lacking cultural venues, whereas low values suggest oversupply of cultural establishments in a region. In Section \[sec:experiments\] we generate precision maps of such supply and demand patterns for each cultural pattern and demonstrate how users who live in high-DSR neighbourhoods but adhere to a specific cultural pattern travel longer distances in the city to access the resources they are interested in. The latter is an indication of how lack of resources in an area translates to larger travel distance for its residents.
In summary, the methodology put forward in the present work paves the way to novel data driven urban cultural planning schemes that dynamically adapt to the profiles of residents active in city regions. Such schemes exploit the rich characteristics of new digital data sources and have the potential to improve over planning decisions that currently tend to rely solely on residential population density and are agnostic to both user preferences and fine grained temporal signatures of user behaviour.
Related Work
============
The flourishing of location technology services (LTS) [@hasan2014urban; @bauer2012talking] and the distinct advantage of topic modeling [@blei2003latent] in uncovering latent patterns have encouraged researchers to employ them as data sources and methods, respectively, in large-scale urban human mobility studies. From the perspective of the users, [@jiang2015author] proposed a topic-based model to recommend personalized points of interest (POIs) for tourists. [@yuan2013we] leveraged topic modeling to learn lifestyles for individuals based on their digital footprints and social links. [@kurashima2013geo] established a geo-topic model for location recommendation that takes the activity areas of users into consideration. [@yin2013lcars] proposed a recommender system for both venues and events according to personal preferences. From an urban computing perspective, [@bauer2012talking] analysed the comments published alongside Foursquare check-ins to detect the topics in neighbourhoods. [@yuan2012discovering] utilised human mobility flows between POIs to identify land use functions for urban regions.
Even though topic modeling has been shown to be an effective unsupervised approach to discovering latent mobility patterns in the existing literature, it still has some limitations. Firstly, urban activities are time-sensitive: citizens show strong time preferences for different activities [@hasan2014urban; @bauer2012talking]. However, these temporal behavioural patterns in urban daily life can hardly be captured by topic models, as classical topic modeling does not have time factors integrated [@chen2017effective]. Some existing works considering temporal characteristics did not modify the model structure fundamentally [@bauer2012talking], leaving the interactions between urban activities and temporal features largely untapped. Secondly, evaluation methods are missing in the application of topic modeling to human mobility studies. The number of topics was either selected without testing [@hasan2015location; @bauer2012talking; @yuan2013we] or by directly using traditional evaluation methods from text mining [@hu2013spatio]. Thirdly, few published research findings obtained through topic modeling can be used in practical urban planning. And finally, current publications focus on mining the patterns of all types of urban activities in general, without looking at particular subgroups of people or urban venues.
Our research differs from existing works in the following aspects: i) We integrate a time parameter into the standard topic model to gain insights into the temporal behavioural patterns associated with people's cultural activities. ii) We devise a novel evaluation method for the temporal topic model to assess its performance quantitatively. iii) We detect the heterogeneity in cultural demand and supply levels across the city environment. To the best of our knowledge, this is the first methodological work to mine urban cultural patterns from large-scale location services data for practical urban cultural planning.
![image](1Research_F)
Approach at a Glance
====================
The research framework outlined in Figure \[fig:framework\] includes two main phases: cultural patterns extraction, followed by spatial distribution of cultural patterns. These two phases as a whole provide an integrated approach to optimising the cultural resources allocation in cities and refining the urban cultural planning scheme by using LTS data as input. To this aim, we propose three models tailored to the characteristics of spatio-temporal social network data.
[**Cultural Patterns Extraction**]{}: Cultural patterns extraction begins with raw check-in data processing. We build the tuple $ (u,v,t) $ to denote each qualified check-in record, indicating that user $ u $ visited venue category $ v $ at time $ t $. The whole set of check-ins formatted in this way is stored in a data cube $(U,V,T)$ displaying the check-in history of users in the city before being sent to the TLDA model as input data. At this stage, the temporal coherence value (TCV) measure is designed to evaluate the performance of TLDA and choose the optimal input parameter $ K $, which denotes the number of latent patterns $ Z $. The TLDA model applied in this research enables us to essentially group $ U $, $ T $, and $ V $ into $ K $ cultural patterns and discover the associations between users, time, and various cultural activities.
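For illustration, the following minimal sketch (assuming a pandas DataFrame of raw check-in logs with hypothetical column names) shows how the $ (u,v,t) $ tuples could be assembled into a sparse $(U,V,T)$ count cube:

```python
import pandas as pd

# Hypothetical raw log: one row per check-in with user id, venue category and timestamp.
raw = pd.DataFrame({
    "user_id":   ["u1", "u1", "u2"],
    "venue_cat": ["Concert hall", "Park", "Gym"],
    "timestamp": pd.to_datetime(["2017-07-21 20:05", "2016-10-09 10:30", "2017-04-07 18:10"]),
})

# Each qualified record becomes a (u, v, t) tuple; time is kept at hourly resolution here.
checkins = [(r.user_id, r.venue_cat, r.timestamp.floor("H")) for r in raw.itertuples()]

# The whole set of tuples forms a sparse (U, V, T) count cube.
cube = (pd.DataFrame(checkins, columns=["u", "v", "t"])
          .groupby(["u", "v", "t"]).size().rename("count"))
print(cube)
```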
[**Spatial Distribution of Cultural Patterns**]{}: Building on the output of TLDA, we further devise the POPTICS algorithm and the demand-supply interaction (DSI) model to get a general picture of how cultural check-ins and venues belonging to different patterns are distributed spatially in the city, so as to uncover the gap between user-side demand and venue-side supply for different cultural patterns. Our primary assumption here is that the cultural demand of users and the supply capability of cultural venues are heterogeneous across regions, so that by mapping the demand-supply ratio, we can infer whether a type of cultural resource is relatively insufficient or over-supplied in the city.
Datasets {#dataset}
========
WeChat[^1] is a mobile social application launched by Tencent in January 2011 that has become the most popular mobile instant messaging application dominating the Chinese market. According to the latest data report published by the WeChat team at the Tencent Global Partners Conference [@WeChat17], by the end of September 2017, it had 902 million average daily logged-in users sending 38 billion messages in total every day. WeChat’s ’Moments’ function, which is an equivalent of Facebook’s timeline feature, allows users to share their status or anything of interest via photos, text, videos, or web links with their contacts. When a user posts on Moments, a time stamp is generated automatically. Additionally, the user has the option to share their current place from a list of pre-selected locations nearby in WeChat. This real-time location-based service provided by ’Moments’ depicts the routine lives of users at a fine-grained spatio-temporal scale, and provides a valuable dataset that enables us to discover urban activity patterns.
**Advantages of WeChat Dataset.** For our analysis, the WeChat dataset possesses a number of natural advantages:
- *High population coverage levels in cities.* According to a survey by Tencent in September 2015, 93% of the population in the first-tier cities in China[^2] were WeChat users. As for the selected case study in this research, Beijing has 21,136,081 monthly active users[^3] according to our statistics, making up 97.4% of the residents[^4] in September 2017. This high popularity makes the observation of users’ mobility patterns through the lens of WeChat a reliable proxy to the real mobility of Beijing residents.
- *Wide age distribution of users.* Compared with many other social media services, the WeChat penetration among middle-aged and senior users is relatively high. Although people born in the 80s and 90s are still the major groups, the monthly active WeChat users between 55-70 years old in September 2017 were approximately 50 million. In other words, WeChat is a representative social media data source reflecting a wide range of age groups in the general population.
- *Private circle visibility.* With an in-group design, the social circle of a user on WeChat mainly comprises relatives, friends, and colleagues who have a close relationship with them in daily life. Additionally, WeChat empowers the user with the right to control exactly who has access to each single post on their ’Moments’. This powerful feature creates a secure and private environment that encourages WeChat users to communicate freely and share their check-ins.
**Moments Check-in Data.** The main dataset employed for this study comes from the anonymized logs of complete WeChat Moments posting activities. We collected all check-in records with venue information provided in Beijing during the four months of October 2016, January, April, and July 2017. In total, there were 56,239,429 check-ins created by 9,517,175 users at 2,428,182 venues. For each check-in, information about user ID, time stamp, coordinates, and POI category is provided. Through the lens of this dataset, we are thus able to observe who visited where at what time for what purpose. We obtained IRB approval in the University of Cambridge to work on the data for the purpose of this paper.
Cultural Patterns Extraction {#sec:patterns}
============================
In this section, we present how urban cultural patterns can be extracted from Moments check-in records. Before discussing the issue in more depth, we first state the meaning of culture and culture-related terms for the purpose of this work.
**Cultural Venues.** Cultural venues are defined as urban places of arts, media, sports, libraries, museums, parks, play, countryside, built heritage, tourism and creative industries, following the line set by the Office of the Deputy Prime Minister in Regeneration through Culture, Sport and Tourism [@zhou2017cultural; @ODPM99]. Based on this definition, 37 categories of WeChat cultural venues are selected in this research.
**Cultural Check-ins.** The check-in activities taking place at the cultural venues are called cultural check-ins.
**Cultural Fans.** To classify individual cultural patterns, we only focus on users who have a certain minimum level of cultural check-ins during the observation time and call them cultural fans.
In the following subsections, we illustrate the necessity of considering temporal factors in urban cultural patterns mining, before introducing the TLDA model which integrates temporal characteristics with a particular subgroup of cultural activities, followed by the introduction of a novel evaluation method for the TLDA.
Temporal Factors Extraction
---------------------------
In Figure \[fig:temporal\] we show the temporal cultural check-in distribution for the four selected months in Beijing. The purpose of creating these heat maps in a calendar format is to present the cultural check-in frequency by date and hour corresponding to the four seasons chronologically. In each subfigure, the date is plotted along the horizontal axis with the hour appearing on the vertical axis to unveil cultural visiting patterns associated with temporal factors, which will later be explored in further depth.
It can be seen from Figure \[fig:temporal\] that hourly and weekly cultural visiting patterns are both significant in general. For all the seasons, the least likely hours for cultural check-in creation are during the night, from 0 to 6am. After that, the hourly frequency of cultural check-ins increases gradually and stays at a relatively high level during the daytime. On a daily basis, two peak periods can be recognized, among which the higher one lasts for around seven hours from 10am to 4pm while a lower peak appears between 7pm and 9pm. As for weekly patterns, we find that the check-in frequency is significantly higher on weekends than on weekdays.
![image](2Temporal)
It is also noticeable that hourly and weekly patterns are more evident in April and July, and less regular in October and January. This observation can be explained by comparing our calendar heat maps with the public holidays in China. At the beginning of October 2016, we can see a dramatically high cultural check-in frequency for six consecutive days, when people have one week off for the National Day. It is the longest holiday after the Chinese New Year, and is also called the ’Golden Week’, when people reunite with families and take trips. The second graph then presents the situation in the ’New Year’ month. Both the cells corresponding to New Year’s Day (01/01/2017) and the Spring Festival (28/01/2017) show distinctly higher values compared to the rest. In addition, during the week before the Chinese New Year, the number of cultural check-ins is much smaller than that in other weeks. Moreover, on 27/01/2017, the day before the Lunar New Year, the check-in frequency is particularly low, forming a sharp contrast with the following Spring Festival week. This is because Chinese people typically prefer to stay at home with their families before New Year’s Eve, waiting for the coming new year, but like to go out with friends in the following days.
Temporal Latent Dirichlet Allocation
------------------------------------
The classical LDA [@blei2003latent] is a hierarchical Bayesian model which has been shown to be an effective unsupervised learning method for discovering structural daily routines [@huynh2008discovery; @hasan2014urban; @farrahi2011discovering; @sun2017discovering]. However, the original LDA approach is built on the ’bag-of-words’ assumption [@hasan2014urban], which means that it only considers the number of times each word appears in a document, without involving any temporal consideration [@chen2017effective]. According to the observations made in the previous subsection, the periodicity of cultural check-ins at different levels of temporal granularity is so obvious that we chose to explicitly incorporate it into the LDA model. We thus propose temporal latent Dirichlet allocation (TLDA) based on [@chen2017effective] in this paper. TLDA is an extended version of LDA that integrates time factors into the original model, so as to uncover multiple associations between users, urban activities, and their corresponding temporal characteristics. The graphical model representation of TLDA is shown in Figure \[fig:TLDA\], where the lower part connected by red arrows is the addition of TLDA. In the figure, circles represent parameters, whose meanings are described in Table \[tab:notation\].
Symbol Description
------------------ -------------------------------------------------------
$ \alpha $ Dirichlet prior over the pattern-user distributions
$\beta$ Dirichlet prior over the venue-pattern distributions
$\gamma$ Dirichlet prior over the pattern-time distributions
$\theta _{u}$ pattern distribution of user $ u $
$ \varphi _{z} $ venue distribution of pattern $ z $
$ \phi_{t} $ pattern distribution of time $ t $
$ z_{ut} $ pattern of venue category of user $ u $ at time $ t $
$ v_{ut} $ venue category of user $ u $’s check-in at time $ t $
: Notation and Description[]{data-label="tab:notation"}
The framework of the TLDA model consists of four hierarchical layers, including a user layer, a time layer, a venue category layer, and a cultural latent pattern layer. The cultural pattern layer is the key layer which links the other three. Like its predecessor, TLDA is a generative model, the goal of which is to find the best set of latent variables (cultural patterns) that can explain the observed data (cultural check-ins by users) [@hasan2014urban]. To generate a cultural venue category, the pattern distribution of the corresponding user is sampled from a prior Dirichlet distribution parameterized by $\alpha$, $\theta_{u}\sim Dir(\alpha)$. In a similar way, the pattern distribution of time is sampled from a prior Dirichlet distribution parameterized by $\gamma$. Based on these, the pattern assignment $Z_{ut}$ of the venue category is drawn from a multinomial distribution $Z_{ut}\sim Multi(\theta_{u},\phi_{t})$.
$$P(z_{ut}|\alpha ,\gamma)=\sum_{\theta _{u}}P ( z_{ut}| \theta _{u})P( \theta _{u}| \alpha) \sum_{\phi_{t}}P( z_{ut}| \phi_{t} )P ( \phi_{t}| \gamma )$$
Then, the venue category is generated by sampling $v_{ut}\sim Multi(\varphi_z)$. $\varphi_z$ specifies the venue distribution of pattern $z$, which is drawn from a prior Dirichlet distribution parameterized by $\beta$.
$$P(v_{ut}|z_{ut}) = \sum_{\varphi _z}P(v_{ut}|\varphi _z)P(\varphi _z|\beta)$$
After that, we obtain the likelihood of $v_{ut}$ for user $u$ at time $t$ by summing over $\theta_{u}$, $\phi_{t}$, and $\varphi_{z}$, as shown in the following equation.
$$P(v_{ut}|\alpha,\beta,\gamma)=\sum_{\theta_{u}}\sum_{\phi_{t}}\sum_{\varphi _{z}}P(v_{ut},z_{ut},\theta_u,\phi _t,\varphi _z|\alpha ,\beta ,\gamma)$$
In the last step, we use the Gibbs sampling algorithm to estimate the probability distributions of pattern-user, venue-pattern, and pattern-time, respectively.
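To make the generative process concrete, the following minimal sketch uses toy, hypothetical parameter values (it is not the estimation code of this paper) and reads $Z_{ut}\sim Multi(\theta_{u},\phi_{t})$ as combining the two pattern distributions multiplicatively before normalisation:

```python
import numpy as np

rng = np.random.default_rng(0)
K, U, T, V = 3, 5, 4, 6                 # toy numbers of patterns, users, time slots, venue categories
alpha, gamma, beta = 0.5, 0.5, 0.1      # Dirichlet priors

theta  = rng.dirichlet(alpha * np.ones(K), size=U)   # theta_u: pattern distribution of user u
phi    = rng.dirichlet(gamma * np.ones(K), size=T)   # phi_t:   pattern distribution of time t
varphi = rng.dirichlet(beta * np.ones(V), size=K)    # varphi_z: venue distribution of pattern z

def sample_checkin(u, t):
    # One reading of P(z_ut | theta_u, phi_t): proportional to the product of the two distributions.
    p_z = theta[u] * phi[t]
    p_z /= p_z.sum()
    z = rng.choice(K, p=p_z)           # pattern assignment z_ut
    v = rng.choice(V, p=varphi[z])     # venue category v_ut ~ Multi(varphi_z)
    return z, v

print([sample_checkin(u=1, t=2) for _ in range(5)])
```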
![Graphical Model for TLDA.[]{data-label="fig:TLDA"}](3Graphical)
Evaluation of TLDA
------------------
TLDA is an unsupervised learning method which requires a pre-specified number of patterns $K$. As the temporal layer is integrated into the TLDA, conventional LDA evaluation approaches cannot be applied directly to the extended model. To handle this obstacle, we design the temporal coherence value (TCV) to evaluate the performance of the TLDA model and to find the optimum number of cultural patterns in the city. Inspired by the coherence value (CV) measurement proposed in [@roder2015exploring], the TCV introduced in this paper is used to measure the coherence level between top cultural venue categories and time periods within each pattern, before averaging them to evaluate the overall performance of the TLDA model.
How the TCV works step-by-step is depicted in Algorithm 1. To run the model, we need three inputs: from the TLDA output $VT$, we obtain 1) top venue categories $V^*$ and 2) top time periods $T^*$ in each cultural pattern; and 3) all the check-in activities $SW$. It should be stated that to eliminate the influence of variance between users’ check-in frequencies, the input $SW$ used in TCV is constructed by a sliding window which moves over the original check-ins of all the users. In the algorithm, we first define a segmentation $S_{set}^{one}$ for each top venue category $v^*$ in each pattern as equation \[eq:4\] shows. Here $S$ is used to denote the list of $S_{set}^{one}$. The total number of $S_{set}^{one}$ in $S$ is denoted by $Q$.
$$\label{eq:4}
S_{set}^{one}=\left \{(v^*,V^*,T^*)|v^*\in V^* \right \}$$
For each $S_{set}^{one}$, we calculate the *normalised pointwise mutual information* (NPMI) [@bouma2009normalized] for $v^*$-$T^*$ vector and $V^*$-$T^*$ vector, respectively. The $j$th element of the time vector $t^*_j$ and venue category $v^*$ has the NPMI:
$$\overrightarrow{w}(j) = NPMI(v^*,t_j^*)^{\tau}=\left ( \frac{\log\frac{P(v^*,t_j^*)+\varepsilon}{P(v^*)\cdot P(t_j^*)}}{-\log\left(P(v^*,t_j^*)+\varepsilon\right)}\right )^{\tau }$$
where $P(v^*,t^*_j)$ is the probability of the co-occurrence of $v^*$ and $t^*_j$, while $P(v^*)$ and $P(t^*_j)$ mean the probabilities of $v^*$ and $t^*_j$ ($t_j^*\in T^*$), respectively. $ \varepsilon $ is added to avoid logarithm of zero, and an increase of $\tau$ gives higher NPMI values more weight. After calculating the NPMI value for each venue category according to the above formula, we aggregate them to obtain the $j$th element of the time vector of $V^*$ by the following equation.
$$\overrightarrow{W}(j) = \sum_{i=1}^{I}NPMI(v_i^*,t_j^*)^{\tau}$$
where $v_i^*$ represents the $i$th venue category in $V^*$.
Cosine similarity is then calculated between pairs of context vectors $\overrightarrow{w}_q$ and $\overrightarrow{W}_q$ to obtain the coherence score $m_q$ for each $S_{set}^{one}$ by formula \[eq:7\], before we average over all top venue categories in patterns to get the final TCV $\overline{m}$ for the model through equation \[eq:8\].
$$\label{eq:7}
m_q=cos(\overrightarrow{w_q},\overrightarrow{W_q})$$
$$\label{eq:8}
\overline{m}=\frac{\sum_{q=1}^{Q}m_q}{Q}$$
The higher the TCV score, the better the clustering result of the TLDA model.
**Input:** $VT([V_1^*,V_2^*,...,V_K^*], [T_1^*,T_2^*,...T_K^*]), SW$\
**Output:** $\overline{m}$
initialize $S=set()$
for each pattern $k=1,\ldots,K$:
    for each top venue category $v^* \in V_k^*$:
        $S_{set}^{one}=\left \{(v^*,V^*,T^*)|v^*\in V^*\right \}$
        $S \gets S+S_{set}^{one}$
for each $S_{set}^{one}$ in $S$ (indexed by $q=1,\ldots,Q$):
    for each $t_j^* \in T^*$:
        $\overrightarrow{w}(j) = NPMI(v^*,t_j^*)^{\tau}$
        $\overrightarrow{W}(j) = \sum_{i=1}^{I}NPMI(v_i^*,t_j^*)^{\tau}$
    $m_q=cos(\overrightarrow{w_q},\overrightarrow{W_q})$
return $\overline{m}=\sum_{q=1}^{Q}m_q/Q$
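A minimal Python sketch of the same computation, assuming the probabilities $P(v^*,t_j^*)$, $P(v^*)$ and $P(t_j^*)$ have already been estimated from the sliding-window check-ins $SW$ (the container names below are ours, not part of the algorithm):

```python
import numpy as np

def npmi(p_vt, p_v, p_t, eps=1e-12, tau=1.0):
    # Normalised pointwise mutual information raised to the power tau.
    return (np.log((p_vt + eps) / (p_v * p_t)) / -np.log(p_vt + eps)) ** tau

def tcv(patterns, p_joint, p_v, p_t, tau=1.0):
    """patterns: list of (top_venues V*, top_times T*) per pattern;
    p_joint[v][t], p_v[v], p_t[t]: probabilities estimated from the sliding windows SW."""
    scores = []
    for top_v, top_t in patterns:
        # Aggregated context vector W over all top venue categories of the pattern.
        W = np.array([sum(npmi(p_joint[v][t], p_v[v], p_t[t], tau=tau) for v in top_v)
                      for t in top_t])
        for v_star in top_v:   # one segmentation S_one per top venue category
            w = np.array([npmi(p_joint[v_star][t], p_v[v_star], p_t[t], tau=tau) for t in top_t])
            scores.append(w @ W / (np.linalg.norm(w) * np.linalg.norm(W) + 1e-12))
    return float(np.mean(scores))   # the final TCV: mean coherence over all Q segmentations
```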
Refined Urban Cultural Planning {#sec:demand}
===============================
Different from current cultural planning frameworks, which mainly consider the population of urban areas when allocating cultural resources, we propose a refined urban cultural planning scheme based on the results of the TLDA. Our core viewpoint here is that urban regions are heterogeneous in terms of cultural demand and supply capability, which should not be treated uniformly. By employing the TLDA model, we are able to group users according to their cultural tastes, and cluster cultural venues based on their similarities derived from human mobility behaviours. Then, after aggregating the users and cultural facilities into urban regions, we can get an idea about how the cultural demand and supply are distributed spatially in the city for different cultural patterns. Moreover, through learning the supply-demand balance across regions, we can detect the areas where particular cultural services are needed, and we are thus able to provide city government with a priority list when the financial budget is compiled for culture-related planning.
Demand Range Determination
--------------------------
In this part we explore a way to determine the main activity range of individual users as reflected in their historical check-ins. More specifically, our aims are to detect valid visits for each user, map the active ranges of areas which they visit frequently, and finally, determine the centre and radius of these active ranges. Among existing clustering methods, we find OPTICS [@ankerst1999optics] a suitable approach for our problem. OPTICS is an algorithm for finding meaningful density-based clusters in spatial data [@kriegel2011density]. This method requires two parameters as input: the maximum radius to consider, and the minimum number of points to form a cluster. As check-in frequencies of users can vary greatly, setting a common minimum number of points for all users is inadequate. Considering this limitation, we propose a modified version, named POPTICS and shown in Algorithm 2, that defines a different threshold for each user.
In POPTICS, we collect all the $N$ locations of check-ins $L_u = [l_1,l_2,\cdots,l_N]$ for each user $u$. Here places with more than one check-in record are counted repeatedly, as they are more important in the user’s life and should be given higher weights. An input parameter $ \eta $ is set to denote the percentage of a user’s total check-in locations $L_u$ considered in the calculation of core distances. Here $ \eta $ can be varied for different users. The core distance of location $l_i$ is defined as the Euclidean distance between $l_i$ and its $(N\eta)$-th nearest point, as shown in the following function.
$$CD(i)= min_{\eta}Dist(i,j), \quad j=1,2,\ldots,N$$
After calculating the core distance, for location $l_o$, we define the reachability distance from $l_o$ to $l_i$ as:
$$RD(o,i)= max(CD(o),Dist(o,i))$$
According to the reachability distances, an ordered list of locations is generated. Then, to find meaningful cluster(s) of locations and detect outliers, a threshold on the maximum reachability distance, $rd_{th}$, is set according to the score derived from formula \[eq:11\]. The lower the score, the better the chosen $rd_{th}$. The group of all valid points is denoted by $RD^*$, as equation \[eq:12\] shows.
$$\label{eq:11}
score(rd_{th})=std(RD^*)\frac{N}{len(RD^*)}$$
$$\label{eq:12}
RD^* = \left \{ rd_i|rd_i\in RD \quad \textrm{and} \quad rd_i< rd_{th}\right \}$$
**Input:** $L_u=[l_1,l_2,...,l_N], \eta $\
**Output:** cluster groups of locations $GL=[L_1^*,L_2^*,...,L_r^*]$\
cluster of locations $L_i^*=[l_{i1},l_{i2},...,l_{ig}]$
$\textbf{initialize}$ $CD$=$list()$, $RD$=$list(maxdis)$,$RD(0)$=$0$,\
$seeds$=${1,2,...,N}$, $ind$=$1$, $order$=$list()$,$GL$=$list()$,$tmp\_L$=$list()$ : $CD \gets CD+min_{\eta}Dist(i,j)$ : $seeds.move(ind)$ $order \gets order+ind$ : $cur\_rd \gets max(CD(ind),Dist(ind,ii)$ $RD(ii) \gets min(RD(ii),cur\_rd$ $ind \gets \left \{min-index(RD_{ii})|ii \in seeds\right \}$ $rd_{th} \gets min\ std(RD^*)\frac{N}{len(RD^*)}$ : : $tmp\_L \gets tmp\_L+l_{ii}$ : $GL \gets GL+tmp\_L$ $tmp\_L.clear()$ $GL \gets GL+tmp\_L$
Demand-Supply Interaction Model
-------------------------------
In this part we present the demand-supply interaction (DSI) model. Through the TLDA model, each user $u$ is labelled as a member of a particular cultural pattern $z$. Through POPTICS, the active centre $ \mu $ and radius $r$ of each user are determined. Also, the sub active range of locations belonging to pattern $z$ can be drawn, which has the same centre $ \mu $ and a smaller pattern radius $r_{u_z}$. These results allow us to link users with urban areas, and thus give us an indication of the demand levels of different cultural types in urban regions. The assumption here is that for a certain user $u$ from a cultural group $z$, their demand for this type of cultural service is highest at the active centre $\mu$, and decays with distance up to $r_{u_z}$. The attenuation pattern is depicted by a Gaussian function. For a point $x$ within user $u$’s pattern range $r_{u_z}$ in the city, the demand influence it receives from $u$ can be obtained by:
$$d_{u_z}(x)=Norm(x,\mu ,r_{u_z})=\frac{1}{\sqrt{2\pi r_{u_z}^2}}exp\left ( -\frac{(x-\mu )^2}{2r_{u_z}^2}\right )$$
The total demand in terms of pattern $z$ for area $x$ is the aggregation of influences from all users in pattern $z$.
$$D_z(x) = \sum_{u_{z}}d_{u_{z}}(x)$$
Next, we turn our focus to the supply side of patterns. For each venue category $v$ in pattern $z$, we calculate the supply capability of $v$ spatially. If a user $u$ has created check-in(s) at venue $v$, then the centre of the user $\mu $ is covered by the service range of venue $v$. We find all the users who had check-ins at $v$ and calculate the distances between their centres and $v$. The average of these distances is set as the standard deviation of the attenuation distribution, denoted by $\sigma$. Based on this assumption, the supply capability of cultural pattern $z$ contributed by venue $v$ in area $x$ can be obtained through:
$$s_{v_z}(x)=Norm(x,v_z,\sigma_{v_z})=\frac{1}{\sqrt{2\pi {\sigma_{v_z}}^2}}exp\left ( -\frac{(x-v_z)^2}{2\sigma_{v_z} ^2}\right )$$
The total supply level of area $x$ in the city in terms of pattern $z$ is obtained by the following equation.
$$S_z(x) = \sum_{v_{z}}s_{v_{z}}(x)$$
We then define a metric called demand-supply ratio (DSR) to capture the desirability level of a certain type of cultural service $z$ in urban areas as:
$$DSR_z(x)=\frac{D_z(x)}{S_z(x)}$$
The higher the DSR is, the greater the need of particular cultural facilities, and the higher the priority of the area in the proposed urban cultural planning scheme.
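A minimal sketch of the DSI computation for one pattern $z$ over a set of grid-cell centroids (array names and shapes are illustrative assumptions):

```python
import numpy as np

def gaussian(dist, scale):
    # Gaussian attenuation used for both demand and supply influences.
    return np.exp(-dist ** 2 / (2 * scale ** 2)) / np.sqrt(2 * np.pi * scale ** 2)

def dsr_map(cells, user_centres, user_radii, venue_pos, venue_sigma):
    """cells: (C, 2) grid centroids; user_centres: (Nu, 2); venue_pos: (Nv, 2);
    user_radii: pattern radius r_uz per user; venue_sigma: service spread sigma_vz per venue."""
    d_u = np.linalg.norm(cells[:, None, :] - user_centres[None, :, :], axis=-1)   # (C, Nu)
    d_v = np.linalg.norm(cells[:, None, :] - venue_pos[None, :, :], axis=-1)      # (C, Nv)
    demand = gaussian(d_u, user_radii[None, :])
    demand = np.where(d_u <= user_radii[None, :], demand, 0.0).sum(axis=1)        # Eqs. (13)-(14)
    supply = gaussian(d_v, venue_sigma[None, :]).sum(axis=1)                      # Eqs. (15)-(16)
    return demand / np.maximum(supply, 1e-12)                                     # demand-supply ratio DSR_z(x)
```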
Experiments {#sec:experiments}
===========
So far, we have provided a holistic framework for urban cultural studies, from extracting spatio-temporal cultural patterns to refining cultural planning for the city. Next, we employ the WeChat Moments data described in Section \[dataset\] and use Beijing as a case study to demonstrate how these models can be applied jointly in practice.
Data Preprocessing
------------------
We first identify cultural fans as users with at least 20 check-ins at cultural venues during the observation time. After this procedure, our dataset shows that there are 1,082 cultural venues grouped into 37 categories, and 324,809 cultural check-ins created by 18,234 cultural fans in Beijing during the selected four months. Besides a venue category label, we also attach a temporal label to each cultural check-in with three levels of identifiers, including month of year, day of week, and hour of day. Following this form of expression, a user’s check-in history can be represented as (User3, ((Concert hall, JulFri20), (Golf, OctSun10), (Yoga, AprFri18))), for example. A collection of all the cultural fans’ check-ins constitutes the whole corpus, which is the input data for our analysis.
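As a small illustration (assuming pandas timestamps; the function name is ours), the three-level temporal label can be produced as follows:

```python
import pandas as pd

def temporal_label(ts):
    # Month of year + day of week + hour of day, e.g. 2017-07-21 20:05 -> "JulFri20".
    return f"{ts.strftime('%b')}{ts.strftime('%a')}{ts.hour}"

print(temporal_label(pd.Timestamp("2017-07-21 20:05")))   # JulFri20
```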
Cultural Patterns Extraction for Beijing
----------------------------------------
We first run the TLDA model with the optimum number of patterns $K$ given by TCV. We adopt the seven values from 3 to 9 as candidates, run the TLDA for 100 iterations each, and obtain their respective average TCV scores as shown in Figure 4. We can see that $K=6$ gives the best performance, suggesting that cultural behaviours in Beijing should be classified into six groups based on their categorical and temporal characteristics. We then select 6 as the value of $K$ and run the TLDA model again to extract cultural patterns for the city. The main outputs resulting from this process are three matrices: the pattern-user matrix, the pattern-time matrix, and the venue-pattern matrix. These matrices provide us with the probability distributions of users, time periods, and venue categories over the 6 patterns, respectively. These outputs as a whole tell us which groups of people prefer to do what type of cultural activities at what time.
![TCV Scores for Different Number of Patterns.](4TCV)
![Probabilities of Venue Categories over Patterns.[]{data-label="fig:PanVn"}](5PandVn)
Figure \[fig:PanVn\] presents the probabilities of cultural venue categories over patterns. The colour of a cell indicates the probability a category belongs to a certain cultural pattern. As we can observe, 37 cultural venue categories can be clustered separately into six cultural groups based on which pattern has the highest probability. The name of the category is printed in black as a top venue if its probability is higher than 0.1. Otherwise, it is considered not a typical category for the pattern and is coloured in grey. We also compute the cosine similarity between each pair of venue categories and show the results in Figure 6. As it can be seen in the figure, the clustering result of venues is desirable in the sense that all the within-group similarities are higher than 0.9, while most of the inter-group similarities are lower than 0.1.
![Cosine Similarity between Venue Categories.](6WordSim)
![Comparison between TLDA and LDA (Venues).](7Com_TLDA)
To further evaluate the clustering performance of the model, we compare the CV scores [@roder2015exploring] of TLDA with those of LDA. Since the LDA model does not contain the temporal part added by TLDA, only the venue category clustering is evaluated. Again, we run the analysis iteratively with $K$ set to 3-9 and present the results in Figure 7. We can see that the TLDA model outperforms LDA in all seven cases. This result indicates that the TLDA model not only enriches LDA by considering temporal features, it is also superior to classical LDA in generating more coherent topics. In addition to venue category information, TLDA also provides us with another point of view to learn about cultural patterns temporally. The temporal characteristics of the cultural patterns are displayed and compared in Figure 8. For comparison, values are shown as percentages to present the degree to which time periods are representative of the patterns. To present the hourly patterns more clearly, we group the 24 hours into five slots: morning (6:00-11:00), noon (11:00-14:00), afternoon (14:00-19:00), evening (19:00-24:00), and night (0:00-6:00).
![image](8Temp)
The results in Figure 5 and Figure 8 together reveal the key characteristics of the six cultural patterns detected in Beijing. We can observe that pattern one is composed of people who love travelling and prefer wide open spaces. They do not like visiting scenic spots and parks during winter very much, perhaps due to harsher weather conditions. Compared to other groups, the first pattern has the highest percentage of activity in the morning and the lowest during nighttime. This finding can be linked with what we can expect from real life experience, namely that parks in Beijing are always full of people doing morning exercises. Music fans make up the majority of pattern 2. For this group of people, their cultural visiting frequencies during the four seasons are rather balanced. However, on weekends and in the evening, they present a considerably higher probability of being active compared to their counterparts. This can be explained by the fact that concerts are usually scheduled and attended in the evening hours. The third pattern consists of nature lovers who like plants and animals in particular. This group of people present hourly sensitive features, as they prefer visiting cultural places during the daytime rather than in the evening or at night. Then, the fourth pattern corresponds to museum lovers. This cultural group is particularly active in summer and in the afternoons. Moreover, they have the lowest percentage of activity on Monday compared to other patterns. This phenomenon can probably be explained by the fact that many museums are closed on Mondays. The fifth group comprises sports fans, especially swimming enthusiasts. These people have the highest percentage of nighttime activity. The last group of people are gym lovers, with a single top cultural venue category becoming prominent here with an extremely high probability of 0.99. Spring and summer time is the most popular period for them to exercise. They do not like going to the gym in the morning, preferring evenings in most cases. Additionally, even though weekends make the greatest contributions to almost all the cultural patterns, they are not prominent in the case of pattern 6.
Refined Cultural Planning for Beijing
-------------------------------------
After uncovering the cultural patterns, we map how the demand and supply levels of each pattern are distributed spatially, and calculate the demand-supply ratio for urban areas. In this part of analysis, we divide the city into 400m by 400m grids aligned with the latitude and longitude dimensions. Each cell is called an area, and the centroid of which is used to represent the cell’s demand-supply balance.
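A minimal sketch of the 400m by 400m gridding, using an approximate metres-per-degree conversion near Beijing’s latitude (a simplifying assumption, not necessarily the exact projection used in our pipeline; the reference corner is hypothetical):

```python
import numpy as np

LAT0, LON0 = 39.4, 115.4                                # hypothetical south-west corner of the study area
M_PER_DEG_LAT = 111_000.0                               # approximate metres per degree of latitude
M_PER_DEG_LON = 111_000.0 * np.cos(np.radians(40.0))    # shrinks with latitude; ~40 degrees for Beijing

def cell_index(lat, lon, size_m=400.0):
    # Map a coordinate to the (row, col) index of its 400m x 400m grid cell.
    row = int((lat - LAT0) * M_PER_DEG_LAT // size_m)
    col = int((lon - LON0) * M_PER_DEG_LON // size_m)
    return row, col

print(cell_index(39.9042, 116.4074))                    # cell containing central Beijing
```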
From the demand perspective, we begin with the application of the POPTICS algorithm to find the centres of activity ranges for the six groups of cultural fans in their daily lives based on all the check-ins they created previously. Then, we collect a particular subgroup of cultural check-ins for each cultural fan according to the pattern they belong to. We find their influential radius $ r_{u_z} $, and calculate the demand value they contribute to their surrounding areas based on equation (13). After the needs of all users in a certain pattern group are aggregated by formula (14), the overall demand for each cultural pattern can be obtained, as the first row in Figure 9 suggests. With respect to the supply side, we calculate the supply capability of each area in terms of the various cultural patterns according to equations (15) and (16). The supply levels categorised by patterns across the city are visualised in the middle row of subfigures, followed by the final demand-supply ratio output shown in the last row of Figure 9. For the first two rows, the darker the colour, the higher the demand or supply level; while for the DSR, red and blue represent high and low ratios, respectively. As can be observed from the figure, the demand for different cultural patterns is distributed in a similar manner spatially. The highest demand areas cluster in the urban area between the 2nd and 5th ring road, while some hotspots appear in suburban areas such as Yanqing, Huairou, and Miyun Districts. When we look at the supply level, the six patterns present a more heterogeneous behaviour. Although a general trend can be discovered that the cultural supply capabilities decrease from the city centre to the suburbs, this inequality is less obvious in patterns 1, 4 and 5. From our final DSR results, we find that the inner city inside the former city walls (Xicheng and Dongcheng) is in great need of cultural services of patterns 1 and 5, like parks, swimming pools, and exhibition centres. For patterns 2 and 3, the demand-supply gaps of related cultural facilities are relatively even within the city, while for the last pattern, the need for gym services is relatively greater in the outer suburbs.
Through the demand-supply analysis above using the DSI model, we obtain priority lists of urban areas in terms of different types of cultural services according to the levels of need. To validate our model, we calculate the Pearson correlation between the DSR value and the average distance users need to travel for a particular kind of cultural service. The correlation coefficients for the six patterns are presented in Figure 10. From this figure, we can see that all patterns except pattern one show high positive correlations. The distinctive result observed in pattern one can probably be explained by its top venue categories. As an ancient city, many of the scenic spots in Beijing are historical relics, the locations of which were not decided by modern urban planning. The overall result of the correlation analysis suggests that users from areas in great need of a type of cultural service generally have to travel longer distances to be served. It further indicates that the facilities in the users’ surrounding areas are not enough to fulfill their needs, and thus provides evidence that services in the high-priority areas detected by our models are indeed insufficient compared to their counterparts.
![image](9DSI)
![Correlation between DSR and Travel Distance.](10Correlation)
Conclusion
==========
In this paper we have proposed a data-driven framework for urban cultural planning. The framework exploits a time-aware topic model to identify latent patterns of urban cultural interactions. Using then a density-based algorithm named POPTICS, we identify the primary locations of activity of mobile users and couple this with the TLDA output to generate cartographic representations indicative of the demand-supply balance for cultural resources in the city. We evaluate our approach using implicit user feedback, demonstrating how users active in areas that lack cultural establishments bear larger transportation costs to access cultural resources. Besides urban policy makers, the findings of this research can also provide suggestions to business owners on opening hours, and to citizens on neighbourhood characteristics in the city. Overall, we demonstrate how the new generation of datasets emerging through modern location-based systems can provide an edge in city planning, as they offer rich views on urban mobility dynamics and allow for the development of population-adaptive frameworks that move beyond static representations of area-level population densities.
Acknowledgements
================
We would like to thank Tencent for hosting Xiao Zhou and providing her with access to the datasets for this study. The first author acknowledges the financial support co-funded by the China Scholarship Council and the Cambridge Trust.
[^1]: http://www.wechat.com/en
[^2]: Beijing, Guangzhou, Shanghai and Shenzhen
[^3]: users who have logged into WeChat within the month
[^4]: The permanent population of Beijing is 21.907 million by the end of 2017 according to Beijing Municipal Bureau of Statistics. http://www.bjstats.gov.cn/tjsj/yjdsj/rk/2017
---
address: |
$^{1}$ College of Physics, Hebei Advanced Thin Films Laboratory, Hebei Normal University, Shijiazhuang 050024, Hebei, China; 1543916410@qq.com (M.Y.)\
$^{2}$ Physics Department, Shijiazhuang University, Shijiazhuang 050035, Hebei, China
---
Introduction
============
Artificially prepared magnetic nanostructures have formed the basic components of nanodevices in the modern information industry for decades[@Leeuw_RPP_1980; @Bauer_RMP_2005]. Various magnetization textures therein provide abundant choices for defining zeros and ones in the binary world. Among them, domain walls (DWs) are the most common ones, separating magnetic domains with interior magnetization pointing in different directions[@XiongG_2005_Science; @Parkin_2008_Science_a; @Parkin_2008_Science_b; @Koopmans_2012_nanotech; @Thomas_JAP_2012; @Parkin_2015_nonotech]. In magnetic nanostrips with rectangular cross sections, numerical calculations confirm that there exists a critical cross-section area[@McMichael_IEEE_1997; @Thiaville_JMMM_2005]. Below (above) it, transverse (vortex) walls dominate. For nanodevices based on DW propagation along the strip axis with a high integration level, strips are thin enough so that only transverse DWs (TDWs) appear. Their velocity under external driving factors (magnetic fields, polarized electronic currents, etc.) determines the response time of nanodevices based on DW propagation. In the past decades, analytical, numerical and experimental investigations on TDW dynamics have been widely performed and commercialized to a great extent[@Walker_JAP_1974; @Ono_Science_1999; @XiongG_nmat_2003; @Erskine_nmat_2005; @Tretiakov_PRL_2008; @Erskine_PRB_2008; @jlu_EPL_2009; @YanP_AOP_2009; @SZZ_PRL_2010; @Tatara_JPDAP_2011; @Berger_PRB_1996; @Slonczewski_JMMM_1996; @ZhangSF_PRL_2004; @Ono_PRL_2004; @Erskine_PRL_2006; @Hayashi_PRL_2006; @YanP_APL_2010]. However, seeking ways to further increase TDW velocity, and thus improve the devices’ response performance, has always been the pursuit of both physicists and engineers.
Besides velocity, fine manipulation of the DW structure is also essential for improving device performance. In the simplest case, a TDW with uniform azimuthal distribution, generally called a planar TDW (pTDW), is of particular importance. Historically the Walker ansatz[@Walker_JAP_1974] provides the first example of a pTDW; however, its tilting attitude is fully controlled by the driving field or current density (in particular, it lies within the easy plane in the absence of external driving factors) and thus cannot be freely adjusted. In the past decades, several strategies[@Thiaville_nmat_2003; @Kim_APL_2007; @Bryan_JAP_2008; @jlu_JAP_2010] have been proposed to suppress or at least postpone the Walker breakdown, thus making TDWs preserve the traveling-wave mode which has a high mobility (velocity versus driving field or current density). The nature of all these proposals is to destroy the two-fold symmetry in the strip cross section, and is thus equivalent to a transverse magnetic field (TMF), whether it is built-in or external. In 2016, the “velocity-enhancement" effect of uniform TMFs (UTMFs) on TDWs in biaxial nanostrips was thoroughly investigated[@jlu_PRB_2016]. It turns out that UTMFs can considerably boost TDW propagation while inevitably leaving a twisting in their azimuthal planes. However, for applications in nanodevices with high density, this twisting is preferably erased to minimize magnetization frustrations and other stochastic fields. In 2017, optimized TMF profiles with fixed strength and tunable orientation were proposed to realize pTDWs with arbitrary tilting attitude[@limei_srep_2017]. Dynamical analysis of these pTDWs reveals that they can propagate along the strip axis with higher velocities than those without TMFs. However, there are several remaining problems: the rigorous analytical pTDW profile (thus the TMF distribution) is still lacking, the pTDW width cannot be fully controlled, and the real experimental setup is challenging.
In this work, we engineer pTDWs with arbitrary tilting attitude in biaxial magnetic nanostrips by tailoring TMF profiles with uniform orientation but tunable strength distribution. For statics, the well-tailored TMF profile supports pTDWs with arbitrary tilting attitude, clear boundaries and controllable width. In particular, these pTDWs are robust against disturbances which are not too abrupt. For axial-field-driven dynamics with comoving TMFs, pTDWs acquire higher velocities than Walker’s ansatz predicts.
Model and Preparations
======================
![Sketch of biaxial magnetic nanostrip under consideration. ($\mathbf{e}_x,\mathbf{e}_y,\mathbf{e}_z$) is the global Cartesian coordinate system in real space: $\mathbf{e}_z$ is along strip axis, $\mathbf{e}_x$ is in the thickness direction and $\mathbf{e}_y=\mathbf{e}_z\times\mathbf{e}_x$. $k_1(k_2)$ is the total magnetic anisotropy coefficient in easy (hard) axis. ($\mathbf{e}_{\mathbf{m}},\mathbf{e}_{\theta},\mathbf{e}_{\phi}$) forms the local spherical coordinate system associated with the magnetization vector $\mathbf{M}$ (blue arrow with magnitude $M_s$, polar angle $\theta$ and azimuthal angle $\phi$). The total external field has two components: axial driving field with magnitude $H_1$ and TMF with constant tilting attitude $\Phi_{\perp}$ and tunable magnitude $H_{\perp}(z,t)$.](Fig1.eps){width="10"}
We consider a biaxial magnetic nanostrip with rectangular cross section, as depicted in Figure 1. The $z$ axis is along strip axis, the $x$ axis is in the thickness direction and $\mathbf{e}_y=\mathbf{e}_z\times\mathbf{e}_x$. The magnetic energy density functional of this strip can be written as,
$$\label{Energy_Density}
\mathcal{E}_{\mathrm{tot}}[\mathbf{M},\mathbf{H}_{\mathrm{ext}}]=-\mu_0 \mathbf{M}\cdot\mathbf{H}_{\mathrm{ext}}-\frac{k_1}{2}\mu_0 M_z^2+\frac{k_2}{2}\mu_0 M_x^2+J\left(\nabla\mathbf{m}\right)^2,$$
in which $\mathbf{m}\equiv \mathbf{M}/M_s$ with $M_s$ being the saturation magnetization. The magnetostatic energy density has been described by quadratic terms of $M_{x,y,z}$ via three average demagnetization factors $D_{x,y,z}$[@Aharoni_JAP_1998] and thus been absorbed into $k_{1,2}$ as $k_1=k_1^0+(D_y-D_z)$ and $k_2=k_2^0+(D_x-D_y)$[@jlu_EPL_2009; @jlu_JAP_2010; @jlu_PRB_2016], where $k_{1,2}^0$ are the magnetic crystalline anisotropy coefficients. The external field $\mathbf{H}_{\mathrm{ext}}$ has two components: the axial driving field $\mathbf{H}_{\parallel}\equiv H_1 \mathbf{e}_z$ and the TMF with general form
$$\label{TMF_general}
\mathbf{H}_{\perp}=H_{\perp}(z,t)\left[\cos\Phi(z,t)\mathbf{e}_x+\sin\Phi(z,t)\mathbf{e}_y\right].$$
The time evolution of $\mathbf{M}(\mathbf{r},t)$ is described by the Landau-Lifshitz-Gilbert (LLG) equation[@Gilbert_IEEE_2004] as
$$\label{LLG_vector}
\frac{\partial \mathbf{m}}{\partial t}=-\gamma \mathbf{m} \times \mathbf{H}_{\mathrm{eff}}+\alpha \mathbf{m}\times \frac{\partial \mathbf{m}}{\partial t},$$
where $\alpha$ phenomenologically describes magnetic damping strength, $\gamma>0$ is the absolute value of electron’s gyromagnetic ratio and $\mathbf{H}_{\mathrm{eff}}=-\left(\delta\mathcal{E}_{\mathrm{tot}}/\delta\mathbf{M}\right)/\mu_0$ is the effective field.
When the system temperature is far below the Curie point, the saturation magnetization $M_s$ of magnetic materials can be viewed as constant. Thus $\mathbf{M}(\mathbf{r},t)$ is fully described by its polar angle $\theta(\mathbf{r},t)$ and azimuthal angle $\phi(\mathbf{r},t)$. In addition, for thin enough nanostrips (where TDWs dominate) the inhomogeneity in the cross section can be ignored, thus making them quasi-one-dimensional (1D) systems ($\mathbf{r}\rightarrow z$). Then reasonably one has $(\nabla\mathbf{m})^2\equiv(\nabla_z\mathbf{m})^2=(\theta')^2+\sin^2\theta(\phi')^2$, in which a prime denotes the spatial derivative with respect to $z$. After the transition from the global Cartesian coordinate system ($\mathbf{e}_x,\mathbf{e}_y,\mathbf{e}_z$) to the local spherical coordinate system ($\mathbf{e}_{\mathbf{m}},\mathbf{e}_{\theta},\mathbf{e}_{\phi}$), the effective field $\mathbf{H}_{\mathrm{eff}}$ reads
\[H\_eff\] $$\begin{aligned}
\mathbf{H}_{\mathrm{eff}}&=H_{\mathrm{eff}}^{\mathbf{m}}\mathbf{e}_{\mathbf{m}}+H_{\mathrm{eff}}^{\theta}\mathbf{e}_{\theta}+H_{\mathrm{eff}}^{\phi}\mathbf{e}_{\phi}, \\
H_{\mathrm{eff}}^{\mathbf{m}}&=H_1\cos\theta+H_{\perp}(z,t)\sin\theta\cos\left[\Phi_{\perp}(z,t)-\phi\right]+k_1 M_s-M_s\sin^2\theta\left(k_1+k_2\cos^2\phi\right) \nonumber \\
& \quad \quad -\frac{2J}{\mu_0 M_s}(\theta'^2+\sin^2\theta\phi'^2), \\
H_{\mathrm{eff}}^{\theta}&=-H_1\sin\theta+H_{\perp}(z,t)\cos\theta\cos\left[\Phi_{\perp}(z,t)-\phi\right]-M_s\sin\theta\cos\theta\left(k_1+k_2\cos^2\phi\right) \nonumber \\
& \quad \quad +\frac{2J}{\mu_0 M_s}(\theta''-\sin\theta\cos\theta\phi'^2)\equiv -\mathcal{B}, \\
H_{\mathrm{eff}}^{\phi}&=H_{\perp}(z,t)\sin\left[\Phi_{\perp}(z,t)-\phi\right]+k_2 M_s\sin\theta\sin\phi\cos\phi+\frac{2J}{\mu_0 M_s}\frac{1}{\sin\theta}\left(\sin^2\theta\cdot\phi'\right)'\equiv \mathcal{A}.
\end{aligned}$$
Putting it back into Eq. (\[LLG\_vector\]), the vectorial LLG equation turns into its scalar counterparts,
\[LLG\_scalar\_v1\] $$\begin{aligned}
(1+\alpha^2)\dot{\theta}/\gamma &=\mathcal{A}-\alpha\mathcal{B}, \\
(1+\alpha^2)\sin\theta\dot{\phi}/\gamma &=\mathcal{B}+\alpha\mathcal{A},
\end{aligned}$$
or equivalently
\[LLG\_scalar\_v2\] $$\begin{aligned}
\dot{\theta}+\alpha\sin\theta\dot{\phi} &=\gamma\mathcal{A}, \\
\sin\theta\dot{\phi}-\alpha\dot{\theta} &=\gamma\mathcal{B},
\end{aligned}$$
where a dot denotes the time derivative. These equations are all we need for our work in this paper.
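As a minimal illustration of how these scalar equations can be advanced in time, the sketch below performs one explicit Euler step of Eq. (\[LLG\_scalar\_v1\]) given the projected fields $\mathcal{A}$ and $\mathcal{B}$ on a spatial grid; a full simulation would in addition evaluate $\mathcal{A}$ and $\mathcal{B}$ from finite differences of $\theta$ and $\phi$ (this is an assumption-laden sketch, not production micromagnetic code):

```python
import numpy as np

def llg_euler_step(theta, phi, A, B, alpha, gamma, dt):
    """One explicit Euler step of the scalar LLG equations:
    (1 + alpha^2) dtheta/dt / gamma          = A - alpha * B,
    (1 + alpha^2) sin(theta) dphi/dt / gamma = B + alpha * A."""
    pref = gamma * dt / (1.0 + alpha ** 2)
    dtheta = pref * (A - alpha * B)
    dphi   = pref * (B + alpha * A) / np.maximum(np.sin(theta), 1e-9)   # guard against sin(theta) = 0
    return theta + dtheta, phi + dphi
```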
Results
=======
In this section, we present in detail how to engineer pTDWs with arbitrary tilting attitude by properly tailoring the TMF profile along the strip axis. As mentioned in Section \[introduction\], here we fix the TMF orientation (thus $\Phi_{\perp}(z,t)\equiv\Phi_0$) and allow its strength to vary along the strip axis, which is much easier to realize in real experiments. Both statics and axial-field-driven dynamics of pTDWs will be systematically investigated.
Statics {#Statics}
-------
From the roadmap of field-driven DW motion in nanostrips[@jlu_EPL_2009], in the absence of axial driving fields a TDW will finally evolve into a static configuration ($\dot{\theta}=\dot{\phi}=0$) under time-independent TMFs ($H_{\perp}(z,t)\equiv H_{\perp}(z)$). For Eq. (\[LLG\_scalar\_v2\]) this means $\mathcal{A}=\mathcal{B}=0$. In the absence of any TMF ($H_{\perp}(z)\equiv 0$), the static TDW is a pTDW lying in the easy plane with the well-known Walker profile[@Walker_JAP_1974],
$$\label{Static_profile_noTMF}
\theta(z)=2\arctan e^{\eta\frac{z-z_0}{\Delta_0}},\quad \phi(z)\equiv n\pi/2,$$
where $\Delta_0\equiv \sqrt{2J/(\mu_0 k_1 M_s^2)}$ is the pTDW width, $z_0$ is the wall center, $\eta=+1(-1)$ denotes head-to-head (tail-to-tail) pTDWs and $n=+1(-1)$ is the wall polarity (sign of $\langle m_y\rangle$). However, if we want to realize a static pTDW with arbitrary tilting attitude, i.e. $\phi(z)\equiv \phi_{\mathrm{d}}$, a well-tailored, position-dependent TMF profile must be applied.
### Boundary condition
As the first step, we need the boundary condition of this pTDW, i.e. the magnetization orientation in the two domains at both ends of the strip. Without loss of generality, our investigations are performed for head-to-head walls and $0<\phi_{\mathrm{d}}<\pi/2$. In the two domains, the orientation of magnetization should be uniform, meaning that the azimuthal angle satisfies $\phi(z)\equiv\phi_{\mathrm{d}}$, while the polar angle in the left (right) domain takes the value of $\theta_{\mathrm{d}}$ ($\pi-\theta_{\mathrm{d}}$). Meanwhile, the TMF strength should be constant ($H_{\perp}(z)\rightarrow H_{\perp}^{\mathrm{d}}$) in these two domains. Then $\mathcal{A}=\mathcal{B}=0$ becomes
\[Statics\_AandBeq0\_in2domains\] $$\begin{aligned}
H_{\perp}^{\mathrm{d}}\sin(\phi_{\mathrm{d}}-\Phi_0) &=k_2 M_s\sin\theta_{\mathrm{d}}\sin\phi_{\mathrm{d}}\cos\phi_{\mathrm{d}}, \\
H_{\perp}^{\mathrm{d}}\cos(\Phi_0-\phi_{\mathrm{d}}) &=M_s\sin\theta_{\mathrm{d}}(k_1+k_2\cos^2\phi_{\mathrm{d}}).
\end{aligned}$$
The solution to the above equation set provides the TMF profile in the two domains as
$$\label{Static_profile_withTMF_in2domains}
\Phi_0=\arctan\left(\frac{k_1}{k_1+k_2}\cdot\tan\phi_{\mathrm{d}}\right), \quad H_{\perp}^{\mathrm{d}}=H_{\perp}^{\mathrm{max}}\cdot\sin\theta_{\mathrm{d}},$$
with
$$\label{Static_profile_withTMF_Hperp_max}
H_{\perp}^{\mathrm{max}}=M_s\sqrt{k_1^2\sin^2\phi_{\mathrm{d}}+(k_1+k_2)^2\cos^2\phi_{\mathrm{d}}}.$$
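For completeness, the solution above can be verified directly: dividing Eq. (\[Statics\_AandBeq0\_in2domains\]a) by Eq. (\[Statics\_AandBeq0\_in2domains\]b) fixes the TMF orientation, while squaring and adding the two equations fixes its strength,
$$\tan(\phi_{\mathrm{d}}-\Phi_0)=\frac{k_2\sin\phi_{\mathrm{d}}\cos\phi_{\mathrm{d}}}{k_1+k_2\cos^2\phi_{\mathrm{d}}}\;\Longrightarrow\;\tan\Phi_0=\frac{k_1\tan\phi_{\mathrm{d}}}{k_1+k_2},$$
$$\left(H_{\perp}^{\mathrm{d}}\right)^2=M_s^2\sin^2\theta_{\mathrm{d}}\left[k_2^2\sin^2\phi_{\mathrm{d}}\cos^2\phi_{\mathrm{d}}+\left(k_1+k_2\cos^2\phi_{\mathrm{d}}\right)^2\right]=M_s^2\sin^2\theta_{\mathrm{d}}\left[k_1^2\sin^2\phi_{\mathrm{d}}+(k_1+k_2)^2\cos^2\phi_{\mathrm{d}}\right],$$
which reproduces Eqs. (\[Static\_profile\_withTMF\_in2domains\]) and (\[Static\_profile\_withTMF\_Hperp\_max\]).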
Eq. (\[Static\_profile\_withTMF\_in2domains\]) indicates that in both domains, the TMF should lie farther away from the easy plane than the magnetization. Meanwhile, the existence condition of the pTDW ($\theta_{\mathrm{d}}\ne\pi/2$) requires that the TMF strength in the domains stay below an upper limit,
$$\label{Static_profile_withTMF_Hperp.lt.HperpMax}
H_{\perp}^{\mathrm{d}}<H_{\perp}^{\mathrm{max}}.$$
### Static pTDW profile
Note that we have fixed the TMF orientation to $\Phi_0$; therefore, in the pTDW region $\mathcal{A}=\mathcal{B}=0$ becomes
\[Statics\_AandBeq0\_inpTDW\_v1\] $$\begin{aligned}
0&=H_{\perp}(z)\sin\left(\Phi_0-\phi\right)+k_2 M_s\sin\theta\sin\phi\cos\phi+\frac{2J}{\mu_0 M_s}\frac{1}{\sin\theta}\left(\sin^2\theta\cdot\phi'\right)', \\
\frac{2J}{\mu_0 M_s}\theta''&=-H_{\perp}(z)\cos\theta\cos\left(\Phi_0-\phi\right)+M_s\sin\theta\cos\theta\left(k_1+k_2\cos^2\phi\right)+\frac{2J}{\mu_0 M_s}\sin\theta\cos\theta\phi'^2.
\end{aligned}$$
Since we are considering pTDWs with uniform tilting attitude $\phi(z)\equiv\phi_{\mathrm{d}}$, the above equations become
\[Statics\_AandBeq0\_inpTDW\_v2\] $$\begin{aligned}
H_{\perp}(z)\sin\left(\phi_{\mathrm{d}}-\Phi_0\right)&=k_2 M_s\sin\theta\sin\phi_{\mathrm{d}}\cos\phi_{\mathrm{d}}, \\
\frac{2J}{\mu_0 M_s}\theta''&=-H_{\perp}(z)\cos\theta\cos\left(\Phi_0-\phi_{\mathrm{d}}\right)+M_s\sin\theta\cos\theta\left(k_1+k_2\cos^2\phi_{\mathrm{d}}\right).
\end{aligned}$$
Combining Eqs. (\[Statics\_AandBeq0\_in2domains\]a) and (\[Statics\_AandBeq0\_inpTDW\_v2\]a), one has
$$\label{Statics_Hperp_profile_v1}
H_{\perp}(z)=\frac{H_{\perp}^{\mathrm{d}}}{\sin\theta_{\mathrm{d}}}\cdot\sin\theta(z)=H_{\perp}^{\mathrm{max}}\cdot\sin\theta(z).$$
Putting it back into Eq. (\[Statics\_AandBeq0\_inpTDW\_v2\]b) and considering Eq. (\[Statics\_AandBeq0\_in2domains\]b), it turns out that
$$\label{Statics_theta_2ndDerivative}
\frac{2J}{\mu_0 M_s}\theta''=\frac{\sin\theta\cos\theta}{\sin\theta_{\mathrm{d}}}\left[M_s\sin\theta_{\mathrm{d}}(k_1+k_2\cos^2\phi_{\mathrm{d}})-H_{\perp}^{\mathrm{d}}\cos(\Phi_0-\phi_{\mathrm{d}})\right]=0,$$
which means $\theta(z)$ is linear in pTDW region,
$$\label{Statics_theta}
\theta(z)=C_1+C_2\cdot(z-z_0),$$
where $z_0$ is the pTDW center.
![Illustration of pTDW profile with arbitrary titling attitude $\phi_{\mathrm{d}}$, controllable width $\Delta$ and linear polar angle distribution from $\theta_{\mathrm{d}}$ to $\pi-\theta_{\mathrm{d}}$. The color chart indicates the variation of $M_z$ component along strip axis from $\cos\theta_{\mathrm{d}}$ in the left domain to $-\cos\theta_{\mathrm{d}}$ in the right domain.](Fig2.eps){width="10"}
It is worth noting that in nearly all the existing literature, the boundary between “domains" and “domain walls" in nanostrips is not clear (or abrupt) since $\theta(z)$ and $\phi(z)$ and their derivatives are all continuous there. However, Eqs. (\[Statics\_Hperp\_profile\_v1\]) to (\[Statics\_theta\]) provide us with an opportunity to realize a pTDW with clear boundaries and tunable width, as depicted in Figure 2. In summary, under the following TMF distribution
$$\label{Statics_Hperp_profile_v2}
H_{\perp}(z)=
\left\{
\begin{array}{cc}
H_{\perp}^{\mathrm{d}}, & z<z_0-\frac{\Delta}{2} \\
H_{\perp}^{\mathrm{max}}\cdot\sin\left\{\theta_{\mathrm{d}}+\frac{\pi-2\theta_{\mathrm{d}}}{\Delta}\left[z-\left(z_0-\frac{\Delta}{2}\right)\right]\right\}, & z_0-\frac{\Delta}{2}<z<z_0+\frac{\Delta}{2} \\
H_{\perp}^{\mathrm{d}}, & z>z_0+\frac{\Delta}{2} \\
\end{array}
\right., \quad \Phi_{\perp}(z)\equiv\Phi_0,$$
a pTDW with the following profile will emerge in the nanostrip,
$$\label{Statics_pTDW_profile}
\theta_0(z)=
\left\{
\begin{array}{cc}
\theta_{\mathrm{d}}, & z<z_0-\frac{\Delta}{2} \\
\theta_{\mathrm{d}}+\frac{\pi-2\theta_{\mathrm{d}}}{\Delta}\left[z-\left(z_0-\frac{\Delta}{2}\right)\right], & z_0-\frac{\Delta}{2}<z<z_0+\frac{\Delta}{2} \\
\pi-\theta_{\mathrm{d}}, & z>z_0+\frac{\Delta}{2} \\
\end{array}
\right., \quad \phi_0(z)\equiv\phi_{\mathrm{d}}.$$
Interestingly, the above pTDW has the following features: (i) an arbitrary tilting attitude $\phi_{\mathrm{d}}$, (ii) a fully controllable width $\Delta$, and (iii) two clear boundaries ($z_0\pm\Delta/2$) with the two adjacent domains. Note that the magnetization and the TMF at $z_0\pm\Delta/2$ are both continuous, but $\nabla_z\mathbf{m}$ is not. This inevitably leads to a finite jump in the exchange energy density there.
However, the pTDW has a critical width $\Delta_{\mathrm{c}}$ above which the entire strip has lower magnetic energy than the single-domain state under the UTMF with strength $H_{\perp}^{\mathrm{d}}$ and orientation $\Phi_0$. To see this, we integrate $\mathcal{E}_{\mathrm{tot}}^{\mathrm{pTDW}}-\mathcal{E}_{\mathrm{tot}}^{\mathrm{domain}}$ over the entire strip, obtaining
$$\label{E_difference}
\Delta E=\frac{k_1 \mu_0 M_s^2}{2}\cdot\left[\left(\Delta_0\right)^2(\pi-2\theta_{\mathrm{d}})^2\frac{1}{\Delta}-\frac{\sin 2\theta_{\mathrm{d}}+(\pi-2\theta_{\mathrm{d}})\cos 2\theta_{\mathrm{d}}}{2(\pi-2\theta_{\mathrm{d}})}\left(1+\frac{k_2}{k_1}\cos^2\phi_{\mathrm{d}}\right)\Delta\right].$$
Obviously, there exists a critical pTDW width
$$\label{Delta_c}
\Delta_{\mathrm{c}}\equiv\Delta_0\cdot\left(1+\frac{k_2}{k_1}\cos^2\phi_{\mathrm{d}}\right)^{-\frac{1}{2}}\cdot \kappa(\theta_{\mathrm{d}}),\quad \kappa(\theta_{\mathrm{d}}) \equiv \sqrt{\frac{2(\pi-2\theta_{\mathrm{d}})^3}{\sin 2\theta_{\mathrm{d}}+(\pi-2\theta_{\mathrm{d}})\cos 2\theta_{\mathrm{d}}}}.$$
As $H_{\perp}^{\mathrm{d}}\rightarrow H_{\perp}^{\mathrm{max}}$, by defining $\frac{H_{\perp}^{\mathrm{d}}}{H_{\perp}^{\mathrm{max}}}=1-\epsilon$ we have $\theta_{\mathrm{d}}=\arcsin\frac{H_{\perp}^{\mathrm{d}}}{H_{\perp}^{\mathrm{max}}}\approx\frac{\pi}{2}-\sqrt{2\epsilon}$, thus $\pi-2\theta_{\mathrm{d}}\approx 2\sqrt{2\epsilon}$. Expanding the denominator of $\kappa^2$ consistently to leading order (writing $x\equiv\pi-2\theta_{\mathrm{d}}$, it equals $\sin x-x\cos x\approx x^3/3$), one has $\sin 2\theta_{\mathrm{d}}+(\pi-2\theta_{\mathrm{d}})\cos 2\theta_{\mathrm{d}}\approx(\pi-2\theta_{\mathrm{d}})^3/3$, so that $\kappa^2\approx 2(\pi-2\theta_{\mathrm{d}})^3/[(\pi-2\theta_{\mathrm{d}})^3/3]=6$. We therefore get $\kappa\rightarrow\sqrt{6}$, which leads to a finite critical pTDW width $\Delta_{\mathrm{c}}$. As a result, we can always make the pTDW energetically preferred by setting $\Delta>\Delta_{\mathrm{c}}$ (thus $\Delta E<0$).
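The finiteness of $\Delta_{\mathrm{c}}$ in this limit is easy to check numerically. The sketch below (with $\Delta_0$, $k_2/k_1$ and $\phi_{\mathrm{d}}$ set to illustrative values) confirms that the closed-form $\Delta_{\mathrm{c}}$ of Eq. (\[Delta\_c\]) is the root of $\Delta E(\Delta)=0$ in Eq. (\[E\_difference\]) and that $\kappa(\theta_{\mathrm{d}})$ approaches $\sqrt{6}$ as $H_{\perp}^{\mathrm{d}}\rightarrow H_{\perp}^{\mathrm{max}}$.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative parameters; only ratios matter for this consistency check.
Delta0, k2_over_k1, phi_d = 1.0, 0.25, np.pi / 4
f = 1.0 + k2_over_k1 * np.cos(phi_d) ** 2

def kappa(theta_d):
    """kappa(theta_d) as defined in Eq. (Delta_c)."""
    a = np.pi - 2 * theta_d
    return np.sqrt(2 * a ** 3 / (np.sin(2 * theta_d) + a * np.cos(2 * theta_d)))

def dE(Delta, theta_d):
    """Bracketed part of Delta E in Eq. (E_difference); the positive prefactor is dropped."""
    a = np.pi - 2 * theta_d
    return (Delta0 ** 2 * a ** 2 / Delta
            - (np.sin(2 * theta_d) + a * np.cos(2 * theta_d)) / (2 * a) * f * Delta)

theta_d = np.pi / 6
print(Delta0 * kappa(theta_d) / np.sqrt(f))        # closed-form Delta_c
print(brentq(dE, 1e-3, 1e3, args=(theta_d,)))      # root of Delta E, coincides with the above

# kappa stays finite (-> sqrt(6) ~ 2.449) as H_perp^d -> H_perp^max
for eps in (1e-2, 1e-4, 1e-6):
    print(eps, kappa(np.arcsin(1.0 - eps)))
```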
### Stability analysis {#Stability_analysis}
To make the exploration of the statics complete and self-consistent, we need to perform a stability analysis on the pTDW profile in Eq. (\[Statics\_pTDW\_profile\]). For simplicity, the variations of $\theta(z)$ and $\phi(z)$ are treated separately. In the first step, $\phi(z)\equiv\phi_0$ is fixed (thus $\dot{\phi}\equiv 0$) and the polar angle is supposed to depart from its static profile as
$$\label{Stability_statics_theta_0}
\theta=\theta_0+\delta\theta.$$
Substituting it into Eq. (\[LLG\_scalar\_v2\]b) and noting that $\dot{\phi}_0=0$ and $\dot{\theta}_0=0$, one has
$$\label{Stability_statics_theta_1}
\sin\theta\dot{\phi}-\alpha\dot{\theta} =\gamma\mathcal{B} \Rightarrow \frac{\alpha}{\gamma}\frac{\partial(\delta\theta)}{\partial t}=-\mathcal{B}.$$
On the other hand, in the pTDW region $\theta_0$ satisfies Eq. (\[Statics\_AandBeq0\_inpTDW\_v2\]b). After performing a series expansion of $\mathcal{B}$ around $\theta_0$ and keeping terms up to linear order in $\delta\theta$, we finally get
$$\label{Stability_statics_theta_2}
\frac{\alpha}{\gamma}\frac{\partial(\delta\theta)}{\partial t}\approx \left[-M_s\cos^2\theta_0(k_1+k_2\cos^2\phi_0)+\frac{2J}{\mu_0 M_s}\frac{(\delta\theta)''}{\delta\theta} \right] \cdot\delta\theta.$$
Obviously, when
$$\label{Stability_statics_theta_3}
\left|\frac{(\delta\theta)''}{\delta\theta}\right|<\frac{\cos^2\theta_0(1+k_2\cos^2\phi_0/k_1)}{(\Delta_0)^2},$$
$\delta\theta$ fades out as time goes by. This implies that when the variation $\delta\theta$ is not too abrupt, $\theta_0$ is stable. In fact, most variations satisfy this demand. For example, both tiny global translations along the $z-$axis and slight local variations proportional to $z-z_0$ make $(\delta\theta)''\equiv 0$, thus assuring the stability around $\theta_0$.
In the second step, we keep $\theta(z)\equiv\theta_0$ and let the azimuthal angle vary as follows
$$\label{Stability_statics_phi_0}
\phi=\phi_0+\delta\phi.$$
Substituting it into Eq. (\[LLG\_scalar\_v2\]a) and recalling that $\dot{\theta}_0=0$ and $\dot{\phi}_0=0$, we have
$$\label{Stability_statics_phi_1}
\dot{\theta}+\alpha\sin\theta\dot{\phi} =\gamma\mathcal{A} \Rightarrow \frac{\alpha}{\gamma}\frac{\partial(\delta\phi)}{\partial t}=\frac{\mathcal{A}}{\sin\theta_0}.$$
Remember that in the pTDW region $\theta_0$ and $\phi_0$ satisfy Eq. (\[Statics\_AandBeq0\_inpTDW\_v2\]a). By performing a series expansion of $\mathcal{A}$ about $\phi_0$ and keeping at most linear terms in $\delta\phi$, one has
$$\label{Stability_statics_phi_2}
\frac{\alpha}{\gamma}\frac{\partial(\delta\phi)}{\partial t}\approx \left[-M_s(k_1+k_2\sin^2\phi_0)+\frac{2J}{\mu_0 M_s}\frac{2\cot\theta_0\cdot\theta'_0\cdot(\delta\phi)'+(\delta\phi)''}{\delta\phi} \right] \cdot\delta\phi.$$
Similarly, if $\delta\phi$ does not vary too abruptly, that is
$$\label{Stability_statics_phi_3}
\left|\frac{2\cot\theta_0\cdot\theta'_0\cdot(\delta\phi)'+(\delta\phi)''}{\delta\phi}\right|<\frac{1+k_2\sin^2\phi_0/k_1}{(\Delta_0)^2},$$
the pTDW is stable around $\phi_0$, which confirms the feasibility of engineering pTDWs in magnetic nanostrips. In particular, tiny global rotations around the $z-$axis or slight local twistings proportional to $z-z_0$ will not drive the pTDW away from its static profile shown in Eq. (\[Statics\_pTDW\_profile\]).
### Numerical confirmations
To confirm the above theoretical analysis, we perform numerical simulations using the OOMMF micromagnetics package[@OOMMF]. In our simulations, the nanostrip is 5 nm thick, 100 nm wide and 1 $\mu$m long, which is quite common in real experiments. The three average demagnetization factors are: $D_x=0.00661366$, $D_y=0.07002950$ and $D_z=0.92335684$[@Aharoni_JAP_1998]. Magnetic parameters are as follows: $M_s=500$ kA/m, $J=40\times 10^{-12}$ J/m, $K_1=\mu_0 k_1^0 M_s^2/2=200$ kJ/m$^3$, $K_2=\mu_0 k_2^0 M_s^2/2=50$ kJ/m$^3$ and $\alpha=0.1$ to speed up the simulation. Throughout the entire calculation, the strip is discretized into $5\times5\times5$ nm$^3$ cells and all magnetic intensive quantities evaluated at each cell are the average of their continuous counterparts over the cell volume. In all figures, $z_0$ denotes the wall center, which is the algebraic average of the central positions ($\theta(z)=\pi/2$) of each layer (row of cells with a certain $y$-coordinate). Finally, the external TMF at each cell is the value of Eq. (\[Statics\_Hperp\_profile\_v2\]) at the cell center.
We aim to realize a pTDW with tilting attitude $\phi_{\mathrm{d}}\equiv\pi/4$ and boundary condition $\theta_{\mathrm{d}}\equiv\pi/6$ under the TMF profile in Eq. (\[Statics\_Hperp\_profile\_v2\]). To do this, simple algebra first gives $\Delta_0=13.80$ nm (14.14 nm) when the demagnetization is (is not) considered. The critical pTDW width is then $\Delta_{\mathrm{c}}=35.66$ nm (41.31 nm) for each case. Therefore we set the pTDW width to $\Delta=100$ nm to assure energetic preference. We have performed simulations for both cases, with the magnetostatic effect included or excluded. In each case, a standard head-to-head Néel wall with a width of 20 nm is generated at the strip center beforehand. After it relaxes to its stable profile, a time-independent TMF described by Eq. (\[Statics\_Hperp\_profile\_v2\]) is exerted onto each calculation cell of this strip. The magnetization texture then begins to evolve, accompanied by a decrease of the total magnetic energy due to the Gilbert damping process. We set the convergence criterion as $|\mathbf{m}\times\mathbf{H}_{\mathrm{tot}}|/M_s<10^{-7}$, which is accurate enough. The results are plotted in Figure 3(a) and 3(b), respectively.
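The quoted no-demagnetization values can be reproduced with a few lines from the listed material parameters. Here the usual wall-width parameter $\Delta_0^2=2J/(\mu_0 k_1^0 M_s^2)$ is assumed, which with $K_1=\mu_0 k_1^0 M_s^2/2$ reduces to $\Delta_0=\sqrt{J/K_1}$; the demagnetization-corrected case would require the modified $k_1$ and $k_2$ and is not attempted here.

```python
import numpy as np

J, K1, K2 = 40e-12, 200e3, 50e3           # J/m and J/m^3, as quoted in the text
theta_d, phi_d = np.pi / 6, np.pi / 4

Delta0 = np.sqrt(J / K1)                   # assumed definition, see text above
a = np.pi - 2 * theta_d
kappa = np.sqrt(2 * a ** 3 / (np.sin(2 * theta_d) + a * np.cos(2 * theta_d)))
Delta_c = Delta0 * kappa / np.sqrt(1.0 + (K2 / K1) * np.cos(phi_d) ** 2)

print(Delta0 * 1e9)    # ~14.14 nm, the quoted no-demagnetization value of Delta_0
print(Delta_c * 1e9)   # ~41.3 nm, matching the quoted critical width of 41.31 nm
```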
In the simpler case, the pTDW profile under the TMF distribution described in Eq. (\[Statics\_Hperp\_profile\_v2\]) with $\phi_{\mathrm{d}}\equiv\pi/4$, $\theta_{\mathrm{d}}\equiv\pi/6$ and $\Delta=100$ nm in the absence of demagnetization is plotted in Figure 3(a). The solid black and red lines are the analytical polar and azimuthal distributions from Eq. (\[Statics\_pTDW\_profile\]), respectively. The open circles are numerical data from the OOMMF simulation. Clearly the planar nature of the wall is reproduced very well. For the polar angle, the linear behavior near the pTDW center is unambiguous, while the discontinuity of the polar angle derivative at the pTDW borders ($z_0\pm$ 50 nm) is weakened due to the inevitable “discretized sampling" of the TMF at calculation cells during the numerical simulations. In summary, one may clearly see that the numerics and analytics fit very well.
![Comparisons between analytical (solid lines) and numerical (hollow symbols) pTDW profiles under TMF in Eq. (\[Statics\_Hperp\_profile\_v2\]) with $\phi_{\mathrm{d}}\equiv\pi/4$, $\theta_{\mathrm{d}}\equiv\pi/6$ and $\Delta=100$ nm: (a) without demagnetization, (b) with demagnetization. The magnetic parameters are as follows: $M_s=500$ kA/m, $J=40\times 10^{-12}$ J/m, $K_1=\mu_0 k_1^0 M_s^2/2=200$ kJ/m$^3$, $K_2=\mu_0 k_2^0 M_s^2/2=50$ kJ/m$^3$ and $\alpha=0.1$.](Fig3.eps){width="10"}
Then we switch on the magnetostatic interaction (demagnetization). Due to the complicated dipole-dipole interaction, the magnetization orientation in the strip cross section differs a little (not too much, since the strip is thin enough). We then calculate the polar and azimuthal angles for three typical layers (rows of cells with the same $y-$coordinate): top, central and bottom. The resulting data are depicted in Figure 3(b) by different discrete hollow symbols: crosses, squares and triangles. It turns out that they overlap each other nicely and match the analytical profiles quite well. This not only confirms the validity of the TMF in Eq. (\[Statics\_Hperp\_profile\_v2\]) for realizing the pTDW in Eq. (\[Statics\_pTDW\_profile\]) under more complex situations, but also shows once again the feasibility of simplifying the magnetostatic energy by local quadratic terms in thin enough nanostrips.
Axial-field-driven dynamics {#Dynamics}
---------------------------
From the roadmap of field-driven DW dynamics[@jlu_EPL_2009], an axial magnetic field is crucial for driving pTDWs along the strip axis, thus realizing bit switching in magnetic nanodevices based on them. We focus on the traveling-wave mode of pTDWs, in which their profile is generalized directly from Eq. (\[Statics\_pTDW\_profile\]) by allowing $z_0$ to depend on time while leaving the rest unchanged. To preserve the pTDW profile, the TMF distribution is suggested to take the same form as in Eq. (\[Statics\_Hperp\_profile\_v2\]) but with the generalized $z_0$, which means that the TMF moves along with the pTDW at the same velocity. In this section, the dynamics of these pTDWs is systematically investigated under two strategies: the 1D collective coordinate model (1D-CCM)[@Tatara_JPDAP_2011] and the 1D asymptotic expansion method (1D-AEM)[@jlu_PRB_2016; @limei_srep_2017; @Goussev_PRB_2013; @Goussev_Royal_2013]. As will be shown below, they provide the same result, which confirms the validity of both approaches.
### 1D-CCM {#1D-CCM}
Historically, the 1D-CCM has played an important role in the exploration of TDW dynamics for both field-driven and current-driven cases. Generally it treats the center, the tilting attitude and the width of a DW as independent collective variables of the system Lagrangian or of the resulting dynamical equations (i.e. the LLG equation). The classical Walker ansatz (which is indeed a pTDW profile) in the absence of any TMFs is the first example and turns out to be a rigorous solution of the LLG equation. In the presence of UTMFs, generally no rigorous solutions exist due to the mismatch between symmetries in different energy terms. In most theoretical works, pTDWs with quasi-Walker profiles are often proposed to mimic the real complicated magnetization distribution. However, in Section \[Statics\] it has been shown that the Walker ansatz is not the only profile that a pTDW can take. In this subsection, we derive the pTDW velocity under the comoving TMF profile in the framework of the 1D-CCM.
Before the main calculation, we want to point out that, to preserve the planar feature of these walls, the strength of the axial driving field should not be too high. To see this, we revisit the boundary condition in the two domains in the presence of the axial driving field $H_1$. Note that although in the pTDW region $\mathbf{H}_{\mathrm{eff}}$ is not parallel to $\mathbf{m}$ (otherwise the wall would not move), in both domains it is, since the magnetization there does not vary with time; hence $\mathcal{A}=\mathcal{B}=0$ therein. After redefining the polar and azimuthal angles of the magnetization in the left domain as $\tilde{\theta}_{\mathrm{d}}$ and $\tilde{\phi}_{\mathrm{d}}$ ($\pi-\tilde{\theta}_{\mathrm{d}}$ and $\tilde{\phi}_{\mathrm{d}}$ in the right domain), one has
\[Dynamics\_AandBeq0\_in2domains\] $$\begin{aligned}
0&=H_{\perp}^{\mathrm{d}}\sin(\Phi_0-\tilde{\phi}_{\mathrm{d}})+k_2 M_s\sin\tilde{\theta}_{\mathrm{d}}\sin\tilde{\phi}_{\mathrm{d}}\cos\tilde{\phi}_{\mathrm{d}}, \\
0&=H_1\sin\tilde{\theta}_{\mathrm{d}}-H_{\perp}^{\mathrm{d}}\cos\tilde{\theta}_{\mathrm{d}}\cos(\Phi_0-\tilde{\phi}_{\mathrm{d}})+M_s\sin\tilde{\theta}_{\mathrm{d}}\cos\tilde{\theta}_{\mathrm{d}}(k_1+k_2\cos^2\tilde{\phi}_{\mathrm{d}}),
\end{aligned}$$
Obviously, only when $H_1\ll \min[H_{\perp}^{\mathrm{d}},M_s]$ does one have $\tilde{\theta}_{\mathrm{d}}\approx\theta_{\mathrm{d}}$ and $\tilde{\phi}_{\mathrm{d}}\approx\phi_{\mathrm{d}}$. Then, after generalizing the collective coordinate $z_0$ from a constant to a time-dependent quantity, the pTDW in Eq. (\[Statics\_pTDW\_profile\]) is expected to move along the strip axis under the comoving TMF in Eq. (\[Statics\_Hperp\_profile\_v2\]) with a velocity equal to $\mathrm{d}z_0/\mathrm{d}t$.
To determine the wall velocity in the traveling-wave mode, we take the time derivative of the pTDW profile, which gives
$$\label{Dynamics_pTDW_profile_time_derivative}
\dot{\theta}(z,t)=
\left\{
\begin{array}{cc}
0, & z<z_0-\frac{\Delta}{2} \\
-\frac{\pi-2\theta_{\mathrm{d}}}{\Delta}\cdot\frac{\mathrm{d}z_0}{\mathrm{d}t}, & z_0-\frac{\Delta}{2}<z<z_0+\frac{\Delta}{2} \\
0, & z>z_0+\frac{\Delta}{2} \\
\end{array}
\right., \quad \dot{\phi}(z,t)\equiv 0.$$
From Eq. (\[LLG\_scalar\_v1\]b), the traveling-mode condition $\dot{\phi}(z,t)\equiv 0$ leads to $\mathcal{A}=-\mathcal{B}/\alpha$. Putting this back into Eq. (\[LLG\_scalar\_v1\]a), it turns out that $-\alpha\dot{\theta}(z,t)/\gamma=\mathcal{B}$. Substituting Eq. (\[Dynamics\_pTDW\_profile\_time\_derivative\]) into it, one has
$$\label{Dynamics_velocity_v1}
\frac{\alpha}{\gamma}\cdot\frac{\pi-2\theta_{\mathrm{d}}}{\Delta}\cdot\frac{\mathrm{d}z_0}{\mathrm{d}t}= H_1\sin\theta-H_{\perp}(z,t)\cos\theta\cos\left(\Phi_0-\phi\right)+M_s\sin\theta\cos\theta\left(k_1+k_2\cos^2\phi\right)-\frac{2J}{\mu_0 M_s}\theta''.$$
Note that the generalized TMF configuration and the resulting pTDW profile still satisfy Eq. (\[Statics\_AandBeq0\_inpTDW\_v2\]b), which eliminates the last three terms on the right-hand side of the above equation. Then, after integrating Eq. (\[Dynamics\_velocity\_v1\]) over the pTDW region, $z\in\left(z_0-\frac{\Delta}{2},z_0+\frac{\Delta}{2}\right)$, and noting that $\int_{z_0-\Delta/2}^{z_0+\Delta/2}1\mathrm{d}z=\Delta$ and $\int_{z_0-\Delta/2}^{z_0+\Delta/2}\sin\theta\mathrm{d}z=2\Delta\cos\theta_{\mathrm{d}}/(\pi-2\theta_{\mathrm{d}})$, we finally get
$$\label{Dynamics_velocity_v2}
V_{\mathrm{a}}\equiv\frac{\mathrm{d}z_0}{\mathrm{d}t}=\frac{\gamma\Delta}{\alpha}\cdot\omega(\theta_{\mathrm{d}})\cdot H_1,\quad \omega(\theta_{\mathrm{d}})\equiv\frac{2\cos\theta_{\mathrm{d}}}{(\pi-2\theta_{\mathrm{d}})^2}.$$
Next we examine the asymptotic behavior of the boosting factor $\omega(\theta_{\mathrm{d}})$ when $H_{\perp}^{\mathrm{d}}\rightarrow H_{\perp}^{\mathrm{max}}$. Suppose again $\frac{H_{\perp}^{\mathrm{d}}}{H_{\perp}^{\mathrm{max}}}=1-\epsilon$, then $\cos\theta_{\mathrm{d}}\approx\sqrt{2\epsilon}$ and $\pi-2\theta_{\mathrm{d}}\approx 2\sqrt{2\epsilon}$. Putting them back into Eq. (\[Dynamics\_velocity\_v2\]), we finally have
$$\label{Dynamics_boosting_factor}
\omega(\theta_{\mathrm{d}})\approx\frac{1}{2\sqrt{2\epsilon}}\rightarrow +\infty,$$
as $\epsilon\rightarrow 0^+$. This confirms the boosting effect of these TMFs on the axial propagation of pTDWs.
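A direct numerical evaluation of $\omega(\theta_{\mathrm{d}})$ illustrates both its magnitude for the boundary condition used earlier ($\theta_{\mathrm{d}}=\pi/6$) and its divergence in the limit $H_{\perp}^{\mathrm{d}}\rightarrow H_{\perp}^{\mathrm{max}}$; the short sketch below compares the exact expression with the asymptote $1/(2\sqrt{2\epsilon})$.

```python
import numpy as np

def omega(theta_d):
    """Boosting factor of Eq. (Dynamics_velocity_v2)."""
    return 2.0 * np.cos(theta_d) / (np.pi - 2.0 * theta_d) ** 2

print(omega(np.pi / 6))                       # ~0.39 for theta_d = pi/6

# Divergence as H_perp^d -> H_perp^max: exact value vs. the asymptote 1/(2*sqrt(2*eps))
for eps in (1e-2, 1e-4, 1e-6):
    theta_d = np.arcsin(1.0 - eps)
    print(eps, omega(theta_d), 1.0 / (2.0 * np.sqrt(2.0 * eps)))
```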
Finally, the stability analysis of the dynamical pTDW profile under comoving TMFs takes the same form as in the static case and is thus omitted to save space. It turns out that for profile variations which are not too abrupt, the traveling-wave mode of the pTDW is also stable. This is really important for potential commercial applications of these pTDWs.
### 1D-AEM {#1D-AEM}
Next we recalculate the pTDW velocity in the traveling-wave mode with the help of the 1D-AEM. In this approach, the dynamical behavior of pTDWs is viewed as the response of their static profiles to external stimuli. Therefore it is a manifestation of the linear-response framework in nanomagnetism and should be suitable for exploring the traveling-wave mode of pTDWs under small axial driving fields. Note that the TMF distribution in Eq. (\[Statics\_Hperp\_profile\_v2\]) indicates that at the pTDW center the TMF strength reaches $H_{\perp}^{\mathrm{max}}$, which is finite; thus we rescale the axial driving field and the pTDW axial velocity simultaneously,
$$\label{1D_AEM_scaling}
H_1=\epsilon h_1,\quad V_{\mathrm{b}}=\epsilon v_{\mathrm{b}},$$
in which $\epsilon$ is a dimensionless infinitesimal. This means that a slight external stimulus ($H_1$) will lead to a weak response of the system, that is, a slow velocity ($V_{\mathrm{b}}$) of the pTDW axial motion. We concentrate on the traveling-wave mode of pTDWs and thus define the traveling coordinate
$$\label{1D_AEM_finiteTMF_xi}
\xi\equiv z-V_{\mathrm{b}} t=z-\epsilon v_{\mathrm{b}} t.$$
Meanwhile the TMF distribution takes the same form as in Eq. (\[Statics\_Hperp\_profile\_v2\]), except for the generalization $z\rightarrow \xi$. As a result, the real pTDW solution can be expanded as follows,
$$\label{1D_AEM_series_expansion}
\chi(z,t) = \chi_0(\xi)+\epsilon\chi_1(\xi)+O(\epsilon^2),\quad \chi=\theta(\phi),$$
where $\theta_0(\phi_0)$ denotes the zeroth-order solution and should be the static pTDW profile (as will be seen later), while $\theta_1$ and $\phi_1$ are the coefficients of the first-order corrections to the zeroth-order solutions when $H_1$ is present. Putting them into the LLG equation (\[LLG\_scalar\_v2\]) and noting that $\partial \chi/\partial t=(-\epsilon v_{\mathrm{b}})\cdot\partial \chi/\partial \xi$, we have
\[1D\_AEM\_LLG\_scalar\_expansion\] $$\begin{aligned}
(-\epsilon v_{\mathrm{b}})\cdot\left(\frac{\partial \theta_0}{\partial \xi}+\alpha\sin\theta_0\frac{\partial \phi_0}{\partial \xi}\right)+ O(\epsilon^2) &=\gamma \mathcal{A}_0 +\gamma \mathcal{A}_1\cdot\epsilon + O(\epsilon^2), \\
(-\epsilon v_{\mathrm{b}})\cdot\left(\sin\theta_0\frac{\partial \phi_0}{\partial \xi}-\alpha\frac{\partial \theta_0}{\partial \xi}\right)+ O(\epsilon^2) &=\gamma \mathcal{B}_0 +\gamma \mathcal{B}_1\cdot\epsilon + O(\epsilon^2),
\end{aligned}$$
with
\[1D\_AEM\_LLG\_scalar\_expansion\_A0B0\] $$\begin{aligned}
\mathcal{A}_0&=H_{\perp}(\xi)\sin(\Phi_0-\phi_0)+k_2 M_s \sin\theta_0\sin\phi_0\cos\phi_0+\frac{2J}{\mu_0 M_s}\left(2\cos\theta_0\frac{\partial\theta_0}{\partial\xi}\frac{\partial\phi_0}{\partial\xi}+\sin\theta_0\frac{\partial^2\phi_0}{\partial\xi^2}\right), \\
\mathcal{B}_0&=-H_{\perp}(\xi)\cos\theta_0\cos(\Phi_0-\phi_0)-\frac{2J}{\mu_0 M_s}\frac{\partial^2\theta_0}{\partial\xi^2}+k_1 M_s \sin\theta_0\cos\theta_0\left[1+\frac{k_2}{k_1}\cos^2\phi_0+\Delta_0^2\left(\frac{\partial\phi_0}{\partial\xi}\right)^2\right],
\end{aligned}$$
and
$$\begin{aligned}
\label{1D_AEM_LLG_scalar_expansion_A1}
\mathcal{A}_1&=&\mathbf{P}\theta_1+\mathbf{Q}\phi_1, \nonumber \\
\mathbf{P}&=&k_2 M_s \cos\theta_0\sin\phi_0\cos\phi_0+\frac{2J}{\mu_0 M_s}\left[2\frac{\partial\phi_0}{\partial\xi}\left(\cos\theta_0\frac{\partial}{\partial\xi}-\sin\theta_0\frac{\partial\theta_0}{\partial\xi}\right)+ \cos\theta_0\frac{\partial^2\phi_0}{\partial\xi^2}\right], \nonumber \\
\mathbf{Q}&=&-H_{\perp}(\xi)\cos(\Phi_0-\phi_0)+k_2 M_s\sin\theta_0\cos2\phi_0+\frac{2J}{\mu_0 M_s}\left(2\cos\theta_0\frac{\partial\theta_0}{\partial\xi}\frac{\partial}{\partial\xi}+\sin\theta_0\frac{\partial^2}{\partial\xi^2}\right),\end{aligned}$$
as well as
$$\begin{aligned}
\label{1D_AEM_LLG_scalar_expansion_B1}
\mathcal{B}_1&=&h_1\sin\theta_0+\mathbf{R}\theta_1+\mathbf{S}\phi_1, \nonumber \\
\mathbf{R}&=&H_{\perp}(\xi)\sin\theta_0\cos(\Phi_0-\phi_0)-\frac{2J}{\mu_0 M_s}\frac{\partial^2}{\partial\xi^2}+k_1 M_s \cos2\theta_0\left[1+\frac{k_2}{k_1}\cos^2\phi_0+\Delta_0^2\left(\frac{\partial\phi_0}{\partial\xi}\right)^2\right], \nonumber \\
\mathbf{S}&=&-H_{\perp}(\xi)\cos\theta_0\sin(\Phi_0-\phi_0)+k_1 M_s \sin 2\theta_0\left(\Delta_0^2\frac{\partial\phi_0}{\partial\xi}\frac{\partial}{\partial\xi}-\frac{k_2}{k_1}\sin\phi_0\cos\phi_0\right).\end{aligned}$$
At the zeroth order of $\epsilon$, Eq. (\[1D\_AEM\_LLG\_scalar\_expansion\]) provides $\mathcal{A}_0=\mathcal{B}_0=0$. Combining this with the definitions in Eq. (\[1D\_AEM\_LLG\_scalar\_expansion\_A0B0\]), its solution is just the pTDW profile in Eq. (\[Statics\_pTDW\_profile\]) except for the substitution $z\rightarrow \xi$. This is not surprising, since the zeroth-order solution describes the response of the system under “zero" stimulus, which is just the static case.
However, to obtain the pTDW velocity we need to proceed to the first order of $\epsilon$. In particular, we have to deal with $\mathbf{R}$ and $\mathbf{S}$ to get the dependence of the velocity ($v_{\mathrm{b}}$) on the axial driving field ($h_1$). By partially differentiating $\mathcal{B}_0=0$ with respect to $\phi_0$, $\mathbf{S}$ can be simplified to
$$\label{1D_AEM_S_simplified}
\mathbf{S}=\Delta_0^2 k_1 M_s \sin 2\theta_0\left(\frac{\partial\phi_0}{\partial\xi}\frac{\partial}{\partial\xi}-\frac{\partial^2\phi_0}{\partial\xi^2}\right)\equiv 0$$
due to the planar nature of the walls. On the other hand, the partial derivative of $\mathcal{B}_0=0$ with respect to $\theta_0$ helps to simplify $\mathbf{R}$ to
$$\label{1D_AEM_R_simplified_to_L}
\mathbf{R}=\frac{2J}{\mu_0 M_s}\left[-\frac{\partial^2}{\partial\xi^2}+\left(\frac{\partial\theta_0}{\partial\xi}\right)^{-1}\left(\frac{\partial^3\theta_0}{\partial\xi^3}\right)\right]\equiv \mathbf{L},$$
which is the 1D self-adjoint Schrödinger operator that appeared in previous works[@jlu_PRB_2016; @limei_srep_2017; @Goussev_PRB_2013; @Goussev_Royal_2013]. Then Eq. (\[1D\_AEM\_LLG\_scalar\_expansion\_B1\]) rigorously becomes
$$\label{1D_AEM_L_theta1}
\mathbf{L}\theta_1=-h_1\sin\theta_0+\left(-\frac{v_{\mathrm{b}}}{\gamma}\right)\cdot\left(-\alpha\frac{\partial \theta_0}{\partial \xi}\right).$$
Again, the “Fredholm alternative" requires the right-hand side of the above equation to be orthogonal to the kernel of $\mathbf{L}$ (the subspace spanned by $\partial\theta_0/\partial\xi$) for the existence of a solution $\theta_1$, where the inner product in the Sobolev space is defined as $\langle f(\xi),g(\xi) \rangle\equiv\int_{\xi=-\infty}^{\xi=+\infty}f(\xi)\cdot g(\xi)\mathrm{d}\xi$. Noting that $\langle\frac{\partial\theta_0}{\partial\xi},\sin\theta_0 \rangle=2\cos\theta_{\mathrm{d}}$ and $\langle\frac{\partial\theta_0}{\partial\xi},\frac{\partial\theta_0}{\partial\xi} \rangle=(\pi-2\theta_{\mathrm{d}})^2/\Delta$, we finally get
$$\label{1D_AEM_velocity}
V_{\mathrm{b}}\equiv\frac{\mathrm{d}z_0}{\mathrm{d}t}=\frac{\gamma\Delta}{\alpha}\cdot\frac{2\cos\theta_{\mathrm{d}}}{(\pi-2\theta_{\mathrm{d}})^2}\cdot H_1,$$
which is the same as Eq. (\[Dynamics\_velocity\_v2\]) from 1D-CCM.
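The two inner products used in the solvability condition can be verified directly by quadrature on the linear wall profile; the following sketch, with illustrative values of $\theta_{\mathrm{d}}$ and $\Delta$, reproduces $2\cos\theta_{\mathrm{d}}$ and $(\pi-2\theta_{\mathrm{d}})^2/\Delta$.

```python
import numpy as np

theta_d, Delta = np.pi / 6, 100e-9      # illustrative values

# Restrict to the pTDW region: outside it theta_0' = 0 and nothing contributes.
xi = np.linspace(-Delta / 2, Delta / 2, 200001)
dx = xi[1] - xi[0]
theta0  = np.pi / 2 + (np.pi - 2 * theta_d) * xi / Delta
dtheta0 = np.full_like(xi, (np.pi - 2 * theta_d) / Delta)

ip1 = np.sum(dtheta0 * np.sin(theta0)) * dx   # <theta0', sin(theta0)>
ip2 = np.sum(dtheta0 * dtheta0) * dx          # <theta0', theta0'>

print(ip1, 2 * np.cos(theta_d))                  # both ~1.732 = 2*cos(pi/6)
print(ip2, (np.pi - 2 * theta_d) ** 2 / Delta)   # both ~(pi - 2*theta_d)^2 / Delta
```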
Discussion
==========
In Section \[Dynamics\] we pointed out that under axial driving fields the pTDW velocity can be considerably increased due to the divergent behavior of the boosting factor $\omega(\theta_{\mathrm{d}})$ when $H_{\perp}\rightarrow H_{\perp}^{\mathrm{max}}$ (see Eq. (\[Dynamics\_boosting\_factor\])). Interestingly, the pTDW width $\Delta$ also enters the velocity as a multiplicative factor. From Eq. (\[Delta\_c\]) one has a finite critical pTDW width even when $H_{\perp}\rightarrow H_{\perp}^{\mathrm{max}}$. Therefore, to further increase the pTDW velocity, broadening the pTDW width should also be effective.
Second, for realizing pTDWs the “orientation-fixed" strategy proposed here has several advantages compared with the “amplitude-fixed" one introduced before[@limei_srep_2017]: (i) the wall width can be freely tuned; (ii) the rigorous pTDW profile and the corresponding TMF distribution can be explicitly written out; (iii) the asymptotic behavior of the boosting factor in the axial-field-driven case can be analytically explored; (iv) most importantly, the “orientation-fixed" strategy is much easier to realize in real experiments.
For example, the following procedure can be applied to realize a pTDW with center position $z_0$, width $\Delta$, tilting attitude $\phi_{\mathrm{d}}$ and boundary condition $\theta_{\mathrm{d}}$ ($\pi-\theta_{\mathrm{d}}$). First, a short and strong enough field or current pulse is exerted to induce a wall around $z_0$, which after a transient process finally becomes static in the easy plane with the Walker profile. Then a series of ferromagnetic scanning tunneling microscope (STM) tips are placed along the wire axis with fixed tilting attitude $\Phi_0$ to produce a series of localized TMF pulses. By arranging these tips with proper spacing and distance to the strip, the envelope of these pulses is tuned to the TMF profile in Eq. (\[Statics\_Hperp\_profile\_v2\]). The resulting static wall profile is the pTDW shown in Eq. (\[Statics\_pTDW\_profile\]). When driven by an axial field, since the transient process prior to the traveling-wave mode is short (picoseconds), the STM tips can be arranged to move at the velocity in Eq. (\[Dynamics\_velocity\_v2\]) so as to synchronize with the pTDW.
Finally, our “orientation-fixed" strategy can be generalized to cases where the pTDW motion is induced by spin-polarized currents, spin waves or temperature gradients, etc. Similar discussions can be performed to realize these pTDWs with clear boundaries. Magnetic nanostrips bearing these walls would serve as a proving ground for developing new-generation nanodevices with fascinating applications.
Conclusions
===========
In this work, the “orientation-fixed" TMF profiles are adopted to realize pTDWs with arbitrary tilting attitude in biaxial magnetic nanostrips. After solving the LLG equation, unlike the classical Walker ansatz, we obtain a pTDW with clear boundaries with the adjacent domains and a linear polar-angle distribution inside the wall region. More interestingly, the wall width can be freely tuned for specific usages. With the TMF profile synchronized with the wall, these pTDWs can propagate along the strip axis with considerably high velocity (well above that from the Walker ansatz) when driven by axial magnetic fields. These results should provide new insights for developing fascinating new-generation magnetic nanodevices based on DW propagation in nanostrips.
[999]{} Leeuw, F.H.D.; Doel, R.V.D.; Enz, U. Dynamic properties of magnetic domain walls and magnetic bubbles. [*Reports on Progress in Physics*]{} [**1980**]{}, [*43*]{}, 689, doi:10.1088/0034-4885/43/6/001. Tserkovnyak, Y.; Brataas, A.; Bauer, G.E.W.; Halperin, B.I. Nonlocal magnetization dynamics in ferromagnetic heterostructures. [*Reviews of Modern Physics*]{} [**2005**]{}, [*77*]{}, 1375-1421, doi:10.1103/RevModPhys.77.1375.
Allwood, D.A.; Xiong, G.; Faulkner, C.C.; Atkinson, D.; Petit, D.; Cowburn, R.P. Magnetic domain-wall logic. [*Science*]{} [**2005**]{}, [*309*]{}, 1688-1692, doi:10.1126/science.1108813. Parkin, S.S.P.; Hayashi, M.; Thomas, L. Magnetic domain-wall racetrack memory. [*Science*]{} [**2008**]{}, [*320*]{}, 190-194, doi:10.1126/science.1145799. Hayashi, M.; Thomas, L.; Moriya, R.; Rettner, C.; Parkin, S.S.P. Current-controlled magnetic domain-wall nanowire shift register. [*Science*]{} [**2008**]{}, [*320*]{}, 209-211, doi:10.1126/science.1154587. Franken, J.H.; Swagten, H.J.M.; Koopmans, B. Shift registers based on magnetic domain wall ratchets with perpendicular anisotropy. [*Nature Nanotechnology*]{} [**2012**]{}, [*7*]{}, 499, doi:10.1038/nnano.2012.111. Münchenberger, J.; Reiss, G.; Thomas, A. A memristor based on current-induced domain-wall motion in a nanostructured giant magnetoresistance device. [*Journal of Applied Physics*]{} [**2012**]{}, [*111*]{}, 07D303, doi:10.1063/1.3671438. Parkin, S.; Yang, S.-H. Memory on the racetrack. [*Nature Nanotechnology*]{} [**2015**]{}, [*10*]{}, 195, doi:10.1038/nnano.2015.41.
McMichael, R.D.; Donahue, M.J. Head to head domain wall structures in thin magnetic strips. [*IEEE Transactions on Magnetics*]{} [**1997**]{}, [*33*]{}, 4167-4169, doi:10.1109/20.619698. Nakatani, Y.; Thiaville, A.; Miltat, J. Head-to-head domain walls in soft nano-strips: a refined phase diagram. [*Journal of Magnetism and Magnetic Materials*]{} [**2005**]{}, [*290-291*]{}, 750-753, doi:10.1016/j.jmmm.2004.11.355.
Schryer, N.L.; Walker, L.R. The motion of 180$^{\mathrm{o}}$ domain walls in uniform dc magnetic fields. [*Journal of Applied Physics*]{} [**1974**]{}, [*45*]{}, 5406-5421, doi:10.1063/1.1663252. Ono, T.; Miyajima, H.; Shigeto, K.; Mibu, K.; Hosoito, N.; Shinjo, T. Propagation of a magnetic domain wall in a submicrometer magnetic wire. [*Science*]{} [**1999**]{}, [*284*]{}, 468, doi:10.1126/science.284.5413.468. Atkinson, D.; Allwood, D.A.; Xiong, G.; Cooke, M.D.; Faulkner, C.C.; Cowburn, R.P. Magnetic domain-wall dynamics in a submicrometre ferromagnetic structure. [*Nature Materials*]{} [**2003**]{}, [*2*]{}, 85, doi:10.1038/nmat803. Beach, G.S.D.; Nistor, C.; Knutson, C.; Tsoi, M.; Erskine, J.L. Dynamics of field-driven domain-wall propagation in ferromagnetic nanowires. [*Nature Materials*]{} [**2005**]{}, [*4*]{}, 741, doi:10.1038/nmat1477. Tretiakov, O.A.; Clarke, D.; Chern, G.-W.; Bazaliy, Y.B.; Tchernyshyov, O. Dynamics of domain walls in magnetic nanostrips. [*Physical Review Letters*]{} [**2008**]{}, [*100*]{}, 127204, doi:10.1103/PhysRevLett.100.127204. Yang, J.; Nistor, C.; Beach, G.S.D.; Erskine, J.L. Magnetic domain-wall velocity oscillations in permalloy nanowires. [*Physical Review B*]{} [**2008**]{}, [*77*]{}, 014413, doi:10.1103/PhysRevB.77.014413. Wang, X.R.; Yan, P.; Lu, J. High-field domain wall propagation velocity in magnetic nanowires. [*Europhysics Letters*]{} [**2009**]{}, [*86*]{}, 67001, doi:10.1209/0295-5075/86/67001. Wang, X.R.; Yan, P.; Lu, J.; He, C. Magnetic field driven domain-wall propagation in magnetic nanowires. [*Annals of Physics*]{} [**2009**]{}, [*324*]{}, 1815-1820, doi:10.1016/j.aop.2009.05.004. Sun, Z.Z.; Schliemann, J. Fast domain wall propagation under an optimal field pulse in magnetic nanowires. [*Physical Review Letters*]{} [**2010**]{}, [*104*]{}, 037206, doi:10.1103/PhysRevLett.104.037206.
Shibata, J.; Tatara, G.; Kohno, H. A brief review of field- and current-driven domain-wall motion. [*Journal of Physics D: Applied Physics*]{} [**2011**]{}, [*44*]{}, 384004, doi:10.1088/0022-3727/44/38/384004.
Berger, L. Emission of spin waves by a magnetic multilayer traversed by a current. [*Physical Review B*]{} [**1996**]{}, [*54*]{}, 9353-9358, doi:10.1103/PhysRevB.54.9353. Slonczewski, J.C. Current-driven excitation of magnetic multilayers. [*Journal of Magnetism and Magnetic Materials*]{} [**1996**]{}, [*159*]{}, L1-L7, doi:10.1016/0304-8853(96)00062-5. Li, Z.; Zhang, S. Domain-wall dynamics and spin-wave excitations with spin-transfer torques. [*Physical Review Letters*]{} [**2004**]{}, [*92*]{}, 207203, doi:10.1103/PhysRevLett.92.207203. Yamaguchi, A.; Ono, T.; Nasu, S.; Miyake, K.; Mibu, K.; Shinjo, T. Real-space observation of current-driven domain wall motion in submicron magnetic wires. [*Physical Review Letters*]{} [**2004**]{}, [*92*]{}, 077205, doi:10.1103/PhysRevLett.92.077205. Beach, G.S.D.; Knutson, C.; Nistor, C.; Tsoi, M.; Erskine, J.L. Nonlinear domain-wall velocity enhancement by spin-polarized electric current. [*Physical Review Letters*]{} [**2006**]{}, [*97*]{}, 057203, doi:10.1103/PhysRevLett.97.057203. Hayashi, M.; Thomas, L.; Bazaliy, Y.B.; Rettner, C.; Moriya, R.; Jiang, X.; [*et al*]{}. Influence of current on field-driven domain wall motion in permalloy nanowires from time resolved measurements of anisotropic magnetoresistance. [*Physical Review Letters*]{} [**2006**]{}, [*96*]{}, 197207, doi:10.1103/PhysRevLett.96.197207. Yan, P.; Wang, X.R. Optimal time-dependent current pattern for domain wall dynamics in nanowires. [*Applied Physics Letters*]{} [**2010**]{}, [*96*]{}, 162506, doi:10.1063/1.3413951.
Nakatani, Y.; Thiaville, A.; Miltat, J. Faster magnetic walls in rough wires. [*Nature Materials*]{} [**2003**]{}, [*2*]{}, 521, doi:10.1038/nmat931. Lee, J.-Y.; Lee, K.-S.; Kim, S.-K. Remarkable enhancement of domain-wall velocity in magnetic nanostripes. [*Applied Physics Letters*]{} [**2007**]{}, [*91*]{}, 122513, doi:10.1063/1.2789176. Bryan, M.T.; Schrefl, T.; Atkinson, D.; Allwood, D.A. Magnetic domain wall propagation in nanowires under transverse magnetic fields. [*Journal of Applied Physics*]{} [**2008**]{}, [*103*]{}, 073906, doi:10.1063/1.2887918. Lu, J.; Wang, X.R. Motion of transverse domain walls in thin magnetic nanostripes under transverse magnetic fields. [*Journal of Applied Physics*]{} [**2010**]{}, [*107*]{}, 083915, doi:10.1063/1.3386468.
Lu, J. Statics and field-driven dynamics of transverse domain walls in biaxial nanowires under uniform transverse magnetic fields. [*Physical Review B*]{} [**2016**]{}, [*93*]{}, 224406, doi:10.1103/PhysRevB.93.224406. Li, M.; Wang, J.; Lu, J. General planar transverse domain walls realized by optimized transverse magnetic field pulses in magnetic biaxial nanowires. [*Scientific Reports*]{} [**2017**]{}, [*7*]{}, 43065, doi:10.1038/srep43065.
Aharoni, A. Demagnetizing factors for rectangular ferromagnetic prisms. [*Journal of Applied Physics*]{} [**1998**]{}, [*83*]{}, 3432-3434, doi:10.1063/1.367113.
Gilbert, T.L. A phenomenological theory of damping in ferromagnetic materials. [*IEEE Transactions on Magnetics*]{} [**2004**]{}, [*40*]{}, 3443-3449, doi:10.1109/TMAG.2004.836740.
Donahue, M.J.; Porter, D.G. OOMMF User’s Guide, Version 1.0, Interagency Report NISTIR 6376 (National Institute of Standards and Technology, Gaithersburg, MD, Sept 1999; http://math.nist.gov/oommf).
Goussev, A.; Lund, R.G.; Robbins, J.M.; Slastikov, V.; Sonnenberg, C. Fast domain-wall propagation in uniaxial nanowires with transverse fields. [*Physical Review B*]{} [**2013**]{}, [*88*]{}, 024425, doi:10.1103/PhysRevB.88.024425. Goussev, A.; Lund, R.G.; Robbins, J.M.; Slastikov, V.; Sonnenberg, C. Domain wall motion in magnetic nanowires: an asymptotic approach. [*Proceedings of the Royal Society A: Mathematical, Physical and Engineering Science*]{} [**2013**]{}, [*469*]{}, 20130308, doi:10.1098/rspa.2013.0308.
---
abstract: 'Dynamic experiments with Al-W granular/porous composites revealed qualitatively different behavior with respect to shear localization depending on bonding between Al particles. Two-dimensional numerical modeling was used to explore the mesomechanics of the large strain dynamic deformation in Al-W granular/porous composites and explain the experimentally observed differences in shear localization between composites with various mesostructures. Specifically, the bonding between the Al particles, the porosity, the roles of the relative particle sizes of Al and W, the arrangements of the W particles, and the material properties of Al were investigated using numerical calculations. It was demonstrated in simulations that the bonding between the “soft” Al particles facilitated shear localization as seen in the experiments. Numerical calculations and experiments revealed that the mechanism of the shear localization in granular composites is mainly due to the local high strain flow of “soft” Al around the “rigid” W particles causing localized damage accumulation and subsequent growth of the meso/macro shear bands/cracks. The “rigid” W particles were the major geometrical factor determining the initiation and propagation of “kinked” shear bands in the matrix of “soft” Al particles, leaving some areas free of extensive plastic deformation as observed in experiments and numerical calculations.'
author:
- 'K.L. Olney'
- 'P.H. Chiu'
- 'C.W. Lee'
- 'V.F. Nesterenko'
- 'D.J. Benson'
title: 'Role of material properties and mesostructure on dynamic deformation and shear instability in Al-W granular composites'
---
Introduction
============
The dynamic behavior of granular/porous reactive materials has attracted significant attention due to possible practical applications: reactive structural components, reactive fragments, etc.[@Ames; @davis; @holt] The performance requirements introduce challenging fundamental problems, such as combining high strength with the ability to undergo controlled bulk disintegration producing small-sized reactive fragments.
The material structure which can meet this and other requirements should be tailored and optimized at the mesoscale to produce the desirable mechanical properties while still facilitating the release of chemical energy. Mesostructural parameters like particle size and morphology can affect the strength and shock-sensitivity. This can be seen in pressed explosives[@Siv; @Balzer] and in reactive materials like Al-PTFE composites.[@Mock] Additionally, mesoscale features like force-chains in granular energetic materials may also serve as ignition sites.[@foster; @Bard; @Roes]
Quasi-static, Hopkinson bar, and drop-weight experiments were performed for PTFE-Al-W and for Al-W composites[@Cai1; @Cai2; @Herbold; @Cai3; @Cai4; @Cai5; @Herbol1; @Dymat; @phaip] with different particle sizes of W, porosity, and morphology. W particles were used to increase sample density and generate the desirable mode of disintegration of the sample into small sized debris. Multi-material Eulerian hydrocode simulations of the dynamic tests for the various types of samples were used to elucidate the observed experimental results.[@Cai2; @Herbol1; @phaip]
The complexity of the dynamic behavior of granular/porous materials is caused by the significant role of the mesoscale parameters, including the particle sizes of the components, the formation of force chains, the morphology of particles, and the bonding between particles. In these materials, shear localization and fracture can be delayed by strain-hardening mechanisms such as compaction, which results in porosity reduction.[@Dymat; @phaip] At the same time, mesoscale fracture of brittle particles can act as a “softening” mechanism increasing macroshear instability.[@NesMey; @Shih] In this paper, the dynamic behavior of Al-W granular/porous composites, their susceptibility to shear localization, and their subsequent fracture at large strains were investigated using experiments and numerical simulations. Specifically, the bonding between the Al particles, the initial porosity, the relative particle sizes of Al and W, the arrangements of the W particles, and the constitutive behavior of Al were examined in Al-W granular/porous composites.
Experiments
===========
High density Al-W granular/porous composites were prepared from an elemental powder of Al (Alfa Aesar, -325 mesh) and W wires (A-M System, diameter of 200 $\mu$m and length of 4 mm) using (a) cold isostatic pressing (CIPing) and (b) CIPing followed by vacuum encapsulation and subsequent hot isostatic pressing (HIPing) to create metal bonding between Al particles. All samples had the same mass ratios of the Al and W components (23.8% Al and 76.2% W, by weight, corresponding to a volume ratio of 69.0% Al and 31.0% W) with a theoretical density of 7.8 g/cm$^3$. Mixtures of Al powders and W (short wires) were placed in a cylindrical stainless steel mandrel with moving pistons and encapsulated in a rubber jacket providing axial loading during pressurization in the CIP chamber. This allowed the preparation of samples with the exact sizes and cylindrical shape that were necessary for the strength measurements. All samples were CIPed at 345 MPa at room temperature for 5 min. Subsequent HIPing of some samples was carried out at 200 MPa and a temperature of 500 $^{\circ}$C with a soaking time of 20 min. The density of the samples was measured by the hydrostatic method. The samples had an average density of 6.8 g/cm$^3$ after CIPing and 7.5 g/cm$^3$ after CIPing and HIPing. Their dynamic behavior and fracture were investigated at a strain rate of 1000 s$^{-1}$ in drop-weight tests (nominal velocity of the drop weight was 10 m/s) using a high velocity DYNATUP model 9250HV with an in-house modified anvil supporting the sample. The undeformed CIPed only, CIPed+HIPed, and solid Al6061-T6 samples are presented in Fig. \[fig:1\]. After the dynamic tests, shear macrocracks were well developed in the CIPed+HIPed samples (see Fig. \[fig:2\] (b)), while no shear macrocracks were observed in the CIPed only samples (see Fig. \[fig:2\] (a)) at similar global strains. In the CIPed only sample, Al particles were ejected from the outer regions of the samples, leaving areas of interconnected and mechanically locked W wires (see Fig. \[fig:2\] (a) and (e)). The microscopic images of the deformed samples representing areas in the vicinity of macroscopic shear localization, plastic flow and deformation of W and Al particles are presented in Fig. \[fig:3\]. It is evident that the bonding between the “soft” Al particles facilitated shear localization in the CIPed+HIPed samples. As a result of localized plastic flow in the matrix of Al particles, we have observed regions of heavily deformed and practically undeformed Al particles.
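As a quick consistency check of the quoted composition, the mass fractions can be converted to volume fractions and theoretical density; the short sketch below assumes handbook bulk densities of 2.70 g/cm$^3$ for Al and 19.3 g/cm$^3$ for W (values not taken from the text).

```python
# Mass fractions from the text; bulk densities (g/cm^3) are assumed handbook values.
w_Al, w_W = 0.238, 0.762
rho_Al, rho_W = 2.70, 19.3

v_Al = w_Al / rho_Al                    # volume contributed by Al per gram of composite
v_W  = w_W / rho_W                      # volume contributed by W per gram of composite
v_tot = v_Al + v_W

print(v_Al / v_tot, v_W / v_tot)        # ~0.69 and ~0.31, as quoted
print(1.0 / v_tot)                      # theoretical density ~7.8 g/cm^3
```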
For comparison, dynamic tests were performed with “as is” Al6061-T6 and annealed Al6061-T6 (425 $^{\circ}$C, 2.5 hours in vacuum) cylindrical samples under similar conditions of impact and at similar or larger final strains in comparison with the CIPed or CIPed+HIPed samples (see Fig. \[fig:2\]). The “as is” Al6061-T6 and annealed Al6061-T6 samples did not exhibit signs of shear localization at this level of strain.
![\[fig:1\] (Color online) Initial CIPed (a) and CIPed+HIPed Al-W samples (b) and a sample of solid 6061-T6 (c). Sample diameters and heights are given in Table \[tab:4\]. Mesostructures of CIPed and CIPed+HIPed samples before dynamic deformation are presented in (d) and (e) respectively. ](fig1.pdf)
Sample Height \[cm\] Diameter \[cm\]
------------------ --------------- -----------------
CIPed only 1.45 1.58
CIPed + HIPed 1.39 1.54
Solid Al 6061-T6 1.25 1.18
: \[tab:4\] Initial dimensions of experimental samples
![\[fig:2\] (Color online) Deformed shapes of the CIPed (a) and CIPed+HIPed samples (b). For comparison, the deformed annealed Al-6061-T6 samples and Al-6061-T6 are presented correspondingly in (c) and (d). A high speed camera photo (e) captures the behavior of Al fragments as they are ejected from the CIPed sample during a dynamic test. Arrows are added to show areas where the Al particles on the outer area of the CIPed sample have been ejected (a) and the shear bands that form in the CIPed + HIPed samples (b). ](fig2.pdf)
![\[fig:3\] (Color online) Microstructure of deformed CIPed+HIPed samples (a) and (b) illustrating a region of largely undeformed Al particles neighboring heavily deformed and fractured areas of Al particles with different arrangements of less deformed W rods. ](fig3.pdf)
----------- ------------------- ---------------
Particle Diameter % of Material
$\mu$m
-325 mesh 32 50
15 40
4 10
Fine 40 -
Coarse 200 -
----------- ------------------- ---------------
: \[tab:1\] Particle sizes and distributions
-------------- ---------- ---------- -----------
Sample in Volume Volume % Initial
Corresponding Fraction Fraction Porosity
Figure Al W
1.a 0.72 0.28 8.3
1.b 0.74 0.26 0.0
2 0.67 0.33 8.5
-------------- ---------- ---------- -----------
: \[tab:2\] Material volume fractions and porosity in each of the simulated samples
Numerical Modeling and Discussion
=================================
A number of sample properties were examined to see how they influenced shear instability and localization during a dynamic test. The role of the metallic bonding was examined in an attempt to understand the difference between the dynamic behavior of CIPed samples (no metallic bonding between Al particles) and CIPed+HIPed samples (metallic bonding between Al particles). The roles of the initial porosity, the relative particle size of Al and W, the initial arrangement of the W particles, the confinement, and the constitutive behavior of the Al were also explored to see their effect on the shear instability in the sample. A two-dimensional Eulerian hydrocode[@Benson] was used to simulate the behavior of the samples during the drop weight tests. Due to the smallness of the particles relative to the global sample size, small representative elements of the microstructure were used in all the simulations.
The initial particle arrangements, the constitutive model, and the boundary conditions
--------------------------------------------------------------------------------------
The initial particle arrangements of W and Al used in the numerical simulations are shown in Fig. \[fig:4\] through Fig. \[fig:6\]. The description of the particle sizes for both the Al and the W is presented in Tables \[tab:1\] and \[tab:2\]. Samples in Fig. \[fig:4\] (a) and (b) were used to investigate the role of porosity and metallic bonding on the shear instability and the formation of the shear bands in both the bonded (CIP+HIP) and unbonded (CIP only) samples with similar sized Al and W particles. The sample in Fig. \[fig:5\] was used to investigate how the relative sizes of W and Al affected the shear instability in both the bonded and unbonded samples. Additionally, the sample in Fig. \[fig:5\] was used to examine cases with “confinement" boundary conditions as well as the variations for simulations looking at the role of the constitutive behavior of Al. Samples in Fig. \[fig:6\] were used to investigate the role of the initial mesostructure on the shear instability in the bonded case.
![\[fig:4\] (Color online) Initial arrangements of the Al (blue) and the W (red) particles in the Al-W granular/porous representative samples with similar sized Al and W particles corresponding to -325 mesh (see Tables \[tab:1\] and \[tab:2\]) with 8.3% (a) and 0% (b) initial porosity. Representative element sample sizes are 0.02 cm (horizontal) by 0.015 cm (vertical). These samples were used for both the bonded and the unbonded simulations. ](fig4.pdf)
![\[fig:5\] (Color online) Initial arrangement of the Al (blue) and the W (red) particles in the Al-W granular/porous sample with fine Al (40 μm) and coarse W (200 μm) particles, and a porosity of 8.5%. This arrangement was used both for the bonded and the unbonded Al particles. Sample size is 0.0625 cm (horizontal) by 0.1 cm (vertical). ](fig5.pdf)
![\[fig:6\] (Color online) The initial arrangement of the 200 μm diameter W (red) particles embedded in a fully dense (0% porosity) Al (blue) matrix. Samples have sizes of 0.3 cm (horizontal) by 0.3 cm (vertical). Both samples have the same volume fractions of Al and W of 0.70 and 0.30 respectively. ](fig6.pdf)
A standard Johnson-Cook with failure[@Jcook] material model and the Mie-Grüneisen equation of state were used for both the Al and the W in the simulations. Failure in the material is determined through the equivalent plastic strain as shown in Ref. . The material constants used in this paper are presented in Table \[tab:3\].
Al 6061-T6[@Holm; @Stein] W[@Holm; @Wester]
-------------------------------- --------------------------- -----------------------
Johnson-Cook
$\rho$ \[g cm$^{-3}$\] 2.7 16.98
G \[Mbar\] 0.26 1.24
A \[Mbar\] 2.24$\cdot$10$^{-3}$ 1.506$\cdot$10$^{-2}$
B \[Mbar\] 1.114$\cdot$10$^{-3}$ 1.765$\cdot$10$^{-3}$
n 0.42 0.12
c 2.0$\cdot$10$^{-3}$ 1.60$\cdot$10$^{-3}$
m 1.34 1.0
C$_p$ \[J g$^{-1}$ K$^{-1}$\] 0.89 0.13
T$_{melt}$ \[K\] 930 1728
T$_{room}$ \[K\] 300 300
D$_1$ -.077 0.0
D$_2$ 1.45 0.33
D$_3$ -0.47 -1.5
D$_4$ 0.0 0.0
D$_5$ 1.6 0.0
Mie-Grüneisen
c$_0$ \[cm $\mu s^{-1}$\] 0.52 0.40
s$_1$ 1.4 1.24
s$_2$ 0.0 0.0
s$_3$ 0.0 0.0
$\gamma_0$ 1.97 1.67
: \[tab:3\] Constitutive model and equation of state material parameters
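For orientation, the sketch below evaluates the textbook Johnson-Cook flow stress and failure-strain expressions with the Al 6061-T6 parameters of Table \[tab:3\]. It is only a minimal illustration of the model form, assuming a reference strain rate of 1 s$^{-1}$ and the standard definitions of homologous temperature and stress triaxiality; it is not the hydrocode implementation used for the simulations.

```python
import math

# Al 6061-T6 parameters from Table 3 (stresses in Mbar).
A, B, n = 2.24e-3, 1.114e-3, 0.42
c, m = 2.0e-3, 1.34
T_room, T_melt = 300.0, 930.0
D1, D2, D3, D4, D5 = -0.077, 1.45, -0.47, 0.0, 1.6

def jc_flow_stress(eps_p, rate_star=1.0, T=300.0):
    """Textbook Johnson-Cook flow stress (Mbar); rate_star = strain rate / assumed 1/s reference."""
    T_star = (T - T_room) / (T_melt - T_room)
    return (A + B * eps_p ** n) * (1.0 + c * math.log(rate_star)) * (1.0 - T_star ** m)

def jc_failure_strain(sigma_star, rate_star=1.0, T=300.0):
    """Textbook Johnson-Cook failure strain; sigma_star = mean stress / effective stress."""
    T_star = (T - T_room) / (T_melt - T_room)
    return ((D1 + D2 * math.exp(D3 * sigma_star))
            * (1.0 + D4 * math.log(rate_star))
            * (1.0 + D5 * T_star))

# Damage accumulates as the sum of plastic strain increments divided by the failure strain.
print(jc_flow_stress(0.10, rate_star=1.0e3))   # ~2.7e-3 Mbar at 10% plastic strain and 1000/s
print(jc_failure_strain(0.5))                  # failure strain at moderate triaxiality
```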
To simulate the drop weight test, a kinematic boundary condition with constant downward velocity was imposed on the top boundary corresponding to the impact speed of the falling weight in the experiment (10 m/s). For the cases where the small representative element is near the outer edge of the large cylinder, the side wall boundary conditions shown in Fig. \[fig:7\] (a) were used. In cases where the small representative element is near the middle of the large cylinder, the side wall boundary conditions shown in Fig. \[fig:7\] (b) were used to account for the confinement. A variety of confinement conditions were considered in this paper and will be detailed during the discussion of Fig. \[fig:13\]. With the confinement boundary conditions, the main concern is keeping the correct description of the global deformation of the sample, but due to the artificial conditions, the agreement with global stresses is sacrificed. This “confined” geometry produces compressive stresses in particles leading to elevated pressures in the sample.
![\[fig:7\] Boundary conditions imposed on (a) corner samples and (b) internal confined samples. ](fig7.pdf)
The role of bonding and initial porosity in samples with similar sized Al and W particles (-325 mesh)
-----------------------------------------------------------------------------------------------------
The results of the numerical simulations corresponding to the initial mesostructures shown in Fig. \[fig:4\] are presented in Fig. \[fig:8\] for the deformed samples at a global strain of 0.45. The sample with the bonded particles and an initial porosity of 8.3% began developing shear instabilities at the mesoscale once the in situ densification that occurred at the early stages of deformation removed a majority of the porosity from the sample (the initial average porosity was 8.3% and the porosity when shear instability started was 3.2%). The bonded sample with zero initial porosity began developing shear instabilities on the mesoscale within the first 5% global strain. The global shear band began developing around 0.25 global strain in the case with 8.3% initial porosity and around 0.20 global strain in the sample with zero initial porosity. This behavior is similar to observations in the PTFE-Al-W mixtures and in the experiments with the Al-W powder mixture.[@Cai2; @Herbold; @Herbol1; @Dymat; @phaip] The global shear localized zones and the subsequent cracks developed at an approximately 45 degree angle but were “kinked” by the rigid W particles, resulting in the initial shear band forming in a range of angles from 36 to 50 degrees (see Fig. \[fig:9\]). The samples in Fig. \[fig:8\] (a) and (b) have a relatively large percentage of small W particles, providing some geometrical “homogenization” of the mixture. As a result, the shear bands do not significantly deviate from this 45 degree angle. In both samples, the mesoscale shear instabilities continue to grow until one begins to dominate (at approximately 0.25 global strain) and a global shear band develops, which in turn leads to the growth of a macro crack. While this simulation only looks at a small representative element along the edge, the experimental CIP+HIP samples exhibit the same behavior (see Fig. \[fig:2\] (b)).
The corner samples that have unbonded particles, simulating the CIPed only material, are presented in Fig. \[fig:8\] (c) and (d). These samples do not develop shear instabilities like the bonded samples with identical initial mesostructures. In the unbonded samples, the Al and W particles rearrange themselves during the dynamic deformation, effectively blocking shear instabilities from forming. Additionally, due to the free boundary on the right hand side, the sample undergoes bulk disintegration. Due to the two-dimensionality of the simulation and the lack of friction, the W particles do not offer much resistance to the movement of the Al out of the right boundary. A similar behavior can be seen in the experimental samples, where the Al on the edge is ejected from the sample, disintegrating into agglomerates of Al consisting of 5-30 initial sized Al particles. This ejection of the Al particles from the network of the W wires adjacent to the free boundary was observed during the dynamic loading in experiments using a high speed camera (see Fig. \[fig:2\] (e)).
Initial variations in porosity (0% or 8.35%) in the bonded sample altered the orientation of the mesoscale shear bands despite the practically identical initial mesostructure of the W particles. In the sample with porosity, the pores are bulk distributed with sizes of approximately 4 microns in diameter. This bulk-distributed porosity allows for small movements of the W particles at the initial stages of deformation due to pore closure. The shear instabilities were observed nucleating during this early stage of the deformation. The small alterations in the W particle arrangement at this early stage therefore caused a large deviation in shear bands that developed during later stages of the deformation. The unbonded samples with and without porosity both showed the same characteristic of particle rearrangement blocking shear band formation and bulk disintegration.
![\[fig:8\] (Color online) Role of initial porosity and bonding between particles. Deformed mesostructures of -325 mesh Al and W particles with the initial arrangement of Al and W particles shown in Fig. \[fig:4\] (a) and (b). Damage is plotted to highlight the shear instability and shear bands in the sample. All samples are shown at a global strain of 0.45 with the initial particle arrangement presented in:\
(a) Fig. \[fig:4\] (a) with bonded particles (CIPed + HIPed).\
(b) Fig. \[fig:4\] (b) with bonded particles (CIPed + HIPed).\
(c) Fig. \[fig:4\] (a) without bonded particles (CIPed only).\
(d) Fig. \[fig:4\] (b) without bonded particles (CIPed only).](fig8.pdf)
![\[fig:9\] (Color online) Effect of rigid W particles on the “kinking” of the shear bands in the Al matrix with the W and the Al -325 mesh sized particles with the initial mesostructure illustrated in Fig. \[fig:4\] (a). Black dots show the kink locations of the shear bands. The angles of the piecewise shear bands are 32 (1), 58 (2), 34 (3), 47 (4), 48 (5), and 45 (6) degrees. ](fig9.pdf)
The role of bonding and initial porosity on samples with fine (40 micron) Al and coarse (200 micron) W particles
----------------------------------------------------------------------------------------------------------------
Numerical calculations exploring how differences in the relative sizes of the W and Al particles (see the initial mesostructure in Fig. \[fig:5\]) affect the development of shear instabilities and the formation of shear bands/cracks are shown in Fig. \[fig:10\]. The W particles in the numerical simulations have a diameter of 200 microns, the same diameter as the W rods used in the experimental samples shown in Fig. \[fig:1\] (a) and (b). The Al particles have a diameter of 40 microns, the size of the larger particles in the -325 mesh particle size used in the experimental sample. The boundary conditions are the same as those described in Fig. \[fig:7\] (a) with a constant downward velocity of 10 m/s on the top boundary.
In the bonded sample simulating the CIPed+HIPed material (Fig. \[fig:10\] (a) and (b)), shear instabilities begin developing after the pores close, as in the porous sample with similar sized Al and W particles shown in Fig. \[fig:8\] (a). Mesoscale shear bands develop at approximately 0.25 global strain. The shear band that forms is heavily influenced (guided) by the W particles in its path. When comparing the shear bands in Fig. \[fig:10\] with those in Fig. \[fig:8\], the W particles that are larger relative to the Al have a much greater influence on directing and altering the path of the shear bands. The larger W particles require the shear instability to circumvent a larger radius of the relatively rigid W particle to connect with instabilities that form in the Al on the other side of the W particle. In Fig. \[fig:10\] (a) the shear band can be clearly seen circumnavigating the W particle in the upper central region, causing the shear band to turn almost 90 degrees from one side of the particle to the other. The shear band in the lower section of the same figure is directed between two adjacent W particles. Due to the larger relative size of the W to the Al particles, the heterogeneous nature of the sample is increased in comparison to the previous samples in Fig. \[fig:8\], where the small W particles homogenized the mesostructure. This creates areas of mostly undeformed Al in regions away from W particles and areas of deformed Al in regions near W particles. Based on the simulations, the path of the shear bands closely follows the W particles in the CIPed+HIPed material. This behavior is in agreement with the experiments (see Fig. \[fig:3\]).
In the unbonded sample simulating the CIPed only material in Fig. \[fig:10\] (c) and (d), the bulk distributed rearrangement of the Al and W particles effectively blocks the development of shear bands. This rearrangement is similar to that of the particles in Fig. \[fig:8\] (c) and (d). As in the previous case, due to the two-dimensionality of the simulations, some of the three-dimensional W fiber effects cannot be reproduced. However, the trends seen in this two-dimensional simulation can also be seen at the edge of the experimental sample, where the Al is ejected from the edge of the cylinder during the dynamic test as shown in Fig. \[fig:2\] (a) and (e).
![\[fig:10\] (Color online) Role of relative particle sizes of W and Al in bonded (a), (b) and unbonded samples (c), (d) at two different global strains of 0.30 (a), (b) and 0.45 (c), (d). The initial mesostructure is shown in Fig. \[fig:5\]. Damage is plotted to highlight the shear instability and the shear band development. ](fig10.pdf)
The relationship between sample strength and porosity in corner samples
-----------------------------------------------------------------------
The engineering stress at the upper and lower boundaries versus the global strain for the simulation of the bonded sample described in Fig. \[fig:10\] is shown in Fig. \[fig:11\]. The solid red curve and the dashed black curve show the average engineering stress on the top and bottom boundaries, respectively, and are nearly identical except for the first 4% strain. Based on this, the sample can be seen as undergoing a quasi-static deformation. In addition to the stress, the percentage of porosity in the sample was plotted to show the relationship between the porosity and sample strength. It can be seen that stresses in the sample increased until a global strain of 0.12 was reached. The maximum stress corresponds to the minimum porosity, due to in situ densification during the initial stage of deformation. After the densification stage (corresponding to a range of global strain 0.12-0.29), the stress begins to drop as the local shear bands begin forming and the global shear band develops. Later (0.30 global strain) macrocracking occurred, resulting in a decrease in strength accompanied by a rapid increase of porosity. This behavior is similar to that observed previously.[@Cai2; @Herbold; @Herbol1; @Dymat; @phaip] The two-dimensional simulation of the unbonded material, due to the material rearrangement, had near zero engineering stress on the top and bottom boundaries and was not included. The comparable strength of bonded and unbonded samples in experiments is mostly due to interconnected W wires that cannot be accounted for in these two-dimensional simulations.
![\[fig:11\] (Color online) Average engineering stress versus the global strain at the top boundary (solid red line) and at the bottom boundary (dashed black line) of the corner sample corresponding to initial mesostructure presented in Fig. \[fig:5\] and velocity of impact 10 m/s. The porosity in the sample versus global strain is also plotted (blue dashed and dotted line). ](fig11.pdf)
The role of initial arrangement of W particles in CIP+HIP samples (bonded) with zero initial porosity
-----------------------------------------------------------------------------------------------------
Current processing techniques create samples where the W fibers are initially placed in a random fashion. Multiple “randomized” samples were created for numerical analysis to understand how different experimental realizations of this randomized W fiber placement affect the formation of shear instabilities during dynamic deformation. In two characteristic samples with initial mesostructure presented in Fig. \[fig:6\], the W particles were randomly placed in an Al matrix such that the volume fractions of W and Al in each sample were identical. The samples were then subjected to boundary conditions described in Fig. \[fig:7\] (a) with a constant 10 m/s downward velocity. All samples had zero initial porosity to remove mesostructural changes due to void closure as seen in simulations described in Fig. \[fig:8\]. Results of the two characteristic simulations are shown in Fig. \[fig:12\], demonstrating variations in the shear instabilities and shear band development while highlighting the similarities that the samples share.
Both samples in Fig. \[fig:12\] have damage plotted to highlight the shear band formation, with red corresponding to fully damaged Al. It is obvious from the comparison of the deformed samples in Fig. \[fig:12\] (a) and (b) that changes in the initial mesostructure (see Fig. \[fig:6\] (a) and (b)) greatly alter the location of the shear band despite the same volume content of Al and W and the same size of W particles.
While both samples in Fig. \[fig:12\] have very different shear band locations and modes of fracture, there are a number of similar characteristics that the samples exhibit.
First, both samples develop shear instabilities and shear bands at approximately the same global strain of 0.2. It should be mentioned that in these samples with zero initial porosity, the global strains corresponding to well developed shear localization are similar to the corresponding global strains in the initially porous samples (see Fig. \[fig:8\] and Fig. \[fig:10\]).
Second, numerous local shear instabilities form due to the localized, high strain flow of softer Al around the harder W particles. These local instabilities link with other local shear instabilities in close proximity, creating localized shear bands. These bands join with other localized shear bands until one global shear band traverses the entire sample and becomes the dominant macroshear band. This macroshear band has a propensity to form at a 45 degree angle, with its path locally altered by W particles, and spans the entire sample before subsequent global shear bands are able to form. This demonstrates that the relatively rigid W particles initiate the shear instability in these granular composites, enhancing the localized high strain plastic flow of Al around the W particles. Local plastic strain in the Al around the W particles was 3 to 4 times higher than the plastic strain in the surrounding Al, thus facilitating localized damage and subsequent global shear localization within the Al matrix. Separate experiments and numerical simulations with cylindrical samples made of as-is Al6061-T6 and annealed Al6061-T6 did not reveal shear localization at similar or larger global strains (see Fig. \[fig:2\] (c) and (d)).
Third, the W particles are largely responsible for dictating the path of instabilities. This can be seen clearly in the bottom center region of Fig. \[fig:12\] (a), where the shear band develops two branches that circumvent a clump of W particles and reconvene on the bottom boundary. Additionally, in the top central region of Fig. \[fig:12\] (a), a closely packed clump of W particles blocks shear instabilities from developing in this region. This may be due to a similar effect as seen in Fig. \[fig:8\] (a) and (b) in addition to Fig. \[fig:10\] (a), where having the W particles initially aggregated into a close pack causes the shear instability to move around the relatively rigid mass into the softer area of the surrounding composite. The same effect can be seen in all areas of the samples in Fig. \[fig:12\] where the W particles group in very close proximity to each other.
Finally, due to the random nature of the sample, some of the W particles form angular chains with channels of Al between them, facilitating the growth of shear instabilities and shear bands along the channel. The sample in Fig. \[fig:12\] (b) exemplifies this with W particles creating a channel structure spanning the entire sample going diagonally from the lower left corner to the upper right corner. This channel created a favorable path for the local instabilities to follow and eventually led to the formation of the shear band in the sample. W particles “flow” along the shear band to help form these angular chains. Rearrangement of the rigid particles inside high strain shear flow was observed in explosive compaction on interfaces between particles with high strain shear deformation.[@Nbook] In Fig. \[fig:12\] (a) a distinct global chain structure of W particles is not present as in Fig. \[fig:12\] (b); rather, there is a series of shorter chain structures in the left and the upper right region of the sample. This lack of long chains may be the reason that the sample accumulated more bulk-distributed damage in comparison to Fig. \[fig:12\] (b).
![\[fig:12\](Color online) Role of initial mesostructure of the W particles on the shear localization and the subsequent fragmentation with zero initial porosity. The deformed samples correspond to the initial W particles arrangements shown in Fig. \[fig:6\] :\
(a) at 0.5 strain corresponding to Fig. \[fig:6\] (a)\
(b) at 0.5 strain corresponding to Fig. \[fig:6\] (b) ](fig12.pdf)
The role of kinematic confinement conditions
---------------------------------------------
Previous simulations examined small representative elements corresponding to a material on, or very near to, the outer edge of the experimental sample. A small representative element near the central region of the sample interacts with surrounding material which resists movement of material leaving the representative element. To account for this “confinement” from the surrounding material, kinematic boundary conditions using a ramped normal velocity were imposed on the side boundaries. Multiple velocity ramps were explored in an attempt to model the large variance in the local inhomogeneity within the sample, resulting in various local “confinements”. The first ramp velocity tested kept the global Poisson ratio of the small representative sample geometrically consistent with that of the experimental sample. Two additional constrained boundary conditions were tested corresponding to 70% and 100% increased horizontal expansion in comparison with the geometrically constrained case. The vertical strain rate was kept identical for all samples. Fig. \[fig:13\] shows the simulation results at 0.30 and 0.50 global strains, respectively, for the samples with the constrained side boundary conditions. Damage is plotted to highlight the shear instabilities.
Shear bands developed in all samples at approximately 0.25 global strain. However, the location and number of shear bands differed in each case. Despite these differences, all the samples share a similar characteristic: areas of heavily deformed Al particles near the W particles, resulting in local damage accumulation and subsequent shear band formation, while material further away from the W particles is relatively undeformed. This trend seems to be independent of the imposed confinement conditions. Also, samples in Fig. \[fig:13\] (b) and (c) exhibit cracks that form between the W particles. The crack locations are very similar to the location of cracks in the large experimental sample (see Fig. \[fig:3\] (b)). The cracks span between the W particles in the areas of the heavily deformed Al. In the simulations, the W particles move, squeezing the Al particles between the nearby W particles, acting as “anvils” that facilitate the heterogeneous deformation of the Al particles. A similar effect may occur in the large experimental sample where the Al flows around the W fibers due to the three-dimensional cage like network that the packed network of W fibers create. A similar effect of rigid and heavy W particles is instrumental also in the shock loading of corresponding mixtures allowing tailored redistribution of internal energy between components.[@Cai2; @Herbold]
![\[fig:13\] (Color online) Role of different conditions of confinement on shear localization and subsequent fracture. Three deformed mesostructures at 0.3 and 0.5 global strains with various confinement conditions are presented. All three samples have boundary conditions described in Fig. \[fig:7\] (b) with a constant 10 m/s downward velocity imposed on the top boundary. Each sample differs only in the imposed horizontal velocity on the side boundaries (linear ramp as vertical strain goes from 0 to 0.5) in the following way:\
(a),(d) A linear ramp from 0 to 7.20 m/s, corresponding to a final horizontal strain of 0.58. This causes the representative element to keep the same geometric proportions as in the experiments. Sample shown at 0.3 (a) and 0.5 (d) vertical strain.\
(b),(e) A linear ramp from 0 to 12.24 m/s. This corresponds to a horizontal strain 1.0. Sample shown at 0.3 (b) and 0.5 (e) vertical strain.\
(c),(f) A linear ramp from 0 to 14.40 m/s. This corresponds to a horizontal strain of 1.16. Sample shown at 0.3 (c) and 0.5 (f) vertical strain. ](fig13.pdf)
The role of the constitutive behavior of Al
-------------------------------------------
The constitutive behavior of the Al particles was modified to explore how it influenced the shear instability and the shear band formation in the bonded samples (CIPed plus HIPed samples in experiments). Simulations were carried out with the initial mesostructure shown in Fig. \[fig:5\] and the boundary conditions in Fig. \[fig:7\] (a) with a constant 10 m/s downward velocity on the top boundary. The initial yield stress of Al was reduced to a very low level (20 MPa) and the results are shown in Fig. \[fig:14\]. The sensitivity of shear band development to other parameters in the Johnson-Cook model was also explored; however, these simulations showed only slight changes in the material response due to these alterations, and the results are not presented in this paper.
The reduction of the initial yield stress in the material from that of Al 6061-T6 (324 MPa) to that of a much softer Al (20 MPa) caused a large change in sample response (compare to the results in Fig. \[fig:10\] (a) and (b)). The Al particles in Fig. \[fig:14\] experience greater plastic flow and deformation in all areas of the sample, especially in the areas around the W particles. Due to the softening of the Al matrix, the development of the global shear band was delayed, allowing for more bulk-distributed plastic flow of the Al. The reduction of initial yield stress in the Al resulted in a shear band developing in a different location. This is likely caused by the greater movement of W particles in the softer Al, which alters the arrangement of the W and hence the mesostructure at the earlier stages of sample deformation.
![\[fig:14\] (Color online) Effect of the initial yield strength on the shear instability in porous granular composites. The initial yield strength of the Al is reduced to that of a softer Al (20 MPa). The sample is presented with global strains of 0.30 (a) and 0.40 (b). The damage is plotted to highlight the shear bands. ](fig14.pdf)
Conclusions
===========
Dynamic experiments with Al-W granular/porous composites and numerical simulations revealed the characteristics that had the greatest effects on shear instability and shear band formation. It was shown that in simulated CIPed only (unbonded) samples the Al and W particles rearranged themselves during the dynamic deformation to effectively block shear localization. This resulted in the subsequent bulk disintegration of the sample in agreement with areas near the outer surface of the samples used in the experiments.
All CIPed+HIPed (bonded) samples exhibited shear localization and shear band formation. The shear bands nucleated during the initial stages of the deformation in Al surrounding the W particles and spread to the nearby W particles at angles close to 45 degrees. The shear band is kinked by W particles causing the shear band path to deviate from the ideal 45 degree angle path dictated by global geometry. In simulations with relatively larger W particles, the path of the shear band was influenced to a greater degree than simulations with the similar sized W and Al particles due to increased heterogeneity of the sample. It was also shown that variations in the initial arrangements of the W particles were the main drivers determining where the global shear bands formed in the sample. Numerical calculations and experiments revealed that the mechanism of shear localization in granular composites is due to a localized high strain flow of Al around rigid W particles, causing local damage accumulation and a subsequent growth of the meso/macro shear bands/cracks.
A variety of constraint boundary conditions were examined to represent the heterogeneous nature of the internal structure in the global sample. Each simulation showed shear localization occurred between the nearby W particles while leaving areas away from the W particles relatively undeformed. This result is supported by the microstructural features observed in the experimental sample.
Finally, the role of the constitutive behavior in the Al was examined in numerical calculations. The results showed that a significant reduction of the initial yield stress from Al 6061-T6 (324 MPa) to a softer Al (20 MPa) increased the amount of bulk distributed damage and plastic strain in the sample in addition to altering the shear band location.
Acknowledgements
================
The support for this project was provided by the Office of Naval Research Multidisciplinary University Research Initiative Award N00014-07-1-0740.
|
---
abstract: |
Integer factorization is a very hard computational problem. Currently no efficient algorithm for integer factorization is publicly known. However, it is an important problem on which the security of many real-world cryptographic systems relies.
I present an implementation of a fast factorization algorithm on MapReduce. MapReduce is a programming model for high performance applications developed originally at Google. The quadratic sieve algorithm is split into the different MapReduce phases and compared against a standard implementation.
author:
- Javier Tordable
title: MapReduce for Integer Factorization
---
Introduction
============
The security of many cryptographic algorithms relies on the fact that factoring large integers is a very computationally intensive task. In particular, RSA [@1] would be vulnerable if there were an efficient algorithm to factor semiprimes (products of two primes). This could have severe consequences, as RSA is one of the most widely used algorithms in electronic commerce applications [@2].
There are many algorithms for integer factorization [@3], ranging from trivial trial division to the classical factoring methods of Fermat [@4] and Euler [@5], and on to the modern algorithms, the quadratic sieve [@6] and the number field sieve [@7]. In particular the number field sieve algorithm was used in 1996 to factor a 512 bit integer [@8], the lowest integer length used in commercial RSA implementations. Several other large integers have been factored over the course of the last decade. I would like to point out that in those cases the feat was accomplished with a tremendous software development effort and a very considerable investment in hardware [@9],[@10].
In what follows I will show how MapReduce, a distributed computational framework, can be used for integer factorization. As an example I will show an implementation of the quadratic sieve algorithm. I will also compare in terms of performance and cost a conventional implementation with the MapReduce implementation.
MapReduce
=========
I claim no participation in the development of the MapReduce framework. This section is basically a short extract of the original MapReduce paper by Jeff Dean and Sanjay Ghemawat [@11]. MapReduce is a programming model inspired by functional programming. Users can specify two functions, *map* and *reduce*. The *map* function processes a series of (key, value) pairs, and outputs intermediate (key, value) pairs. The system automatically orders and groups all (key, value) pairs for a particular key, and passes them to the reduce function. The reduce function receives a series of values for a single key, and produces its output, which is sometimes a synthesis or aggregation of the intermediate values.
The canonical example of a MapReduce computation is the construction of an inverted index. Let’s take a collection of documents $\mathit{\mathcal{D}}=\left\{ D_{0},D_{1},...,D_{N}\right\} $ which are composed of words $D_{0}=\left(d_{0,0},d_{0,1},...,d_{0,L_{0}}\right),D_{1}=\left(d_{1,0},d_{1,1},...,d_{1,L_{1}}\right)$ and so on. We define a map function the following way:
$$map:(i,D_{i})\rightarrow\left\{ \left(d_{i,0},\left(i,0\right)\right),\left(d_{i,1},\left(i,1\right)\right),...,\left(d_{i,L_{i}},\left(i,L_{i}\right)\right)\right\}$$
that is, for a given document it processes each word in the document and outputs an intermediate pair. The key is the word itself, and the value is the location in the corpus, indicated as (document, position). The reduce function is defined as:
$$reduce:\left\{ \left(d,\left(i_{1},j_{1}\right)\right),...,\left(d,\left(i_{L},j_{L}\right)\right)\right\} \rightarrow\left(d,\left\{ \left(i_{1},j_{1}\right),...,\left(i_{L},j_{L}\right)\right\} \right)$$
For a collection of pairs with the same key (the same word), it outputs a new pair, in which the key is the same, and the value is the aggregation of the intermediate values. In this case, the set of locations (document and position in the document) in which the word can be found in the corpus.
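As a concrete illustration of this model (not taken from [@11]; the function and variable names here are purely illustrative), the following Python sketch simulates the map, group-by-key, and reduce phases of the inverted index computation on a toy in-memory corpus:

```python
from collections import defaultdict

def map_fn(doc_id, words):
    # Emit one intermediate (word, (doc_id, position)) pair per word occurrence.
    return [(word, (doc_id, pos)) for pos, word in enumerate(words)]

def reduce_fn(word, locations):
    # Aggregate every location of a word into a single posting list.
    return (word, sorted(locations))

def map_reduce(corpus):
    groups = defaultdict(list)
    for doc_id, words in corpus.items():
        for key, value in map_fn(doc_id, words):
            groups[key].append(value)  # the framework's automatic group-by-key step
    return [reduce_fn(key, values) for key, values in groups.items()]

corpus = {0: ["map", "reduce", "map"], 1: ["reduce", "sieve"]}
print(map_reduce(corpus))
# [('map', [(0, 0), (0, 2)]), ('reduce', [(0, 1), (1, 0)]), ('sieve', [(1, 1)])]
```

In a real MapReduce job the grouping loop is performed by the framework across many machines; only `map_fn` and `reduce_fn` are supplied by the user.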
The MapReduce implementation automatically takes care of the parallel execution in a distributed system, data transmission, fault tolerance, load balancing and many other aspects of a high performance parallel computation. The MapReduce model scales seamlessly to thousands of machines. It is used continuously for a multitude of real-world applications, from machine learning to graph computations. Most importantly, the effort required to develop a high performance parallel application with MapReduce is much lower than using other models, like for example MPI [@12].
Quadratic Sieve
===============
The Quadratic Sieve algorithm was conceived by Carl Pomerance in 1981. A detailed explanation of the algorithm can be found in [@13]. Here we will just review the basic steps. Let $N$ be the integer that we are trying to factor. We will attempt to find $a,b$ such that: $N\mid\left(a^{2}-b^{2}\right)\Rightarrow N\mid\left(a+b\right)\left(a-b\right)$. If $\left\{ \left(a+b,N\right),\left(a-b,N\right)\right\} \neq\left\{ 1,N\right\} $ then we will have a factorization of $N$.
Let’s define:$$Q\left(x\right)=x^{2}-N$$
if we find $x_{1},x_{2},...x_{K}$ such that $\prod_{i=1}^{K}Q\left(x_{i}\right)$ is a perfect square, then: $$N\mid\prod_{i=1}^{K}Q\left(x_{i}\right)-\left(\prod_{i=1}^{K}x_{i}\right)^{2}=\prod_{i=1}^{K}\left(x_{i}^{2}-N\right)-x_{1}^{2}x_{2}^{2}...x_{K}^{2}$$
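To make the last congruence concrete, the following sketch (illustrative only, and assuming the product of the $Q(x_i)$ is a non-negative perfect square) extracts a factor of $N$ from such a relation:

```python
from math import gcd, isqrt, prod

def factor_from_square(N, xs):
    # Assumes prod(x*x - N for x in xs) is a non-negative perfect square.
    a = prod(xs) % N
    b = isqrt(prod(x * x - N for x in xs)) % N
    # N divides a^2 - b^2 = (a + b)(a - b), so a nontrivial gcd yields a factor of N.
    for g in (gcd(a - b, N), gcd(a + b, N)):
        if 1 < g < N:
            return g
    return None  # degenerate relation; another subset of the x_i must be tried

# Toy example with N = 77: Q(9) = 81 - 77 = 4 is already a perfect square.
print(factor_from_square(77, [9]))  # prints 7
```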
Finding Squares
---------------
Let’s take a set of integers $x_{1},...,x_{L}$ which are $B$-smooth (all $x_{i}$ factor completely into primes $\leq B$). One way to look for $i_{1},i_{2},...,i_{M}$ such that $\prod_{j=1}^{M}x_{i_{j}}$ is a square is as follows. Let $p_{i}$ denote the $i$-th prime number. $\prod_{j=1}^{M}x_{i_{j}}=p_{j_{1}}^{a_{1}}p_{j_{2}}^{a_{2}}...p_{j_{L}}^{a_{L}}$ is a square if and only if $2\mid a_{k}$ for all $k$ $\Leftrightarrow a_{k}\equiv0\, mod\,(2)$. For each $x_{i}$ we will obtain a vector $v^{i}=v\left(x_{i}\right)$ where $v_{j}^{i}=max\left\{ k:p_{j}^{k}\mid x_{i}\right\} \, mod\,\left(2\right)$. That is, each component $j$ of $v^{i}$ is the exponent of $p_{j}$ in the factorization of $x_{i}$ modulo $2$. For example, with the factor base $\{2,3,5,7\}$ (the first four primes):
$$\begin{aligned}
x_{1}=6,v^{1}=\left(1,1,0,0\right)\\
x_{2}=45,v^{2}=\left(0,0,1,0\right)\\
x_{3}=75,v^{3}=\mbox{\ensuremath{\left(0,1,0,0\right)}}\end{aligned}$$
It is immediate that:
$$v\left(\prod_{j=1}^{M}x_{i_{j}}\right)=\sum_{j=1}^{M}v\left(x_{i_{j}}\right)$$
Then$$\prod_{j=1}^{M}x_{i_{j}}\mbox{is a square}\Leftrightarrow v\left(\prod_{j=1}^{M}x_{i_{j}}\right)=\overrightarrow{0}$$
In conclusion, in order to find a subset of $x_{1},...,x_{L}$ whose product is a perfect square, we just need to solve the linear system:$$\left(\begin{array}{ccccccc}
v^{1} & \mid & v^{2} & \mid & \ldots & \mid & v^{L}\end{array}\right)\left(\begin{array}{c}
e_{1}\\
e_{2}\\
\vdots\\
e_{L}\end{array}\right)\equiv\overrightarrow{0}\, mod\,(2)$$
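The following sketch (illustrative only) builds these exponent vectors modulo 2 and searches for a subset summing to the zero vector by brute force; this is adequate for tiny inputs, whereas a real implementation would solve the linear system, e.g. by Gaussian elimination over GF(2):

```python
from itertools import combinations

def exponent_vector_mod2(n, factor_base):
    # Exponent vector of n over the factor base, reduced mod 2; None if n is not smooth.
    vec = []
    for p in factor_base:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        vec.append(e % 2)
    return vec if n == 1 else None

def find_square_subset(numbers, factor_base):
    # Brute-force search for a subset whose product is a perfect square.
    vectors = [exponent_vector_mod2(n, factor_base) for n in numbers]
    assert all(v is not None for v in vectors), "inputs must be smooth over the factor base"
    for size in range(1, len(numbers) + 1):
        for subset in combinations(range(len(numbers)), size):
            summed = [sum(vectors[i][j] for i in subset) % 2
                      for j in range(len(factor_base))]
            if not any(summed):
                return [numbers[i] for i in subset]
    return None

print(find_square_subset([6, 15, 10, 21], [2, 3, 5, 7]))
# [6, 15, 10], since 6 * 15 * 10 = 900 = 30 ** 2
```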
Sieving for smooth numbers
--------------------------
Back to the original problem, we just need to find a convenient set $\left\{ x_{1},x_{2},...,x_{L}\right\} $ such that $\left\{ Q\left(x_{1}\right),Q\left(x_{2}\right),...,Q\left(x_{L}\right)\right\} $ are $B$-smooth numbers for a particular $B$. First of all, let’s note that we don’t need to consider every prime number $\leq B$. If a prime $p$ satisfies $p\mid Q(x)$ for some $x$ then:
$$p\mid Q(x)\Leftrightarrow p\mid x^{2}-N\Leftrightarrow x^{2}\equiv N\, mod\,(p)\Leftrightarrow\left(\frac{N}{p}\right)=1$$
This is because $N$ is a quadratic residue modulo $p$ if and only if the Legendre symbol of $N$ over $p$ is 1. We will take a set of primes that satisfy this property and call it the *factor base*.
In order to consider smaller values of $Q(x)$ we will take values of $x$ around $\sqrt{N},$ i.e. $x\in\left[\lfloor\sqrt{N}\rfloor-M,\lfloor\sqrt{N}\rfloor+M\right]$ for some $M.$ Both $B$ above and $M$ here are chosen as indicated in [@13].
In order to factor all the $Q(x_{i})$ we will use a method called *sieving* which is what gives the quadratic sieve its name. Notice that $p\mid Q(x)\Rightarrow p\mid Q(x+kp)=x^{2}+2kpx+k^{2}p^{2}-N=\left(x^{2}-N\right)+p\left(2kx+k^{2}p\right)$. Then$$Q(x)\equiv0\, mod\,(p)\Rightarrow\forall k\in\mathbb{N},Q(x+kp)\equiv0\, mod\,(p)$$
We can solve the equation $Q(x)\equiv0\, mod\,(p)\Leftrightarrow x^{2}-N\equiv0\, mod\,(p)$ efficiently and obtain two solutions $s_{1},s_{2}$ [@14]. If we take: $$z_{p,\left\{ 1,2\right\} }=min\left\{ x\in\left[\lfloor\sqrt{N}\rfloor-M,\lfloor\sqrt{N}\rfloor+M\right]:x\equiv s_{\left\{ 1,2\right\} }\, mod\,(p)\right\}$$ then all $Q\left(z_{p,\{1,2\}}+kp\right),k\in\left[0,K\right]$ are divisible by $p$. We can divide each one of them by the highest power of $p$ possible. For example:
$$\begin{aligned}
\left(x_{i}\right)= & \left(\ldots,6,7,8,9,10,\ldots\right)\\
\left(Q\left(x_{i}\right)\right)= & \left(\ldots,-41,-28,-13,4,23,\ldots\right)\\
& \left(\frac{77}{2}\right)=1\mbox{ as }77\equiv1\equiv1^{2}\, mod\,(2)\\
& x^{2}-77\equiv0\, mod\,(2)\mbox{ yields }1,3,5,7,9,...\\
& \left(\ldots,-41,-7,-13,1,23,\ldots\right) & \mbox{after sieving by }2\end{aligned}$$
After sieving for every appropriate $p$, all the $Q(z)$ that are equal to $1$ are smooth over the factor base.
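A minimal, purely illustrative sketch of this sieving step is given below. It selects the factor base with Euler's criterion, finds the roots $s_{1},s_{2}$ by brute force (Tonelli-Shanks would be used for large primes), and keeps the $x$ whose $Q(x)$ reduces to $\pm1$ after dividing out the factor base; $N$ is assumed not to be a perfect square:

```python
from math import isqrt

def is_quadratic_residue(n, p):
    # Euler's criterion for an odd prime p: n is a QR mod p iff n^((p-1)/2) = 1 (mod p).
    return pow(n % p, (p - 1) // 2, p) == 1

def sieve_smooth(N, primes, M):
    # Assumes N is not a perfect square, so Q(x) = x^2 - N never vanishes on the interval.
    center = isqrt(N)
    xs = list(range(center - M, center + M + 1))
    values = [x * x - N for x in xs]
    for p in primes:
        if p != 2 and not is_quadratic_residue(N, p):
            continue  # p can never divide Q(x); it is excluded from the factor base
        roots = [s for s in range(p) if (s * s - N) % p == 0]  # brute force; Tonelli-Shanks in practice
        for s in roots:
            start = next((i for i, x in enumerate(xs) if x % p == s), None)
            if start is None:
                continue
            for i in range(start, len(xs), p):  # every p-th position from here is divisible by p
                while values[i] % p == 0:
                    values[i] //= p
    # Q(x) is smooth over the factor base exactly when it has been reduced to +/- 1.
    return [x for x, v in zip(xs, values) if abs(v) == 1]

# With N = 77 (the example above) this keeps x = 5, 8, 9:
# Q(5) = -52 = -(2^2 * 13), Q(8) = -13, Q(9) = 4 = 2^2.
print(sieve_smooth(77, [2, 3, 5, 7, 11, 13], 5))
```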
Method
======
I developed a basic implementation of the Quadratic Sieve MapReduce which runs on Hadoop [@15]. Hadoop is an open source implementation of the MapReduce framework. It is written in Java and has been used effectively in configurations ranging from one to a few thousand computers. It is also available as a commercial cloud service [@16].
This implementation is simply a proof of concept. It relies too heavily on the MapReduce framework and it is severely bound by IO. However, the size and complexity of the implementation are several orders of magnitude lower than many competing alternatives.
The three parts of the program are as follows (a simplified end-to-end sketch is given after the list):
- *Controller*: the master job executed by the platform. It runs before spawning any worker job and has two basic functions: first, it generates the factor base, which is serialized and passed to the workers as a counter. Second, it generates the full interval to sieve. All the data is stored in a single file in the distributed Hadoop file system [@17]. It then relies on the MapReduce framework to automatically split this file into an adequate number of shards and distribute them to the workers.
- *Mapper*: The mappers perform the sieve. Each one receives an interval to sieve and returns the subset of elements in that interval which are smooth over the factor base. All output elements of all mappers share the same key.
- *Reducer*: The reducer receives the set of smooth numbers and attempts to find a subset of them whose product is a square by solving the system modulo 2 using direct bit manipulation. If it finds a suitable subset, it tries to factor the original number, $N$. In general there will be many subsets to choose from. In case the factorization is not successful with one of them, it proceeds to use another one. The single output is the factorization.
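The actual implementation is a Java program running on Hadoop; purely to illustrate how the three parts fit together, the following self-contained Python sketch simulates the controller/mapper/reducer pipeline locally on a toy semiprime, using trial division in the mapper and a brute-force search in the reducer instead of the real sieve and bit-manipulation solver:

```python
from itertools import combinations
from math import gcd, isqrt, prod

def controller(N, primes, M):
    # Build the factor base (2 plus the odd primes for which N is a quadratic residue)
    # and split the sieving interval around sqrt(N) into shards for the mappers.
    base = [p for p in primes if p == 2 or pow(N % p, (p - 1) // 2, p) == 1]
    xs = list(range(isqrt(N) - M, isqrt(N) + M + 1))
    shards = [xs[i:i + 20] for i in range(0, len(xs), 20)]
    return base, shards

def mapper(N, base, shard):
    # Keep the x whose Q(x) = x^2 - N is smooth over the factor base (trial division
    # here; the real mapper sieves as described in the previous section).
    smooth = []
    for x in shard:
        v = abs(x * x - N)
        for p in base:
            while v and v % p == 0:
                v //= p
        if v == 1:
            smooth.append(x)
    return smooth

def reducer(N, smooth):
    # Look for a subset whose Q values multiply to a perfect square and extract a factor.
    for size in range(1, len(smooth) + 1):
        for xs in combinations(smooth, size):
            q = prod(x * x - N for x in xs)
            if q >= 0 and isqrt(q) ** 2 == q:
                g = gcd(prod(xs) - isqrt(q), N)
                if 1 < g < N:
                    return g
    return None

N = 10403  # toy semiprime, 101 * 103
base, shards = controller(N, [2, 3, 5, 7, 11, 13, 17, 19], 30)
smooth_numbers = [x for shard in shards for x in mapper(N, base, shard)]
print(reducer(N, smooth_numbers))  # prints 101
```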
In order to compare performance I developed another implementation of the Quadratic Sieve algorithm in Maple. Both implementations are basic in the sense that they implement the basic algorithm described above and the code has not been heavily optimized for performance. There are many differences between the two frameworks used that could impact performance. Because of that, a direct comparison of running times or memory space may not be meaningful. However, it is interesting to notice how each of the implementations scales depending on the size of the problem. The source code is available online at http://www.javiertordable.com/research.
Results
=======
Tables 1 and 2 show the results both in absolute terms and normalized. Table 3 shows the disk usage of the MapReduce implementation. To test both implementations I took a set of numbers of different sizes[^1]. The number of decimal digits $d$ is indicated in the first column of each table. In order to construct those numbers I took two factors close to $10^{\frac{d}{2}}$, with their product slightly over $10^{d}$.
In each table, sieve size indicates the number of elements that the algorithm analyzed in the sieve phase. For the MapReduce application, the time result is taken from the logs, and the memory result is obtained as the maximum memory used by the process. For the Maple implementation, both time and memory data are taken from the on-screen information in the Maple environment. Finally, disk usage data for the MapReduce implementation is taken as the size of the file that contains the list of numbers to sieve. The Maple program runs completely in memory for the samples analyzed.
  Decimal   Sieve        MapReduce                    Maple
  --------- ------------ ------------ --------------- ------------ ---------------
  Digits    Size         Time ($s$)   Memory ($MB$)   Time ($s$)   Memory ($MB$)
  $10$      $5832$       $2.0$        $149.6$         $0.1$        $7.5$
  $15$      $85184$      $3.0$        $397.1$         $3.5$        $15.5$
  $20$      $970299$     $35.0$       $463.1$         $116.0$      $100.8$
  $25$      $7529536$    $495.0$      $670.0$         $3413.7$     $894.0$

  : Absolute performance of the MapReduce and Maple implementations
  Decimal   Sieve       MapReduce          Maple
  --------- ----------- --------- -------- ----------- --------
  Digits    Size        Time      Memory   Time        Memory
  $10$      $1.0$       $1.0$     $1.0$    $1.0$       $1.0$
  $15$      $14.6$      $1.5$     $2.7$    $35.0$      $2.1$
  $20$      $166.4$     $17.5$    $3.1$    $1160.0$    $13.4$
  $25$      $1291.1$    $247.5$   $4.5$    $34137.0$   $119.2$

  : Normalized performance of the MapReduce and Maple implementations
  Decimal   Absolute Sieve   Relative Sieve   Absolute      Relative
  --------- ---------------- ---------------- ------------- -------------
  Digits    Size             Size             Disk ($MB$)   Disk ($MB$)
  $10$      $5832$           $1.0$            $0.1$         $1.0$
  $15$      $85184$          $14.6$           $2.1$         $14.6$
  $20$      $970299$         $166.4$          $29.4$        $166.4$
  $25$      $7529536$        $1291.1$         $275.3$       $1291.1$

  : Disk usage of the MapReduce implementation
Discussion
==========
The MapReduce implementation has a relatively big setup cost in time and memory when compared with an application in a conventional mathematical environment. However it scales better with respect to the size of the input data.
MapReduce is optimized to split and distribute data from disk. If an application handles a significant volume of data, IO capacity and performance can be a limiting factor. In our case disk usage is directly proportional to the size of the sieve set, which grows exponentially with the number of digits.
Both MapReduce and Maple implementations are similar in terms of development effort. The Maple implementation seems more adequate for small-sized problems, while the MapReduce application is more efficient for medium-sized problems and will also be easier to scale in order to solve harder problems.
[17]{} Rivest, R.; A. Shamir; L. Adleman. 1978. A Method for Obtaining Digital Signatures and Public-Key Cryptosystems. Communications of the ACM 21 (2): 120–126.
Nash, A., Duane, W., and Joseph, C. 2001. Pki: Implementing and Managing E-Security. McGraw-Hill, Inc.
Donald Knuth. 1997. The Art of Computer Programming, Volume 2: Seminumerical Algorithms, Third Edition. Addison-Wesley. ISBN 0-201-89684-2. Section 4.5.4: Factoring into Primes, pp. 379–417
Israel Kleiner. 2005. Fermat: The Founder of Modern Number Theory. Mathematics Magazine, Vol. 78, No. 1 (Feb., 2005), pp. 3-14
McKee, James. 1996. Turning Euler’s Factoring Method into a Factoring Algorithm; in Bulletin of the London Mathematical Society; issue 28 (volume 4); pp. 351-355
Pomerance, C. 1985. The quadratic sieve factoring algorithm. In Proc. of the EUROCRYPT 84 Workshop on Advances in Cryptology: theory and Application of Cryptographic Techniques. Springer-Verlag New York. 169-182.
Lenstra, A. K., Lenstra, H. W., Manasse, M. S., and Pollard, J. M. 1990. The number field sieve. In Proceedings of the Twenty-Second Annual ACM Symposium on theory of Computing. ACM, New York, NY, 564-572.
Cowie, J., Dodson, B., et al. 1996. A World Wide Number Field Sieve Factoring Record: On to 512 Bits. In Proceedings of the international Conference on the theory and Applications of Cryptology and information Security. Lecture Notes In Computer Science, vol. 1163. Springer-Verlag, London, 382-394.
Golliver, R. A., Lenstra, A. K., and McCurley, K. S. 1994. Lattice sieving and trial division. In Proceedings of the First international Symposium on Algorithmic Number theory. L. M. Adleman and M. A. Huang, Eds. Lecture Notes In Computer Science, vol. 877. Springer-Verlag, London, 18-27.
S. Cavallar and W. M. Lioen and H. J. J. Te Riele and B. Dodson and A. K. Lenstra and P. L. Montgomery and B. Murphy Et Al and Mathematisch Centrum. 2000. Factorization of a 512-bit RSA modulus. Proceedings of Eurocrypt 2000. Springer-Verlag. 1-18.
Dean, J. and Ghemawat, S. 2004. MapReduce: Simplified Data Processing on Large Clusters. OSDI’04: Sixth Symposium on Operating System Design and Implementation, San Francisco, CA, December, 2004, 137-150
Gropp, W., Lusk, E., and Skjellum, A. 1994. Using Mpi: Portable Parallel Programming with the Message-Passing Interface. MIT Press. 257-260
Carl Pomerance. 1996. A Tale of Two Sieves, Notices of the AMS, 1473-1485
Niven, I. and Zuckerman, H.S. and Montgomery, H.L. 1960. An introduction to the theory of numbers. John Wiley and Sons, Inc. 110-115
http://hadoop.apache.org/
http://aws.amazon.com/elasticmapreduce/
Borthakur, D. 2007. The hadoop distributed file system: Architecture and design. http://svn.apache.org/repos/asf/hadoop/core/tags/release-0.15.3/docs/hdfs\_design.pdf
[^1]: 1164656837, 117375210056563, 10446257742110057983, 1100472550655106750000029
|
---
abstract: 'It is proved that a metric space is sober, as an approach space, if and only if it is Smyth complete.'
address: 'School of Mathematics, Sichuan University, Chengdu 610064, China'
author:
- Wei Li
- Dexue Zhang
title: Sober metric approach spaces
---
Metric space ,Yoneda completeness ,Smyth completeness ,approach space ,metric approach space ,sober approach space 18B30 ,18B35 ,54B30 ,54E99
Introduction
============
Approach spaces, introduced by Lowen [@RL89], are a common extension of topological spaces and metric spaces. By a metric on a set $X$ we understand, as in Lawvere [@Lawvere73], a map $d:X\times X\rightarrow [0,\infty]$ such that $d(x,x)=0$ and $d(x,y)+d(y,z)\geq d(x,z)$ for all $x,y, z\in X$. An extensive investigation of approach spaces can be found in the monographs of Lowen [@RL97; @Lowen15]. An approach space is said to be a topological one if it is generated by a topological space; and it is said to be a metric one if it is generated by a metric space.
Sober approach spaces, a counterpart of sober topological spaces in the metric setting, are introduced in [@BRC]. It is proved there that a topological space is sober as an approach space, if and only if it is sober as a topological space. So, it is natural to ask what kind of metric approach spaces are sober? A partial answer is obtained in [@BRC]. If $d$ is a usual metric (i.e., a symmetric, separated and finitary metric) on a set $X$, it follows from Corollary 5.19 in [@BRC] that $(X,d)$ is sober, as an approach space, if and only if $(X,d)$ is a complete metric space. This paper presents a complete answer to this question. The answer is a bit surprising: a metric space is sober, as an approach space, if and only if it is Smyth complete. A metric space is Smyth complete if every forward Cauchy net in it converges in its symmetrization [@Goubault; @KS2002]. Smyth completeness originated in the works of Smyth [@Smyth87; @Smyth94] that aimed to provide a common framework for the domain approach and the metric space approach to semantics in computer science.
As advocated in [@GH; @Hof2011; @HST], in this paper we emphasize that the relationship between approach spaces and metric spaces is analogous to that between topological spaces and ordered sets. This point of view has proved to be fruitful, and is well in accordance with the thesis of Smyth [@Smyth87] “that domains are, or should be, a prime area for the application of quasi-uniform ideas, and can help us to get the definitions right."
An order on a set $X$ is a map $X\times X\rightarrow \{0,1\}$ fulfilling certain requirements; a topology (identified with the corresponding closure operator) is a map $X\times 2^X\rightarrow\{0,1\}$ (the transpose of the closure operator) that satisfies certain conditions. Replacing the quantale $2=(\{0,1\},\wedge)$ by Lawvere’s quantale $([0,\infty]^{\rm op},+)$ in the postulations of ordered sets and topological spaces, we obtain metric spaces and approach spaces.
The following commutative squares exhibit some basic relationship among the categories of ordered sets, topological spaces, metric spaces and approach spaces: $$\bfig \square<700,500>[{\sf Ord}` {\sf Top}` {\sf Met} `{\sf App}; \Gamma` \omega ` \omega ` \Gamma] \square(1500,0)/>`<-`<-`>/<700, 500>[{\sf Top} `{\sf Ord} `{\sf App}`{\sf Met} ; \Omega `\iota`\iota`\Omega] \efig$$ where,
- the involved categories are “self evident", and will be explained in the next section;
- the top row: $\Gamma$ sends each ordered set $(X,\leq)$ to its Alexandroff topology, $\Omega $ sends a topological space to its specialization order;
- the bottom row: $\Gamma$ sends a metric space to the corresponding metric approach space, $\Omega$ sends an approach space to its specialization metric;
- $\omega$ (in both cases) is a full and faithful functor with a right adjoint given by $\iota$.
These facts can be found in [@RL97]. The bottom row is an analogy of the top row in the metric setting. In particular, approach spaces extend metric spaces, via the functor $\Gamma$, in the same way as topological spaces extend ordered sets. The problem considered in this paper is to characterize those metric spaces $(X,d)$ for which $\Gamma(X,d)$ are sober. To this end, some properties of the other functors will also be considered. The main results include:
\(1) The specialization metric of a sober approach space is Yoneda complete (Proposition \[Sober implies Yoneda\]). This is an analogy in the metric setting of the fact that the specialization order of a sober topological space is directed complete.
\(2) For a metric space $(X,d)$, the specialization metric space of the sobrification of $\Gamma(X,d)$ coincides with the Yoneda completion of $(X,d)$ (Theorem \[5.8\]).
\(3) For a metric space $(X,d)$, the approach space $\Gamma(X,d)$ is sober if and only if $(X,d)$ is Smyth complete (Theorem \[main\]).
Topological spaces, metric spaces, and approach spaces
======================================================
Write $2$ for the quantale (i.e., a small and complete monoidal closed category) $(\{0,1\},\wedge)$. An ordered set is then a $2$-enriched category. Precisely, an ordered set is a set $X$ together with a map $p:X\times X\rightarrow 2$ such that for all $x,y,z\in X$:
1. $p(x,x)=1$,
2. $p(x,y)\wedge p(y,z)\leq p(x,z)$.
It is traditional to write $x\leq y$ for $p(x,y)=1$ in order theory.
Given a topological space $X$, the closure operator on $X$ induces a map $c:X\times 2^X\rightarrow 2$, given by $$c(x,A)=\left\{\begin{array}{ll}1, & x\in \overline{A},\\
0, &x\notin \overline{A}.\end{array}\right.$$ This map satisfies the following conditions:
1. $c(x, \{x\})=1$,
2. $c(x, \emptyset)=0$,
3. $c(x, A\cup B)=c(x, A)\vee c(x, B)$,
4. $c(x,A)\geq c(x,B)\wedge{\bigwedge}_{y\in B}c(y,A)$.
The condition (C4) expresses the idempotency of the closure operator. Topologies on a set $X$ correspond bijectively to maps $c:X\times 2^X\rightarrow 2$ that satisfy the conditions (C1)-(C4).
The *specialization order* [@Johnstone] of a topological space $X$ is the composite $$X\times X\to^{(x,y)\mapsto(x, \{y\})}X\times 2^X\to^c 2;$$ or equivalently, $x\leq y$ if $x\in\overline{\{y\}}$. Taking specialization order defines a functor $$\Omega:{\sf Top}\rightarrow{\sf Ord}$$ from the category of topological spaces and continuous maps to the category ${\sf Ord}$ of ordered sets and order-preserving maps. The functor $\Omega$ has a left adjoint $$\Gamma:{\sf Ord}\rightarrow{\sf Top}$$ that maps an ordered set $(X,\leq)$ to the space obtained by endowing $X$ with the Alexandroff topology of $(X,\leq)$ (i.e., the topology whose closed sets are the lower subsets in $(X,\leq)$).
A non-empty closed subset $A$ of a topological space $X$ is irreducible if for any closed subsets $B,C$, $A\subseteq B\cup C$ implies $A\subseteq B$ or $A\subseteq C$. A topological space $X$ is sober if for each irreducible closed subset $A$, there exists a unique $x\in X$ such that $A$ equals the closure of $\{x\}$. It is well-known that the specialization order of a sober topological space is directed complete, i.e., every directed set in it has a join [@GH; @Johnstone].
([@Lawvere73]) A metric space is a category enriched over the Lawvere quantale $([0,\infty]^{\rm op},+)$. Explicitly, a metric space $(X,d)$ consists of a set $X$ and a map $d:X\times X\rightarrow [0,\infty]$ such that $d(x,x)=0$ and $d(x,y)+d(y,z)\geq d(x,z)$ for all $x,y, z\in X$. The map $d$ is called a metric, and the value $d(x,y)$ the distance from $x$ to $y$.
A metric space $(X,d)$ is *symmetric* if $d(x,y)=d(y,x)$ for all $x,y\in X$; *separated* if $x=y$ whenever $d(x,y)=d(y,x)=0$; *finitary* if $d(x,y)<\infty$ for all $x,y\in X$. A metric space in the usual sense is exactly a symmetric, separated and finitary one. Given a metric $d$ on a set $X$, the opposite $d^{\rm op}$ of $d$ refers to the metric given by $d^{\rm op}(x,y)=d(y,x)$; the symmetrization $d^{\rm sym}$ of $d$ is given by $d^{\rm sym}(x,y)=\max\{d(x,y),d(y,x)\}$.
A non-expansive map $f: (X,d)\rightarrow (Y,p)$ between metric spaces is a map $f:X\rightarrow Y$ such that $d(x,y)\geq p(f(x),f(y))$ for all $x, y$ in $X$. Metric spaces and non-expansive maps form a category, denoted by ${\sf Met}$. A map $f: (X,d)\rightarrow (Y,p)$ between metric spaces is isometric if $d(x,y)=p(f(x),f(y))$ for all $x,y\in X$.
For any $a,b$ in $[0,\infty]$, the Lawvere distance, $d_L(a,b)$, from $a$ to $b$ is defined to be the truncated minus $b\ominus a$, i.e., $$d_L(a,b)=b\ominus a= \max\{0,b-a\},$$ where we take by convention that $\infty-\infty=0$ and $\infty-a=\infty$ for all $a<\infty$. It is clear that $([0,\infty],d_L)$ is a separated, non-symmetric, and non-finitary metric space.
The opposite of the Lawvere metric is denoted by $d_R$, i.e., $d_R(x,y)=x\ominus y$.
Let $(X,d)$ be a metric space. A weight, a.k.a. a left module [@Lawvere73; @SV2005], of $(X,d)$ is a function $\phi:X\rightarrow[0,\infty]$ such that $\phi(x)\leq \phi(y)+d(x,y)$ for all $x,y\in X$. A coweight, a.k.a. a right module, of $(X,d)$ is a function $\psi:X\rightarrow[0,\infty]$ such that $\psi(y)\leq \psi(x)+d(x,y)$ for all $x,y\in X$. Said differently, a weight of $(X,d)$ is a non-expansive map $\phi:(X, d)\rightarrow([0,\infty],d_R)$; a coweight of $(X,d)$ is a non-expansive map $\psi:(X, d)\rightarrow([0,\infty],d_L)$.
Given a metric space $(X,d)$, let ${{\cal P}}X$ be the set of all weights of $(X,d)$. It is obvious that ${{\cal P}}X$ has the following properties:
1. For each $x\in X$, $d(-,x)\in{{\cal P}}X$. Such weights are said to be representable.
2. For each subset $\{\phi_i\}_{i\in I}$ of ${{\cal P}}X$, both $\inf_{i\in I}\phi_i$ and $\sup_{i\in I}\phi_i$ are in $\mathcal{P}X$.
3. For all $\phi\in\mathcal{P}X$ and $\alpha\in[0,\infty]$, both $\phi+\alpha$ and $\phi\ominus\alpha$ are in $\mathcal{P}X$.
For all $\phi,\psi\in{{\cal P}}X$, let $$\overline{d}(\phi,\psi)=\sup_{x\in X}d_L(\phi(x),\psi(x))$$ Then $\overline{d}$ is a separated metric on ${{\cal P}}X$. For all $x\in X$ and $\phi\in{{\cal P}}X$, it holds that $$\overline{d}(d(-,x),\phi)=\phi(x).$$ In particular, the correspondence $x\mapsto d(-,x)$ defines an isometric map $(X,d)\rightarrow({{\cal P}}X,\overline{d})$. That is, $d(x,y)=\overline{d}(d(-,x),d(-,y))$ for all $x,y\in X$. These facts are instances of the Yoneda lemma and the Yoneda embedding in enriched category theory, see e.g. [@Lawvere73].
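These identities are easy to check computationally on a finite example. The following Python sketch (an illustration only; the three-point non-symmetric metric space and the weight are made up for this purpose) verifies the Yoneda lemma $\overline{d}(d(-,x),\phi)=\phi(x)$ and the isometry of the Yoneda embedding:

```python
import itertools

INF = float("inf")

def d_L(a, b):
    # Lawvere distance on [0, inf]: the truncated minus b - a, with inf - inf = 0.
    return 0.0 if (a == INF and b == INF) else max(0.0, b - a)

# A small non-symmetric metric space on three points; the distances below
# satisfy d(x, x) = 0 and d(x, y) + d(y, z) >= d(x, z).
X = ["a", "b", "c"]
d = {("a", "a"): 0, ("b", "b"): 0, ("c", "c"): 0,
     ("a", "b"): 1, ("b", "a"): 2,
     ("a", "c"): 3, ("c", "a"): 5,
     ("b", "c"): 2, ("c", "b"): 4}

def dbar(phi, psi):
    # The metric on weights: sup_x d_L(phi(x), psi(x)).
    return max(d_L(phi[x], psi[x]) for x in X)

def representable(y):
    # The representable weight d(-, y).
    return {x: d[(x, y)] for x in X}

phi = {"a": 2, "b": 1, "c": 3}   # a weight: phi(x) <= phi(y) + d(x, y) for all x, y
for x in X:
    # Yoneda lemma: dbar(d(-, x), phi) = phi(x).
    assert dbar(representable(x), phi) == phi[x]
for x, y in itertools.product(X, X):
    # Yoneda embedding is isometric: dbar(d(-, x), d(-, y)) = d(x, y).
    assert dbar(representable(x), representable(y)) == d[(x, y)]
print("Yoneda identities verified on the toy space")
```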
\[AP\] ([@RL89; @RL97]) An approach space $(X,\delta)$ consists of a set $X$ and a map $\delta:X\times 2^X \rightarrow [0,\infty]$, subject to the following conditions:
1. $\delta(x, \{x\})=0$,
2. $\delta(x, \emptyset)=\infty$,
3. $\delta(x, A\cup B)=\min\{\delta(x, A),\delta(x, B)\}$,
4. $\delta(x,A)\leq\delta(x,B)+\sup_{b\in B}\delta(b,A)$,
for all $x\in X$ and $ A,B\in 2^X$. The map $\delta$ is called an approach distance on $X$.
It should be noted that in [@RL89; @RL97], instead of (A4), the following condition is used in the definition of approach spaces:
1. For all $\varepsilon\in[0,\infty]$, $\delta(x, A)\leq \delta(x, A^\varepsilon)+ \varepsilon$, where $A^\varepsilon=\{x\in X|\ \delta(x,A) \leq\varepsilon\}$.
In the presence of (A1)-(A3), (A4’) is equivalent to (A4). The implication (A4$'$) $\Rightarrow$ (A4) is contained in [@RL97]. Putting $B=A^\varepsilon$ in (A4) gives the converse implication.
The conditions (A1)-(A4) are the metric versions of (C1)-(C4), respectively. Thus, it can be said that while metric spaces are $[0,\infty]$-valued ordered sets, approach spaces are $[0,\infty]$-valued topological spaces. The theory of approach spaces has been extended to the quantale-valued setting in the recent paper [@LT16].
A contraction $f: (X,\delta)\rightarrow (Y,\rho)$ between approach spaces is a map $f: X\rightarrow Y$ such that $\delta(x,A)\geq \rho(f(x),f(A))$ for all $A\subseteq X$ and $x\in X$. Approach spaces and contractions form a category, denoted by ${\sf App}$.
Given an approach space $(X,\delta)$, define $\Omega(\delta) :X\times X\rightarrow[0,\infty]$ by $\Omega(\delta)(x,y)= \delta(x,\{y\})$, then $\Omega(\delta)$ is a metric on $X$, called the specialization metric of $(X,\delta)$. The term *specialization metric* is chosen because of its analogy to the specialization order of topological spaces. The correspondence $(X,\delta)\mapsto (X,\Omega(\delta))$ defines a functor $$\Omega:{\sf App}\rightarrow{\sf Met}.$$ This functor is a counterpart of $\Omega:{\sf Top}\rightarrow{\sf Ord}$ in the metric setting. We denote both of them by $\Omega$, since it is easy to detect from the context which one is meant.
Given a metric space $(X,d)$, define $\Gamma(d):X\times 2^X \rightarrow [0,\infty]$ by $$\Gamma(d)(x,A)=\left\{\begin{array}{ll}\infty, &A=\emptyset,\\ \inf\limits_{y\in A}d(x,y),& A\not=\emptyset. \end{array}\right.$$ Then $\Gamma(d)$ is an approach distance on $X$, called the Alexandroff distance generated by $d$. The correspondence $(X,d)\mapsto(X,\Gamma(d))$ defines a full and faithful functor $$\Gamma:{\sf Met}\rightarrow {\sf App}$$ that is left adjoint to $\Omega:{\sf App}\rightarrow{\sf Met}$ [@RL97]. In particular, [Met]{} is a coreflective full subcategory of [App]{}. A space of the form $\Gamma(X, d)$ is said to be a metric approach space. The functor $\Gamma:{\sf Met}\rightarrow {\sf App}$ is a metric version of $\Gamma:{\sf Ord}\rightarrow {\sf Top}$.
\[real approach\] For all $x\in[0,\infty]$ and $A\subseteq[0,\infty]$, let $$\delta_\mathbb{P}(x,A)= \left\{\begin{array}{ll}\max\{x-\sup A,0\}, & A\neq\emptyset,\\
\infty, &A=\emptyset.
\end{array}\right.$$ Then $\delta_\mathbb{P}$ is an approach distance on $[0,\infty]$. The approach space $\mathbb{P}=([0,\infty],\delta_\mathbb{P})$ is introduced in Lowen [@RL97], it plays an important role in the theory of approach spaces.
The specialization metric of $\mathbb{P}$ is the opposite $d_R$ of the Lawvere distance on $[0,\infty]$, i.e., $d_R(a,b)=a\ominus b$. The approach space $\mathbb{P}$ is not a metric one. In fact, for all $x\in[0,\infty]$ and $A\subseteq[0,\infty]$, $$\Gamma(d_R)(x,A)= \left\{\begin{array}{ll}\infty, &A=\emptyset, \\ \max\{x-\sup A,0\}, & x\not=\infty, A\neq\emptyset,\\
\infty, & x=\infty, \infty\notin A,\\ 0, & x=\infty\in A.
\end{array}\right.$$ So, $\delta_\mathbb{P}$ and $\Gamma(d_R)$ are different approach distances.
Approach spaces can be equivalently described in many ways [@RL97], one of them we need is the description by regular functions. A regular function of an approach space $(X,\delta)$ is a contraction $\phi:(X,\delta)\rightarrow\mathbb{P}$, where $\mathbb{P}$ is the approach space given in Example \[real approach\]. Explicitly, a regular function of $(X,\delta)$ is a function $\phi:X\rightarrow[0,\infty]$ such that $$\delta(x,A)\geq \phi(x)\ominus \sup \phi(A)$$ for all $x\in X$ and all $A\subseteq X$.
For each subset $A$ of $X$, the condition (A4) in the definition of approach spaces ensures that $\delta(-,A)$ is a regular function of $(X,\delta)$.
The following proposition says that an approach space is uniquely determined by its regular functions.
([@RL97]) \[regular functions\] Let $(X,\delta)$ be an approach space. Then the set $\mathcal{R}X$ of regular functions of $(X,\delta)$ satisfies the following conditions:
1. For each subset $\{\phi_i\}_{i\in I}$ of $\mathcal{R}X$, $\sup_{i\in I}\phi_i\in \mathcal{R}X$.
2. For all $\phi,\psi\in\mathcal{R}X$, $\min\{\phi,\psi\}\in \mathcal{R}X$.
3. For all $\phi\in\mathcal{R}X$ and $\alpha\in[0,\infty]$, both $\phi+\alpha$ and $\phi\ominus \alpha$ are in $\mathcal{R}X$.
Conversely, suppose that $\mathcal{S}\subseteq [0,\infty]^X$ satisfies the conditions [(R1)–(R3)]{}. Define a function $\delta: X\times 2^X\rightarrow [0, \infty]$ by $$\delta(x,A)=\sup\{\phi(x)\mid \phi\in\mathcal{S}, \forall a\in A, \phi(a)=0\}.$$ Then $(X,\delta)$ is an approach space with $\mathcal{S}$ being its set of regular functions.
Contractions between approach spaces can be characterized in terms of regular functions.
\[contraction by regular frame\] ([@RL97]) If $(X,\delta)$ and $(Y,\rho)$ are approach spaces and $f:X\rightarrow Y$ is a map, then $f$ is a contraction if and only if for each $\phi\in\mathcal{R}Y$, $\phi\circ f \in\mathcal{R}X$.
Since $\Omega(\mathbb{P})=([0,\infty],d_R)$, each regular function of an approach space $(X,\delta)$ is a weight of the metric space $(X,\Omega(\delta))$. Given a metric space $(X,d)$, the universal property of the map ${\rm id}:(X,d){\rightarrow}\Omega\circ\Gamma(X,d)$ entails that a map $\phi:X{\rightarrow}[0,\infty]$ is a weight of $(X,d)$ if and only if it is a regular function of $\Gamma(X,d)$, as stated in the following conclusion.
([@RL97], Proposition 3.1.9) \[regular function in MAS\] For a metric space $(X,d)$, a function $\phi:X\rightarrow[0,\infty]$ is a weight of $(X,d)$ if and only if it is a regular function of the approach space $\Gamma(X,d)$.
([@BRC]) An approach prime of an approach space $(X,\delta)$ is a regular function $\phi$ subject to the following conditions:
(1) $\inf_{x\in X}\phi(x)=0$;
(2) for all regular functions $\xi$ and $\psi$ of $(X,\delta)$, if $\min\{\xi, \psi\}\leq \phi$ then either $\xi\leq\phi$ or $\psi\leq\phi$.
For each element $x$ in an approach space $(X,\delta)$, $\delta(-,\{x\})$ is an approach prime. The following notion is central in this paper.
([@BRC]) An approach space $(X,\delta)$ is sober if for each approach prime $\phi$ of $(X,\delta)$, there exists a unique $x\in X$ such that $\phi=\delta (-,\{x\})$.
The approach space $\mathbb{P}$ is sober. This is proved in [@GVV], Proposition 1.6. Another proof is contained in Proposition \[P is sober\].
Write $$\omega: 2\rightarrow[0,\infty]$$ for the map that sends $1$ in the quantale $2$ to $0$ in $[0,\infty]$ and sends $0$ in $2$ to $\infty$ in $[0,\infty]$.
If $p:X\times X\rightarrow 2$ is an order on $X$, then the composite of $$\omega\circ p:X\times X\rightarrow 2\rightarrow[0,\infty]$$ is a metric on $X$. Similarly, if $c:X\times2^X\rightarrow 2$ satisfies the conditions (C1)-(C4), then the composite of $$\omega\circ c:X\times2^X\rightarrow 2\rightarrow[0,\infty]$$ is an approach distance on $X$. These processes yield two full and faithful functors $\omega:{\sf Ord}\rightarrow{\sf Met}$ and $\omega:{\sf Top}\rightarrow{\sf App}$. Both of them are denoted by the same symbol since this will cause no confusion. Approach spaces of the form $\omega(X)$ are said to be *topological* [@RL97].
Write $$\iota: [0,\infty]\rightarrow 2$$ for the map that sends $0$ in $[0,\infty]$ to $1$ in the quantale $2$ and sends all $x$ in $(0,\infty]$ to $0$ in $2$.
Given a metric $d:X\times X\rightarrow[0,\infty]$ on a set $X$, the composite $\iota\circ d$ is an order on $X$, called the *underlying order* of $d$. Given an approach distance $\delta:X\times2^X\rightarrow[0,\infty]$, the composite $\iota\circ \delta$ satisfies the conditions (C1)-(C4), hence determines a topology on $X$, called the *underlying topology* of $\delta$. In this way, we obtain two (forgetful) functors: $\iota: {\sf Met}\rightarrow{\sf Ord}$ and $\iota: {\sf App}\rightarrow{\sf Top}$. It is easily seen that $\iota$ is right adjoint to $\omega$ (for both cases) and that the following diagrams are commutative: $$\bfig
\square<700,500>[{\sf Ord}` {\sf Top}` {\sf Met} `{\sf App}; \Gamma` \omega ` \omega ` \Gamma]
\square(1500,0)/>`<-`<-`>/<700, 500>[{\sf Top} `{\sf Ord} `{\sf App}`{\sf Met} ; \Omega `\iota`\iota`\Omega]
\efig$$
Both $\omega: (2, \wedge)\rightarrow([0,\infty]^{\rm op},+)$ and $\iota:([0,\infty]^{\rm op},+)\rightarrow (2, \wedge)$ are closed maps between quantales [@Ro90] (or, lax functors [@HST; @Lawvere73] if quantales are treated as monoidal closed categories). So, both $\omega:{\sf Ord}\rightarrow{\sf Met}$ and $\iota: {\sf Met}\rightarrow{\sf Ord}$ are examples of the change-of-base functors in enriched category theory [@Lawvere73]. The following conclusion shows that the notion of sober approach spaces extends that of sober topological spaces.
([@BRC]) \[top sober\] A topological space $X$ is sober if and only if $\omega(X)$ is a sober approach space. The underlying topology of a sober approach space is sober.
Sobrification of approach spaces
================================
The sobrification of an approach space $(X,\delta)$ is constructed in [@BRC] as the spectrum of the approach frame of regular functions of $(X,\delta)$. In this section, we present a description of this construction without resort to the notion of approach frames. This description will be useful in subsequent sections.
For an approach space $(X,\delta)$, let $$\widehat{X}=\{\phi\in \mathcal{R}X\mid \phi~\rm{is~an ~approach~prime}\}.$$ For each $\xi\in \mathcal{R}X$, define a map $\widehat{\xi}: \widehat{X}\rightarrow [0,\infty]$ by $$\widehat{\xi}(\phi)=\sup_{x\in X}d_L(\phi(x),\xi(x))=\inf\{\alpha\in[0,\infty]\mid \xi\leq \phi+\alpha\}.$$
\[widehat propoties\] Let $(X,\delta)$ be an approach space.
(1) For all $\xi\in\mathcal{R}X$ and $a\in X$, $\widehat{\xi}(\delta(-,\{a\}))=\xi(a)$.
(2) For all $\xi,\psi \in\mathcal{R}X$, $\xi\leq \psi\Leftrightarrow \widehat{\xi}\leq\widehat{\psi}$.
(3) For all $\xi \in\mathcal{R}X$ and $\phi\in\widehat{X}$, $\widehat{\xi}(\phi)=0 \Leftrightarrow \xi\leq \phi$.
(4) For every subset $\{\xi_i\}_{i\in I}$ of $\mathcal{R}X$, $\widehat{\sup \xi_i}=\sup \widehat{\xi_i}$.
(5) For all $\xi, \psi\in \mathcal{R}X$, $\widehat{\min\{\xi, \psi\}}=\min\{\widehat{\xi},\widehat{\psi}\}$.
(6) For all $\xi\in\mathcal{R}X$ and $\alpha\in [0,\infty]$, $\widehat{\xi+\alpha}=\widehat{\xi}+\alpha$ and $\widehat{\xi\ominus \alpha}= \widehat{\xi}\ominus \alpha $.
We check (1) and (5) for example.
\(1) On one hand, by definition of $\widehat{\xi}$, $$\widehat{\xi}(\delta(-,\{a\}))=\sup_{x\in X}d_L(\delta(x,\{a\}),\xi(x))\geq d_L(\delta(a,\{a\}),\xi(a))=\xi(a).$$ On the other hand, since $\xi:(X,\delta)\rightarrow\mathbb{P}$ is a contraction, it follows that $\xi(x)-\delta(x,\{a\})\leq \xi(a)$ for all $x\in X$, hence $$\widehat{\xi}(\delta(-,\{a\}))=\sup_{x\in X}d_L(\delta(x,\{a\}),\xi(x))\leq \xi(a).$$
\(5) That $\widehat{\min\{\xi, \psi\}}\leq\min\{\widehat{\xi},\widehat{\psi}\}$ is obvious. It remains to check that $\min\{\widehat{\xi}(\phi),\widehat{\psi}(\phi)\}\leq\widehat{\min\{\xi, \psi\}}(\phi)$ for all $\phi\in \widehat{X}$. By definition, $$\widehat{\min\{\xi, \psi\}}(\phi)=\inf\{\alpha\mid \min\{\xi, \psi\}\leq \phi+\alpha\}.$$ Since $\phi$ is an approach prime, for each $\alpha\in[0,\infty]$ with $\min\{\xi,\psi\}\leq \phi+\alpha$, either $\xi\ominus\alpha\leq \phi$ or $\psi\ominus\alpha\leq \phi$; hence either $\xi\leq \phi+\alpha$ or $\psi\leq \phi+\alpha$, so $\min\{\widehat{\xi}(\phi),\widehat{\psi}(\phi)\}\leq \alpha$. Therefore $\min\{\widehat{\xi}(\phi),\widehat{\psi}(\phi)\} \leq\widehat{\min\{\xi, \psi\}}(\phi).$
Given an approach space $(X,\delta)$, the set $\{\widehat{\xi}\mid \xi\in\mathcal{R}X\}$ satisfies the conditions in Proposition \[regular functions\], hence it determines an approach distance $\widehat{\delta}$ on $\widehat{X}$ via $$\label{widehat distance*}\widehat{\delta}(\phi,A)= \sup\{\widehat{\psi}(\phi)\mid \psi\in\mathcal{R}X,\forall \xi\in A,\widehat{\psi}(\xi)=0\}$$ for all $\phi\in\widehat{X}$ and $A\subseteq \widehat{X}$. In particular, for all $\phi,\xi\in \widehat{X}$, $$\label{widehat distance}\widehat{\delta}(\phi,\{\xi\})= \sup\{\widehat{\psi}(\phi)\mid \psi\in\mathcal{R}X, \widehat{\psi}(\xi)=0\} =\widehat{\xi}(\phi).$$
Define a map $$\eta_X:(X,\delta)\rightarrow (\widehat{X},\widehat{\delta})$$ by $\eta_X(x)=\delta(-,\{x\})$. Then for all $x\in X$ and $A\subseteq X$, $$\begin{aligned}
\widehat{\delta}(\eta_X(x), \eta_X(A))&= \sup\{\widehat{\psi}(\delta(-,\{x\}))\mid \psi\in \mathcal{R}X, \forall a\in A, \widehat{\psi}(\delta(-,\{a\}))=0\} \\ &= \sup\{\psi(x)\mid \psi\in \mathcal{R}X, \forall a\in A, \psi(a)=0\}\\ &=\delta(x,A).\end{aligned}$$ This shows that $\eta_X:(X,\delta)\rightarrow (\widehat{X},\widehat{\delta})$ is an isometric map.
It is clear that $(X,\delta)$ is sober if and only if $\eta_X$ is bijective, hence an isomorphism in ${\sf App}$.
\[sobrification\]Let $(X,\delta)$ be an approach space.
(1) $(\widehat{X},\widehat{\delta})$ is a sober approach space.
(2) For each contraction $f$ from $(X,\delta)$ to a sober approach space $(Y,\rho)$, there is a unique contraction $\overline{f}:(\widehat{X},\widehat{\delta})\rightarrow (Y,\rho)$ such that $f=\overline{f}\circ\eta_X$.
\(1) We must show that each approach prime of $(\widehat{X},\widehat{\delta})$ is of the form $\widehat{\delta}(-,\{\phi\})$ for a unique approach prime $\phi$ of $(X,\delta)$. Uniqueness of $\phi$ is clear since $$\widehat{\delta}(\delta(-,\{x\}),\{\phi\})=\phi(x)$$ for all $x\in X$. It remains to prove existence.
By definition, each approach prime (indeed, each regular function) on $(\widehat{X},\widehat{\delta})$ is of the form $\widehat{\xi}$ for some $\xi\in\mathcal{R}X$. Given an approach prime $\widehat{\xi}$ of $(\widehat{X},\widehat{\delta})$, if we could show that $\xi$ is an approach prime of $(X,\delta)$, then we would obtain $\widehat{\xi}=\widehat{\delta}(-,\{\xi\})$ by virtue of Equation (\[widehat distance\]), proving the existence. So, it suffices to show that if $\widehat{\xi}$ is an approach prime of $(\widehat{X},\widehat{\delta})$, then $\xi$ is an approach prime of $(X,\delta)$.
\(a) $\inf_{x\in X}\xi(x)=0$. Given $\varepsilon>0$, since $\widehat{\xi}$ is an approach prime, there is $\phi\in\widehat{X}$ such that $$\widehat{\xi}(\phi)=\sup_{x\in X}d_L(\phi(x),\xi(x))<\varepsilon.$$ Since $\phi$ is an approach prime of $(X,\delta)$, there exists some $x_0$ such that $\phi(x_0)<\varepsilon$. Thus, $\xi(x_0)<2\varepsilon$, so, $\inf_{x\in X}\xi(x)=0$ by arbitrariness of $\varepsilon$.
\(b) Suppose that $\phi, \psi\in\mathcal{R}X$ and that $\min\{\phi, \psi\}\leq \xi$. Then $\min\{\widehat{\phi},\widehat{\psi}\}=\widehat{\min\{\phi, \psi\}}\leq \widehat{\xi}$. Since $\widehat{\xi}$ is an approach prime, either $\widehat{\phi}\leq \widehat{\xi}$ or $\widehat{\psi}\leq \widehat{\xi}$; it follows that either $\phi\leq \xi$ or $\psi\leq \xi$ by Lemma \[widehat propoties\](2).
Therefore, $\xi\in\widehat{X}$, as desired.
\(2) Suppose $(Y,\rho)$ is a sober approach space, $f:(X,\delta)\rightarrow(Y,\rho)$ is a contraction. We show that there is a unique contraction $\overline{f}:(\widehat{X},\widehat{\delta})\rightarrow (Y,\rho)$ such that $f=\overline{f}\circ\eta_X$.
**Existence**. For each $\phi\in \widehat{X}$, let $$f^\dag(\phi) =\sup\{\psi\in\mathcal{R}Y\mid\psi\circ f\leq \phi\}.$$ That is, $\psi\leq f^\dag(\phi)\iff \psi\circ f\leq \phi$ for all $\psi\in\mathcal{R}Y$. We claim that $ f^\dag(\phi)$ is an approach prime of $(Y,\rho)$.
Given $\varepsilon>0$, there exists some $x_\varepsilon\in X$ such that $\phi(x_\varepsilon)<\varepsilon$. Let $y_\varepsilon=f(x_\varepsilon)$. Then $\psi(y_\varepsilon)< \varepsilon$ whenever $\psi\circ f\leq \phi$; it follows that $ f^\dag(\phi)(y_\varepsilon)\leq\varepsilon$. Therefore, $\inf_{y\in Y} f^\dag(\phi)(y)=0$.
Suppose that $\psi,\xi\in\mathcal{R}Y$ and $\min\{\psi,\xi\}\leq f^\dag(\phi)$. Since $$\min\{\psi\circ f,\xi\circ f\}=\min\{\psi,\xi\}\circ f \leq\phi,$$ we obtain that either $\psi\circ f\leq \phi$ or $\xi\circ f\leq \phi$, hence either $\psi\leq f^\dag(\phi)$ or $\xi\leq f^\dag(\phi)$.
Therefore, $ f^\dag(\phi)$ is an approach prime of $(Y,\rho)$. Since $(Y,\rho)$ is sober, there is a unique $y\in Y$ such that $ f^\dag(\phi)=\rho(-,\{y\})$. Define $\overline{f}(\phi)$ to be this $y$. We claim that $\overline{f}:\widehat{X}\rightarrow Y$ satisfies the requirement.
\(a) $\overline{f}:(\widehat{X},\widehat{\delta})\rightarrow (Y,\rho)$ is a contraction. By Proposition \[contraction by regular frame\], it is sufficient to show that for each $\psi\in\mathcal{R}Y$, $\psi\circ \overline{f}$ is a regular function of $(\widehat{X},\widehat{\delta})$. Since $f:(X,\delta)\rightarrow(Y,\rho)$ is a contraction, $\psi\circ f$ is a regular function of $(X,\delta)$. If we could show that $\psi\circ \overline{f}= \widehat{\psi\circ f}$, then $\psi\circ \overline{f}$ would be a regular function of $(\widehat{X},\widehat{\delta})$, as desired.
For each $\phi\in\widehat{X}$, $$\begin{aligned}
\psi\circ \overline{f}(\phi) &=\widehat{\psi}(\rho(-,\{\overline{f}(\phi)\})) &({\rm Lemma}~\ref{widehat propoties}(1))\\
& = \widehat{\psi}( f^\dag(\phi)) &({\rm definition~of}~\overline{f})\\
&= \inf\{\alpha\mid \psi\leq f^\dag(\phi) +\alpha\} \\
&= \inf\{\alpha\mid \psi\ominus \alpha\leq f^\dag(\phi) \} \\
& = \inf\{\alpha\mid \psi\circ f\ominus \alpha\leq \phi \} \\
& = \inf\{\alpha\mid \psi\circ f\leq \phi+\alpha\} \\
& = \widehat{\psi\circ f}(\phi). \end{aligned}$$
\(b) $f=\overline{f}\circ\eta_X$. For each $x\in X$, $\overline{f}\circ\eta_X(x)$ is the unique element in $Y$ such that $\rho(-,\{\overline{f}\circ\eta_X(x)\}) =f^\dag(\delta(-,\{x\}))$, so, it suffices to check that $\rho(-,\{f(x)\}) =f^\dag(\delta(-,\{x\}))$. On one hand, since for any $x'\in X$, $$\rho(-,\{f(x)\})\circ f(x')= \rho(f(x'),\{f(x)\}) \leq \delta(x',\{x\}),$$ it follows that $\rho(-,\{f(x)\})\circ f\leq \delta(-,\{x\})$, hence $\rho(-,\{f(x)\})\leq f^\dag(\delta(-,\{x\}))$. On the other hand, suppose that $\psi\in\mathcal{R}Y$ and $\psi\circ f\leq \delta(-,\{x\})$. Then for any $y\in Y$, $$\begin{aligned}
\psi(y)\leq\psi\circ f(x) +\rho(y,\{f(x)\}) \leq \delta(x,\{x\})+\rho(y,\{f(x)\})=\rho(y,\{f(x)\}), \end{aligned}$$ showing that $f^\dag(\delta(-,\{x\}))\leq \rho(-,\{f(x)\})$.
**Uniqueness**. Suppose $g: (\widehat{X},\widehat{\delta})\rightarrow(Y,\rho)$ is a contraction with $g\circ\eta_X=f$. We show that for each $\phi\in \widehat{X}$, $g(\phi) =\overline{f}(\phi)$, i.e., $ f^\dag(\phi)=\rho(-,\{g(\phi)\})$.
On one hand, for each $x\in X$, since $g$ is a contraction, one has $$\rho(-,\{g(\phi)\})\circ f(x)=\rho(g\circ\eta_X(x),\{g(\phi)\})\leq \widehat{\delta}(\eta_X(x),\{\phi\})= \widehat{\phi}(\eta_X(x))=\phi(x),$$ so, $\rho(-,\{g(\phi)\})\circ f\leq \phi$, hence $\rho(-,\{g(\phi)\})\leq f^\dag(\phi)$.
On the other hand, for every $\psi\in\mathcal{R}Y$ with $\psi\circ f\leq \phi$, since $g$ is a contraction, $\psi\circ g$ is a regular function of $(\widehat{X},\widehat{\delta})$, hence there exists some $\xi\in \mathcal{R}X$ such that $\psi\circ g=\widehat{\xi}$. Then $$\begin{aligned}
\psi\circ f\leq \phi &~\Rightarrow~ \psi\circ g\circ\eta_X\leq\phi \\ &~\Rightarrow~ \widehat{\xi}\circ\eta_X\leq\phi \\ &~\Rightarrow~ \forall x\in X, \xi(x)\leq\phi(x) \\ &~\Rightarrow~ \widehat{\xi}(\phi)=0. \end{aligned}$$ Since $\psi:(Y,\rho)\rightarrow\mathbb{P}$ is a contraction, it follows that for each $y\in Y$, $$\psi(y)\leq \psi\circ g(\phi)+\rho(y,\{g(\phi)\})= \widehat{\xi}(\phi)+\rho(y,\{g(\phi)\}) =\rho(y,\{g(\phi)\}).$$ This proves the inequality $ f^\dag(\phi) \leq\rho(-,\{g(\phi)\})$.
Let ${\sf SobApp}$ denote the full subcategory of ${\sf App}$ consisting of sober approach spaces. The universal property of $(\widehat{X},\widehat{\delta})$ gives rise to a functor $$s:{\sf App}\rightarrow{\sf SobApp},\quad s(X,\delta)= (\widehat{X},\widehat{\delta})$$ that is left adjoint to the inclusion functor ${\sf SobApp}\rightarrow{\sf App}$. The sober approach space $s(X,\delta)$ is called the *sobrification* of $(X,\delta)$.
Yoneda completion of metric spaces
==================================
A net $\{x_{\lambda}\}$ in a metric space $(X,d)$ is forward Cauchy [@BvBR1998; @Wagner97] if $$\inf_{\lambda}\sup_{\nu\geq\mu\geq{\lambda}}d(x_\mu,x_\nu)=0.$$
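For example, every net $\{x_{\lambda}\}$ that is monotone with respect to the underlying order of $d$, in the sense that $d(x_\mu,x_\nu)=0$ whenever $\mu\leq\nu$, is forward Cauchy, since then $$\inf_{\lambda}\sup_{\nu\geq\mu\geq{\lambda}}d(x_\mu,x_\nu)=0$$ trivially; this is the situation used below when a directed subset is regarded as a net.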
([@BvBR1998; @Wagner97]) Let $\{x_{\lambda}\}$ be a net in a metric space $(X,d)$. An element $x\in X$ is a Yoneda limit (a.k.a. liminf) of $\{x_{\lambda}\}$ if for all $y\in X$, $$d(x,y)= \inf_\lambda\sup_{\sigma\geq\lambda}d(x_{\sigma},y).$$
Yoneda limits are not necessarily unique. However, if both $x$ and $y$ are Yoneda limits of a net $\{x_\lambda\}$, then $d(x,y)=d(y,x)=0$. So, Yoneda limits in separated metric spaces are unique.
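Here is a minimal example of non-uniqueness (not taken from the references): if $x\neq y$ but $d(x,y)=d(y,x)=0$, then $d(x,z)=d(y,z)$ for all $z\in X$ by the triangle inequality, and hence both $x$ and $y$ are Yoneda limits of the constant net with value $x$, since for all $z\in X$ $$\inf_\lambda\sup_{\sigma\geq\lambda}d(x,z)=d(x,z)=d(y,z).$$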
([@BvBR1998; @Wagner97]) A metric space is Yoneda complete if each forward Cauchy net in it has a Yoneda limit.
A non-expansive map $f:(X,d)\rightarrow(Y,p)$ is *Yoneda continuous* if it preserves Yoneda limits in the sense that if $a$ is a Yoneda limit of a forward Cauchy net $\{x_{\lambda}\}$ then $f(a)$ is a Yoneda limit of $\{f(x_{\lambda})\}$.
\[d\_L\] Consider the metric space $([0,\infty],d_L)$. If $\{x_\lambda\}$ is a forward Cauchy net in $([0,\infty],d_L)$, then $\{x_\lambda\}$ is either an eventually constant net with value $\infty$ or eventually a Cauchy net of real numbers in the usual sense. In the first case, $\infty$ is a Yoneda limit of $\{x_\lambda\}$; in the second case, the limit of the Cauchy net $\{x_\lambda\}$ is a Yoneda limit of $\{x_\lambda\}$. Thus, $([0,\infty],d_L)$ is Yoneda complete.
It is easily seen that for each forward Cauchy net $\{x_\lambda\}$ in a metric space $(X,d)$, $\{d(x,x_{\lambda})\}$ is a forward Cauchy net in $([0,\infty],d_L)$ for all $x\in X$. In particular, $$\label{forward cauchy nets converge} \inf_{\lambda}\sup_{{\sigma}\geq{\lambda}}d(x,x_{\sigma})= \sup_{\lambda}\inf_{{\sigma}\geq{\lambda}}d(x,x_{\sigma}).$$
\[d\_R\] Consider the metric space $([0,\infty],d_R)$. A net $\{x_\lambda\}$ in $[0,\infty]$ is almost increasing if for each $\varepsilon>0$, there is some ${\lambda}$ such that $x_\mu-x_\nu\leq\varepsilon$ whenever $\nu\geq\mu\geq {\lambda}$. It is clear that every almost increasing net is forward Cauchy in $([0,\infty],d_R)$. Furthermore, if a net $\{x_\lambda\}$ is forward Cauchy in $([0,\infty],d_R)$, then $\{x_\lambda\}$ is either an almost increasing net that tends to infinity or a Cauchy net in the usual sense. In the first case, $\infty$ is a Yoneda limit of $\{x_\lambda\}$ in $([0,\infty],d_R)$; in the second case, the limit of the Cauchy net $\{x_\lambda\}$ is a Yoneda limit of $\{x_\lambda\}$ in $([0,\infty],d_R)$. Thus, $([0,\infty],d_R)$ is Yoneda complete.
The metric space $([0,\infty),d_R)$ is not Yoneda complete, but $([0,\infty),d_L)$ is.
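To verify the first claim, consider the sequence $\{n\}_{n\geq 1}$: writing $d_R(u,v)=\max\{u-v,0\}$, it is forward Cauchy in $([0,\infty),d_R)$ since $d_R(m,n)=0$ for $n\geq m$, but a Yoneda limit $a\in[0,\infty)$ would have to satisfy, for every $y\in[0,\infty)$, $$d_R(a,y)=\inf_{n}\sup_{m\geq n}d_R(m,y)=\infty,$$ which is impossible since $d_R(a,y)<\infty$. (In $([0,\infty],d_R)$ the element $\infty$ plays this role.)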
The underlying order of a Yoneda complete metric space is directed complete.
Let $(X,d)$ be a Yoneda complete metric space, $\leq_d$ be the underlying order of $(X, d)$, and $D$ be a directed subset in $(X,\leq_d)$. Regard $D$ as a net $\{x\}_{x\in D}$ in $(X,d)$ in the obvious way. By definition of $\leq_d$ we have $d(x,y)=0$ whenever $x\leq_d y$. Thus, $\{x\}_{x\in D}$ is a forward Cauchy net in $(X,d)$. Let $a$ be a Yoneda limit of $\{x\}_{x\in D}$. We show that $a$ is a join of $D$ in $(X,\leq_d)$.
Since $a$ is a Yoneda limit of $\{x\}_{x\in D}$, it holds that $d(a,y)=\inf_{x\in D}\sup_{z\geq_d x}d(z,y)$ for each $y\in X$, in particular, $\inf_{x\in D}\sup_{z\geq_d x}d(z,a)=d(a,a)=0$. Thus, for each $\varepsilon>0$ there exists $x_\varepsilon\in D$ such that $d(x,a)<\varepsilon$ for all $x\in D$ with $x\geq_d x_\varepsilon$.
For a fixed $x\in D$, let $y$ be an upper bound of $x$ and $x_\varepsilon$ in $D$. Then $$d(x,a)\leq d(x,y)+d(y,a)\leq \varepsilon.$$ Therefore, $d(x,a)=0$ by arbitrariness of $\varepsilon$, showing that $a$ is an upper bound of $D$.
Let $z$ be another upper bound of $D$. Then for all $y\in D$ we have $d(y,z)=0$. So, $$d(a,z)=\inf_{x\in D}\sup_{y\geq_d x}d(y, z)=0,$$ showing that $a\leq_d z$. This proves that $a$ is a join of $D$ in $(X,\leq_d)$.
For each weight $\phi$ and each coweight $\psi$ of a metric space $(X,d)$, the tensor product of $\phi$ and $\psi$ [@SV2005] (a special case of composition of bimodules in [@Lawvere73]) is an element in $[0,\infty]$, given by $$\phi\otimes \psi=\inf_{x\in X}(\phi(x)+\psi(x)).$$
Let $\phi$ and $\psi$ be a weight and a coweight of a metric space $(X,d)$, respectively. We say that $\phi$ is a right adjoint of $\psi$ (or, $\psi$ is a left adjoint of $\phi$) if $\phi\otimes \psi=0$ and $\phi(x)+\psi(y)\geq d(x,y)$ for all $x,y\in X$. This notion is a special case of adjoint bimodules in enriched category theory [@Lawvere73; @St05]. So, the left adjoint of a weight, if it exists, is unique.
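For instance, one checks directly from this definition that, for every $a\in X$, the representable weight $d(-,a)$ is right adjoint to the representable coweight $d(a,-)$: $$d(-,a)\otimes d(a,-)=\inf_{x\in X}\bigl(d(x,a)+d(a,x)\bigr)\leq d(a,a)+d(a,a)=0,$$ and $d(x,a)+d(a,y)\geq d(x,y)$ for all $x,y\in X$ by the triangle inequality. This is the fact recorded below for representable weights.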
Let $(X,d)$ be a metric space, $\phi$ a weight of $(X,d)$.
(1) ([@Lawvere73]) $\phi$ is a Cauchy weight if it has a left adjoint.
(2) ([@SV2005]) $\phi$ is a flat weight if $\inf_{x\in X}\phi(x)=0$ and $\phi\otimes\max\{\psi_1,\psi_2\}=\max\{\phi\otimes \psi_1,\phi\otimes \psi_2\}$ for all coweights $\psi_1,\psi_2$ of $(X,d)$.
Each representable weight $d(-,x)$ is Cauchy, since it is right adjoint to the coweight $d(x,-)$. Following Lawvere [@Lawvere73], we say that a metric space is Cauchy complete if it is separated and all of its Cauchy weights are representable. In the realm of separated and symmetric metric spaces, this notion of Cauchy completeness agrees with the traditional one, namely, every Cauchy sequence converges.
If $\phi$ is a Cauchy weight of $(X,d)$, it is easy to check that its left adjoint is given by $$\label{left adjoint}\phi^\vdash(x)=\overline{d}(\phi,d(-,x)).$$
\[Cauchy weight\] Let $\phi$ be a Cauchy weight of a metric space $(X,d)$ and $\phi^\vdash$ be its left adjoint.
(1) For each coweight $\psi$ of $(X,d)$, $\phi\otimes \psi=\sup_{y\in X}d_L(\phi^\vdash(y),\psi(y)).$
(2) For each weight $\xi$ of $(X,d)$, $\xi\otimes \phi^\vdash =\overline{d}(\phi,\xi)$.
(3) For each non-empty set $\{\psi_i\}$ of coweights of $(X,d)$, $\phi\otimes\sup_i\psi_i=\sup_i(\phi\otimes \psi_i)$. In particular, $\phi$ is flat.
(4) For each non-empty set $\{\xi_i\}$ of weights of $(X,d)$, $\inf_i \overline{d}(\phi, \xi_i) =\overline{d}(\phi,\inf_i\xi_i).$
The formulas in (1) and (2) are special cases of 2(d) and 2(e) in Lemma 2.2 of Stubbe [@St05], which hold for all quantaloids. We include a direct verification here for the convenience of the reader.
\(1) For each $y\in X$, $$\begin{aligned}
\phi\otimes \psi+ \phi^\vdash(y) &= \inf_{x\in X}(\phi(x)+\psi(x)) + \phi^\vdash(y) \\ & =\inf_{x\in X}(\phi(x)+\psi(x) + \phi^\vdash(y)) \\ &\geq \inf_{x\in X}(d(x,y)+\psi(x)) \\ &\geq \psi(y), \end{aligned}$$ it follows that $\phi\otimes \psi\geq \sup_{y\in X}d_L(\phi^\vdash(y),\psi(y))$. To see the converse inequality, take $a\in[0,\infty]$ with $a\geq \sup_{y\in X}d_L(\phi^\vdash(y),\psi(y))$. Then we have $$\begin{aligned}
&\forall y, d_L(\phi^\vdash(y),\psi(y)) \leq a \\ \Rightarrow ~& \forall y, \psi(y) \leq \phi^\vdash(y)+a \\ \Rightarrow ~& \forall y, \phi(y)+\psi(y)\leq \phi(y)+\phi^\vdash(y)+a\\ \Rightarrow ~& \phi\otimes \psi\leq a.\end{aligned}$$ This proves that $\phi\otimes \psi\leq \sup_{y\in X}d_L(\phi^\vdash(y),\psi(y))$.
\(2) Similar to (1).
\(3) An immediate consequence of (1).
\(4) An immediate consequence of (2).
Let $f:(X,d)\rightarrow(Y,p)$ be a non-expansive map between metric spaces. If $\phi$ is a weight of $(X,d)$ then $f(\phi):Y\rightarrow[0,\infty]$, given by $$f(\phi)(y) =\inf_{x\in X}(\phi(x)+p(y,f(x))),$$ is a weight of $(Y,p)$. If $\psi$ is a weight (coweight, resp.) of $(Y,p)$ then $\psi\circ f$ is a weight (coweight, resp.) of $(X,d)$.
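As a worked instance of this construction, images of representable weights are again representable: for each $a\in X$, $$f(d(-,a))(y)=\inf_{x\in X}\bigl(d(x,a)+p(y,f(x))\bigr)=p(y,f(a)),$$ where $\leq$ follows by taking $x=a$, and $\geq$ from $p(y,f(a))\leq p(y,f(x))+p(f(x),f(a))\leq p(y,f(x))+d(x,a)$, using that $f$ is non-expansive.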
\[image of Cauchy weight\] Let $f:(X,d)\rightarrow(Y,p)$ be a non-expansive map between metric spaces, $\phi$ a weight of $(X,d)$.
(1) If $\phi$ is flat then so is $f(\phi)$.
(2) If $\phi$ is Cauchy then so is $f(\phi)$.
(3) If $f(\phi)$ is Cauchy and $f$ is an isometric map then $\phi$ is Cauchy.
\(1) First, $\inf_{y\in Y}f(\phi)(y) \leq \inf_{y=f(x)}f(\phi)(y)=\inf_{x\in X}\phi(x)=0$. Second, it is easy to check that for each coweight $\psi$ of $(Y,p)$ it holds that $$f(\phi)\otimes\psi=\phi\otimes(\psi\circ f).$$ Therefore, for all coweights $\psi_1, \psi_2$ of $(Y,p)$, we have $$\begin{aligned}
f(\phi)\otimes\max\{\psi_1, \psi_2\}&=\phi\otimes(\max\{\psi_1, \psi_2\}\circ f)\\
&= \phi\otimes(\max\{\psi_1\circ f, \psi_2\circ f\}) \\
&=\max\{\phi\otimes(\psi_1\circ f),\phi\otimes(\psi_2\circ f)\} &(\phi~{\rm is~flat})\\
&=\max\{f(\phi)\otimes\psi_1,f(\phi)\otimes\psi_2\},\end{aligned}$$ showing that $f(\phi)$ is flat.
\(2) If $\phi^\vdash$ is a left adjoint of $\phi$, then the coweight $\psi:Y\rightarrow[0,\infty]$, given by $\psi(y) =\inf_{x\in X}(\phi^\vdash(x)+p(f(x),y))$, is a left adjoint of $f(\phi)$.
\(3) We leave it to the reader to check that if $\psi$ is a left adjoint of $f(\phi)$, then $\psi\circ f$ is a left adjoint of $\phi$.
The following proposition is contained in Vickers [@SV2005], Proposition 7.9 and Theorem 7.15. An extension to generalized partial metric spaces can be found in [@LLZ16], Proposition 7.4.
([@SV2005]) \[flat weight\] Let $(X,d)$ be a metric space. Then for each function $\phi:X\rightarrow[0,\infty]$, the following are equivalent:
1. $\phi$ is a flat weight of $(X,d)$.
2. $\phi$ is a weight of $(X,d)$ satisfying the following conditions:
    (a) $\inf_{x\in X}\phi(x)=0$;
    (b) if $\phi(x_i)<\varepsilon_i\ (i=1,2)$, then there is some $y\in X$ and $\varepsilon>0$ such that $\phi(y)<\varepsilon$ and $d(x_i,y)+\varepsilon<\varepsilon_i\ (i=1,2)$.
3. There is a forward Cauchy net $\{x_{\lambda}\}$ in $(X,d)$ such that $\phi=\inf_{\lambda}\sup_{\sigma\geq{\lambda}} d(-,x_\sigma)$.
For a metric space $(X,d)$, let $$({{\cal F}}X,\overline{d})$$ be the subspace of $({{\cal P}}X,\overline{d})$ consisting of flat weights. Define $${{\bf y}}_X:(X,d)\rightarrow ({{\cal F}}X,\overline{d})$$ by ${{\bf y}}_X(x)= d(-,x)$. Then ${{\bf y}}_X$ is an isometric map.
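The last assertion is the metric form of the Yoneda lemma: for all $x,y\in X$, $$\overline{d}({{\bf y}}_X(x),{{\bf y}}_X(y))=\sup_{z\in X}d_L(d(z,x),d(z,y))=d(x,y),$$ where $\leq$ holds because $d(z,y)\leq d(z,x)+d(x,y)$ for every $z$, and $\geq$ is obtained by taking $z=x$.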
\[supremum\] A metric space $(X,d)$ is Yoneda complete if and only if for each flat weight $\phi$ of $(X,d)$, there is some $a\in X$ such that for all $y\in X$, $$\label{colim} \overline{d}(\phi,{{\bf y}}_X(y))=d(a,y).$$
By Lemma 46 in [@FSW], for each forward Cauchy net $\{x_{\lambda}\}$ in $(X,d)$, an element $a$ in $X$ is a Yoneda limit of $\{x_{\lambda}\}$ if and only if $\overline{d}(\phi,{{\bf y}}_X(y))=d(a,y)$ for all $y\in X$, where $\phi$ is the weight of $(X,d)$ given by $\phi=\inf_{\lambda}\sup_{\sigma\geq{\lambda}} d(-,x_\sigma)$. The conclusion follows immediately from a combination of this fact and Proposition \[flat weight\].
An element $a$ satisfying Equation (\[colim\]) is called, in enriched category theory, a colimit of the identity $(X,d)\rightarrow(X,d)$ weighted by $\phi$ [@Kelly; @KS05; @Ru]. In this paper, we simply say that $a$ is a *colimit* of $\phi$ and write $a={{\rm colim}}\phi$. The above proposition says that a metric space $(X,d)$ is Yoneda complete if and only if every flat weight of $(X,d)$ has a colimit.
The following conclusion is contained in Vickers [@SV2005], Proposition 7.14 and Theorem 7.15. It implies that for each metric space $(X,d)$, the metric space $({{\cal F}}X,\overline{d})$ is Yoneda complete.
([@SV2005])\[Yoneda complete\] Let $(X,d)$ be a metric space. Every forward Cauchy net $\{\phi_{\lambda}\}$ in the metric space $({{\cal P}}X,\overline{d})$ has a Yoneda limit given by $\inf_{\lambda}\sup_{{\lambda}\leq\mu}\phi_\mu$; the subspace $({{\cal F}}X,\overline{d})$ is closed in $({{\cal P}}X,\overline{d})$ with respect to Yoneda limits of forward Cauchy nets.
From the point of view of category theory, a combination of Proposition \[image of Cauchy weight\](1), Proposition \[flat weight\], Proposition \[supremum\] and Theorem \[Yoneda complete\] says that flat weights form a saturated class of weights [@KS05; @LZ07] on metric spaces. As pointed out to us by the referee, the saturatedness of the class of flat weights is a special case of a general result in enriched category theory, namely, Proposition 5.4 in Kelly and Schmidt [@KS05]. The space $({{\cal F}}X,\overline{d})$ has the following universal property: for each non-expansive map $f$ from $(X,d)$ to a separated and Yoneda complete metric space $(Y,p)$, there exists a unique Yoneda continuous map $f^*:({{\cal F}}X,\overline{d}) \rightarrow(Y,p)$ such that $f=f^*\circ{{\bf y}}_X$. This universal property is also a special case of a result in [@Kelly; @KS05] about cocompletion with respect to saturated classes of weights. Because of this universal property, $({{\cal F}}X,\overline{d})$ is called the *Yoneda completion* of $(X,d)$. The subspace of $({{\cal F}}X,\overline{d})$ consisting of Cauchy weights is the *Cauchy completion* of $(X,d)$ [@Lawvere73].
The Yoneda completion of $([0,\infty),d_R)$ is $([0,\infty],d_R)$.
Sobrification of metric approach spaces
=======================================
In this section, we show that the specialization metric space of the sobrification of a metric approach space $\Gamma(X,d)$ coincides with the Yoneda completion of $(X,d)$.
\[approach prime\] Let $(X,\delta)$ be an approach space. If $\{x_{\lambda}\}$ is a forward Cauchy net in $(X,\Omega(\delta))$, then the function $$\phi: X\rightarrow[0,\infty], \quad \phi(x)= \sup_{\lambda}\delta(x,A_{\lambda}),$$ is an approach prime of $(X,\delta)$, where $A_{\lambda}=\{x_\sigma\mid \sigma\geq{\lambda}\}$.
For simplicity, we write $d$ for the metric $\Omega(\delta)$. We prove the conclusion in three steps.
**Step 1**. $\phi$ is a regular function of $(X,\delta)$. This follows from Proposition \[regular functions\](R1) and the fact that $\delta(-,A_{\lambda})$ is a regular function for each ${\lambda}$.
**Step 2**. $\inf_{x\in X}\phi(x)=0$. For any $\varepsilon>0$, there exists ${\lambda}_0$ such that $d(x_\mu,x_\nu)<\varepsilon$ whenever $\nu\geq\mu\geq{\lambda}_0$. Then for all ${\lambda}$, $\delta(x_{{\lambda}_0},A_{\lambda})\leq \inf_{\sigma\geq{\lambda}_0,{\lambda}}d(x_{{\lambda}_0},x_\sigma) <\varepsilon,$ hence $\phi(x_{{\lambda}_0})=\sup_{\lambda}\delta(x_{{\lambda}_0},A_{\lambda})\leq\varepsilon$. This shows that $\inf_{x\in X}\phi(x)=0$.
**Step 3**. For any regular functions $\psi$ and $\xi$ of $(X,\delta)$, if $\min\{\psi,\xi\}\leq \phi$ then either $\psi\leq \phi$ or $\xi\leq \phi$. If not, there exist $x_1$ and $x_2$ such that $\psi(x_1)>\phi(x_1)$ and $\xi(x_2)>\phi(x_2)$. Take $\varepsilon>0$ with $\psi(x_1)-\phi(x_1)>\varepsilon$ and $\xi(x_2)-\phi(x_2)>\varepsilon$, i.e., $$\psi(x_1)-\sup_{\lambda}\delta(x_1,A_{\lambda})>\varepsilon, \quad \xi(x_2)-\sup_{\lambda}\delta(x_2,A_{\lambda})>\varepsilon.$$ Since $\psi,\xi:(X,\delta)\rightarrow\mathbb{P}$ are contractions, for every ${\lambda}$, it holds that $$\delta(x_1,A_{{\lambda}})\geq \psi(x_1)-\sup \psi(A_{{\lambda}}),\quad \delta(x_2,A_{{\lambda}})\geq \xi(x_2)-\sup \xi(A_{{\lambda}}),$$ hence $$\sup \psi(A_{{\lambda}})\geq\psi(x_1)- \delta(x_1,A_{{\lambda}})>\varepsilon,\quad \sup \xi(A_{{\lambda}})\geq\xi(x_2)- \delta(x_2,A_{{\lambda}})>\varepsilon.$$ By arbitrariness of ${\lambda}$ and the forward Cauchyness of $\{x_\lambda\}$, there exists some $\sigma$ such that $\psi(x_\sigma)>\varepsilon$, $\xi(x_\sigma)>\varepsilon$, and $d(x_\sigma,x_\nu)<\varepsilon$ whenever $\sigma\leq\nu$. Then $$\phi(x_\sigma)=\sup_{\lambda}\delta(x_\sigma,A_{\lambda})\leq \sup_{\lambda}\inf_{\nu\geq{\lambda},\sigma}d(x_\sigma, x_\nu) \leq \varepsilon,$$ contradicting the assumption that $\min\{\psi,\xi\}\leq \phi$.
The following conclusion is an analogue, in the metric setting, of the fact that the specialization order of a sober topological space is directed complete.
\[Sober implies Yoneda\] The specialization metric of a sober approach space is Yoneda complete.
Let $(X,\delta)$ be a sober approach space and $d=\Omega(\delta)$ be its specialization metric. Assume that $\{x_{\lambda}\}$ is a forward Cauchy net in $(X,d)$. Then $\sup_{\lambda}\delta(-,A_{\lambda})$ is an approach prime of $(X,\delta)$ by Lemma \[approach prime\]. Since $(X,\delta)$ is sober, there exists $a\in X$ such that $$\sup_{\lambda}\delta(-,A_{\lambda})=\delta(-,\{a\})=d(-,a).$$ We claim that $a$ is a Yoneda limit of $\{x_{\lambda}\}$, i.e., for all $x\in X$ $$\inf_{\lambda}\sup_{\sigma\geq{\lambda}}d(x_\sigma,x)=d(a,x).$$
For each ${\lambda}$, since $\delta(a,A_{\lambda})\leq\sup_\sigma \delta(a,A_\sigma)=d(a,a)=0$, then $$d(a,x)=\delta(a,\{x\}) \leq \delta(a,A_{\lambda})+\sup_{\sigma\geq{\lambda}}d(x_\sigma,x) =\sup_{\sigma\geq{\lambda}}d(x_\sigma,x)$$ by (A4), hence $$d(a,x)\leq \inf_{\lambda}\sup_{\sigma\geq{\lambda}}d(x_\sigma,x).$$
For the converse inequality, we first show that $$\inf_{\lambda}\sup_{\sigma\geq{\lambda}}\sup_\tau\delta(x_\sigma, A_\tau)=0.$$ Given $\varepsilon>0$, since $\{x_{\lambda}\}$ is forward Cauchy, there is some ${\lambda}_0$ such that $d(x_\mu,x_\nu)<\varepsilon$ whenever $\nu\geq\mu\geq{\lambda}_0$. Then for any index $\tau$ and any $\sigma\geq{\lambda}_0$, $$\delta(x_\sigma,A_\tau)\leq\inf_{\mu\geq\sigma,\tau} d(x_\sigma,x_\mu)<\varepsilon,$$ it follows that $\sup_{\sigma\geq{\lambda}_0}\sup_\tau\delta(x_\sigma,A_\tau) \leq\varepsilon$, hence $$\inf_{\lambda}\sup_{\sigma\geq{\lambda}}\sup_\tau\delta(x_\sigma,A_\tau) =0$$ by arbitrariness of $\varepsilon$. Therefore, $$\begin{aligned}
\inf_{\lambda}\sup_{\sigma\geq{\lambda}}d(x_\sigma,x)&\leq \inf_{\lambda}\sup_{\sigma\geq{\lambda}}(d(x_\sigma,a)+d(a,x))\\ &= \inf_{\lambda}\sup_{\sigma\geq{\lambda}}\sup_\tau\delta(x_\sigma,A_\tau) +d(a,x) & (d(x_\sigma,a)=\sup_{\lambda}\delta(x_\sigma,A_{\lambda}))\\ &=d(a,x).\end{aligned}$$ This completes the proof.
\[metric approach prime\] For each metric space $(X,d)$, the approach primes of $\Gamma(X, d)$ are exactly the flat weights of $(X,d)$.
Given an approach prime $\phi$ of $\Gamma(X, d)$, we show that $\phi$ is a flat weight of $(X,d)$. It suffices to check that $\phi$ satisfies the condition (b) in Proposition \[flat weight\].
Suppose $\phi(x_i)<\varepsilon_i\ (i=1,2)$. Consider the functions $\psi(x)=\max\{0, \varepsilon_1-d(x_1,x)\}$ and $\xi(x)=\max\{0, \varepsilon_2-d(x_2,x)\}$. It is easy to check that $\psi$ and $\xi$ are regular functions satisfying $\psi\nleqslant \phi$ and $\xi\nleqslant \phi$ ($\psi(x_1)=\varepsilon_1$, $\xi(x_2)=\varepsilon_2$). Since $\phi$ is an approach prime, we have $\min\{\psi,\xi\}\nleqslant \phi$. Thus, there exists $y\in X$ such that $\phi(y)<\min\{\psi(y),\xi(y)\}$, namely $$\phi(y)<\varepsilon_1-d(x_1,y)\ \ {\rm and}\ \ \phi(y)<\varepsilon_2-d(x_2,y).$$ So, there exists $\varepsilon>0$ such that $\phi(y)<\varepsilon$ and $d(x_i,y)+\varepsilon<\varepsilon_i\ (i=1,2)$.
Conversely, we show that each flat weight $\phi$ of $(X,d)$ is an approach prime of $(X,\Gamma(d))$. By Proposition \[flat weight\], $\phi=\inf_{\lambda}\sup_{\sigma\geq{\lambda}} d(-,x_\sigma)$ for a forward Cauchy net $\{x_{\lambda}\}$ in $(X,d)$. For each $x\in X$, $\{d(x,x_{\lambda})\}$ is a forward Cauchy net in $([0,\infty],d_L)$, hence, thanks to Equation (\[forward cauchy nets converge\]), $$\phi(x)=\inf_{\lambda}\sup_{\sigma\geq{\lambda}} d(x,x_\sigma)= \sup_{\lambda}\inf_{\sigma\geq{\lambda}}d(x,x_\sigma)= \sup_{\lambda}\Gamma(d)(x,A_{\lambda}),$$ where $A_{\lambda}=\{x_\sigma\mid \sigma\geq\lambda\}$. Therefore, $\phi$ is an approach prime of $(X,\Gamma(d))$, by Lemma \[approach prime\].
Now we come to the main result in this section.
\[5.8\] For a metric space $(X,d)$, the specialization metric space of the sobrification of $\Gamma(X, d)$ coincides with the Yoneda completion of $(X,d)$.
Let $(\widehat{X},\widehat{\Gamma(d)})$ denote the sobrification of $\Gamma(X, d)$, and $({{\cal F}}X,\overline{d})$ the Yoneda completion of $(X,d)$. By Lemma \[metric approach prime\], $\widehat{X}$ and ${{\cal F}}X$ have the same elements, i.e., the flat weights of $(X,d)$. For any flat weights $\phi,\psi$ of $(X,d)$, we have by Equation (\[widehat distance\]) that $$\widehat{\Gamma(d)}(\phi,\{\psi\})=\widehat{\psi}(\phi) =\sup_{x\in X}d_L(\phi(x),\psi(x))= \overline{d}(\phi,\psi),$$ showing that the specialization metric of $(\widehat{X},\widehat{\Gamma(d)})$ coincides with $ \overline{d}$.
\[P is sober\] The approach space $\mathbb{P}$ is the sobrification of the metric approach space $\Gamma([0,\infty), d_R)$. In particular, $\mathbb{P}$ is sober.
Suppose that $\phi$ is an approach prime of $([0,\infty), \Gamma(d_R))$. Then there is a forward Cauchy net $\{a_{\lambda}\}$ in $([0,\infty), d_R)$ such that $\phi=\inf_{\lambda}\sup_{\sigma\geq{\lambda}} d_R(-,a_\sigma)$. If $\{a_{\lambda}\}$ is eventually a Cauchy net of real numbers in the usual sense, then $\phi =d_R(-,a)$, where $a=\lim_{\lambda}a_{\lambda}$. If $\{a_{\lambda}\}$ is an almost increasing net that tends to infinity, then $\phi$ is the constant function $\underline{0}$ on $([0,\infty), d_R)$ with value $0$.
Define a map $$f:\mathbb{P}\rightarrow (\widehat{[0,\infty)}, \widehat{\Gamma(d_R)})$$ by $f(a)= d_R(-,a)$ for all $a\in[0,\infty)$ and $f(\infty)=\underline{0}$. We claim that $f$ is an isomorphism of approach spaces. Since $f$ is clearly a bijection, we only need to check that $\delta_\mathbb{P}(b,A)=\widehat{\Gamma(d_R)}(f(b),f(A))$ for all $b\in[0,\infty]$ and (non-empty) $A\subseteq[0,\infty]$.
By Equation (\[widehat distance\*\]), $$\widehat{\Gamma(d_R)}(f(b),f(A))= \sup\{\widehat{\phi}(f(b))\mid \phi\in\mathcal{R}[0,\infty),\forall a\in A, \widehat{\phi}(f(a))=0\},$$ where, $\mathcal{R}[0,\infty)$ denotes the set of regular functions of $([0,\infty), \Gamma(d_R))$. For each $\phi\in\mathcal{R}[0,\infty)$, we have $\widehat{\phi}(f(a))=\widehat{\phi}(d_R(-,a))=\phi(a)$ for $a\in[0,\infty)$ and $\widehat{\phi}(f(\infty))= \widehat{\phi}(\underline{0})=\sup_{x\in[0,\infty)}\phi(x)$. We proceed with three cases.
**Case 1**. $\sup A=\infty$. In this case, the constant function $\underline{0}$ is the only regular function of $([0,\infty), \Gamma(d_R))$ that satisfies the condition that $\widehat{\phi}(f(a))=0$ for all $a\in A$, hence $$\delta_\mathbb{P}(b,A)=0 =\widehat{\underline{0}}(f(b))= \widehat{\Gamma(d_R)}(f(b),f(A)).$$
**Case 2**. $b=\infty$, $\sup A<\infty$. Since the regular function $\phi=d_R(-, \sup A)$ satisfies the condition that $\widehat{\phi}(f(a))=0$ for all $a\in A$, $$\widehat{\Gamma(d_R)}(f(b),f(A)) \geq \widehat{\phi}(f(b)) = \widehat{\phi}(\underline{0}) = \sup_{x\in[0,\infty)}d_R(x,\sup A)=\infty,$$ it follows that $\delta_\mathbb{P}(b,A) =\infty=\widehat{\Gamma(d_R)}(f(b),f(A))$.
**Case 3**. $b<\infty$, $\sup A<\infty$. Since $\psi= d_R(-,\sup A)$ is a regular function of $([0,\infty), \Gamma(d_R))$ such that $\widehat{\psi}(f(a))=0$ for all $a\in A$, it follows that $$\delta_\mathbb{P}(b,A) = d_R(b,\sup A) = \widehat{\psi}(f(b))\leq \widehat{\Gamma(d_R)}(f(b),f(A)).$$ Conversely, let $\phi$ be a regular function on $([0,\infty), \Gamma(d_R))$ such that $\widehat{\phi}(f(a))=\phi(a)=0$ for all $a\in A$. Since $\phi$ is a weight of $([0,\infty), d_R)$ by Proposition \[regular function in MAS\], it follows that $$\widehat{\phi}(f(b))=\phi(b)\leq\phi(a)+d_R(b,a) =d_R(b,a)$$ for all $a\in A$, hence $\widehat{\phi}(f(b))\leq\inf_{a\in A}d_R(b,a)=d_R(b,\sup A)$. Therefore $$\widehat{\Gamma(d_R)}(f(b),f(A))\leq d_R(b,\sup A)= \delta_\mathbb{P}(b,A),$$ completing the proof.
Sober metric approach spaces
============================
In this section we show that a metric approach space $\Gamma(X,d)$ is sober if and only if the metric space $(X,d)$ is Smyth complete.
\[Smyth complete\] A metric space is Smyth complete if it is separated and all of its forward Cauchy nets converge in its symmetrization.
The metric space $([0,\infty],d_L)$ is Smyth complete. But, $([0,\infty],d_R)$ is not, though it is Yoneda complete.
Smyth completeness originated in the works of Smyth [@Smyth87; @Smyth94]. The above definition is taken from [@Goubault; @KS2002]. For more information on Smyth completeness the reader is referred to [@BvBR1998; @FS02; @Goubault; @KSF; @KS2002]. In these works, Smyth completeness is more or less related to the topological properties of the spaces under consideration. However, as shown below, if we view metric spaces as categories enriched over Lawvere’s quantale $([0,\infty]^{\rm op},+)$, Smyth completeness for metric spaces can be formulated purely in categorical terms: a metric space is Smyth complete if it is separated and all of its flat weights are representable. This shows, in close resemblance to Lawvere’s formulation of complete metric spaces (i.e., every Cauchy weight is representable), that Smyth completeness is a categorical property. This can be thought of as an example for the question “whether Lawvere’s work has any bearing on what we are doing here”, raised by Smyth in [@Smyth87].
We need some preparations. A net $\{x_{\lambda}\}$ in a metric space $(X,d)$ is biCauchy [@KS2002] if $$\inf_{\lambda}\sup_{\nu,\mu\geq{\lambda}}d(x_\mu,x_\nu)=0.$$ Every forward Cauchy net in $([0,\infty],d_L)$ is biCauchy. The sequence $\{n\}$ in $([0,\infty],d_R)$ is forward Cauchy, but not biCauchy.
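A quick check of the last statement: writing $d_R(u,v)=\max\{u-v,0\}$, for the sequence $\{n\}$ one has $d_R(x_\mu,x_\nu)=0$ whenever $\nu\geq\mu$, so it is forward Cauchy, whereas for every ${\lambda}$ $$\sup_{\nu,\mu\geq{\lambda}}d_R(x_\mu,x_\nu)=\sup_{\mu,\nu\geq\lambda}\max\{\mu-\nu,0\}=\infty,$$ so it is not biCauchy.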
\[bicauchy=cauchy\] A forward Cauchy net $\{x_{\lambda}\}$ in a metric space $(X,d)$ is biCauchy if and only if the weight $\phi=\inf_{\lambda}\sup_{\sigma\geq{\lambda}} d(-,x_\sigma)$ is Cauchy.
If $\{x_{\lambda}\}$ is biCauchy, it is easily verified that the coweight $\psi=\inf_{\lambda}\sup_{\sigma\geq{\lambda}} d(x_\sigma,-)$ is a left adjoint of $\phi$, hence $\phi$ is Cauchy. Conversely, suppose that $\{x_{\lambda}\}$ is a forward Cauchy net and $\phi=\inf_{\lambda}\sup_{\sigma\geq{\lambda}} d(-,x_\sigma)$ is a Cauchy weight. By Equation (\[left adjoint\]) the left adjoint $\psi$ of $\phi$ is given by $$\psi(x) =\overline{d}(\phi,d(-,x)).$$ Since $\phi$ is a Yoneda limit of the forward Cauchy net $\{d(-,x_{\lambda})\}$ in $(\mathcal{F}X, \overline{d})$ by Theorem \[Yoneda complete\], it follows that for all $x\in X$, $$\begin{aligned}
\psi(x) &=\overline{d}(\phi,d(-,x)) \\
&=\inf_{\lambda}\sup_{\sigma\geq{\lambda}}\overline{d}(d(-,x_\sigma),d(-,x))\\
&=\inf_{\lambda}\sup_{\sigma\geq{\lambda}}d(x_\sigma,x).\end{aligned}$$ Therefore, $$\inf_{\lambda}\sup_{\sigma,\mu\geq{\lambda}}d(x_\sigma,x_\mu)\leq \inf_{x\in X}\inf_{\lambda}\sup_{\sigma,\mu\geq{\lambda}}(d(x,x_\mu)+d(x_\sigma,x)) = \inf_{x\in X}(\phi(x)+\psi(x))=0,$$ showing that $\{x_{\lambda}\}$ is biCauchy.
The above lemma is similar to Proposition 4.13 in Hofmann and Reis [@HR2013]. However, there is a subtle difference. Proposition 4.13 in [@HR2013] says that for every net $\{x_{\lambda}\}$ in a metric space, the coweight $\psi=\inf_{\lambda}\sup_{\sigma\geq{\lambda}} d(x_\sigma,-)$ is left adjoint to the weight $\phi=\inf_{\lambda}\sup_{\sigma\geq{\lambda}} d(-,x_\sigma)$ if and only if $\{x_{\lambda}\}$ is biCauchy. The above lemma shows that for a forward Cauchy net $\{x_{\lambda}\}$, if the weight $\phi=\inf_{\lambda}\sup_{\sigma\geq{\lambda}} d(-,x_\sigma)$ has a left adjoint, then this left adjoint must be $\psi=\inf_{\lambda}\sup_{\sigma\geq{\lambda}} d(x_\sigma,-)$ and $\{x_{\lambda}\}$ is biCauchy.
A metric space $(X,d)$ is Smyth complete if and only if for each flat weight $\phi$ of $(X,d)$, there is a unique $a\in X$ such that $\phi=d(-,a)$.
**Sufficiency**. That $(X,d)$ is separated is obvious. Given a forward Cauchy net $\{x_{\lambda}\}$ in $(X,d)$, let $\phi=\inf_{\lambda}\sup_{\sigma\geq{\lambda}} d(-,x_\sigma)$. Then $\phi$ is a flat weight, hence $\phi=d(-,a)$ for some $a\in X$. We leave it to the reader to check that $\{x_{\lambda}\}$ converges to $a$ in $(X,d^{\rm sym})$.
**Necessity**. Let $\phi$ be a flat weight of $(X,d)$. By Proposition \[flat weight\], there is a forward Cauchy net $\{x_{\lambda}\}$ in $(X,d)$ such that $\phi=\inf_{\lambda}\sup_{\sigma\geq{\lambda}} d(-,x_\sigma)$. By assumption, $\{x_{\lambda}\}$ has a unique limit, say $a$, in $(X,d^{\rm sym})$. So, $\{x_{\lambda}\}$ is a biCauchy net in $(X,d)$ with $a$ as a Yoneda limit. Thus, $\phi$ is a Cauchy weight by Lemma \[bicauchy=cauchy\] and ${{\rm colim}}\phi=a$ by Proposition \[supremum\]. Then, by Equation (\[left adjoint\]) and Equation (\[colim\]), $d(a,-)$ is a left adjoint of $\phi$, hence $\phi=d(-,a)$.
If there is an isometric map from a metric space $(X,d)$ to a Smyth complete metric space $(Y,p)$, then, by Proposition \[image of Cauchy weight\], every flat weight of $(X,d)$ will be a Cauchy weight. This leads to the following
A metric space is Smyth completable if all of its flat weights are Cauchy.
The following conclusion says that the above definition of Smyth completable metric spaces is equivalent to that in [@KS2002].
A metric space is Smyth completable if and only if all of its forward Cauchy nets are biCauchy.
**Sufficiency**. Let $\phi$ be a flat weight. By Proposition \[flat weight\], there is a forward Cauchy net $\{x_{\lambda}\}$ such that $\phi=\inf_{\lambda}\sup_{\sigma\geq{\lambda}} d(-,x_\sigma)$. By assumption, $\{x_{\lambda}\}$ is a biCauchy net, thus, $\phi$ is a Cauchy weight by Lemma \[bicauchy=cauchy\].
**Necessity**. Let $\{x_{\lambda}\}$ be a forward Cauchy net. Then $\phi=\inf_{\lambda}\sup_{\sigma\geq{\lambda}} d(-,x_\sigma)$ is a flat weight by Proposition \[flat weight\], hence a Cauchy weight by assumption. Thus, $\{x_{\lambda}\}$ is biCauchy by Lemma \[bicauchy=cauchy\].
Let $(X,d)$ be a metric space. The following are equivalent:
(1) $(X,d)$ is Smyth completable.
(2) The sobrification of $\Gamma(X, d)$ is a metric approach space.
(3) The Yoneda completion of $(X,d)$ is idempotent, i.e., the map ${{\bf y}}_{{{\cal F}}X}: ({{\cal F}}X,\overline{d})\rightarrow ({{\cal F}}({{\cal F}}X),\overline{\overline{d}})$ is surjective.
In this case, the sobrification of $\Gamma(X, d)$ is generated by the Cauchy completion of $(X,d)$.
$(1)\Rightarrow(2)$ First of all, by virtue of Lemma \[metric approach prime\], each approach prime of $(X,\Gamma(d))$ is a flat weight of $(X,d)$, hence a Cauchy weight of $(X,d)$.
If we could show that for each approach prime $\psi$ of $(X,\Gamma(d))$ and every non-empty set $\{\phi_i\}_{i\in I}$ of approach primes of $(X,\Gamma(d))$, it holds that $$\widehat{\Gamma(d)}(\psi,\{\phi_i\}_{i\in I})=\inf_{i\in I}\overline{d}(\psi,\phi_i),$$ then the sobrification of $(X,\Gamma(d))$ would be a metric approach space, generated by $({{\cal F}}X,\overline{d})$, the Yoneda completion of $(X,d)$. To see this, we calculate: $$\begin{aligned}
\widehat{\Gamma(d)}(\psi,\{\phi_i\}_{i\in I}) &=\sup\{\widehat{\xi}(\psi)\mid \xi\in\mathcal{R}X,~\forall i\in I,~\widehat{\xi}(\phi_i)=0\} \\
&=\sup\{\widehat{\xi}(\psi)\mid \xi\in\mathcal{R}X,~\forall i\in I,~\xi\leq \phi_i\}\\
&=\sup\{\widehat{\xi}(\psi)\mid \xi\in\mathcal{R}X,~\xi\leq \inf_{i\in I}\phi_i\}\\
&=\widehat{\inf_{i\in I}\phi_i}(\psi)&(\inf_{i\in I}\phi_i\in\mathcal{R}X )\\
&=\sup_{x\in X}d_L(\psi(x),\inf_{i\in I}\phi_i(x))\\
&=\inf_{i\in I}\sup_{x\in X}d_L(\psi(x),\phi_i(x))&({\rm Lemma~ \ref{Cauchy weight}})\\
&=\inf_{i\in I}\overline{d}(\psi,\phi_i),\end{aligned}$$ where, $\mathcal{R}X$ denotes the set of regular functions of $(X,\Gamma(d))$.
$(2)\Rightarrow(3)$ Since the sobrification $(\widehat{X},\widehat{\Gamma(d)})$ of $(X,\Gamma(d))$ is a metric approach space, it must be generated by the Yoneda completion $({{\cal F}}X,\overline{d})$ of $(X,d)$ by Theorem \[5.8\]. Given a flat weight $\Phi:{{\cal F}}X\rightarrow[0,\infty]$ of $({{\cal F}}X,\overline{d})$, it follows from Lemma \[metric approach prime\] that $\Phi$ is an approach prime of $(\widehat{X},\widehat{\Gamma(d)})$. Thus, there exists a unique $\xi\in \widehat{X}(={{\cal F}}X)$ such that $$\Phi=\widehat{\Gamma(d)}(-,\{\xi\}) = \overline{d}(-,\xi).$$ This shows that ${{\bf y}}_{{{\cal F}}X}: ({{\cal F}}X,\overline{d})\rightarrow ({{\cal F}}({{\cal F}}X),\overline{\overline{d}})$ is surjective, the conclusion thus follows.
$(3)\Rightarrow(1)$ If $\phi$ is flat, then ${{\bf y}}_X(\phi)$ is a flat weight of $({{\cal F}}X,\overline{d})$. Thus, ${{\bf y}}_X(\phi)=\overline{d}(-,\psi)$ for some $\psi\in {{\cal F}}X$ since ${{\bf y}}_{{{\cal F}}X}: ({{\cal F}}X,\overline{d})\rightarrow ({{\cal F}}({{\cal F}}X),\overline{\overline{d}})$ is surjective. This shows that ${{\bf y}}_X(\phi)$ is a Cauchy weight of $({{\cal F}}X,\overline{d})$. Then, applying Proposition \[image of Cauchy weight\](3) to ${{\bf y}}_X$ gives that $\phi$ is Cauchy.
In this case, the Cauchy completion and the Yoneda completion coincide with each other. Hence, the final claim follows from Theorem \[5.8\].
([@BRC]) Let $(X,d)$ be a symmetric metric space. Then the sobrification of $(X,\Gamma(d))$ is a metric approach space and is generated by the Cauchy completion of $(X,d)$.
This follows from the fact that every symmetric metric space is Smyth completable.
\[main\] Let $(X,d)$ be a metric space. The following are equivalent:
(1) The approach space $(X,\Gamma(d))$ is sober.
(2) $(X,d)$ is Smyth complete.
(3) $(X,d)$ is a fixed point of the Yoneda completion, i.e., ${{\bf y}}_X:(X,d)\rightarrow ({{\cal F}}X,\overline{d})$ is an isomorphism.
$(1)\Rightarrow(2)$ If $\phi$ is a flat weight of $(X,d)$, then $\phi$ is an approach prime of $(X,\Gamma(d))$ by Lemma \[metric approach prime\], so there is a unique $a\in X$ such that $\phi=\Gamma(d)(-,\{a\})=d(-,a)$, showing that $(X,d)$ is Smyth complete.
$(2)\Rightarrow(3)$ This follows from the construction of the Yoneda completion and the fact that all flat weights of $(X,d)$ are of the form $d(-,a)$.
$(3)\Rightarrow(1)$ Let $\phi$ be an approach prime of $(X,\Gamma(d))$. By Lemma \[metric approach prime\], $\phi$ is a flat weight of $(X,d)$, hence an element of the Yoneda completion of $(X,d)$. Since $(X,d)$ is a fixed point of the Yoneda completion, there is a unique $a\in X$ such that $\phi=d(-,a) =\Gamma(d)(-,\{a\})$. Hence $(X,\Gamma(d))$ is sober.
[**Acknowledgement**]{} The authors cordially thank the referee for her/his most valuable comments and helpful suggestions.
[10]{} B. Banaschewski, R. Lowen, C. Van Olmen, Sober approach spaces, Topology and its Applications 153 (2006) 3059-3070.
M. M. Bonsangue, F. van Breugel, J. J. M. M. Rutten, Generalized metric space: completion, topology, and powerdomains via the Yoneda embedding, Theoretical Computer Science 193 (1998) 1-51.
R. C. Flagg, P. Sünderhauf, The essence of ideal completion in quantitative form, Theoretical Computer Science 278 (2002) 141-158.
R. C. Flagg, P. Sünderhauf, K. R. Wagner, A logical approach to quantitative domain theory, Topology Atlas Preprint No. 23, 1996. http://at.yorku.ca/e/a/p/p/23.htm
A. Gerlo, E. Vandersmissen, C. Van Olmen, Sober approach spaces are firmly reflective for the class of epimorphic embeddings, Applied Categorical Structures 14 (2006) 251-258.
G. Gierz, K. H. Hofmann, K. Keimel, J. D. Lawson, M. Mislove, D. S. Scott, Continuous Lattices and Domains, Cambridge University Press, 2003.
J. Goubault-Larrecq, Non-Hausdorff Topology and Domain Theory, Cambridge University Press, Cambridge, 2013.
G. Gutierres, D. Hofmann, Approaching metric domains, Applied Categorical Structures 21(2013) 617-650.
D. Hofmann, Injective spaces via adjunction, Journal of Pure and Applied Algebra 215 (2011) 283-302.
D. Hofmann, G. J. Seal, W. Tholen (editors), Monoidal Topology: A Categorical Approach to Order, Metric, and Topology, Encyclopedia of Mathematics and its Applications, Vol. 153, Cambridge University Press, Cambridge, 2014.
D. Hofmann, C. D. Reis, Probabilistic metric spaces as enriched categories, Fuzzy Sets and Systems 210 (2013) 1-21.
P. T. Johnstone, Stone Spaces, Cambridge University Press, Cambridge, 1982.
G. M. Kelly, Basic Concepts of Enriched Category Theory, London Mathematical Society Lecture Notes Series, Vol. 64, Cambridge University Press, Cambridge, 1982.
G. M. Kelly, V. Schmitt, Notes on enriched categories with colimits of some class, Theory and Applications of Categories 14 (2005) 399-423.
R. Kopperman, P. Sünderhauf, B. Flagg, Smyth completion as bicompletion, Topology and its Applications 91 (1999) 169-180.
H. P. Künzi, M. P. Schellekens, On the Yoneda completion of a quasi-metric space, Theoretical Computer Science 278 (2002) 159-194.
H. Lai, W. Tholen, Quantale-valued approach spaces via closure and convergence, 2016. arXiv:1604.08813v1
H. Lai, D. Zhang, Complete and directed complete $\Omega$-categories, Theoretical Computer Science 388 (2007) 1-25.
F. W. Lawvere, Metric spaces, generalized logic, and closed categories, Rendiconti del Seminario Matematico e Fisico di Milano 43 (1973) 135-166.
W. Li, H. Lai, D. Zhang, Yoneda completeness and flat completeness of ordered fuzzy sets, Fuzzy Sets and Systems 313 (2017) 1-24.
R. Lowen, Approach spaces: a common supercategory of **TOP** and **MET**, Mathematische Nachrichten 141 (1989) 183-226.
R. Lowen, Approach Spaces: the Missing Link in the Topology-Uniformity-Metric Triad, Oxford University Press, 1997.
R. Lowen, Index Analysis, Approach Theory at Work, Springer, 2015.
K. I. Rosenthal, Quantales and Their Applications, Pitman Research Notes in Mathematics Series, Vol. 234, Longman, Essex, 1990.
J. J. M. M. Rutten, Weighted colimits and formal balls in generalized metric spaces, Topology and its Applications 89 (1998) 179-202.
M. B. Smyth, Quasi-uniformities: Reconciling domains with metric spaces, Lecture Notes in Computer Science, Vol. 298, Springer, Berlin, 1987, pp. 236-253.
M. B. Smyth, Completeness of quasi-uniform and syntopological spaces, Journal of London Mathematical Society 49 (1994) 385-400.
I. Stubbe, Categorical structures enriched in a quantaloid: categories, distributors and functors, Theory and Applications of Categories 14 (2005) 1-45.
S. Vickers, Localic completion of generalized metric spaces, Theory and Applications of Categories 14 (2005) 328-356.
K. R. Wagner, Liminf convergence in ${\Omega}$-categories, Theoretical Computer Science 184 (1997) 61-104.
---
abstract: 'We present a considerably improved analysis of model-independent bounds on new physics effects in non-leptonic tree-level decays of B-mesons. Our main finding is that contributions of about $\pm 0.1 $ to the Wilson coefficient of the colour-singlet operator $Q_2$ of the effective weak Hamiltonian and contributions in the range of $\pm 0.5$ (both for real and imaginary part) to $Q_1$ can currently not be excluded at the $90\%$ C.L. Effects of such a size can modify the direct experimental extraction of the CKM angle $\gamma$ by up to $10^{\circ}$ and they could lead to an enhancement of the decay rate difference $\Delta \Gamma_d$ of up to a factor of 5 over its SM value - a size that could explain the D0 dimuon asymmetry. Future more precise measurements of the semi-leptonic asymmetries $a_{sl}^q$ and the lifetime ratio $\tau (B_s) / \tau (B_d)$ will allow us to shrink the bounds on tree-level new physics effects considerably. Due to significant improvements in the precision of the non-perturbative input we update all SM predictions for the mixing observables in the course of this analysis, obtaining: $\Delta M_s = (18.77 \pm 0.86 ) \, \mbox{ps}^{-1}$, $\Delta M_d = (0.543 \pm 0.029) \, \mbox{ps}^{-1}$, $\Delta \Gamma_s = (9.1 \pm 1.3 ) \cdot 10^{-2} \, \mbox{ps}^{-1}$, $\Delta \Gamma_d = (2.6 \pm 0.4 ) \cdot 10^{-3} \, \mbox{ps}^{-1}$, $a_{sl}^s = (2.06 \pm 0.18) \cdot 10^{-5}$ and $a_{sl}^d = (-4.73 \pm 0.42) \cdot 10^{-4}$.'
address:
- |
Institute for Particle Physics Phenomenology, Durham University,\
DH1 3LE Durham, United Kingdom\
alexander.lenz@durham.ac.uk
- |
[Theoretische Physik 1, Naturwissenschaftlich-Technische Fakultät, Universität Siegen, Walter-Flex-Strasse 3, D-57068 Siegen, Germany\
gtx@physik.uni-siegen.de]{}
- |
[ Nikhef, Theory Group, Science Park 105, 1098 XG, Amsterdam, The Netherlands.\
gtx@nikhef.nl]{}
author:
- Alexander Lenz
- 'Gilberto Tetlalmatzi-Xolocotzi'
bibliography:
- 'paper\_v18.bib'
title: 'Model-independent bounds on new physics effects in non-leptonic tree-level decays of B-mesons'
---
New Physics, B-Physics, CP violation
Introduction {#sec:intro}
============
Motivations for flavour physics are manifold. Standard model parameters, like the elements of the Cabibbo-Kobayashi-Maskawa (CKM) matrix [@Cabibbo:1963yz; @Kobayashi:1973fv] or the quark masses, are determined very accurately in this field. Moreover, the quark sector is the only sector where CP violating effects have been detected so far - since 1964 in the Kaon sector [@Christenson:1964fg] and since 2001 also in the B-sector [@Aubert:2002ic; @Abe:2002px]. Very recently CP violation has been measured for the first time in the charm sector [@Aaij:2019kcg], which might actually be an indication of physics beyond the standard model (BSM) [@Chala:2019fdb; @Dery:2019ysp]. Considering that CP violation is a necessary ingredient for creating a baryon asymmetry in the universe [@Sakharov:1967dj], flavour physics might shed some light on this unsolved problem. In addition, flavour physics is perfectly suited for indirect new physics (NP) searches, because there are many processes that are strongly suppressed in the standard model (SM) but not necessarily in hypothetical NP models. And, last but not least, a comparison between experiment and theory predictions can provide a deeper insight into the dynamics of QCD.\
In recent years experimental flavour physics entered a new precision era, which was initiated by the B-factories at KEK and SLAC (see e.g. [@Bevan:2014iga]) and the Tevatron at Fermilab [@Anikeev:2001rk; @Borissov:2013fwa]. Currently this field is dominated by the results of the LHCb collaboration [@Bediaga:2012py; @Koppenburg:2015pca], but also complemented by competing results from the general purpose detectors ATLAS and CMS, see e.g. [@Aad:2016tdj; @Khachatryan:2015nza].\
The corresponding dramatic increase in experimental precision demands complementary improvements in theory. Besides calculating higher orders in perturbative QCD or performing more precise lattice evaluations, this also means revisiting some common approximations by investigating questions like: How large are penguin contributions? How well does QCD-factorization [@Beneke:1999br; @Beneke:2000ry; @Beneke:2001ev; @Beneke:2003zv] work? How large can duality violation in the Heavy Quark Expansion (HQE) (see e.g. [@Khoze:1983yp; @Shifman:1984wx; @Bigi:1991ir; @Bigi:1992su; @Blok:1992hw; @Blok:1992he; @Chay:1990da; @Luke:1990eg] for pioneering papers and [@Lenz:2015dra] for a recent review) be? How sizeable can NP effects in tree-level decays be? Some of these questions have been studied in detail for quite some time. There is e.g. a huge literature on penguin contributions, see e.g. [@Fleischer:2015mla; @Artuso:2015swg] for reviews. Others gained interest recently, for instance duality violations [@Jubb:2016mvq]. In principle all these questions are interwoven, but as a starting point it is reasonable to consider them separately. The assumption of no NP effects at tree-level in non-leptonic $b$-decays was already challenged after the measurement of the dimuon asymmetry by the D0-collaboration [@Abazov:2010hv; @Abazov:2010hj; @Abazov:2011yk; @Abazov:2013uma], see e.g. [@Bauer:2010dga], and, in the case of semi-leptonic $b$-decays, after the measurements of $B \to D^{(*)} \tau \nu$ by BaBar, Belle and LHCb [@Lees:2012xj; @Aaij:2015yra; @Huschle:2015rga; @Abdesselam:2016cgx].\
Compared to numerous systematic studies of NP effects in the Wilson coefficients of the penguin operators $Q_7$, $Q_9$ and $Q_{10}$, see e.g. [@Jager:2014rwa; @Descotes-Genon:2015uva; @Ciuchini:2015qxb; @Altmannshofer:2015sma; @Hurth:2016fbr], we are not aware of systematic studies for NP effects in the Wilson coefficients for non-leptonic tree-level decays, except the ones in [@Bobeth:2014rda; @Bobeth:2014rra; @Brod:2014bfa; @Jager:2017gal; @Jager:2019bgk].\
The aim of the current paper is to considerably extend the studies in [@Bobeth:2014rda; @Brod:2014bfa] by incorporating two main improvements:
1. A full $\chi^2$-fit is performed instead of a simple parameter scan. To implement this step we use the package MyFitter [@Wiebusch:2012en] and allow the different nuisance parameters to run independently. This will allow us to account properly for the corresponding statistical correlations.\
2. Instead of simplified theoretical equations we include full expressions for the observables under investigation.
The recent work in [@Jager:2017gal; @Jager:2019bgk] concentrates exclusively on the transition $b \to c \bar{c} s$, while in this paper we consider all the different hadronic decays that occur in the SM at tree level. Moreover, in this work we consider only BSM effects in the tree-level operators $Q_1$ and $Q_2$, while [@Jager:2017gal; @Jager:2019bgk] also investigates effects of four-quark operators that do not exist in the SM. Whenever there is some direct overlap with the work in [@Jager:2017gal; @Jager:2019bgk], we compare the results directly. Any realistic BSM model that gives rise to new tree-level effects will also give new effects at the loop level, which are not considered in the current model-independent approach. In that respect this work can be considered as an important building block for future model-dependent studies.
The paper is organised as follows: In Section \[sec:basic\] we describe briefly the theoretical tools to be used: we start with the effective Hamiltonian in Section \[sec:Heff\], then in Section \[sec:HQE\] we introduce the Heavy Quark Expansion and in Section \[sec:QCDF\] we review basic concepts in QCD factorization relevant to this project. Next in Section \[sec:strategy\], we outline our strategy for performing the $\chi^2$-fit. We discuss all our different constraints on NP effects in non-leptonic tree-level decays in Section \[sec:constraints\]. The bounds on individual decay channels are organized as follows: $b \to c \bar{u} d$ in Section \[sec:bcud\], $b \to u \bar{u} d$ in Section \[sec:buud\], $b \to c \bar{c} s$ in Section \[sec:bccs\], $b \to c \bar{c} d$ in Section \[sec:bccd\]. Additionally, in Section \[sec:multiple\_channels\] we present observables constraining more decay channels. Our main results are presented in Section \[sec:Globalfitresults\]: fits for the allowed size of BSM effects in the tree-level Wilson coefficients based on individual decay channels will be discussed in Sections \[sec:buudfit\] - \[sec:bccdfit\]. In particular we focus on the channels which can enhance the decay rate difference of neutral $B_d^0$-mesons $\Delta \Gamma_d$ and we calculate these enhancements. Flavour-universal bounds on the tree level Wilson coefficients will be presented in Section \[sec:CKMgamma\], with an emphasis on the consequences of tree-level NP effects on the precision in the direct extraction of the CKM angle $\gamma$. In Section \[sec:future\] we study observables that seem to be most promising in shrinking the space for new effects in $C_1$ and $C_2$. Finally we conclude in Section \[sec:conclusion\] and give additional information in the appendices.\
Since there has been tremendous progress (see e.g.[@King:2019rvk; @DiLuzio:2019jyq]) in the theoretical precision of the mixing observables, we will present in this work numerical updates of all mixing observables: $\Delta \Gamma_q$ in Section \[subsec:DGs\], $\Delta M_q$ in Section \[sec:sin2betaM12\] and the semi-leptonic CP asymmetries $a_{sl}^q$ and mixing phases $\phi_q$ in Section \[sec:multiple\_channels\].
Basic formalism {#sec:basic}
===============
In this section we provide an overview of the basic theoretical tools required for the description of our different flavour observables; this includes the effective Hamiltonian, the Heavy Quark Expansion for inclusive decays and mixing observables. A quick review of QCD factorization for exclusive, non-leptonic decays is also provided. In addition we fix the notation to be used throughout this work.
Effective Hamiltonian {#sec:Heff}
---------------------
We start by introducing the effective Hamiltonian describing a $b$-quark decay into a $p \bar{p}' q$ final state via electroweak interactions, with $p, p'= u, c$ and $q=s, d $:
$$\begin{aligned}
{\cal \hat{H}}_{eff}&=&
\frac{G_F}{\sqrt{2}}\left\{
\sum_{p,p'=u,c}\lambda^{(q)}_{pp'}\sum_{i=1,2}C^{q, \, pp'}_{i}(\mu)\hat{Q}^{q, \, pp'}_{i}
\right.
\nonumber\\
&&
\left.
+ \sum_{p=u,c}\lambda^{(q)}_p \left[
\sum^{10}_{i=3} C^q_{i}(\mu)\hat{Q}^q_{i} + C^q_{7\gamma} \hat{Q}^q_{7\gamma} + C^q_{8 g} \hat{Q}^q_{8 g} \right]
\right\} + h. c. \, .
\label{eq:Hamiltonian}\end{aligned}$$
The Fermi constant is denoted by $G_F$; additionally, we have introduced the following CKM combinations $$\begin{aligned}
\lambda^{(q)}_p&=&V_{pb}V^*_{pq} \, ,
\nonumber\\
\lambda^{(q)}_{pp'}&=&V_{pb}V^*_{p'q} \, .
\label{eq:lambmdadef}\end{aligned}$$ Moreover $C_i$ denote the Wilson coefficients of the following dimension six operators:
$$\begin{aligned}
\hat{Q}_{1}^{q, \, pp'}&=\Bigl(\bar{\hat{p}}_{\beta}\hat{b}_{\alpha}\Bigl)_{V-A}
\Bigl(\bar{ \hat{q} }_{\alpha} \hat{p}'_{\beta}\Bigl)_{V-A}
\, , &
\hat{Q}_{2}^{q, \, pp'}=& \Bigl(\bar{\hat{p}} \hat{b}\Bigl)_{V-A} \Bigl(\bar{\hat{q}} \hat{p}'\Bigl)_{V-A}\, ,
\nonumber \\
\hat{Q}^q_{3}&=\Bigl(\bar{\hat{q}}\hat{b}\Bigl)_{V-A} \sum_k\Bigl(\bar{\hat{k}}\hat{k}\Bigl)_{V-A} \, ,
&
\hat{Q}^q_{4}=&\Bigl(\bar{\hat{q}}_\alpha \hat{b}_\beta\Bigl)_{V-A}\sum_k\Bigl(\bar{\hat{k}}_\beta \hat{k}_\alpha\Bigl)_{V-A}\, ,
\nonumber\\
\hat{Q}^q_{5}&=\Bigl(\bar{\hat{q}} \hat{b}\bigl)_{V-A} \sum_k\Bigl(\bar{\hat{k}}\hat{k}\Bigl)_{V+A} \, ,
& \hat{Q}^q_{6}=&\Bigl(\bar{\hat{q}}_\alpha \hat{b}_\beta\bigl)_{V-A}\sum_k\Bigl(\bar{\hat{k}}_\beta \hat{k}_\alpha\Bigl)_{V+A}\, ,
\nonumber\\
\hat{Q}^q_{7}&=\Bigl(\bar{\hat{q}} \hat{b}\bigl)_{V-A}\sum_k\frac{3}{2}e_k\Bigl(\bar{\hat{k}} \hat{k}\Bigl)_{V+A} \, ,
& \hat{Q}^q_{8}=&\Bigl(\bar{\hat{q}}_\alpha \hat{b}_\beta\bigl)_{V-A}\sum_k\frac{3}{2}e_k\Bigl(\bar{\hat{k}}_\beta \hat{k}_\alpha\Bigl)_{V+A}\, ,
\nonumber\\
\hat{Q}^q_{9 }&=\Bigl(\bar{\hat{q}} \hat{b} \Bigl)_{V-A}\sum_{k}\frac{3}{2}e_k\Bigl(\bar{\hat{k}} \hat{k} \Bigl)_{V-A}\, ,
&\hat{Q}^q_{10}=&\Bigl(\bar{\hat{q}}_\alpha \hat{b}_\beta \Bigl)_{V-A}\sum_{k}\frac{3}{2}e_k\Bigl(\bar{\hat{k}}_\beta \hat{k}_{\alpha} \Bigl)_{V-A}\, ,
\nonumber\\
\hat{Q}^q_{7\gamma}=&\frac{e}{8\pi^2}m_b\bar{\hat{q}}\sigma_{\mu\nu}\Bigl(1+\gamma_5 \Bigl)\hat{F}^{\mu\nu}\hat{b}\, ,
&\hat{Q}^q_{8 g}=&\frac{g_s}{8\pi^2}m_b\bar{\hat{q}}\sigma_{\mu\nu}\Bigl(1+\gamma_5 \Bigl)\hat{G}^{\mu\nu}\hat{b}\, .
\label{eq:mainbasis}\end{aligned}$$
Here $\alpha$ and $\beta$ are colour indices, $e_k$ is the electric charge of the quark $k$ (in the penguin operators the quark flavours are summed over $k= u,d,s,c,b$), $e$ is the $U(1)_Y$ coupling and $g_s$ the $SU(3)_C$ one, $m_b$ is the mass of the $b$-quark, and $F^{\mu\nu}$ and $G^{\mu\nu}$ are the electro-magnetic and chromo-magnetic field strength tensors, respectively. In this work we consider NP effects that affect the tree-level operators $\hat{Q}_{1}^{q, \, pp'}$ and $\hat{Q}_{2}^{q, \, pp'}$ by modifying their corresponding Wilson coefficients. In our notation $\hat{Q}_{1}^{q, \, pp'}$ is colour non-diagonal and $\hat{Q}_{2}^{q, \, pp'}$ is the colour singlet; the QCD penguin operators correspond to $\hat{Q}^q_{3-6}$ and the electro-weak penguin interactions are described by $\hat{Q}^q_{7-10}$. Bases different from the one in Eq. (\[eq:mainbasis\]) are used in the literature. Our notation agrees with the one used in [@Buchalla:1995vs] and [@Beneke:1998sy]; here $C_{8g}$ is negative because we are considering $-i g \gamma_\mu T^a$ as the Feynman rule for the quark-gluon vertex. In [@Beneke:2001ev] a different basis is used, in which $\hat{Q}_1$ and $\hat{Q}_2$ are interchanged and $\hat{Q}_{7 \gamma}$ and $\hat{Q}_{8g}$ have a different sign (this is equivalent to the sign convention $i D^\mu = i \partial^\mu + g_s A_a^\mu T^a$ for the gauge-covariant derivative). A nice introduction to effective Hamiltonians can be found in [@Buras:1998raa], and a concise review up to NLO-QCD in [@Buchalla:1995vs].\
\[RGE\] The Wilson coefficients $C_i$ with $i= 1,2,...,10,7\gamma,8g$ in Eq. (\[eq:Hamiltonian\]) are obtained by matching the calculations of the effective theory and the full SM at the scale $\mu=M_W$ and then evolving down to the scale $\mu \sim m_b$ using the renormalisation group equations according to $$\begin{aligned}
\vec{C}(\mu)&=&\textit{\textbf{U}}(\mu, M_W, \alpha)\vec{C}(M_W) \, ,
\label{eq:fullevo}\end{aligned}$$ where the NLO evolution matrix is given by [@Beneke:2001ev] $$\begin{aligned}
\textit{\textbf{U}}(\mu, M_W, \alpha)&=&\textit{\textbf{U}}(\mu,\mu_W) +
\frac{\alpha}{4\pi} \textit{\textbf{R}}(\mu,\mu_W).
\label{eq:fullevoMat}\end{aligned}$$ The matrix $\textbf{U}(\mu,\mu_W)$ accounts for the pure QCD evolution, while $\textbf{R}(\mu,\mu_W)$ additionally introduces QED effects. At NLO we write [@Beneke:2001ev] $$\begin{aligned}
\textit{\textbf{U}}(\mu, M_W, \alpha)
&=&
\Bigl[\textit{\textbf{U}}_0 + \frac{\alpha_{s}(\mu)}{4\pi}\textit{\textbf{J}\textbf{U}}_0 -\frac{\alpha_s(M_W)}{4\pi}\textit{\textbf{U}}_0
\textit{\textbf{J}}
\nonumber\\
&&
+ \frac{\alpha}{4\pi}\Bigl(\frac{4\pi}{\alpha_s(\mu)}\textit{\textbf{R}}_0 + \textit{\textbf{R}}_1\Bigl) \Bigl],
\label{eq:ev_matrix}\end{aligned}$$ where $\alpha_s(\mu)$ denotes the strong coupling at the scale $\mu$ calculated up to NLO-QCD precision and $\alpha$ is the electro-magnetic coupling. At LO the evolution matrix $\textit{\textbf{U}}(\mu, M_W, \alpha)$ reduces to
$$\begin{aligned}
\label{Eq:LOevo}
\textit{\textbf{U}}^{\rm LO}_0(\mu,\mu_W, \alpha) &=&\textit{\textbf{U}}_0 + \frac{\alpha}{\alpha_s(\mu)} \textit{\textbf{R}}_0.\end{aligned}$$
The NLO-QCD corrections are then introduced through $\textit{\textbf{J}}$; the explicit expressions for $\textit{\textbf{U}}_0$ and $\textit{\textbf{J}}$ are given in Eqns. (3.94)-(3.98) of [@Buchalla:1995vs], and the anomalous dimension matrices $\mathbf{\gamma}_s^{(0)}$ and ${\bf \gamma}_s^{(1)}$ required for these evaluations can be found in Eqn. (6.25) and Tables XIV and XV of [@Buchalla:1995vs]. To introduce the QED corrections we calculate ${\bf{R_0}}$ and ${\bf{R_1}}$ using Eqns. (7.24)-(7.28) of [@Buchalla:1995vs]; the anomalous dimension matrices used here, $\gamma^{(0)}_{e}$ and $\gamma^{(1)}_{e}$, are given in Tables XVI and XVII of [@Buchalla:1995vs].\
The initial conditions for the Wilson coefficients have the following expansion at NLO $$\begin{aligned}
\vec{C}(M_W)&=&\vec{C}^{(0)}_s(M_W) + \frac{\alpha_s(M_W)}{4\pi}\vec{C}^{(1)}_s(M_W)\nonumber\\
&&+
\frac{\alpha}{4\pi}
\left[
\vec{C}^{(0)}_e(M_W) + \frac{\alpha_s(M_W)}{4\pi}\vec{C}^{(1)}_e(M_W) + \vec{R}^{(0)}_e(M_W)
\right]
\label{eq:CMW},\end{aligned}$$ as pointed out in [@Beneke:2001ev], the electroweak contributions $\vec{C}^{(0)}_e$ and $\vec{C}^{(1)}_e$ in Eq. (\[eq:CMW\]) can be $x_t$- and/or $1/\sin^2\theta_W$-enhanced. Consequently it is fair to treat the product of $\alpha$ and $\vec{C}^{(0)}_e$ as a LO contribution and the product of $\alpha$ and $\vec{C}^{(1)}_e$ as an NLO effect. The remainder, denoted by $\vec{R}^{(0)}_e$, is numerically smaller than $\vec{C}^{(0)}_e$ and is therefore treated as an NLO effect; it contains the NLO scheme dependence. This approach differs from the one followed in [@Buchalla:1995vs], where the contribution $\vec{C}^{(0)}_e(M_W) + \vec{R}^{(0)}_e(M_W)$ is introduced as an NLO effect and $\vec{C}^{(1)}_e$ is omitted. The explicit expressions for $\vec{C}^{(0)}_s$, $\vec{C}_s^{(1)}$, $\vec{C}^{(0)}_e$, $\vec{C}^{(1)}_e$ and $\vec{R}^{(0)}_e$ of $\vec{C}(M_W)$ are given in Section VII.B of [@Buchalla:1995vs] and Section 3.1 of [@Beneke:2001ev]; the results presented for $\vec{C}^{(1)}_e$ in [@Beneke:2001ev] are based on the calculations of [@Buras:1999st].\
It should further be stressed that, when applying Eq. (\[eq:fullevo\]), we consistently drop products of NLO contributions from $\textit{\textbf{U}}(\mu, M_W, \alpha)$ with NLO effects from $\vec{C}(M_W)$, while we keep products of NLO contributions from $\textit{\textbf{U}}(\mu, M_W, \alpha)$ with LO contributions from $\vec{C}(M_W)$ and vice versa.
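For illustration, the following schematic Python sketch shows how this bookkeeping can be organised; the matrices $\textit{\textbf{U}}_0$, $\textit{\textbf{J}}$, $\textit{\textbf{R}}_0$, $\textit{\textbf{R}}_1$ and the initial-condition vectors are placeholders for the expressions of [@Buchalla:1995vs; @Beneke:2001ev], so only the separation into LO and NLO pieces should be read off.

```python
import numpy as np

def evolve_wilson(U0, J, R0, R1, C0s, C1s, C0e, C1e, R0e,
                  alpha_s_mu, alpha_s_MW, alpha_em):
    """Schematic NLO evolution of the Wilson coefficients, Eq. (eq:fullevo).

    All matrices and vectors are placeholders for the expressions quoted in
    the text; only the bookkeeping of LO and NLO pieces is illustrated here.
    """
    as_mu = alpha_s_mu / (4.0 * np.pi)
    as_W  = alpha_s_MW / (4.0 * np.pi)
    a_em  = alpha_em   / (4.0 * np.pi)

    # LO and genuinely NLO parts of the evolution matrix, Eqs. (eq:ev_matrix), (Eq:LOevo)
    U_LO  = U0 + (alpha_em / alpha_s_mu) * R0
    U_NLO = as_mu * (J @ U0) - as_W * (U0 @ J) + a_em * R1

    # LO and NLO parts of the initial conditions, Eq. (eq:CMW);
    # alpha * C0e is kept at LO because of its x_t / (1/sin^2 theta_W) enhancement
    C_LO  = C0s + a_em * C0e
    C_NLO = as_W * C1s + a_em * (as_W * C1e + R0e)

    # keep LO x LO, LO x NLO and NLO x LO terms; drop NLO x NLO consistently
    return U_LO @ (C_LO + C_NLO) + U_NLO @ C_LO
```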
Heavy Quark Expansion {#sec:HQE}
---------------------
The effective Hamiltonian can be used to calculate inclusive decays of a heavy hadron $B_q$ into an inclusive final state $X$ via $$\Gamma ( B_q \to X) = \frac{1}{2 m_{B_q}} \sum \limits_{X} \int_{\rm PS} (2 \pi)^4 \delta^{(4)}
(p_{B_q} - p_X) | \langle X | {\cal \hat{H}}_{eff} | B_q \rangle |^2
\, .
\label{total}$$ With the help of the optical theorem the total decay rate in Eq. (\[total\]) can be rewritten as $$\Gamma(B_q \to X) = \frac{1}{2 m_{B_q}} \langle B_q |{\cal \hat{T} } | B_q \rangle
\, ,$$ with the transition operator $${\cal \hat{T}} = \mbox{Im} \; i \int d^4x
\hat{T} \left[ {\cal \hat{H} }_{eff}(x) {\cal \hat{H} }_{eff} (0) \right]
\, ,
\label{trans}$$ consisting of a non-local double insertion of the effective Hamiltonian. Expanding this bi-local object in local operators gives the Heavy Quark Expansion (see e.g. [@Khoze:1983yp; @Shifman:1984wx; @Bigi:1991ir; @Bigi:1992su; @Blok:1992hw; @Blok:1992he; @Chay:1990da; @Luke:1990eg] for pioneering papers and [@Lenz:2015dra] for a recent review). The total decay rate $\Gamma$ of a $b$-hadron can then be expressed as a sum of products of perturbatively calculable coefficients $\Gamma_i$ and non-perturbative matrix elements $\langle O_{D} \rangle$ of $\Delta B = 0$-operators of dimension $D = i+3$: $$\begin{aligned}
\Gamma & = & \Gamma_0 \langle O_{D=3} \rangle
+ \Gamma_2 \frac{\langle O_{D=5} \rangle}{m_b^2}
+ \tilde{\Gamma}_3 \frac{\langle \tilde{O}_{D=6} \rangle}{m_b^3}
+ ...
\nonumber
\\
&&+ 16 \pi^2 \left[
\Gamma_3 \frac{\langle O_{D=6} \rangle}{m_b^3}
+ \Gamma_4 \frac{\langle O_{D=7} \rangle}{m_b^4}
+ \Gamma_5 \frac{\langle O_{D=8} \rangle}{m_b^5}
+ ...
\right]\, ,\end{aligned}$$ with $\langle O_{D} \rangle = \langle B_q | O_{D} | B_q \rangle/(2 M_{B_q})$. The leading term $\Gamma_0$ describes the decay of a free $b$-quark and is free of non-perturbative uncertainties, since $\langle O_{D=3} \rangle = 1 + {\cal O} (\langle O_{D=5} \rangle/ m_b^2 )$. At order $1/m_b^2$ small corrections due to the kinetic and chromomagnetic operators arise; at order $1/m_b^3$ we get e.g. the Darwin term in $\tilde{\Gamma}_3$, but also phase-space enhanced terms $\Gamma_3$, stemming from weak exchange, weak annihilation and Pauli interference. The numerical values of the matrix elements are expected to be of the order of the hadronic scale $\Lambda_{QCD}$, thus the HQE is an expansion in the small parameter $\Lambda_{QCD}/m_b$. Each of the terms $\Gamma_i$ with $i=0,2,3,...$ can be expanded as $$\Gamma_i = \Gamma_i^{(0)} + \frac{\alpha_s}{4 \pi} \Gamma_i^{(1)}
+ \left( \frac{\alpha_s}{4 \pi} \right)^2 \Gamma_i^{(2)} +
... \, .$$ In our investigation of the lifetimes we will use $\Gamma_0^{(0)}$ and $\Gamma_0^{(1)}$ from [@Krinner:2013cja], which is based on [@Bagan:1994zd; @Bagan:1995yf; @Lenz:1997aa; @Lenz:1998qp; @Greub:2000an; @Greub:2000sy], $\Gamma_3^{(0)}$ from [@Jager:2017gal] based on [@Uraltsev:1996ta; @Neubert:1996we] and $\Gamma_3^{(1)}$ from [@Beneke:2002rj; @Franco:2002fc]. The matrix elements of the dimension six operators were recently determined in [@Kirk:2017juj].\
The HQE can also be used to describe the off-diagonal element $\Gamma_{12}$ of the meson mixing matrix: $$\Gamma_{12}^q = \left[ \Gamma_{12,3}^{q,(0)} + \frac{\alpha_s}{4 \pi} \Gamma_{12,3}^{q,(1)} + ... \right]
\frac{\langle Q_{D=6} \rangle}{m_b^3}
+ \left[ \Gamma_{12,4}^{q,(0)} + \frac{\alpha_s}{4 \pi} \Gamma_{12,4}^{q,(1)} + ... \right]
\frac{\langle Q_{D=7} \rangle}{m_b^4}
+ ... \, ,$$ with $\langle Q_{D} \rangle = \langle B_q | Q_{D} | \bar{B}_q \rangle/(2 M_{B_q})$, where $Q_D$ are $\Delta B = 2$-operators of dimension $D$. The matrix element $\Gamma_{12}^q$ can be used together with $M_{12}^q$ to predict physical observables like mass differences, decay rate differences or semi-leptonic CP asymmetries, see e.g. [@Artuso:2015swg]: $$\begin{aligned}
\Delta M_q & = & 2 |M_{12}^q| \, ,\label{eq:dMq}
\\
\Delta \Gamma_q & = & 2 |\Gamma_{12}^q| \cos \phi_{12}^q = - {\rm Re}\left( \frac{\Gamma_{12}^q}{M_{12}^q} \right) \Delta M_q \, ,\label{eq:dGammaq}
\\
a_{sl}^q & = & \left| \frac{\Gamma_{12}^q}{M_{12}^q} \right| \sin \phi_{12}^q = {\rm Im} \left( \frac{\Gamma_{12}^q}{M_{12}^q} \right) \, ,\label{eq:aslq}\end{aligned}$$ with the phase $\phi_{12}^q = \arg (- M_{12}^q / \Gamma_{12}^q)$. For our numerical analysis we use results for $\Gamma_{12,3}^{q,(0)}$ , $\Gamma_{12,3}^{q,(1)}$ and $\Gamma_{12,4}^{q,(0)}$ from [@Beneke:1998sy; @Beneke:2002rj; @Beneke:1996gn; @Dighe:2001gc; @Ciuchini:2003ww; @Beneke:2003az; @Lenz:2006hd], results for $M_{12}^q$ from [@Inami:1980fz; @Buras:1990fn] and for the hadronic matrix elements of dimension six the averages presented in [@DiLuzio:2019jyq] based on [@Grozin:2016uqy; @Kirk:2017juj; @King:2019lal] and [@Christ:2014uea; @Bussone:2016iua; @Hughes:2017spc; @Bazavov:2017lyh]. Recently also the first non-perturbative evaluation of dimension seven matrix elements became available [@Davies:2019gnp], which we will use for $\Gamma_{12}^q$.
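For orientation, the relations in Eqs. (\[eq:dMq\])-(\[eq:aslq\]) can be evaluated with a few lines of code once $M_{12}^q$ and $\Gamma_{12}^q$ are known; the sketch below is purely illustrative and assumes that both inputs are given as complex numbers in the same units.

```python
import cmath, math

def mixing_observables(M12, Gamma12):
    """Mass difference, decay-rate difference and semileptonic CP asymmetry
    from the off-diagonal mixing-matrix elements, Eqs. (eq:dMq)-(eq:aslq).
    M12 and Gamma12 are complex numbers in the same units (e.g. ps^-1)."""
    phi12 = cmath.phase(-M12 / Gamma12)                  # phi_12^q = arg(-M12/Gamma12)
    delta_M     = 2.0 * abs(M12)                         # Eq. (eq:dMq)
    delta_Gamma = 2.0 * abs(Gamma12) * math.cos(phi12)   # Eq. (eq:dGammaq)
    a_sl        = (Gamma12 / M12).imag                   # Eq. (eq:aslq)
    return delta_M, delta_Gamma, a_sl
```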
QCD Factorization {#sec:QCDF}
-----------------
In our analysis we included different observables based on non-leptonic $B$ meson decays such as $B\rightarrow D \pi$, $B\rightarrow \pi\pi$, $B\rightarrow \pi\rho$ and $B\rightarrow \rho\rho$. To calculate the corresponding amplitudes we used the expressions available in the literature, obtained within the QCD Factorization (QCDF) framework [@Beneke:1999br; @Beneke:2000ry; @Beneke:2001ev; @Beneke:2003zv]. In this section we briefly summarise the QCDF results relevant for the evaluation of some of our flavour constraints. Consider the process $B\rightarrow M_1 M_2$, in which a $B$ meson decays into the final states $M_1$ and $M_2$, where either $M_1$ and $M_2$ are both “light” mesons or $M_1$ is “heavy” and $M_2$ is “light” [^1].\
If both $M_1$ and $M_2$ are light, then the matrix element $\langle M_1 M_2 |\hat{Q}_{i}| B \rangle$ of the dimension six effective operators in Eq. (\[eq:mainbasis\]) can be written as $$\begin{aligned}
\langle M_1 M_2|\hat{Q}_{i}| B \rangle &=& \sum_{j}F^{B\rightarrow M_{1}}_{j}(0) \int_{0}^{1}du T^{I}_{ij}(u) \Phi_{M_2}(u) + (M_1 \leftrightarrow M_2)\nonumber\\
&& + \int_{0}^{1}d\xi du dv T^{II}_{i}(\xi, u, v)\Phi_{B}(\xi)\Phi_{M_1}(v)\Phi_{M_2}(u).
\label{eq:fact1}\end{aligned}$$ On the right-hand side of Eq. (\[eq:fact1\]), $F^{B\rightarrow M_{1,2}}_{j}(m^2_{2,1})$ represents the relevant form factor accounting for the transition $B\rightarrow M_{1}$ (and correspondingly for $B\rightarrow M_{2}$), and $\Phi_M(u)$ is the non-perturbative Light-Cone Distribution Amplitude (LCDA) of the meson $M$, see Fig. \[Fig:QCDFactMatrix\].\
Notice that Eq. (\[eq:fact1\]) is written in such a way that it can be applied to situations where the spectator quark can end up in either of the two final-state light mesons. If the spectator quark can go into only one of the final mesons, this one is labelled $M_1$ and only the first and the third terms on the right-hand side of Eq. (\[eq:fact1\]) should be included. The functions $T^{I, II}$ are called hard-scattering kernels and can be calculated perturbatively. The kernel $T^{I}$ contains, at higher order in $\alpha_s$, nonfactorizable contributions from hard gluon exchange or penguin topologies. On the other hand, nonfactorizable hard interactions involving the spectator quark are part of $T^{II}$.\
When the final-state meson $M_1$ is “heavy” and $M_2$ is “light”, the corresponding QCDF formula for the matrix element $\langle M_1 M_2|\hat{Q}_{i}| B \rangle$ becomes $$\begin{aligned}
\langle M_1 M_2|\hat{Q}_{i}| B \rangle &=&\sum_{j}F^{B\rightarrow M_{1}}_{j}(m^2_{2}) \int_{0}^{1}du T^{I}_{ij}(u) \Phi_{M_2}(u),
\label{eq:fact2}\end{aligned}$$ where the meaning of the different terms in Eq. (\[eq:fact2\]) is analogous to that given for Eq. (\[eq:fact1\]).
![Factorization of matrix elements for $B$ meson decays into “light”-“light” mesons (both diagrams included) and “heavy”-“light” (only left diagram) in QCDF.[]{data-label="Fig:QCDFactMatrix"}](QCDF.pdf){height="3.5cm"}
\
To determine the decay amplitude $\mathcal{A}(B\rightarrow M_1 M_2)$, the matrix element $\langle M_1 M_2 |\hat{\mathcal{H}}_{eff}| B \rangle$ should be calculated, with $\hat{\mathcal{H}}_{eff}$ being the effective Hamiltonian introduced in Eq. (\[eq:Hamiltonian\]). In QCDF the final expression for $\mathcal{A}(B\rightarrow M_1 M_2)$ is written as a linear combination of sub-amplitudes $\alpha^{p, M_1 M_2}_i$ and $\beta^{p, M_1 M_2}_i$, which for the purposes of our discussion will be termed “Topological Amplitudes” (TA). The TA $\alpha_i^{p, M_1 M_2}$, for $p=u,c$, have the following generic structure at NLO in $\alpha_s$ [@Beneke:2003zv] $$\begin{aligned}
\alpha_i^{p, M_1 M_2}&=&
\left[ C_{i}(\mu_b) + \frac{C_{i\pm 1}(\mu_b)}{N_c}\right] N_{i}(M_2)
\nonumber \\
&& +
\frac{\alpha_s(\mu_b)}{4\pi} \ \frac{C_F}{N_c} C_{i\pm 1}(\mu_b) V_{i}(M_2)
+ P^{p}_{i}(M_2)
\nonumber\\
&&
+ \frac{\alpha_s(\mu_h)}{4\pi} \frac{4 \pi^2 C_F}{N^2_c} C_{i\pm 1}(\mu_h) H_{i}(M_1 M_2) \, ,
\label{eq:alphaGen0}\end{aligned}$$
where $C_i$ are the Wilson coefficients calculated at the scale $\mu\sim m_b$, and the subindex in the coefficient $C_{i\pm 1}$ is assigned following the rule $$C_{i\pm1} = \left\{
\begin{array}{lr}
C_{i + 1} : & \hbox{if } i \hbox{ is odd }, \\
C_{i - 1} : & \hbox{if } i \hbox{ is even}.
\end{array}
\right.$$\
The Wilson coefficients inside the square bracket in Eq. (\[eq:alphaGen0\]) will be modified to allow for NP contributions as discussed below, see Section \[sec:strategy\]; $N_c$ denotes the number of colours and will be taken as $N_c=3$. The global factor $N_{i}(M_2)$ multiplying the square bracket corresponds to the normalisation of the light-cone distribution of the meson $M_2$, and is evaluated according to the following rule $$N_i(M_2) = \left\{
\begin{array}{lr}
0 : & \hbox{if $i=6, 8$ and $M_2$ is a vector meson,}\\
1 : & \hbox{in any other case}.
\end{array}
\right.$$\
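For reference, the two selection rules above can be transcribed literally; the small sketch below assumes nothing beyond the definitions just given.

```python
def partner_index(i):
    """Index of the colour-partner coefficient C_{i±1} entering Eq. (eq:alphaGen0)."""
    return i + 1 if i % 2 == 1 else i - 1

def lcda_normalisation(i, M2_is_vector):
    """Normalisation factor N_i(M_2): it vanishes for i = 6, 8 if M_2 is a vector meson."""
    return 0 if (i in (6, 8) and M2_is_vector) else 1
```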
The symbol $V_{i}(M_2)$ in Eq. (\[eq:alphaGen0\]) stands for the one-loop vertex corrections illustrated in Fig. \[fig:Vertex\]. Additionally, the contributions from penguin diagrams such as those shown in Fig. \[fig:Penguin\] are included in $P_i^{p}(M_2)$, with $p=u,c$. Finally, the hard spectator interactions shown in Fig. \[fig:Hard\_scattering\] are accounted for by the term $H_i(M_1 M_2)$.
![NLO Vertex contributions to the process $B\rightarrow M_1 M_2$.[]{data-label="fig:Vertex"}](Vertex.pdf){height="2.5cm"}
![NLO penguin contributions to the process $B\rightarrow M_1 M_2$.[]{data-label="fig:Penguin"}](Penguins.pdf){height="2.5cm"}
![Hard spectator-scattering contributions to the decay $B\rightarrow M_1 M_2$.[]{data-label="fig:Hard_scattering"}](Hard_scattering.pdf){height="2.5cm"}
If $M_1$ and $M_2$ are both pseudoscalar mesons or if one of them is a pseudoscalar and the other is a vector meson, then the hard spectator function $H_i(M_1 M_2)$ can be written in terms of the leading twist LCDAs of $M_1$ and $M_2$, $\Phi_{M_1}$ and $\Phi_{M_2}$ respectively, and the twist-3 LCDA of $M_1$, $\Phi_{m_1}$, as [@Beneke:2003zv]:
$$\begin{aligned}
H_{i}(M_1 M_2)&=&\frac{B_{M_1 M_2}}{A_{M_1 M_2}} \int^{1}_{0} d\xi \frac{\Phi_{B}(\xi)}{\xi}\int^{1}_{0} dx
\int^{1}_{0} dy \Bigl[ \frac{ \Phi_{M_2}(x) \Phi_{M_1}(y)}{\bar{x} \bar{y}} \nonumber\\
&& + r^{M_1}_{\chi} \frac{ \Phi_{M_2}(x) \Phi_{m_1}(y)}{x \bar{y}} \Bigl],~\hbox{(for $i=1,...,4,9,10$)} \, ,\nonumber\\
H_{i}(M_1 M_2)&=&-\frac{B_{M_1 M_2}}{A_{M_1 M_2}} \int^{1}_{0} d\xi \frac{\Phi_{B}(\xi)}{\xi}\int^{1}_{0} dx
\int^{1}_{0} dy \Bigl[ \frac{ \Phi_{M_2}(x) \Phi_{M_1}(y)}{x \bar{y}} \nonumber\\
&& + r^{M_1}_{\chi} \frac{ \Phi_{M_2}(x) \Phi_{m_1}(y)}{\bar{x} \bar{y}} \Bigl],~\hbox{(for $i=5,7$)} \, ,\nonumber\\
H_{i}(M_1 M_2)&=&0,~\hbox{(for $i=6, 8$)} \, .
\label{eq:HS1andHS2}\end{aligned}$$
The analogous expressions for $H_i(M_1 M_2)$ when $M_1$ and $M_2$ are two longitudinally polarised light vector mesons can be found in [@Bartsch:2008ps]. We provide the functions $H_i(M_1 M_2)$ for the processes relevant to this project in Appendix \[Sec:QCDFact\]. The global coefficients $A_{M_1 M_2}$ and $B_{M_1 M_2}$ presented in Eqs. (\[eq:HS1andHS2\]) depend on form factors and decay constants and are given in Eq. (\[eq:GenPar\]) also in Appendix \[Sec:QCDFact\].\
We want to highlight two sources of uncertainty arising in Eq. (\[eq:HS1andHS2\]). The first one stems from the contribution of the twist-3 LCDA $\Phi_{m_1}(y)$. Since this function does not vanish at $y=1$, the integral $\int^1_0 dy \Phi_{m_1}(y)/\bar{y}$ is divergent. To isolate the divergence we follow the prescription given in [@Beneke:2003zv] and write $$\begin{aligned}
\label{eq:Twist3}
\int^1_0 \frac{dy}{\bar{y}} \Phi_{m_1}(y)&=& \Phi_{m_1}(1)\int^1_0 \frac{dy}{\bar{y}} +
\int^1_0 \frac{dy}{\bar{y}} \Bigl[\Phi_{m_1}(y) - \Phi_{m_1}(1) \Bigl]\nonumber\\
&=&\Phi_{m_1}(1) X_H + \int^1_0 \frac{dy}{[\bar{y}]_+}\Phi_{m_1}(y).\end{aligned}$$ The divergent piece of Eq. (\[eq:Twist3\]) is contained in $X_H$. The remaining integral $\int^1_0 dy/[\bar{y}]_+\Phi_{m_1}(y)$ is finite (for instance, for a pseudoscalar meson $\Phi_{m_1}(y)=1$ and trivially $\int^1_0 dy/[\bar{y}]_+\Phi_{m_1}(y)=0$). Physically $X_H$ represents a soft gluon interaction with the spectator quark. It is expected that $X_H\approx \hbox{ln}(m_b/\Lambda_{QCD})$, because the divergence is regulated by a physical scale of the order of $\Lambda_{QCD}$. A complex coefficient cannot be excluded, since multiple soft scattering can introduce a strong-interaction phase. Here we use the standard parameterisation for $X_H$ introduced by Beneke-Buchalla-Neubert-Sachrajda (BBNS) [@Beneke:2000ry] $$\begin{aligned}
X_H &=& \Bigl( 1 + \rho_H e^{i \phi_H} \Bigl) \hbox{ln}\frac{m_B}{\Lambda_{h}},
\label{eq:XH}\end{aligned}$$ where $\Lambda_h \approx \mathcal{O}(\Lambda_{QCD})$ and $\rho_H \approx \mathcal{O}(1)$.\
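As a simple illustration of this model, the sketch below evaluates Eq. (\[eq:XH\]); the default values $m_B = 5.28$ GeV and $\Lambda_h = 0.5$ GeV are assumptions made only for this example ($\Lambda_h \approx \mathcal{O}(\Lambda_{QCD})$).

```python
import cmath, math

def X_endpoint(rho, phi, m_B=5.28, Lambda_h=0.5):
    """BBNS parameterisation of the endpoint-divergent integral, Eq. (eq:XH)
    (and analogously Eq. (eq:XA)).  m_B and Lambda_h in GeV are illustrative
    defaults only."""
    return (1.0 + rho * cmath.exp(1j * phi)) * math.log(m_B / Lambda_h)

# central value used later in the analysis: rho_H = 0, i.e. X_H = ln(m_B / Lambda_h)
X_H_central = X_endpoint(0.0, 0.0)
```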
The second source of theoretical uncertainty in Eqs. (\[eq:HS1andHS2\]) that deserves special attention is the inverse moment of the LCDA $\Phi_B$ corresponding to the $B$ meson. Following [@Beneke:1999br] we write $$\begin{aligned}
\label{eq:Mom1LCD}
\int_0^1 d\xi \frac{\Phi_B(\xi)}{\xi}&\equiv&\frac{m_B}{\lambda_B},\end{aligned}$$ where $\lambda_B$ is expected to be of $\mathcal{O}(\Lambda_{QCD})$. We provide more details about the values for $X_H$ and $\lambda_B$ used in this work at the end of this subsection.\
Next we address the contributions from weak annihilation topologies, see Fig. \[fig:Annihilation\], which are power suppressed in the $\Lambda_{QCD}/m_b$ expansion with respect to the factorizable amplitudes. Although they do not appear in Eq. (\[eq:fact1\]), they are included in terms of subamplitudes denoted as $\beta^{p, M_1 M_2}_k$. The numerical subscript $k$ describes the Dirac structure under consideration: $k=1$ for $(V-A)\otimes(V-A)$, $k=2$ for $(V-A)\otimes (V+A)$ and $k=3$ for $(-2)(S-P)\otimes(S+P)$. The annihilation coefficients are expressed in terms of a set of basic “building blocks” denoted by $A^{i,f}_k$, where the subindex $k$ again denotes the Dirac structure as explained above, and the superindices $i$ and $f$ denote the emission of a gluon by an initial- or a final-state quark, as shown in Fig. \[fig:Annihilation\]. The coefficients $A^{i,f}_k$ relevant for this work can be found in Appendix \[Sec:QCDFact\]. The final expressions for annihilation are the result of the convolution of twist-2 and twist-3 LCDAs with the corresponding hard scattering kernels; as in the case of hard spectator scattering, there are also endpoint singularities that are treated in a model-dependent fashion. To parameterize these divergences, we follow once more the approach of BBNS. Thus, in analogy with hard spectator scattering, we introduce [@Beneke:2000ry] $$\begin{aligned}
X_A &=& \Bigl(1 + \rho_A e^{i \phi_A} \Bigl) \hbox{ln} \frac{m_B}{\Lambda_h}.
\label{eq:XA}\end{aligned}$$
![Annihilation topologies contributing to the decay process $B\rightarrow M_1 M_2$.[]{data-label="fig:Annihilation"}](Annihilation.pdf){height="1.5cm"}
To finalize this subsection we discuss the numerical inputs used for $\lambda_B$, $X_H$ and $X_A$ in our evaluations. As indicated in Eq. (\[eq:Mom1LCD\]), the inverse moment of the LCDA of the $B$ meson introduces the parameter $\lambda_B$. The description of non-leptonic $B$ decays based on QCDF requires $\lambda_B\sim 200~\hbox{MeV}$ [@Beneke:2003zv; @Beneke:2009ek]. In contrast, QCD sum rule calculations give a higher value; for instance, in [@Braun:2003wx] the result $\lambda_B=(460 \pm 110)~\hbox{MeV}$ was found. In [@Beneke:2011nf] the use of the channel $B\rightarrow \gamma \ell \nu_{\ell}$ was proposed in order to extract $\lambda_B$ experimentally. This study was updated in [@Beneke:2018wjp], where subleading power corrections in $1/E_{\gamma}$ and $1/m_b$ were also included. Based on this idea, the Belle collaboration found [@Heller:2015vvm] $$\begin{aligned}
\label{eq:bellebound}
\lambda_B \Bigl|_{\rm Belle}> 238~\hbox{MeV},\end{aligned}$$ at the $90\%$ C.L., and it is expected that the Belle II experiment will improve this result [@Beneke:2018wjp]. Interestingly, the experimental bound in Eq. (\[eq:bellebound\]) is compatible with the QCD sum rule value quoted above and with other theoretical approaches, including the one in [@Lee:2005gza], where the value $\lambda_B=(476.19\pm 113.38)\hbox{ MeV}$ was obtained. For the purposes of our analysis, we consider the following result calculated in [@Bell:2009fm] with QCD sum rules: $$\begin{aligned}
\lambda_B= (400 \pm 150) \hbox{ MeV}.\end{aligned}$$ As discussed above, the calculation of hard spectator interactions and the evaluation of annihilation topologies, leads to extra sources of uncertainty associated with endpoint singularities that are power suppressed. As indicated in Eqs. (\[eq:XH\]) and (\[eq:XA\]) they can be parameterized through the functions $X_H(\rho_{H},\phi_{H})$ and $X_A(\rho_{A},\phi_{A})$ respectively. Using these models, we account for the hard spectator scattering power suppressed singularities through the parameters $\rho_H$ and $\phi_H$. Correspondingly, we introduce $\rho_A$ and $\phi_A$ to address the analogous effects from annihilation topologies. Based on phenomenological considerations we will take into account the intervals [@Bobeth:2014rra; @Hofer:2010ee] $$\begin{aligned}
0<\rho_{H,A}<2, & \hbox{ } 0 <\phi_{H,A}<2\pi, \label{set1}\end{aligned}$$ which correspond to a $200\%$ uncertainty on $|X_H|$ and $|X_A|$.\
To evaluate the central values of our observables we take $\rho_{H,A}=0$, or equivalently $X_H=X_A=\hbox{ln}~ m_B/\Lambda_h$. Finally, we estimate the percentage error from $X_A$ and $X_H$ by taking the difference between the maximum and the minimum values reached by the hadronic observables when the parameters are varied within the intervals in Eq. (\[set1\]), and normalizing it by two times the corresponding central values.
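The following schematic sketch illustrates this procedure; the callable `observable` is a placeholder for the full QCDF prediction, and the discrete grid stands in for a proper extremisation over the intervals in Eq. (\[set1\]).

```python
import itertools
import numpy as np

def endpoint_uncertainty(observable, n_rho=5, n_phi=9):
    """Relative uncertainty induced by the endpoint parameters: scan
    0 < rho_{H,A} < 2 and 0 < phi_{H,A} < 2 pi, take the spread (max - min)
    of the (real-valued) observable and normalise by twice its central value
    at rho_{H,A} = 0.  `observable(rho_H, phi_H, rho_A, phi_A)` is a
    placeholder for the full QCDF prediction."""
    central = observable(0.0, 0.0, 0.0, 0.0)
    rhos = np.linspace(0.0, 2.0, n_rho)
    phis = np.linspace(0.0, 2.0 * np.pi, n_phi)
    values = [observable(rH, pH, rA, pA)
              for rH, pH, rA, pA in itertools.product(rhos, phis, rhos, phis)]
    return (max(values) - min(values)) / (2.0 * abs(central))
```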
Strategy {#sec:strategy}
========
Consider the effective Hamiltonian in Eq. (\[eq:Hamiltonian\]) written in terms of the basis in Eq. (\[eq:mainbasis\]). We introduce “new physics” in the Wilson coefficients $\{C_1, C_2\}$ of the operators $\hat{Q}_1$ and $\hat{Q}_2$ following the prescription $$\begin{aligned}
C_{1}(M_W) := C^{\rm SM}_{1}(M_W) + \Delta C_{1}(M_W),\nonumber\\
C_{2}(M_W) := C^{\rm SM}_{2}(M_W) + \Delta C_{2}(M_W),
\label{eq:NPC12}\end{aligned}$$ where in the SM $$\begin{aligned}
\Delta C_{1}(M_W)&=&0,\nonumber\\
\Delta C_{2}(M_W)&=&0.\end{aligned}$$ In this paper we present possible bounds on $\Delta C_1$ and $\Delta C_2$ at the matching scale $\mu=M_W$ and work under the assumption of “single operator dominance”, i.e. we consider changes to each Wilson coefficient independently: to establish constraints on $\Delta C_1(M_W)$ we fix $\Delta C_2(M_W)=0$ and vice versa. This is the most conservative approach; if we allowed both parameters to change simultaneously, partial cancellations could occur, leading to potentially larger allowed NP regions for $\{\Delta C_{1}(M_W), \Delta C_{2}(M_W)\}$. Since the theoretical formulae for our observables are calculated at the scale $\mu=m_b$, we evolve the modified Wilson coefficients $C_{1}(M_W)$ and $C_{2}(M_W)$ down to this scale using the renormalisation group formalism described in Section \[sec:Heff\]. Since we consider NP effects at leading order only, we treat the SM contributions $\{C^{\rm SM}_1(M_W), C^{\rm SM}_2(M_W)\}$ and the NP components $\{\Delta C_{1}(M_W), \Delta C_{2}(M_W)\}$ differently under the renormalisation group equations: the evolution of $\{C^{\rm SM}_{1}(M_W), C^{\rm SM}_{2}(M_W)\}$ is done using the full NLO expressions in Eq. (\[eq:fullevoMat\]), whereas $\{\Delta C_{1}(M_W), \Delta C_{2}(M_W)\}$ are evolved down using only the LO version shown in Eq. (\[Eq:LOevo\]). Notice that, even though at the scale $\mu=M_W$ the only modified Wilson coefficients are $C_1(M_W)$ and $C_2(M_W)$, the non-diagonal nature of the evolution matrices propagates these effects to all the other Wilson coefficients undergoing mixing at $\mu=m_b$. Hence, when writing expressions for the different physical observables, it makes sense to consider NP effects in $C_{i}(m_b)$ even for $i\neq 1,2$.
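Schematically, the treatment of the SM and NP pieces under the renormalisation group can be summarised as in the following sketch; the evolution matrices and the ordering of the coefficient vector are placeholders for the actual implementation.

```python
import numpy as np

def wilson_at_mb(C_SM_MW, dC1, dC2, U_NLO, U_LO):
    """Wilson coefficients at mu = m_b for NP shifts of C_1, C_2 at mu = M_W.

    The SM part is evolved with the full NLO matrix U_NLO, the NP shifts only
    with the LO matrix U_LO, as described above; U_NLO, U_LO and the ordering
    of the coefficient vector are placeholders for the actual implementation."""
    delta = np.zeros_like(C_SM_MW, dtype=complex)
    delta[0], delta[1] = dC1, dC2          # only C_1 and C_2 are shifted at M_W
    # operator mixing in U_LO spreads the shift to all other C_i(m_b)
    return U_NLO @ C_SM_MW + U_LO @ delta
```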
Statistical analysis {#sec:stat}
--------------------
The values of $\Delta C_1(M_W)$ and $\Delta C_2(M_W)$ compatible with experimental data are evaluated using the program MyFitter [@Wiebusch:2012en]. The full statistical procedure is based on a likelihood ratio test. The basic ingredient is the $\chi^2$ function $$\begin{aligned}
\chi^2(\vec{\omega}) &=&\sum_i \Bigl(\frac{\tilde{O}_{i,\rm exp} -
\tilde{O}_{i,\rm theo}(\vec{\omega}) }{\sigma_{i,\rm exp}} \Bigl)^2,
\label{eq:chi2}\end{aligned}$$ where $\tilde{O}_{i, \rm exp}$ and $\tilde{O}_{i,\rm theo}$ are the experimental and theoretical values of the $i-$th observable respectively and $\sigma_{i,\rm exp}$ is the corresponding experimental uncertainty. The vector $\vec{\omega}$ contains all the inputs necessary for the evaluation of $\tilde{O}_{i,\rm theo}$ and will be written as $$\begin{aligned}
\label{eq:omegavector}
\vec{\omega}&=&\Bigl(\Delta C_1(M_W), \Delta C_2(M_W), \vec{\lambda}\Bigl).\end{aligned}$$ In Eq. (\[eq:omegavector\]) we are making a distinction between $\{\Delta C_{1}(M_W), \Delta C_{2}(M_W)\}$ and the rest of the theoretical inputs, which have been collected in the subvector $\vec{\lambda}$. Examples of the entries of $\vec{\lambda}$ are masses, decay constants, form factors, etc. Notice that our main target is the determination of $\Delta C_{1}(M_W)$ and $\Delta C_{2}(M_W)$; however, the components entering $\vec{\lambda}$ are crucial in defining the uncertainty of our observables and hence in establishing the potential values of $\Delta C_{1}(M_W)$ and $\Delta C_{2}(M_W)$. In this respect, the elements of $\vec{\lambda}$ are our nuisance parameters, and the possible NP values compatible with data are obtained by profiling the likelihood over these nuisance parameters for each tested value of $\{\Delta C_{1}(M_W), \Delta C_{2}(M_W)\}$. During our analysis the elements of $\{\Delta C_1(M_W), \Delta C_2(M_W)\}$ are assumed to be complex and, as indicated in the argument, the initial evaluation is done at the scale $\mu=M_W$. The statistical theory behind the $\chi^2$-fit software used, e.g. MyFitter [@Wiebusch:2012en], can be found in the documentation of the computer program. Here we only summarize the key steps involved in our analysis:
1. We first define the Confidence Level $CL$ for the $\chi^2$-fit. Following the criteria established in [@Bobeth:2014rda; @Brod:2014bfa] for our study we take $$\begin{aligned}
CL=90\%,
\end{aligned}$$ which is equivalent to $1.64$ standard deviations approximately.
2. \[step2\] Then, we establish a sampling region in the plane defined by the real and the imaginary components of $\{\Delta C_1(M_W), \Delta C_2(M_W)\}$. The sampling region is observable dependent. In our case we opt for rectangular grids around the origin of the complex plane defined by $\Delta C_1(M_W)$ and $\Delta C_2(M_W)$; notice that the origin of our complex plane corresponds to the SM value. The number of points in our test grid depends on three factors: the numerical stability of our algorithms, the time required to compute a particular combination of observables and the size of the NP regions determined by them.
3. Each one of the points inside the sampling grid described in the previous step corresponds to a null hypothesis for the components of $\Delta C_1(M_W)$ and $\Delta C_2(M_W)$. We test these null-hypothesis values using a likelihood ratio test at the confidence level established in the first step (a schematic version of this grid scan is sketched below). For a combination of multiple observables several nuisance parameters are involved and the full statistical procedure becomes time- and resource-consuming; hence, the parallelization of our calculations on a computer cluster became necessary. We did our first numerical evaluations partially at the Institute for Particle Physics Phenomenology (IPPP, Durham University). The results presented in this work were obtained in full using the computing facilities available at the Dutch National Institute for Subatomic Physics (Nikhef).
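As a rough illustration of steps 2 and 3, the following sketch scans a grid in one complex Wilson-coefficient shift and keeps the points whose $\Delta\chi^2$ lies below the $90\%$ C.L. threshold for two degrees of freedom; this naive $\Delta\chi^2$ criterion is only a stand-in for the MyFitter likelihood-ratio test, and the profiling over the nuisance parameters is assumed to happen inside `predict`.

```python
import numpy as np

def chi2(obs_exp, sigma_exp, obs_theo):
    """The chi^2 function of Eq. (eq:chi2) for a given vector of predictions."""
    return np.sum(((obs_exp - obs_theo) / sigma_exp) ** 2)

def scan_deltaC(predict, obs_exp, sigma_exp, grid=np.linspace(-2.0, 2.0, 41)):
    """Naive grid scan over one complex NP shift Delta C(M_W).

    `predict(dC)` must return the theory predictions with the nuisance
    parameters already profiled.  The 90% C.L. region is approximated by
    Delta chi^2 < 4.61 (two degrees of freedom) with respect to the best-fit
    point on the grid -- a crude stand-in for the full statistical procedure."""
    chi2_map = {(re, im): chi2(obs_exp, sigma_exp, predict(complex(re, im)))
                for re in grid for im in grid}
    chi2_min = min(chi2_map.values())
    return [complex(re, im) for (re, im), c2 in chi2_map.items()
            if c2 - chi2_min < 4.61]
```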
Individual Constraints {#sec:constraints}
======================
In this section we present the different observables considered during the analysis. From Section \[sec:bcud\] to Section \[sec:bccd\] we focus exclusively on observables that constrain individual $b$ decay channels, in our case $b\rightarrow c\bar{u}d$, $b\rightarrow u \bar{u}d$, $b\rightarrow c\bar{c}s$ and $b\rightarrow c\bar{c}d$. In Section \[sec:multiple\_channels\] we will study observables that affect multiple $b$ decay channels. In what follows, and unless stated otherwise, the SM predictions as well as the experimental determinations are given at $1~\sigma$, i.e. $68\%~\rm{C. L.}$. However, the allowed NP regions for $C_1$ and $C_2$ are presented at $1.64~\sigma$, i.e. $90\%~\rm{C. L.}$.\
Following the notation introduced in Eqs. (\[eq:Hamiltonian\]) and (\[eq:mainbasis\]) we will denote the NP effects in the Wilson coefficient of the operator $\hat{Q}^{q, \, pp'}_i$ as $\Delta C^{q, \, pp'}_i$ for $i=1,2$ and $q=d,s$. Then for example, $\{\Delta C^{d, \, cu}_{1}(M_W), \Delta C^{d, \, cu}_{2}(M_W)\}$ will quantify the potential deviations from the SM values in the coefficients of $\{\hat{Q}^{d, \, cu}_{1}, \hat{Q}^{d, \, cu}_{2}\}$ which describe the tree level process $b\rightarrow c \bar{u} d$.\
In this work NP is assumed to enter at leading order in $\alpha_s$ and $\alpha$ only. Since all the vertex corrections $V^{M}_{i}$, penguins $P^{p, M}_{i}$ and hard scattering spectator interactions $H^{M_1 M_2}_{i}$ inside Eq. (\[eq:alphaGen0\]) are already suppressed by factors of $\mathcal{O}(\alpha_s)$ and $\mathcal{O}(\alpha)$, we will consistently drop the extra contributions $\Delta C_1^{d, \, uu}(M_W)$ and $\Delta C_2^{d, \, uu}(M_W)$ affecting any of these terms for all observables that are described by QCDF.
Observables constraining $b\rightarrow c\bar{u}d$ transitions {#sec:bcud}
-------------------------------------------------------------
We start with the dominant quark level decay $ b \to c \bar{u} d$ and describe our analysis of the potential NP regions for $\Delta C^{d, \, cu}_{1}(M_W)$ and $\Delta C^{d, \, cu}_{2}(M_W)$. The decay $\bar{B}^{0}\rightarrow D^{*+}\pi^{-}$ will exclude large positive values of $\Delta C^{d, \, cu}_{1}(M_W)$ and it will significantly constrain $\Delta C^{d, \, cu}_{2}(M_W)$.
### $\bar{B}_d^{0}\rightarrow D^{*+}\pi^{-}$ {#sec:RDpi}
Our bounds will be established using the ratio between the decay width for the non-leptonic decay $\bar{B}^0_d\rightarrow D^{*+}\pi^{-}$ and the differential rate for the semi-leptonic process $\bar{B}^0_d\rightarrow D^{*+}l^-\bar{\nu}_l$ evaluated at $q^2=m^2_{\pi}$ for $l=e, \mu$ $$\begin{aligned}
\label{eq:RDpi}
R_{D^{*}\pi}&=&\frac{\Gamma(\bar{B}^0\rightarrow D^{*+}\pi^{-})}
{d\Gamma(\bar{B}^0\rightarrow D^{*+} l^{-}\bar{\nu}_{l})/dq^2|_{q^2=m^2_{\pi}}}
\simeq 6\pi^2f^2_{\pi}|V_{ud}|^2|\alpha_{2}^{D^{*}\pi}+\beta^{D^* \pi }_2|^2.\nonumber\\\end{aligned}$$ This observable was proposed by Bjorken to test the factorization hypothesis [@Bjorken:1988kk]; it is free of the uncertainties associated with the form factor required to describe the transition $B\rightarrow D^{*}$ and offers the possibility of directly comparing the coefficient $\alpha^{D^{*}\pi}_{2}$ calculated using QCDF against experimental observations. At NLO the TA $\alpha_{2}^{D^*\pi}$ [@Beneke:2000ry] is given by $$\begin{aligned}
\alpha^{{\rm NLO}, D^{*}\pi}_{2}&=& C^{d, \, cu}_2(\mu_b) + \frac{C^{d, \, cu}_{1}(\mu_b)}{3} + \frac{\alpha_{s}(\mu_b)}{4\pi}\frac{C_F}{N_c}C^{d, \, cu}_{1}(\mu_b)
\Bigl[-\tilde{B} -6 \hbox{ln}\frac{\mu^2}{m^2_{b}}\nonumber\\
&&
+ \int^1_{0}du F(u, -x_c)\Phi_{\pi}(u) \Bigl]
\approx 1.057 \pm 0.040 \, ,
\label{eq:BDpi}\end{aligned}$$ where the term $\tilde{B}$ inside the square bracket cancels the renormalisation scheme dependence of the Wilson coefficients $C^{d, \, cu}_1$ and $C^{d, \, cu}_2$; in naive dimensional regularisation $\tilde{B}=11$. The kernel $F(u,x_c)$ includes the QCD vertex corrections arising in the decay $b\rightarrow c \bar{u} d$ and has to be evaluated at $x_c=\bar{m}_c(\bar{m}_b) /\bar{m}_b$ before being convoluted with the light-cone distribution $\Phi_{\pi}$ associated with the $\pi^-$ meson in the final state. For the explicit evaluation of Eq. (\[eq:RDpi\]) we use the updated determination of the TA $\alpha^{D^{*}\pi}_{2}$ at NNLO calculated in [@Huber:2016xod]
$$\begin{aligned}
\alpha^{{\rm NNLO}, D^{*}\pi}_{2}&=&1.071^{+0.013}_{-0.014}. \end{aligned}$$
The annihilation topologies contributions are taken into account through $$\begin{aligned}
\beta^{D^* \pi }_2&=&\frac{C_F}{N^2_c}\frac{B_{D^* \pi}}{A_{D^* \pi}} C_2^{d, \, cu}(\mu_h) A^i_1(\mu_h) \approx 0.014 \pm 0.045 \, ,\end{aligned}$$ where $$\begin{aligned}
\frac{B_{D^* \pi}}{A_{D^* \pi}}=\frac{f_B f_{D^*}}{m^2_B A^{B\rightarrow D^*}_0(0)},\end{aligned}$$ and $$\begin{aligned}
A^i_1(\mu_h)&\approx& 6\pi\alpha_s(\mu_h)
\Biggl[3\Biggl(X_A -4 + \frac{\pi^2}{3}\Biggl) + r^{D^*}_{\chi}(\mu_h) r^{\pi}_{\chi}(\mu_h)\Bigl(X^2_A -2 X_A\Bigl)\Biggl],
\nonumber
\\\end{aligned}$$ where the parameter $X_A$ is given in Eq. (\[eq:XA\]) and the factors $r^{\pi}_{\chi}$ and $r^{D^*}_{\chi}$ are quoted in Eq. (\[eq:GenPar\]). Using the numerical inputs given in \[Sec:Inputs\] we find $$\begin{aligned}
R^{\rm SM}_{D^{*}\pi} &=&\Bigl(1.09\pm 0.17\Bigl)\hbox{GeV}^2,
\label{eq:BDpi_theo}\end{aligned}$$ corresponding to $z=0.225$; the partial contributions to the total error are shown in Table \[tab:errorBDpi\].
Parameter Relative error
---------------------------- ----------------
$X_A$ $13.25\%$
$\mu$ $8.14\%$
$f_{\pi}$ $1.23\%$
$\Lambda^{QCD}_5$ $0.47\%$
$A_0^{B\rightarrow D^{*}}$ $0.08\%$
$f_B$ $0.02\%$
$m_b$ $0.01\%$
Total $15.61\%$
: Error budget for the observable $R_{D^{*}\pi}$.[]{data-label="tab:errorBDpi"}
The SM result is dominated by the contribution of $C_2$; thus $R_{D^*\pi}$ will give strong constraints on $C_2$ and relatively weak ones on $C_1$. To compute the experimental result we use [@Huber:2016xod] $$\begin{aligned}
d\Gamma(\bar{B}^0_d \rightarrow D^{*+}l^{-}\bar{\nu}_l)/dq^2\Bigl|_{q^2=m^2_{\pi}}= (2.04 \pm 0.10)\cdot 10^{-3}
\hbox{GeV}^{-2}\hbox{ps}^{-1}\, ,\end{aligned}$$ together with [@Amhis:2016xyh] $$\begin{aligned}
\mathcal{B}r(\bar{B}^0\rightarrow D^{*+}\pi^{-})&=&
(2.84\pm 0.15)\cdot 10^{-3},\end{aligned}$$ to obtain $$\begin{aligned}
R^{\rm Exp}_{D^{*}\pi}&=&
(0.92\pm 0.07)\hbox{GeV}^2.
\label{eq:BDpiExp}\end{aligned}$$
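As a quick numerical cross-check of Eqs. (\[eq:RDpi\]), (\[eq:BDpi\_theo\]) and (\[eq:BDpiExp\]), the short sketch below reproduces both numbers to within a few percent; $f_\pi$, $|V_{ud}|$ and the $B_d$ lifetime are not quoted in this section, so the values used here are illustrative assumptions, and the small offset with respect to the quoted central values simply reflects these simplified inputs.

```python
import math

# Theory side, Eq. (eq:RDpi).  f_pi, |V_ud| are assumed, typical PDG-like values.
f_pi, V_ud = 0.1302, 0.974                 # GeV, dimensionless (assumed)
alpha2, beta2 = 1.071, 0.014               # central values quoted above
R_theory = 6.0 * math.pi**2 * f_pi**2 * V_ud**2 * abs(alpha2 + beta2)**2
print(f"R_D*pi (theory)     ~ {R_theory:.2f} GeV^2")   # ~1.1 GeV^2

# Experimental side: Gamma(B -> D* pi) = Br / tau_Bd, divided by dGamma/dq^2
tau_Bd = 1.52                              # ps (assumed B_d lifetime)
Br, dGdq2 = 2.84e-3, 2.04e-3               # dGdq2 in GeV^-2 ps^-1
R_exp = Br / tau_Bd / dGdq2
print(f"R_D*pi (experiment) ~ {R_exp:.2f} GeV^2")      # ~0.92 GeV^2
```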
![ Potential regions for the NP contributions in $\Delta C^{d, cu}_1(M_W)$ and $\Delta C^{d, cu}_2(M_W)$ allowed by the observable $R_{D^{*}\pi}$ at $90\%$ C.L.. The black point corresponds to the SM value. Since $R_{D^{*}\pi}$ is dominated by $C_2$, we get strong constraints on $C_2$ and relatively weak ones on $C_1$.[]{data-label="fig:BDpi"}](BDpi_float_dC1.pdf "fig:"){height="5cm"} ![ Potential regions for the NP contributions in $\Delta C^{d, cu}_1(M_W)$ and $\Delta C^{d, cu}_2(M_W)$ allowed by the observable $R_{D^{*}\pi}$ at $90\%$ C.L.. The black point corresponds to the SM value. Since $R_{D^{*}\pi}$ is dominated by $C_2$, we get strong constraints on $C_2$ and relatively weak ones on $C_1$.[]{data-label="fig:BDpi"}](BDpi_float_dC2.pdf "fig:"){height="5cm"}
Our $\chi^2$-fit provides the $90\%$ confidence level regions allowed for $\Delta C_1^{d, cu}(M_W)$ and $\Delta C_2^{d, cu}(M_W)$ displayed in Fig. \[fig:BDpi\], which show that $\Delta C_1^{d, cu}(M_W)$ is quite unconstrained. On the other hand, there are stronger restrictions on the values that $\Delta C_2^{d, cu}(M_W)$ can assume. This is not surprising considering that $C^{d, cu}_2$ gives the leading contribution to $\alpha^{D^*\pi}_2$, as can be seen in the NLO version of the formula for this term in Eq. (\[eq:BDpi\]).
Observables constraining $b\rightarrow u\bar{u}d$ transitions {#sec:buud}
-------------------------------------------------------------
We proceed to describe the constraints on the NP contributions $\Delta C^{d, uu}_{1,2}(M_W)$ entering the CKM-suppressed quark-level transition $b\rightarrow u\bar{u}d$. Our bounds are obtained by taking into account both the branching ratios and the CP asymmetries of the decays $B\to \pi\pi,\, \rho\pi,\, \rho\rho$, again using QCDF for the theoretical description. The combination of CP-conserving and CP-violating observables significantly shrinks the allowed region for $\Delta C^{d, uu}_{2}(M_W)$.
### $R_{\pi\pi}$ {#sec:Rpipi}
Our first observable is the theoretically clean ratio [@Bjorken:1988kk] $$\begin{aligned}
R_{\pi\pi}&=&\frac{\Gamma(B^+\rightarrow \pi^+\pi^0)}{d\Gamma(\bar{B}^0_d\rightarrow \pi^{+} \ell^{-}\bar{\nu}_{\ell})/dq^2|_{q^2=0}}
\simeq
3\pi^2f^2_{\pi}|V_{ud}|^2|\alpha_1^{\pi\pi} + \alpha_2^{\pi\pi}|^2,
\label{eq:Rpipi}\end{aligned}$$ where $\ell^{-}=\mu^{-},~e^{-}$ and $\alpha_1^{\pi\pi}$, $\alpha_2^{\pi\pi}$ are the TA associated with the decays $B\rightarrow \pi\pi$ which were introduced in a generic way in Eq. (\[eq:alphaGen0\]). The dependence of $ R_{\pi\pi}$ is now symmetric in $C_1$ and $C_2$, so both Wilson coefficients will be constrained in an almost identical way. Notice that the denominator in Eq. (\[eq:Rpipi\]) refers to the differential distribution $d\Gamma(\bar{B}^0_d\rightarrow \pi^{+} \ell^{-}\bar{\nu}_{\ell})/dq^2$ evaluated at $q^2=0$, where $q^2$ is the squared four-momentum transferred to the system composed of the $\ell^{-}$ and $\bar{\nu}_{\ell}$. In Eq. (\[eq:Rpipi\]), our sensitivity to NP enters through the decay $B^+\rightarrow \pi^+\pi^0$, which is to a good degree of precision a pure tree-level channel. We neglect hypothetical BSM effects in $\bar{B}^0_d\rightarrow \pi^{+} \ell^{-}\bar{\nu}_{\ell}$ for $\ell=e, \mu$, see e.g. [@Banelli:2018fnx] for a recent investigation of such a possibility. The observable $R_{\pi\pi}$ is theoretically clean since it does not depend on the CKM matrix element $|V_{ub}|$, which cancels in the ratio. Moreover, at leading order in $\alpha_s$ it is independent of the form factors $F^{B \rightarrow \pi}_+(0)= F^{B \rightarrow \pi}_0(0)$ which account for the hadronic transition $B\rightarrow \pi$. However, these parameters enter the coefficients $\alpha_{1, 2}^{\pi\pi}$ once the spectator interaction contributions $H_{\pi \pi}$ are taken into account. More precisely, they appear in the ratio $B_{\pi\pi}/A_{\pi\pi}$ inside $H_{\pi\pi}$, see Eqs. (\[eq:GenPar\]) and (\[eq:HardScattering\]). Currently, the coefficients $\alpha_{1,2}^{\pi\pi}$ in Eq. (\[eq:Rpipi\]) are available up to NNLO in QCDF [@Beneke:2009ek; @Beneke:2005vv; @Bell:2007tv; @Bell:2009nk]. In order to optimize the computation time of our $\chi^2$-fit, we have accounted for the NNLO effects using the following formula $$\begin{aligned}
\frac{\alpha_{1,2}^{\pi\pi}}{\alpha_{1,2}^{\text{NNLO}, \pi\pi}}&=&
\frac{\alpha_{1,2}^{\text{NLO}, \pi\pi}(\mu_0)}{\alpha_{1,2}^{(0)~\text{NLO}, \pi\pi}}.
\label{eq:repipi}\end{aligned}$$ In Eq. (\[eq:repipi\]) (a schematic implementation of this rescaling is given after the list below):
- $\alpha_{1,2}^{\text{NLO}, \pi\pi}(\mu_0)$ corresponds to the fully programmed NLO expression for the amplitude $\alpha^{\pi\pi}_{1,2}$. For this term, the renormalization scale is kept fixed to the value $\mu_0=m_b$ whereas the rest of the input parameters are allowed to float.
- $\alpha_{1,2}^{\text{(0)~NLO}, \pi\pi}$ are the NLO version of the amplitudes $\alpha^{\pi\pi}_{1,2}$ evaluated at the central value of all the input parameters and kept constant during the $\chi^2$-fit.
- $\alpha^{\rm NNLO, \pi\pi}_{1, 2}$ are the NNLO versions of the amplitudes $\alpha^{\pi\pi}_{1,2}$. We are interested in the NNLO results because of the reduced renormalisation scale dependence with respect to the NLO determination. Therefore, during the $\chi^2$-fit we have treated the coefficients $\alpha^{\text{NNLO}, \pi\pi}_{1,2}$ as nuisance parameters given by [@Bell:2009fm] $$\begin{aligned}
\begin{split}
\alpha^{\text{NNLO}, \pi\pi}_{1}&=0.195^{ + 0.025 }_{ - 0.025} - \Bigl(0.101^{ + 0.021}_{ - 0.029}\Bigl)i,\\
\alpha^{\text{NNLO}, \pi\pi}_{2}&=1.013^{ + 0.008 }_{ - 0.011} + \Bigl( 0.027^{+ 0.020}_{ - 0.013}\Bigl)i,
\end{split}
\label{eq:NNLOa1a2_pipi}\end{aligned}$$ where the error indicated arises only from the renormalization scale uncertainty. Alternatively, we also tested the numerical values provided in [@Beneke:2009ek], which give consistent results once the uncertainties arising from varying $\mu$ and $\mu_{h}$ [^2] are taken into account.
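In code, the rescaling of Eq. (\[eq:repipi\]) amounts to a one-line function; the sketch below is only meant to make the bookkeeping explicit.

```python
def alpha_rescaled(alpha_NNLO, alpha_NLO_float, alpha_NLO_central):
    """Rescaling of Eq. (eq:repipi): the input-parameter dependence of the full
    NLO expression (scale fixed at mu_0 = m_b, other inputs floating) is
    transferred onto the NNLO central value, which is treated as a nuisance
    parameter in the fit."""
    return alpha_NNLO * alpha_NLO_float / alpha_NLO_central

# usage (schematic): alpha2 = alpha_rescaled(1.013 + 0.027j, a2_NLO(params), a2_NLO(params_central))
```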
We predict the SM value of $ R_{\pi\pi}$ to be $$\begin{aligned}
\label{eq:Rpipi_val}
R^{\rm SM}_{\pi\pi}=\Bigl(0.70 \pm 0.14\Bigl),\end{aligned}$$ with the partial contributions to the total error shown in Table \[table:Rpipi\].
Parameter Relative Error
---------------------------- ---------------- --
$X_{H}$ $16.86\%$
$\lambda_B$ $8.85\%$
$\mu$ $4.42\%$
$a^{\pi}_{2}$ $2.57\%$
$F^{B\rightarrow\pi}_+(0)$ $1.77\%$
$f_{\pi}$ $1.35\%$
$m_{s}$ $0.68\%$
$\Lambda^{QCD}_5$ $0.25\%$
$f_{B}$ $0.14\%$
$m_b$ $0.04\%$
$V_{us}$ $0.01\%$
Total $19.86\%$
: Error budget for the observable $R_{\pi\pi}$. Here $X_H$ accounts for the endpoint singularities from hard scattering spectator interactions. $F^{B\rightarrow \pi}_{+}(0)$ is the relevant form factor for the transitions $B\rightarrow \pi$. The parameter $\lambda_B$ is the inverse moment of the LCDA of the $B$ meson and $a^{\pi}_{2}$ is the second Gegenbauer moment for the $\pi$ meson. []{data-label="table:Rpipi"}
To calculate the experimental result, we consider the following updated value for the branching fraction for the process $B^{+}\rightarrow \pi^{+}\pi^{0}$ [@Tanabashi:2018oca] $$\begin{aligned}
\label{eq:Bpipi}
\mathcal{B}r(B^{+}\rightarrow \pi^{+}\pi^{0})&=&(5.5\pm 0.4) \cdot 10^{-6},\end{aligned}$$ together with the product [@Gonzalez-Solis:2018ooo] $$\begin{aligned}
\label{eq:VubFBpi}
|V_{ub}F^{B\rightarrow \pi}_+(0)|&=&(9.25 \pm 0.31)\cdot 10^{-4},\end{aligned}$$ which was extracted via a fit to data including experimental results from BaBar, Belle and CLEO [@delAmoSanchez:2010af; @Lees:2012vv; @Ha:2010rf; @Sibidanov:2013rkk; @Adam:2007pv] under the assumption of the SM, neglecting the mass of the light leptons and keeping the mass of the $B^*$ meson fixed. Using the inputs indicated in Eqs. (\[eq:Bpipi\]) and (\[eq:VubFBpi\]) we obtain the following result for the experimental value of $R_{\pi\pi}$ $$\begin{aligned}
R^{\rm Exp}_{\pi\pi}&=&\Bigl(0.83\pm 0.08\Bigl).\end{aligned}$$ This determination is in agreement with the result given in [@Beneke:2009ek]; however, the uncertainty is reduced by nearly $50\%$ due to the update of the product $|V_{ub}F^{B\rightarrow \pi}_+(0)|$ shown in Eq. (\[eq:VubFBpi\]). The allowed regions for $\Delta C^{d, \, uu}_1(M_W)$ and $\Delta C^{d, \, uu}_2(M_W)$ are shown in Fig. \[fig:Rpipi\]; we note rather stringent constraints on positive and real values of $\Delta C^{d, \, uu}_1(M_W)$ and $\Delta C^{d, \, uu}_2(M_W)$.
![Potential regions for the NP contributions $\Delta C^{d, uu}_1(M_W)$ and $\Delta C^{d, uu}_2(M_W)$ allowed by the observable $R_{\pi\pi}$ at $90\%$ C.L.. The black point corresponds to the SM value. The dependence of $ R_{\pi\pi}$ is symmetric in $C_1$ and $C_2$, therefore both Wilson coefficients are constrained in an almost identical way.[]{data-label="fig:Rpipi"}](Rpipi_C1.pdf "fig:"){height="5cm"} ![Potential regions for the NP contributions $\Delta C^{d, uu}_1(M_W)$ and $\Delta C^{d, uu}_2(M_W)$ allowed by the observable $R_{\pi\pi}$ at $90\%$ C.L.. The black point corresponds to the SM value. The dependence of $ R_{\pi\pi}$ is symmetric in $C_1$ and $C_2$, therefore both Wilson coefficients are constrained in an almost identical way.[]{data-label="fig:Rpipi"}](Rpipi_C2.pdf "fig:"){height="5cm"}
### $S_{\pi\pi}$ {#sec:Spipi}
Since our NP contributions are allowed to be complex, we are exploring the possibility of having new CP violating phases. We can constrain these effects through the time-dependent asymmetries $$\begin{aligned}
\mathcal{A}^{CP}_{f}(t)&=& \frac{d\Gamma [\bar{B}^0_q\rightarrow f](t)/dt - d\Gamma[B^0_q\rightarrow f](t)/dt}
{d\Gamma [\bar{B}^0_q\rightarrow f](t)/dt + d\Gamma[B^0_q\rightarrow f](t)/dt}\nonumber\\
&\simeq&S_f \sin \Delta M_q t - C_f \cos \Delta M_q t,
\label{eq:CPint}\end{aligned}$$ where we have neglected the effects of the observable $\Delta \Gamma_q$ entering the denominator; this is only justified for the case of $B_d$ mesons. The symbol $f$ in Eq. (\[eq:CPint\]) denotes a final state to which both the $B_q^0$ and the $\bar{B}_q^0$ meson can decay, for $q=d, s$. The mixing-induced ($S_f$) and direct ($C_f$) CP asymmetries are defined as $$\begin{aligned}
S_f\equiv \frac{2~\rm Im (\lambda^q_f)}{1 + |\lambda^q_f|^2},
&&
C_f\equiv\frac{1-|\lambda^q_f|^2}{1 + |\lambda^q_f|^2}.
\label{eq:Sf}\end{aligned}$$ with the parameter $\lambda^q_f$ given by $$\begin{aligned}
\lambda^q_f:= \frac{q}{p}\Bigl|_{B_q} \frac{\bar{A}^q_f}{A^q_f}.
\label{eq:lambdaf}\end{aligned}$$ In Eq. (\[eq:lambdaf\]) the amplitude for the process $B^0_q\rightarrow f$ has been denoted as $A^q_f$ and the one for $\bar{B}^0_q\rightarrow f$ as $\bar{A}^q_f$ . Finally, $$\begin{aligned}
\frac{q}{p}\Bigl|_{B_q}=\frac{M^{q*}_{12}}{|M^{q}_{12}|},
\label{eq:qop}\end{aligned}$$ where $M^{q}_{12}$ is the contribution from virtual internal particles to the $B^0_q-\bar{B}^0_q$ mixing diagrams. For instance, in the case of $B_d$ mesons we get $$\begin{aligned}
\frac{q}{p}\Bigl|_{B_d}=\left[\frac{V_{td} V^*_{tb} }{|V_{td}V^*_{tb}|}\right]^2.
\label{eq:qopBeta}\end{aligned}$$ Notice that the observable $S_f$, in Eq. (\[eq:Sf\]), is particularly sensitive to the imaginary components of $\Delta C_1(M_W)$ and $\Delta C_2(M_W)$.\
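For completeness, the sketch below evaluates Eqs. (\[eq:Sf\])-(\[eq:qopBeta\]) for given decay amplitudes; the amplitudes themselves are placeholders for the QCDF expressions discussed below.

```python
def cp_asymmetries(A_f, Abar_f, q_over_p):
    """Mixing-induced (S_f) and direct (C_f) CP asymmetries of Eq. (eq:Sf),
    built from lambda_f = (q/p) * Abar_f / A_f, Eq. (eq:lambdaf)."""
    lam = q_over_p * Abar_f / A_f
    S_f = 2.0 * lam.imag / (1.0 + abs(lam) ** 2)
    C_f = (1.0 - abs(lam) ** 2) / (1.0 + abs(lam) ** 2)
    return S_f, C_f

def q_over_p_Bd(V_td, V_tb):
    """q/p for B_d mesons in the convention of Eq. (eq:qopBeta)."""
    z = V_td * V_tb.conjugate()
    return (z / abs(z)) ** 2
```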
For the decays $\bar{B}_d^0\rightarrow \pi^+\pi^-$ and $B_d^0\rightarrow \pi^+\pi^-$ we get $$\begin{aligned}
\label{eq:Spipi_def}
S_{\pi\pi}=\frac{2~\rm Im \Bigl( \lambda^d_{\pi\pi} \Bigl)}
{1+| \lambda^d_{\pi\pi} |^2} \, , & \hspace{1cm}
\lambda^d_{\pi\pi}=\left[\frac{V_{td} V^*_{tb} }{|V_{td}V^*_{tb}|}\right]^2\frac{\bar{\mathcal{A}}_{\pi^+\pi^-}}{\mathcal{A}_{\pi^+\pi^-}}.
$$ Here $\bar{\mathcal{A}}_{\pi^+\pi^-}$ and $\mathcal{A}_{\pi^+\pi^-}$ denote the transition amplitudes for the processes $\bar{B}_d^0\rightarrow \pi^+\pi^-$ and $B_d^0\rightarrow \pi^+\pi^-$ respectively. They have been calculated in [@Beneke:2003zv] using the QCDF formalism briefly described in Section \[sec:QCDF\]. The explicit expression for $\bar{\mathcal{A}}_{\pi^+\pi^-}$ is $$\begin{aligned}
\bar{\mathcal{A}}_{\pi^+\pi^-}&=
A_{\pi \pi}\Bigl(\lambda^{(d)}_u \alpha_2^{\pi\pi} + \lambda^{(d)}_u \beta_2^{\pi\pi} +
\sum_{p=u,c}\lambda^{(d)}_p\Bigl[\tilde{\alpha}_4^{p,\pi\pi}+\tilde{\alpha}^{p,\pi\pi}_{4,EW}\nonumber\\
& + \beta_3^{p,\pi\pi} -1/2\beta^{p, \pi\pi}_{3,EW} + 2 \beta^{p, \pi\pi}_4 + 1/2\beta^{p, \pi\pi}_{4,EW}\Bigl]\Bigl).
\label{eq:ABpipi}\end{aligned}$$ To determine the remaining amplitude $\mathcal{A}_{\pi^+\pi^-}$, the CP conjugate of the expression in Eq. (\[eq:ABpipi\]) has to be taken. The parameters $\lambda^{(d)}_{u,c}$ in Eq. (\[eq:ABpipi\]) correspond to products of CKM matrix elements as defined in Eq. (\[eq:lambmdadef\]). Notice that our sensitivity to NP at tree level enters mainly through $\alpha_2^{\pi\pi}$, which according to Eq. (\[eq:alphaGen0\]) depends dominantly on $\Delta C^{d, \, uu}_2(M_W)$. Therefore, the observable $S_{\pi\pi}$ yields strong constraints on $\Delta C^{d, \, uu}_2(M_W)$, while giving weak ones on $\Delta C^{d, \, uu}_1(M_W)$. Besides the TA $\alpha_2^{\pi\pi}$, which is introduced in our analysis at NNLO following the prescription shown in Eq. (\[eq:repipi\]), there are now also contributions from QCD and electroweak penguins given by $\tilde{\alpha}_4^{p,\pi\pi}$ and $\tilde{\alpha}^{p,\pi\pi}_{4,EW}$ respectively. Finally, $\beta^{p, \pi\pi}_4$ accounts for QCD penguin annihilation and $\beta^{p, \pi\pi}_{4, EW}$ for electroweak penguin annihilation. All the TA can be calculated using Eq. (\[eq:alphaGen0\]) together with the information presented in Appendix \[Sec:QCDFact\]. At leading order in $\alpha_s$, the normalization factor $A_{\pi\pi}$ introduced in Eq. (\[eq:GenPar\]), which depends on the form factor $F^{B\rightarrow \pi}_+(0)$ and the decay constant $f_{\pi}$, cancels in the ratio given in Eq. (\[eq:Spipi\_def\]). However, it appears again once interactions with the spectator are taken into account. This leads to small effects in the error budget of $\mathcal{O}(1~\%)$ and $\mathcal{O}(0.1~\%)$ from $F^{B\rightarrow \pi}_+(0)$ and $f_{\pi}$ respectively, see Table \[tab:tableSpipi\]. Our theoretical prediction for the SM value of the asymmetry $S_{\pi\pi}$ is $$\begin{aligned}
S^{\rm SM}_{\pi\pi} = -0.59 \pm 0.25.
\label{eq:Spipi}\end{aligned}$$
Parameter Relative Error
------------------------- ----------------
$X_{A}$ $41.76\%$
$\gamma$ $6.24\%$
$m_s$ $4.43\%$
$|V_{ub}/V_{cb}|$ $4.31\%$
$X_{H}$ $3.08\%$
$\mu$ $2.79\%$
$\Lambda^{QCD}_5$ $2.25\%$
$\lambda_B$ $1.55\%$
$F^{B\rightarrow\pi}_+$ $0.89\%$
$m_b$ $0.76\%$
$|V_{us}|$ $0.13\%$
$f_B$ $0.07\%$
$m_c$ $0.06\%$
$f_{\pi}$ $0.06\%$
$a^{\pi}_{2}$ $0.03\%$
Total $42.98\%$
: Error budget for the observable $S_{\pi\pi}$. Most of the inputs coincide with those for $R_{\pi\pi}$ described in Table \[table:Rpipi\]. Additionally the effects of annihilation topologies are accounted by $X_A$.[]{data-label="tab:tableSpipi"}
For the corresponding experimental value we have [@Amhis:2016xyh] $$\begin{aligned}
S^{\rm Exp}_{\pi\pi}&=-0.63 \pm 0.04,\end{aligned}$$ showing consistency with the SM estimate in Eq. (\[eq:Spipi\]). The relevant constraints on $\Delta C_2^{d, \, uu} (M_W)$ derived from $S_{\pi\pi}$ are presented in Fig. \[fig:Spipi\]; constraints on $\Delta C_1^{d, \, uu} (M_W)$ are very weak and are thus not shown.
![ Potential regions for the NP contributions in $\Delta C^{d, uu}_2(M_W)$ allowed by the observable $S_{\pi\pi}$ at $90\%$ C.L., the shift in the Wilson coefficient $\Delta C^{d, uu}_1(M_W)$ is only weakly constrained and therefore not shown. The black point corresponds to the SM value.[]{data-label="fig:Spipi"}](Spipi_C2.pdf){height="5cm"}
### $S_{\rho\pi}$ {#sec:Srhopi}
![ Potential regions for the NP contributions in $\Delta C^{d, uu}_2(M_W)$ allowed by the observable $S_{\rho\pi}$ at $90\%$ C.L., the shift in the Wilson coefficient $\Delta C^{d, uu}_1(M_W)$ is only weakly constrained and therefore not shown. The black point corresponds to the SM value.[]{data-label="fig:Srhopi"}](Srhopi_C2.pdf){height="5cm"}
We also included the mixing-induced CP asymmetry associated with the decays $B_d, \bar{B}_d \rightarrow \rho\pi$. Our evaluation is based on the following definition $$\begin{aligned}
S_{\pi\rho}&=\frac{1}{2}\Bigl( \tilde{S}_{\pi\rho} + \tilde{S}_{\rho\pi} \Bigl),\end{aligned}$$ with the partial contributions given by $$\begin{aligned}
\tilde{S}_{\pi\rho}=\frac{2~\rm Im \Bigl( \lambda^{d}_{\pi\rho} \Bigl)}{1 + |\lambda^d_{\pi\rho}|^2},
&&
\tilde{S}_{\rho\pi}=\frac{2~\rm Im \Bigl( \lambda^{d}_{\rho\pi} \Bigl)}{1 + |\lambda^d_{\rho\pi}|^2},\end{aligned}$$ with $$\begin{aligned}
\lambda^d_{\pi\rho}= \left[\frac{V_{td} V^*_{tb} }{|V_{td}V^*_{tb}|}\right]^2 \frac{\bar{\mathcal{A}}_{\pi^+\rho^-}}{\mathcal{A}_{\rho^+ \pi^-}},
&&
\lambda^d_{\rho\pi}= \left[\frac{V_{td} V^*_{tb} }{|V_{td}V^*_{tb}|}\right]^2 \frac{\bar{\mathcal{A}}_{\rho^+ \pi^-}}{\mathcal{A}_{ \pi^+ \rho^-}}.
\label{eq:ratiosArhopi}\end{aligned}$$ The individual amplitudes $\bar{\mathcal{A}}_{\pi^+\rho^-}$ and $\bar{\mathcal{A}}_{\rho^+ \pi^-}$ for the processes $\bar{B}^0_d\rightarrow \pi^+ \rho^-$ and $\bar{B}^0_d\rightarrow \rho^+ \pi^-$ are respectively $$\begin{aligned}
\bar{\mathcal{A}}_{\pi^+\rho^-}&= A_{\pi\rho}\Bigl(\lambda^{(d)}_u \alpha^{\pi\rho}_2 +
\sum_{p=u,c}\lambda^{(d)}_p\Bigl[ \tilde{\alpha}_4^{p, \pi\rho } + \tilde{\alpha}_{4,EW}^{p, \pi\rho}\nonumber\\
&+ \beta^{p, \pi\rho}_3 + \beta^{p, \pi\rho}_4 - \frac{1}{2}\beta^{p, \pi\rho}_{3,EW} -\frac{1}{2}\beta^{p, \pi\rho}_{4,EW} \Bigl] \Bigl)\nonumber\\
&+ A_{\rho\pi}\Bigl(\lambda^{(d)}_u\beta^{\rho\pi}_1 +
\sum_{p=u,c}\lambda^{(d)}_p\Bigl[\beta^{p, \rho\pi}_4 + \beta^{p, \rho\pi}_{4,EW} \Bigl]\Bigl),\nonumber\\
\bar{\mathcal{A}}_{\rho^+ \pi^-}&= A_{\rho\pi}\Bigl(\lambda^{(d)}_u \alpha^{\rho\pi}_2 +
\sum_{p=u,c}\lambda^{(d)}_p\Bigl[ \tilde{\alpha}_4^{p, \rho\pi} + \tilde{\alpha}_{4,EW}^{p, \rho\pi} + \beta^{p, \rho\pi}_3\nonumber\\
& + \beta_4^{p,\rho\pi} - \frac{1}{2}\beta^{p, \rho\pi}_{3,EW} -\frac{1}{2}\beta^{p, \rho\pi}_{4,EW} \Bigl] \Bigl) \nonumber\\
& + A_{\pi\rho}\Bigl(\lambda^{(d)}_u\beta^{\pi\rho}_1 + \sum_{p=u,c} \lambda^{(d)}_p\Bigl[\beta^{p, \pi\rho}_4 + \beta^{p, \pi\rho}_{4,EW}\Bigl]\Bigl),
\label{eq:pirho}\end{aligned}$$ with $\lambda^{(d)}_{u,c}$ given by Eq. (\[eq:lambmdadef\]). In analogy with $S_{\pi\pi}$, there are also tree level amplitudes given by $\{\alpha_2^{\pi\rho},~\alpha_2^{\rho\pi}\}$, together with QCD and electroweak penguin contributions introduced through $\{\tilde{\alpha}^{p, \pi\rho}_4,~\tilde{\alpha}^{p, \rho\pi}_4\}$ and $\{\tilde{\alpha}^{p, \pi\rho}_{4, EW},~\tilde{\alpha}^{p, \rho\pi}_{4, EW}\}$ respectively. Moreover, the coefficients $\{\beta_{1}^{p, \pi\rho},~\beta_{1}^{p, \rho\pi}\}$ correspond to current-current annihilation, $\{\beta_{3,4}^{p, \pi\rho},~\beta_{3,4}^{p, \rho\pi}\}$ to QCD penguin annihilation and $\{\beta^{p, \pi\rho}_{4, EW},~\beta^{p, \rho \pi}_{4, EW}\}$ to electroweak penguin annihilation. The TAs can be obtained using Eq. (\[eq:alphaGen0\]) and the information provided in Appendix \[Sec:QCDFact\]. Our SM determination of the mixing induced CP asymmetry reads $$\begin{aligned}
S^{\rm SM}_{\pi\rho}= -0.04 \pm 0.08,
\label{eq:Srhopi_value}\end{aligned}$$ which is compatible with the current experimental average [@Amhis:2016xyh] $$\begin{aligned}
S^{\rm Exp}_{\pi\rho}&=0.06 \pm 0.07.\end{aligned}$$
Parameter Relative Error
-------------------------- ----------------
$\gamma$ $142.75\%$
$X_A$ $96.41\%$
$X_H$ $58.85\%$
$|V_{ub}/V_{cb}|$ $46.96\%$
$m_s$ $37.31\%$
$\mu$ $20.58\%$
$a^{\rho}_{2}$ $18.34\%$
$\Lambda^{QCD}_5$ $13.16\%$
$\lambda_B$ $8.27\%$
$A^{B\rightarrow\rho}_0$ $7.06\%$
$a^{\pi}_{2}$ $6.26\%$
$m_b$ $5.22\%$
$F^{B\rightarrow\pi}_+$ $2.19\%$
$|V_{us}|$ $1.38\%$
$f_{\rho}$ $0.93\%$
: Error budget for the observable $S_{\pi\rho}$ (Part I). Here $A^{B\rightarrow \rho}_0$ is the form factor for the transition $B \rightarrow \rho$, and $a^{\rho}_{2}$ is the Gegenbauer moment for the leading twist LCDA for the $\rho$ meson. []{data-label="tab:tableSrhopi1"}
Parameter Relative Error
-------------------- ----------------
$f_{\pi}$ $0.51\%$
$f_B$ $0.26\%$
$f^{\perp}_{\rho}$ $0.23\%$
$|V_{cb}|$ $0.06\%$
$m_c$ $0.02\%$
Total $194.57\%$
: Error budget for the observable $S_{\pi\rho}$ (Part II).[]{data-label="tab:tableSrhopi2"}
The relative errors from each one of the inputs for $S_{\pi\rho}$ are presented in Tables \[tab:tableSrhopi1\] and \[tab:tableSrhopi2\]. It can be seen that this observable is highly sensitive to the CKM input $\gamma$, leading to a relative uncertainty of $\mathcal{O}(100 \%)$. This is related to the fact that in the ratio $\lambda_{\rho\pi}$ given in Eq. (\[eq:ratiosArhopi\]) we have: $$\begin{aligned}
\rm Re \left( \frac{\bar{\mathcal{A}}_{\rho^+ \pi^-}}{\mathcal{A}_{\pi^+\rho^-}} \right)
\approx
\rm Im \left( \frac{\bar{\mathcal{A}}_{\rho^+ \pi^-}}{\mathcal{A}_{\pi^+\rho^-}} \right),\end{aligned}$$ and $$\begin{aligned}
\rm Re \left(\left[\frac{V_{td} V^*_{tb} }{|V_{td}V^*_{tb}|}\right]^2\right)
\approx
- \rm Im \left(\left[\frac{V_{td} V^*_{tb} }{|V_{td}V^*_{tb}|}\right]^2\right) \, ,\end{aligned}$$ which leads to a very strong cancellation in the resulting imaginary component. The allowed NP regions for $\Delta C^{d, uu}_2(M_W)$ are displayed in Fig. \[fig:Srhopi\]. Here we can see how, in spite of having an uncertainty of $\mathcal{O}(100 \%)$, the observable $S_{\pi \rho}$ rules out large sections in the complex plane of $\Delta C^{d, uu}_2(M_W)$ and consequently deserves to be included in the analysis of $C^{d, uu}_2$. In contrast, we find weak bounds for $\Delta C^{d, uu}_1(M_W)$ that are not strong enough to be taken into account. This is explained by the strong dependence of the amplitudes in Eq. (\[eq:pirho\]) on $C^{d, uu}_2(M_W)$, which enters through $\alpha^{\pi \rho}_2$ and $\alpha^{\rho \pi}_2$ as shown in Eq. (\[eq:alphaGen0\]).
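The origin of the large $\gamma$ sensitivity can be made explicit with a toy numerical example. The sketch below is illustrative Python with placeholder numbers, not our actual amplitudes: the amplitude ratio has $\mathcal{R}e \approx \mathcal{I}m$ and the CKM weight has $\mathcal{R}e \approx -\mathcal{I}m$, so the imaginary part of their product nearly vanishes; a small shift of the weak phase then produces a comparatively large change.

```python
# Illustrative sketch with placeholder numbers: cancellation in Im(w * r).
import cmath

r = 0.5 + 0.5j   # hypothetical amplitude ratio with Re(r) ~ Im(r)
w = 0.7 - 0.7j   # hypothetical CKM factor with Re(w) ~ -Im(w)

print((w * r).imag)   # ~ 0: Re(w)*Im(r) + Im(w)*Re(r) cancels

# A small shift of the weak phase (i.e. of gamma) spoils the cancellation,
# which is why S_pi-rho reacts strongly to the CKM angle gamma.
w_shift = w * cmath.exp(0.05j)   # 0.05 rad phase shift, for illustration
print((w_shift * r).imag)        # no longer ~ 0
```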
### $R_{\rho\rho}$ {#sec:Rrhorho}
To obtain extra constraints on NP contributions to the tree level Wilson coefficients for the transition $b\rightarrow u\bar{u}d$ we include the ratio $$\begin{aligned}
R_{\rho\rho} &= &
\frac{\mathcal{B}r\left(B^{-}\rightarrow \rho_L^{-}\rho_L^{0}\right)}
{\mathcal{B}r\left(\bar{B}_d^0\rightarrow\rho_L^{+}\rho_L^{-}\right)}
=\frac{\left|\mathcal{A}_{\rho^-\rho^0}\right|^2}{\left|\mathcal{A}_{\rho^+\rho^-}\right|^2} \, ,
\label{eq:Rrhorhodef}\end{aligned}$$ where $\mathcal{A}_{\rho^-\rho^0} $ and $\mathcal{A}_{\rho^+\rho^-}$ are the amplitudes for the processes $B^-\rightarrow \rho^-_L\rho^0_L$ and $\bar{B}_d^0\rightarrow \rho^+_L\rho^-_L$ respectively. In terms of TAs they can be written as [@Bartsch:2008ps; @Beneke:2006hg] $$\begin{aligned}
\label{eq:A_rho_rho}
\mathcal{A}_{\rho^-\rho^0}&= \frac{A_{\rho\rho}}{\sqrt{2}} \Bigl[ \lambda^{(d)}_u\Bigl(\alpha^{\rho\rho}_{1}+\alpha^{\rho\rho}_{2}\Bigl)
+\frac{3}{2}\sum_{p=u,c}\lambda^{(d)}_{p}\Bigl(\alpha^{p, \rho\rho}_{7}+
\alpha^{p,\rho\rho}_{9}+\alpha^{p,\rho\rho}_{10}\Bigl)\Bigl],\nonumber\\
\mathcal{A}_{\rho^+\rho^-}&=A_{\rho\rho}\Bigl[\lambda^{(d)}_u\Bigl(\alpha^{\rho\rho}_{2} +\beta^{\rho\rho}_{2}\Bigl)+
\sum_{p=u,c}\lambda^{(d)}_p\Bigl(\alpha^{p,\rho\rho}_4 + \alpha^{p,\rho\rho}_{10}\nonumber\\
&+\beta^{p,\rho\rho}_3+2\beta^{p,\rho\rho}_{4}-\frac{1}{2}\beta^{p,\rho\rho}_{3,EW}
+\frac{1}{2}\beta^{p,\rho\rho}_{4,EW} \Bigl)\Bigl].\end{aligned}$$ Here we expect a stronger dependence on $C_1$ compared to $C_2$. As indicated in Eq. (\[eq:A\_rho\_rho\]), in addition to the tree level contributions $\alpha^{\rho\rho}_{1,2}$, we can also identify QCD penguins $\alpha^{\rho\rho}_4$ and electroweak penguins $\alpha^{\rho\rho}_{7, 9, 10}$. Moreover, QCD penguin annihilation topologies enter through $\beta^{p, \rho\rho}_{3,4}$. On the other hand, electroweak penguin annihilation is given by $\beta^{p, \rho\rho}_{3,4, EW}$. The expressions for the topological amplitudes obey the structure indicated in Eq. (\[eq:alphaGen0\]) and can be calculated explicitly using the information provided in Appendix \[Sec:QCDFact\]. Currently, $\alpha^{\rho\rho}_{1,2}$ are available up to NNLO; we introduce these effects following the same procedure used for the determination of $\alpha^{\pi\pi}_{1,2}$. Thus, we apply Eq. (\[eq:repipi\]) under the replacements $\alpha^{\rm NNLO, \pi \pi}_i \rightarrow \alpha^{\rm NNLO, \rho_L \rho_L}_i$, $\alpha^{\rm NLO, \pi\pi}_i \rightarrow \alpha^{\rm NLO, \rho_L\rho_L}_i$ and $\alpha^{\rm NLO, \pi\pi}_{0,i}
\rightarrow \alpha_{0,i}^{\rm NLO, \rho_L\rho_L}$, with $i=1,2$. For the corresponding NNLO components we use [@Bell:2009fm] $$\begin{aligned}
\alpha^{\text{NNLO},\rho_L\rho_L}_{1} &=0.177^{+ 0.025}_{- 0.029} - \Bigl( 0.097^{+ 0.021}_{-0.029}\Bigl)i,\nonumber\\
\alpha^{\text{NNLO},\rho_L\rho_L}_{2} &=1.017^{+ 0.010}_{- 0.011}+\Bigl(0.025^{+0.019}_{-0.013}\Bigl)i.
\label{eq:rhorhoNNLO}\end{aligned}$$ The uncertainty shown in Eq. (\[eq:rhorhoNNLO\]) has its origin in higher order perturbative corrections; we take it as the corresponding renormalization scale uncertainty when treating $\alpha^{\rm NNLO, \rho_L\rho_L}_{1,2}$ as nuisance parameters. Our SM determination for $R_{\rho\rho}$ is $$\begin{aligned}
R^{\rm SM}_{\rho\rho}=\Bigl( 67.50\pm 25.71\Bigl)\cdot 10^{-2}.\end{aligned}$$
Parameter Relative Error
--------------------------- ----------------
$X_A$ $26.40\%$
$X_H$ $23.33\%$
$\lambda_B$ $12.32\%$
$\mu$ $6.78\%$
$A^{B\rightarrow \rho}_0$ $2.54\%$
$a^{\rho}_{2}$ $2.24\%$
$f_{\rho}$ $0.46\%$
$\Lambda^{QCD}_5$ $0.45\%$
$\gamma$ $0.38\%$
$m_b$ $0.27\%$
$f_B$ $0.15\%$
$f^{\perp}_{\rho}$ $0.15\%$
$m_c$ $0.12\%$
$f^{\perp}_{\rho}$ $0.07\%$
$|V_{ub}/V_{cb}|$ $0.02\%$
Total $38.09\%$
: Error budget for the observable $R_{\rho\rho}$.[]{data-label="tab:Rrhorho"}
The experimental result for $R_{\rho\rho}$ is obtained by calculating the ratio of $\mathcal{B}r(B^{-}\rightarrow \rho_L^{-}\rho_L^{0})$ and $\mathcal{B}r(\bar{B}_d^0\rightarrow\rho_L^{+}\rho_L^{-})$ weighted by the corresponding longitudinal polarization fractions $f^{-0}_L$ and $f^{+-}_L$. Using the numerical values available in the PDG [@Tanabashi:2018oca] we obtain $$\begin{aligned}
R_{\rho\rho}^{\rm Exp}&=&\Bigl( 83.14\pm 8.98\Bigl)\cdot 10^{-2}.\end{aligned}$$ The partial contributions to the error budget are presented in Table \[tab:Rrhorho\] and the constraints derived for $\Delta C^{d, uu}_1(M_W)$ in Fig. \[fig:Rrhorho\]. We do not show the associated regions for $\Delta C^{d, uu}_2(M_W)$ because, for $R_{\rho\rho}$, the results are weaker than those derived from other observables in our study.
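As a cross-check of the quoted experimental value, the ratio can be assembled directly from the measured branching fractions and longitudinal polarization fractions. The short Python sketch below uses stand-in numbers of roughly the right size; they are not the exact PDG inputs entering our fit.

```python
# Illustrative sketch: R_rhorho^Exp from polarization-weighted branching ratios.
def r_rhorho_exp(br_m0: float, fL_m0: float, br_pm: float, fL_pm: float) -> float:
    """R = [Br(B- -> rho- rho0) * f_L^{-0}] / [Br(B0bar -> rho+ rho-) * f_L^{+-}]."""
    return (br_m0 * fL_m0) / (br_pm * fL_pm)

# Stand-in numbers of roughly the right size (not the exact PDG inputs):
print(r_rhorho_exp(br_m0=24.0e-6, fL_m0=0.95, br_pm=27.7e-6, fL_pm=0.99))  # ~ 0.83
```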
![ Potential regions for the NP contributions in $\Delta C^{d, uu}_1(M_W)$ allowed by the observable $R_{\rho\rho}$ at $90\%$ C.L.. The bounds on $\Delta C^{d, uu}_2(M_W)$ are very weak and hence not shown. The black point corresponds to the SM value.[]{data-label="fig:Rrhorho"}](Rrhorho_C1.pdf){height="5cm"}
Observables constraining $b\rightarrow c\bar{c} s$ transitions {#sec:bccs}
--------------------------------------------------------------
In this section we study bounds for $\Delta C_{1,2}^{s, cc}(M_W)$ stemming from $\mathcal{B}r(\bar{B}\rightarrow X_s \gamma)$, the mixing observable $\Delta \Gamma_s$, the CKM angle $\sin(2\beta_s)$ and the lifetime ratio $\tau_{B_s}/\tau_{B_d}$. These observables give very constrained regions for $\Delta C_{1,2}^{s, cc}(M_W)$.
### $\bar{B}\rightarrow X_s \gamma$ {#sec:Bsgamma}
The process $\bar{B}\rightarrow X_s \gamma$ is of major interest for BSM studies for several reasons. To begin with, within the SM it is generated mainly at the loop level (its branching fraction actually receives contributions below $0.4\%$ from the tree-level CKM-suppressed transitions $b\rightarrow u\bar{u} s\gamma$ when the energy of the photon is within the phenomenologically relevant range $E_{\gamma}\geq 1.6~\hbox{GeV}$ [@Kaminski:2012eb]). In the HQET, it corresponds to a flavour changing neutral current sensitive to new particles. Additionally, the experimental and theoretical determinations have reached a comparable level of precision. Moreover, this transition is useful to constrain CKM elements involving the top quark.\
The experimental world average for $\mathcal{B}r(\bar{B}\rightarrow X_s \gamma)$ up to date combines measurements from CLEO, Belle and BaBar leading to [@Amhis:2016xyh] $$\begin{aligned}
\mathcal{B}r^{\rm Exp}(\bar{B}\rightarrow X_s \gamma)&=&\Bigl(3.32 \pm 0.15 \Bigl)\cdot 10^{-4}.\end{aligned}$$ On the theoretical side there has been a huge effort on the determination of this observable; the most precise results available are obtained at NNLO. Here we consider [@Czakon:2015exa] $$\begin{aligned}
\mathcal{B}r^{\rm SM}(\bar{B}\rightarrow X_s \gamma)&=&\Bigl(3.36 \pm 0.22 \Bigl)\cdot 10^{-4},
\label{eq:B_SM_NNLO}\end{aligned}$$ where the energy of the photon satisfies the cut $$\begin{aligned}
E_{\gamma}> E_0 = 1.6 \hbox{ GeV}.
\label{eq:PhotonE}\end{aligned}$$ The calculation of the branching ratio for the process $\bar{B}\rightarrow X_s \gamma$ can be written as [@Misiak:2006ab] $$\begin{aligned}
\label{eq:BXsgammafull}
\mathcal{B}r(\bar{B}\rightarrow X_s\gamma)_{E_{\gamma}>E_0}
&=&
\mathcal{B}r(\bar{B}\rightarrow X_c e \bar{\nu})_{\rm exp}
\Bigl|\frac{V^{*}_{ts} V_{tb}}{V_{cb}}\Bigl|^2
\frac{6\alpha_{\rm em}}{\pi C}
\left[P(E_0) + N(E_0)\right].
\nonumber
\\\end{aligned}$$ In Eq. (\[eq:BXsgammafull\]), $P(E_0)$ and $N(E_0)$ denote the perturbative and the non-perturbative contributions to the decay probability, respectively. They depend on the lower cut for the energy of the photon in the Bremsstrahlung correction $E_0$ shown in Eq. (\[eq:PhotonE\]). Using the parameterisation given in Ref. [@Chetyrkin:1996vx] we write $E_0=\frac{m^{1S}_b}{2}\Bigl(1-\delta'\Bigl)$ and choose $\delta'$ such that the lower bound in Eq. (\[eq:PhotonE\]) is saturated. The perturbative contribution $P(E_0)$ is given by [@Misiak:2006ab] $$\begin{aligned}
P(E_0)=\sum^8_{i,j=1} C^{eff}_i(\mu_b) C^{eff*}_j(\mu_b) K_{ij}(E_0, \mu_b)
\label{eq:perturbative}\end{aligned}$$ with $K_{ij}=\delta_{i7}\delta_{j7} + \mathcal{O}(\alpha_s)$. The effective Wilson coefficients $C^{eff}_i$ are expressed in terms of linear combinations of the coefficients for the operators $\hat{Q}^s_i$ ($i=1,\ldots,6$), $\hat{Q}^s_{7\gamma}$ and $\hat{Q}^s_{8g}$ introduced in Section \[sec:Heff\]. For the denominator of Eq. (\[eq:BXsgammafull\]) we have [@Misiak:2006ab] $$\begin{aligned}
C=\Bigl|\frac{V_{ub}}{V_{cb}}\Bigl|^2\frac{\Gamma(\bar{B}\rightarrow X_c e \bar{\nu})}{\Gamma(\bar{B}\rightarrow X_u e \bar{\nu})}.\end{aligned}$$ In order to account for the NNLO result in Eq. (\[eq:B\_SM\_NNLO\]) we write $$\begin{aligned}
\mathcal{B}r(\bar{B}\rightarrow X_s \gamma) &=&
\mathcal{B}r^{\rm SM, ~ NNLO}(\bar{B} \rightarrow X_s \gamma) \cdot
\frac{\mathcal{B}r^{\rm NLO}(\bar{B}\rightarrow X_s \gamma)(\mu_0)}{\mathcal{B}r^{(0)~\rm SM,~ NLO}_{0}(\bar{B} \rightarrow X_s \gamma )}
\, .
\nonumber
\\
\label{eq:BXsgammaNNLO}\end{aligned}$$
where
- $\mathcal{B}r^{\hbox{NLO}}(\bar{B}\rightarrow X_s \gamma)$ is the branching ratio for the process $\bar{B}\rightarrow X_s \gamma$ calculated at NLO including NP effects from $\Delta C^{s, cc}_2(M_W)$. All inputs are allowed to float except the renormalisation scale, which is fixed at $\mu_0=m_b$. Our calculation uses the anomalous dimension matrices provided in [@Chetyrkin:1996vx]. NP contributions are introduced according to Eq. (\[eq:NPC12\]). They propagate to the rest of the Wilson coefficients $C_{i}$ after applying the renormalisation group equations, described in Section 2 of Ref. [@Chetyrkin:1996vx].
- $\mathcal{B}r^{\hbox{SM, NLO}}_{0}(\bar{B}\rightarrow X_s \gamma)$ is the SM branching ratio for the process $\bar{B}\rightarrow X_s \gamma$ calculated at NLO and evaluated at the central values of all the input parameters and then kept constant during the $\chi^2$-fit.
- $\mathcal{B}r^{\hbox{SM, NNLO}}(\bar{B}\rightarrow X_s \gamma)$ is the SM branching ratio for the process $\bar{B}\rightarrow X_s \gamma$ calculated at NNLO and allowed to float within the uncertainty associated with the renormalisation scale. In the case of the theoretical result given in Eq. (\[eq:B\_SM\_NNLO\]) this corresponds to $3\%$ of the central value [@Czakon:2015exa].
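Schematically, the rescaling in Eq. (\[eq:BXsgammaNNLO\]) amounts to multiplying the fixed NNLO SM value by the NLO NP-over-SM ratio. The following Python sketch illustrates this bookkeeping only; the numbers are placeholders, not the output of the actual NLO evaluation, which requires the full running of the Wilson coefficients described above.

```python
# Illustrative sketch of the rescaling in Eq. (eq:BXsgammaNNLO); all numbers
# below are placeholders in units of 1e-4, not the result of our NLO code.
def br_bxsgamma(br_sm_nnlo: float, br_nlo_np: float, br_nlo_sm_fixed: float) -> float:
    """Rescale the SM NNLO branching ratio by the NLO (NP)/(SM, fixed inputs) ratio."""
    return br_sm_nnlo * br_nlo_np / br_nlo_sm_fixed

br_sm_nnlo = 3.36       # central SM NNLO value of Eq. (eq:B_SM_NNLO)
br_nlo_sm_fixed = 3.30  # placeholder: NLO SM value at central inputs
br_nlo_np = 3.10        # placeholder: NLO value including a Delta C_2 shift
print(br_bxsgamma(br_sm_nnlo, br_nlo_np, br_nlo_sm_fixed))  # ~ 3.16
```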
The partial contributions to the final error are described in Table \[tab:errorBXsgamma\]. The allowed regions for $\Delta C^{s, cc}_1(M_W)$ and $\Delta C^{s, cc}_2(M_W)$ are shown in Fig. \[fig:Bs\_gamma\], where it can be seen how this observable imposes strong constraints on $\Delta C^{s, cc}_2(M_W)$. The bounds in Fig. \[fig:Bs\_gamma\] are consistent with those reported in [@Jager:2019bgk] once a $68\%$ C.L. is taken into account.
Parameter Relative error
----------------------------------------------------- ----------------
$ N(E_0)$ $5.00\%$
$\mu$ $3.00\%$
$\mathcal{B}r(\bar{B}\rightarrow X_c e\bar{\nu}_e)$ $2.68\%$
$m_c(m_c)$ $1.10\%$
$m^{1S}_b$ $0.61\%$
$\Lambda^{QCD}_5$ $0.26\%$
$\gamma$ $0.10\%$
$|V_{ub}/V_{cb}|$ $0.04\%$
$|V_{us}|$ $0.01\%$
Total $6.55\%$
: Error budget for the observable $\mathcal{B}r(\bar{B}\rightarrow X_s \gamma)$. Here $N(E_0)$ determines the uncertainty arising from non-perturbative contributions. []{data-label="tab:errorBXsgamma"}
![ Potential regions for the NP contributions in $\Delta C^{s, cc}_1(M_W)$ and $\Delta C^{s, cc}_2(M_W)$ allowed by the observable $\mathcal{B}r(\bar{B}\rightarrow X_s \gamma)$ at $90\%$ C.L.. The black point corresponds to the SM value.[]{data-label="fig:Bs_gamma"}](Bs_gamma_C1_full_float.pdf "fig:"){height="5cm"} ![ Potential regions for the NP contributions in $\Delta C^{s, cc}_1(M_W)$ and $\Delta C^{s, cc}_2(M_W)$ allowed by the observable $\mathcal{B}r(\bar{B}\rightarrow X_s \gamma)$ at $90\%$ C.L.. The black point corresponds to the SM value.[]{data-label="fig:Bs_gamma"}](Bs_gamma_C2_full_float.pdf "fig:"){height="5cm"}
### $\Delta \Gamma_s$: Bounds and SM update {#subsec:DGs}
The decay rate differences $\Delta \Gamma_{q}$ and the semileptonic asymmetries $a^{q}_{sl}$ arising from neutral $B_q$ meson mixing are sensitive to the tree-level transitions $b\rightarrow u\bar{u} q$, $b\rightarrow u\bar{c} q$, $b\rightarrow c\bar{u} q$ and $b\rightarrow c\bar{c} q$ for $q=s,d$. We will, however, show below that for the decay rate difference of $B_s$-mesons our BSM study is completely dominated by the $b \to c \bar{c} s$ transition, therefore yielding strong constraints on $\Delta C^{s, cc}_1(M_W)$ and $\Delta C^{s, cc}_2(M_W)$.\
The definitions of the observables $\Delta \Gamma_{q}$ and $a^{q}_{sl}$ in terms of $\Gamma^q_{12}/M^q_{12}$ were introduced in Eqs. (\[eq:dGammaq\]) and (\[eq:aslq\]). Since, as explained in Section \[sec:HQE\], the elements $\Gamma^q_{12}$ are determined from the double insertion of $\mathcal{H}_{eff}^{|\Delta B|=1}$ Hamiltonians, there are leading order contributions originating from the insertion of two current-current operators $\hat{Q}^{ab,q}_{j}$ for $ab= uu, uc, cc$ and $j=1, 2$, see Eq. (\[eq:mainbasis\]). Additionally, there are also double insertions from a single current-current operator $\hat{Q}^{ab}_{1,2}$ and a penguin operator $\hat{Q}_{3,4,5,6}$. In this section, we will only include NP effects in $\Gamma^q_{12}$, while we neglect tree level NP contributions to $M^q_{12}$ (these contributions are discussed in Section \[sec:sin2betaM12\] and they yield considerably weaker bounds for the observables $\Delta \Gamma_q$ and $a^q_{sl}$). To show the dominance of the $b \to c \bar{c} s$ contribution for $B_s$-mixing, we decompose $\Gamma^q_{12}$ into partial contributions $\Gamma^{q, ab}_{12}$, where the indices $ab=uu, uc, cc$ indicate which “up”-type quarks are included inside the corresponding effective fermionic loops. Thus, the expression for $\Gamma^q_{12}/M_{12}^q$ becomes $$\begin{aligned}
\frac{ \Gamma^q_{12}}{M_{12}^q}
&=&
- \frac{ \left(\lambda^{(q)}_c\right)^2\Gamma^{q, cc}_{12} + 2 \lambda^{(q)}_u \lambda^q_c \Gamma^{q, uc}_{12}
+ \left(\lambda^{(q)}_u\right)^2 \Gamma^{q, uu}_{12} }{{M}_{12}^q}
\nonumber
\\
&=&
- \frac{
(\lambda^{(q)}_t)^2 \Gamma^{q, cc}_{12} + 2 \lambda^{(q)}_t \lambda^q_u \Bigl[\Gamma^{q, cc}_{12} - \Gamma^{q, uc}_{12}\Bigl]
+(\lambda^{(q)}_u)^2 \Bigl[ \Gamma_{12}^{q, cc} - 2 \Gamma_{12}^{q, uc} + \Gamma^{q, uu}_{12} \Bigl]
}
{(\lambda_t^{(q)})^2 \tilde{M}_{12}^q}
\nonumber
\\
&=& - 10^{-4} \left[ c^q + a^q \frac{\lambda_u^{(q)}}{\lambda_t^{(q)}} + b^q \left( \frac{\lambda_u^{(q)}}{\lambda_t^{(q)}} \right)^2 \right]
.\nonumber\\
\label{Eq:Gamma_d_cont}\end{aligned}$$ We have used here the unitarity of the CKM matrix, $\lambda_u^{(q)} + \lambda_c^{(q)} + \lambda_t^{(q)} = 0$, and we have split off the CKM dependence from ${M}_{12}^q$ by introducing the quantity $\tilde{M}_{12}^q$. The GIM suppressed [@Glashow:1970gm] terms $a^q$ and $b^q$ vanish in the limit $m_c \to m_u$, and the numerical values show a clear hierarchy $$\begin{aligned}
c^q \approx - 48\, , && a^q \approx 11 \, , \hspace{1cm} b^q \approx 0.23 \, .
\end{aligned}$$ For the ratio of CKM elements we obtain $$\begin{aligned}
\frac{\lambda_u^{(q)}}{\lambda_t^{(q)}} & \approx & \left\{
\begin{array}{cc}
1.7 \cdot 10^{-2} - 4.2 \cdot 10^{-1} I & \mbox{for} \, \, \,q=d
\\
-8.8 \cdot 10^{-3} + 1.8 \cdot 10^{-2} I & \mbox{for} \, \, \, q=s
\end{array}
\right.
\\
\left(\frac{\lambda_u^{(q)}}{\lambda_t^{(q)}}\right)^2 & = & \left\{
\begin{array}{cc}
-1.8 \cdot 10^{-1} - 1.5 \cdot 10^{-2} I & \mbox{for} \, \, \, q=d
\\
-2.5 \cdot 10^{-4} - 3.2 \cdot 10^{-4} I & \mbox{for} \, \, \, q=s
\end{array}
\right.\end{aligned}$$ Within the SM we find a very strong hierarchy of the three contributions in Eq. (\[Eq:Gamma\_d\_cont\]). The by far largest term is given by $c^q$ and it is real. The second term proportional to $a^q$ is GIM and CKM suppressed - slightly for the case of $B_d$ mesons and more strongly for $B_s$. Since $\lambda_u^{(q)}/\lambda_t^{(q)}$ is complex, this contribution gives rise to an imaginary part of $\Gamma^q_{12}/M_{12}^q$. Finally, $b^q$ is even further GIM suppressed and again slightly/strongly CKM suppressed for $B_d$/$B_s$ mesons - this contribution has both a real and an imaginary part. According to Eq. (\[eq:dGammaq\]), the decay rate difference $\Delta \Gamma_q$, given by the real part of $\Gamma^q_{12}/M_{12}^q$, is dominated by the coefficient $c^q$ - stemming from $b \to c \bar{c} q$ transitions - and the coefficients $a^q$ and $b^q$ yield corrections of the order of 2 per mille. The semi-leptonic asymmetries are given by the imaginary part of $\Gamma^q_{12}/M_{12}^q$ (cf. Eq. (\[eq:aslq\])), which in turn is dominated by the coefficient $a^q$, with $b^q$ giving sub-per mille corrections and no contribution from $c^q$.\
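This hierarchy can be checked numerically from Eq. (\[Eq:Gamma\_d\_cont\]) alone. The sketch below is illustrative Python that combines the approximate coefficients $c^s$, $a^s$, $b^s$ with the quoted CKM ratio for $q=s$; the real part, which sets $\Delta \Gamma_s$, is driven almost entirely by $c^s$, while the imaginary part, which sets $a^s_{sl}$ (up to the sign convention of Eq. (\[eq:aslq\])), is driven by $a^s$.

```python
# Illustrative sketch of Eq. (Eq:Gamma_d_cont) for q = s, using the approximate
# coefficients and CKM ratio quoted in the text.
c_s, a_s, b_s = -48.0, 11.0, 0.23
lu_over_lt = -8.8e-3 + 1.8e-2j          # lambda_u^(s)/lambda_t^(s)

ratio = -1e-4 * (c_s + a_s * lu_over_lt + b_s * lu_over_lt**2)

print(ratio.real)   # ~ 4.8e-3: dominated by c_s, controls Delta Gamma_s
print(ratio.imag)   # ~ 2e-5 in magnitude: driven by a_s, controls a_sl^s
```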
Allowing new, complex contributions to $C_1$ and $C_2$ for individual quark-level transitions, we get the following effects:
1. The numerically leading coefficient $c^q$ can now also obtain an imaginary part.
2. The GIM cancellations in the coefficients $a^q$ and $b^q$ can be broken, if $b \to c \bar{c} q$, $b \to c \bar{u} q$ , $b \to u \bar{c} q$ and $b \to u \bar{u} q$ are differently affected by NP. If there is a universal BSM contribution then the GIM cancellation will stay.
3. The CKM suppression will not be affected by our BSM modifications.
For the real part of $\Gamma^s_{12}/M_{12}^s$, we expect at most a correction of 2 per cent due to $a^s$ and $b^s$, even if the corresponding GIM suppression is completely lifted - thus $\Delta \Gamma_s$ is, even in our BSM approach, completely dominated by $c^s$ and therefore only gives bounds on $b \to c \bar{c} s$. In the case of $B_d$ mesons, the corrections due to $a^d$ and $b^d$ could be as large as 40 per cent - here all possible decay channels have to be taken into account, unless we are considering universal BSM contributions to all decay channels. Since $\Delta \Gamma_d$ is not yet measured, we will invert our strategy and use the obtained bounds on the Wilson coefficients $C_1$ and $C_2$ to obtain potential enhancements or reductions of $\Delta \Gamma_d$ due to BSM effects in non-leptonic tree-level decays. Considering the imaginary part of $\Gamma^s_{12}/M_{12}^s$, we can get dramatically enhanced values for the semi-leptonic CP asymmetries, if $C_1$ or $C_2$ are complex, which will result in an imaginary part of the GIM-unsuppressed coefficient $c^q$. On the other hand, new contributions to e.g. only $b \to c \bar{u} q$ or $b \to u \bar{c} q$ would have no effect on $c^q$, but they could lift the GIM suppression of the coefficient $a^q$ and thus also lead to large effects. Therefore the semileptonic CP asymmetries are not completely dominated by the $b \to c \bar{c} q$ transitions.\
Next we explain in detail how to implement BSM contributions to $C_1$ and $C_2$ in the theoretical description of $\Gamma_{12}^q$. Each of the functions $\Gamma^{q, ab}_{12}$ in Eq. (\[Eq:Gamma\_d\_cont\]) is given by [@Lenz:2006hd] $$\begin{aligned}
\label{eq:Gamma12}
\Gamma^{q, ab}_{12}
&=& \frac{G^2_F m^2_b}{24 \pi M_{Bs}} \Bigl[ \Bigl(G^{q, ab} + \frac{1}{2}\alpha_2 G^{q, ab}_{S} \Bigl)
\langle B_q |\hat{Q}_1 |\bar{B}_q \rangle +
\alpha_1 G^{q, ab}_S \langle B_q| \hat{{Q}}_3 |\bar{B}_q\rangle\Bigl] + \tilde{\Gamma}^{q, ab}_{12, 1/m_b}.\nonumber\\ \end{aligned}$$ The coefficients $\alpha_1$ and $\alpha_2$ in Eq. (\[eq:Gamma12\]) include NLO corrections and are written in the $\overline{\hbox{MS}}$ scheme as $$\begin{aligned}
\alpha_1=1+\frac{\alpha_s(\mu)}{4\pi}C_F\Bigl(12\ln\frac{\mu}{m_b} + 6 \Bigl),&&
\alpha_2=1+\frac{\alpha_s(\mu)}{4\pi}C_F\Bigl(12\ln\frac{\mu}{m_b} + \frac{13}{2}\Bigl).\nonumber\\\end{aligned}$$ Furthermore, the expressions for $G^{q, ab}$ and $G^{q, ab}_S$ in Eq. (\[eq:Gamma12\]) are decomposed as $$\begin{aligned}
G^{q, ab}= F^{q, ab} + P^{q, ab}, && G_S^{q, ab}= - F_S^{q, ab} - P_S^{q, ab},\end{aligned}$$ with $F^{q, ab}$ and $F^{q, ab}_S$ encoding the perturbative contributions resulting from the double insertion of current-current operators. Finally, $P^{q, ab}$ and $P^{q, ab}_S$ contain the perturbative effects from the combined insertion of a current-current and a penguin operator. In terms of the tree-level Wilson coefficients $C^{q, ab}_1$ and $C^{q, ab}_2$, the equations for $F^{q, ab}$ and $F^{q, ab}_S$ have the following generic structure $$\begin{aligned}
F^{q, ab}=F^{q, ab}_{11}\left[ C^{q, ab}_1(\mu) \right]^2 + F^{q, ab}_{12}C^{q, ab}_{1}(\mu)C^{q, ab}_{2}(\mu) + F^{q, ab}_{22} \left[ C^{q, ab}_2(\mu) \right]^2,
\label{eq:GenF}\end{aligned}$$ where the individual factors $F^{q, ab}_{11, 12, 22}$ are available in the literature up to NLO $$\begin{aligned}
F^{q, ab}_{ij}=F^{q, (0)}_{ij} + \frac{\alpha_s(\mu)}{4\pi}F^{q, (1)}_{ij}.\end{aligned}$$ To account for NP effects, the Wilson coefficients inside Eq. (\[eq:GenF\]) should be determined using Eq. (\[eq:NPC12\]) and applying the renormalization group equations introduced in Sec. \[sec:basic\]. Notice that Eq. (\[eq:GenF\]) is sensitive to the different transitions $b\rightarrow c\bar{c} q$, $b\rightarrow u\bar{c} q$, $b\rightarrow c\bar{u} q$ and $b\rightarrow u\bar{u} q$. To be consistent with the inclusion of NP effects $\Delta C^{q, ab}_1(M_W)$ and $\Delta C^{q, ab}_2(M_W)$ at LO only, we omit all the terms involving products between $\alpha_s(\mu)$ and the NP factors $\Delta C^{q, ab}_{1,2}(M_W)$ inside Eq. (\[eq:GenF\]). The penguin functions $P^{q,ab}$ and $P^{q,ab}_S$ also contain LO contributions from $C^{q, ab}_{1,2}$. For the purposes of illustration we will show the explicit expressions for the functions $P^{s, cc}$ and $P^{s, cc}_S$ corresponding to the $B^0_s-\bar{B}^0_s$ system. At NLO we have [@Beneke:1998sy] $$\begin{aligned}
\label{eq:FPFPS}
P^{s, cc}&=&\sqrt{1-4\bar{z}}\Bigl[(1-\bar{z})K^{' cc}_1(\mu) + \frac{1}{2}(1-4\bar{z})K^{' cc}_2(\mu) + 3 \bar{z} K^{' cc}_3(\mu) \Bigl] \nonumber\\
&&+ \frac{\alpha_s(\mu)}{4\pi}F^{cc}_p(\bar{z})\Bigl[C^{s, cc}_2(\mu)\Bigl]^2,\nonumber\\
P^{s, cc}_S&=&\sqrt{1-4\bar{z}}\Bigl[1+ 2 \bar{z}\Bigl]\Bigl[K^{' cc}_1(\mu) - K^{' cc}_2(\mu)\Bigl]-\frac{\alpha_s(\mu)}{4\pi}8F^{cc}_p(\bar{z})\Bigl[C^{s, cc}_2(\mu)\Bigl]^2. \nonumber\\\end{aligned}$$ Here the following definition for the ratio of the masses of the charm and bottom quarks, evaluated in the $\overline{\hbox{MS}}$ scheme [@Lenz:2006hd], has been used $$\begin{aligned}
\bar{z}&=& \Bigl[\overline{m}_c(\overline{m}_b)/\overline{m}_b(\overline{m}_b)\Bigl]^2.\end{aligned}$$ The functions $K^{' cc}_{1,2,3}$ inside Eq. (\[eq:FPFPS\]) are given by $$\begin{aligned}
\label{eq:Kfunctions}
K^{' cc}_1(\mu)&=&2\Bigl[3 C^{s, cc}_1(\mu) C^{s, cc}_3(\mu) + C^{s, cc}_1(\mu) C^{s, cc}_4(\mu) + C^{s, cc}_2(\mu) C^{s, cc}_3(\mu)\Bigl],\nonumber\\
K^{' cc}_2(\mu)&=&2C^{s, cc}_2(\mu) C^{s, cc}_4(\mu),\nonumber\\
K^{' cc}_3(\mu)&=&2\Bigl[3 C^{s, cc}_1(\mu) C^{s, cc}_5(\mu) + C^{s, cc}_1(\mu) C^{s, cc}_6(\mu) + C^{s, cc}_2(\mu) C^{s, cc}_5(\mu) +C^{s, cc}_2(\mu) C^{s, cc}_6(\mu)\Bigl],\nonumber\\\end{aligned}$$ and the expression for the NLO correction function $F^{cc}_p(\bar{z})$ is $$\begin{aligned}
F^{cc}_p(\bar{z})&=&-\frac{1}{9}\sqrt{1-4\bar{z}}\Bigl(1+2\bar{z}\Bigl)\Bigl[2\hbox{ln}\frac{\mu}{m_b} + \frac{2}{3} + 4\bar{z} - \hbox{ln}\bar{z} \nonumber\\
&&+ \sqrt{1-4\bar{z}}\Bigl(1 + 2\bar{z} \Bigl)\hbox{ln}\frac{1-\sqrt{1-4\bar{z}}}{1+\sqrt{1-4\bar{z}}} + \frac{3C_8(\mu)}{C^{s, cc}_2(\mu)} \Bigl].\end{aligned}$$ The Wilson coefficients inside Eq. (\[eq:Kfunctions\]) should be calculated by introducing NP deviations at the scale $\mu=M_W$ and then running down their corresponding values to the scale $\mu\sim m_b$ through the renormalization group equations; for details see the discussion in Sec. \[sec:basic\]. In Appendix \[Sec:Inputs\], we provide details on the numerical inputs used. Since there was tremendous progress [@King:2019rvk; @DiLuzio:2019jyq] in the theoretical precision of the mixing observables, we will present in this work numerical updates of all mixing observables: $\Delta \Gamma_q$ below, $\Delta M_q$ in Section \[sec:sin2betaM12\] and the semi-leptonic CP asymmetries $a_{sl}^q$ and $\phi_q$ in Section \[sec:multiple\_channels\]. For our numerical analysis we use results for $\Gamma_{12,3}^{q,(0)}$, $\Gamma_{12,3}^{q,(1)}$ and $\Gamma_{12,4}^{q,(0)}$ from [@Beneke:1998sy; @Beneke:2002rj; @Beneke:1996gn; @Dighe:2001gc; @Ciuchini:2003ww; @Beneke:2003az; @Lenz:2006hd] and for the hadronic matrix elements the averages presented in [@DiLuzio:2019jyq] based on [@Grozin:2016uqy; @Kirk:2017juj; @King:2019lal] and [@Christ:2014uea; @Bussone:2016iua; @Hughes:2017spc; @Bazavov:2017lyh], as well as the dimension seven matrix elements from [@Davies:2019gnp]. The new SM determinations for $\Delta \Gamma_s$ and $\Delta \Gamma_d$ are $$\begin{aligned}
\Delta \Gamma^{\rm SM}_s
&=& \Bigl(9.1 \pm 1.3 \Bigl)\cdot 10^{-2}~\hbox{ps}^{-1},
\\
\Delta \Gamma^{\rm SM}_d
&=& \Bigl( 2.6 \pm 0.4\Bigl)\cdot 10^{-3}~\hbox{ps}^{-1}.\end{aligned}$$ The error budgets of the mixing observables $\Delta \Gamma_s$ and $\Delta \Gamma_d$ are presented in Tabs. \[error:DGs\] and \[error:DGd\] respectively. Compared to the SM estimates for $\Delta \Gamma_s$ stemming from 2006 [@Lenz:2006hd], 2011 [@Lenz:2011ti] and 2015 [@Artuso:2015swg] we find a huge improvement in the SM precision.
$\Delta \Gamma_s^{\rm SM} $ $ \mbox{this work} $ $\mbox{ABL 2015} $ $\mbox{LN 2011} $ ${\mbox{LN 2006}}$
----------------------------- ------------------------------ ------------------------------ ---------------------------- ----------------------------
$\mbox{Central Value} $ $ 0.091 \, \mbox{ps}^{-1} $ $ 0.088 \, \mbox{ps}^{-1} $ $0.087 \, \mbox{ps}^{-1} $ $ 0.096 \, \mbox{ps}^{-1}$
$B^s_{\widetilde R_2}$ $ 10.9 \% $ $ 14.8 \% $ $17.2 \% $ $15.7 \%$
$\mu $ $ 6.6 \% $ $ 8.4 \% $ $ 7.8 \% $ $13.7 \%$
$V_{cb} $ $ 3.4 \% $ $ 4.9 \% $ $ 3.4 \% $ $ 4.9 \%$
$B^s_{R_0} $ $3.2 \%$ $ 2.1 \% $ $ 3.4 \% $ $ 3.0 \%$
$f_{B_s} \sqrt{B^s_1}$ $ 3.1 \% $ $ 13.9 \% $ $13.5 \% $ $34.0 \%$
$B^s_3$ $ 2.2 \% $ $ 2.1 \% $ $ 4.8 \% $ $ 3.1 \%$
$\bar z $ $0.9 \%$ $ 1.1 \% $ $ 1.5 \% $ $ 1.9 \%$
$m_b $ $0.9 \%$ $ 0.8 \% $ $ 0.1 \% $ $ 1.0 \%$
$B^s_{R_3} $ $0.5 \%$ $ 0.2 \% $ $ 0.2 \% $ $ ---$
$B^s_{\tilde{R}_3} $ - $ 0.6 \% $ $ 0.5 \% $ $ ----$
$m_s $ $0.3 \%$ $ 0.1 \% $ $ 1.0 \% $ $ 1.0 \%$
$B^s_{\tilde{R}_1}$ $0.2 \%$ $ 0.7\% $ $ 1.9 \% $ $ ---$
$\Lambda_5^{\rm QCD} $ $ 0.1 \% $ $ 0.1 \% $ $ 0.4 \% $ $ 0.1 \%$
$\gamma $ $ 0.1 \% $ $ 0.1 \% $ $ 0.3 \% $ $ 1.0 \%$
$B^s_{R_1} $ $ 0.1 \% $ $ 0.5 \% $ $ 0.8 \% $ $ ---$
$|V_{ub}/V_{cb}| $ $ 0.1 \% $ $ 0.1 \% $ $ 0.2 \% $ $ 0.5 \%$
$\bar{m}_t(\bar{m}_t) $ $ 0.0 \% $ $ 0.0 \% $ $ 0.0 \% $ $ 0.0 \%$
Total $14.1 \% $ $22.8 \% $ $ 24.5 \% $ $40.5 \%$
: List of the individual contributions to the theoretical error of the decay rate difference $\Delta \Gamma_s$ within the Standard Model and comparison with the values obtained in 2015 [@Artuso:2015swg], in 2011 [@Lenz:2011ti] and in 2006 [@Lenz:2006hd]. We have used equations of motion in the current analysis to get rid of the operator $\tilde{R}_3$.[]{data-label="error:DGs"}
In addition, the current SM predictions are based for the first time on a non-perturbative determination [@Davies:2019gnp] of the dimension seven matrix elements, which give the leading uncertainty. All previous predictions had to rely on the vacuum insertion approximation for these matrix elements. To further reduce the theory uncertainties, improvements in the lattice determination, or a corresponding sum rule calculation, would be very welcome. The next important uncertainty stems from the renormalisation scale dependence; to reduce it, an NNLO calculation is necessary. First steps in that direction have been taken in [@Asatrian:2017qaz].
$ \Delta \Gamma_d^{\rm SM} $ This work ABL 2015
------------------------------- ----------------------------------------- -----------------------------------------
$\mbox{Central Value}$ $2.61 \cdot 10^{-3} \, \mbox{ps}^{-1} $ $2.61 \cdot 10^{-3} \, \mbox{ps}^{-1} $
$B^d_{\widetilde R_2} $ $11.1\% $ $14.4\% $
$f_{B_d} \sqrt{B_1^d} $ $3.6\%$ $13.7\%$
$\mu $ $ 6.7\% $ $ 7.9\% $
$V_{cb} $ $ 3.4\%$ $ 4.9\%$
$B_3^d $ $ 2.4\% $ $ 4.0\% $
$B_{R_0}^d $ $ 3.3\% $ $ 2.5\% $
$\bar z $ $ 0.9\% $ $ 1.1\% $
$m_b $ $ 0.9\% $ $ 0.8\% $
$\tilde{B}_{{R}_3}^d $ - $ 0.5\% $
$B_{{R}_3}^d $ $ 0.5\% $ $ 0.2\% $
$\gamma $ $ 2.2 \%$ $ 2.5\%$
$\Lambda_5^{\rm QCD} $ $ 0.1 \% $ $ 0.1 \% $
$|V_{ub}/V_{cb}| $ $ 0.0 \% $ $ 0.1 \% $
$\bar{m}_t(\bar{m}_t)$ $ 0.0 \% $ $ 0.0 \% $
Total $14.7\%$ $22.7\%$
: List of the individual contributions to the theoretical error of the mixing quantity $\Delta \Gamma_d$ and comparison with the values obtained in 2015 [@Artuso:2015swg]. We have used equations of motion in the current analysis to get rid of the operator $\tilde{R}_3$.[]{data-label="error:DGd"}
In the ratio $\Delta \Gamma_q/\Delta M_q$ uncertainties due to the matrix elements of dimension six cancel - so for a long time this ratio was considerably better known than the individual value of $\Delta \Gamma_s$. Due to the huge progress in determining precise values for these non-perturbative parameters, this advantage is now considerably less pronounced, see Table \[error4\].
-----------------------------------------------------------------------------------------------------------------------------------------------------
$\Delta \Gamma_s^{\rm SM} / \Delta M_s^{\rm SM} $      $ \mbox{this work} $    $ \mbox{ABL 2015}$      $ \mbox{LN 2011} $     $ {\mbox{LN 2006}}$
----------------------------------------------------- ----------------------- ----------------------- ---------------------- ------------------------
$\mbox{Central Value} $ $48.2 \cdot 10^{-4}$ $48.1 \cdot 10^{-4}$ $50.4 \cdot 10^{-4}$ $ 49.7 \cdot 10^{-4}$
$B_{R_2}^s $ $ 10.9 \% $ $ 14.8 \% $ $ 17.2 \% $ $ 15.7 \%$
$\mu $ $ 6.6 \% $ $ 8.4 \% $ $ 7.8 \% $ $ 9.1 \%$
$B_{R_0}^s $ $ 3.2 \% $ $ 2.1 \% $ $ 3.4 \% $ $ 3.0 \%$
$B_3^s$ $ 2.2 \% $ $ 2.1 \% $ $ 4.8 \% $ $ 3.1 \%$
$\bar z $ $ 0.9 \% $ $ 1.1 \% $ $ 1.5 \% $ $ 1.9 \%$
$m_b $ $ 0.9 \% $ $ 0.8 \% $ $ 1.4 \% $ $ 1.0 \%$
$B_{R_3}^s $ $ 0.5 \% $ $ 0.2 \% $ $ 0.2 \% $ $ ---$
$B_{\tilde{R}_3}^s$ $ - $ $ 0.6 \% $ $ 0.5 \% $ $ ----$
$m_t $ $ 0.3 \% $ $ 0.7 \% $ $ 1.1 \% $ $ 1.8 \%$
$m_s $ $ 0.3 \% $ $ 0.1 \% $ $ 1.0 \% $ $ 0.1 \%$
$\Lambda_5^{\rm QCD} $ $ 0.2 \% $ $ 0.2 \% $ $ 0.8 \% $ $ 0.1 \%$
$B_{\tilde{R}_1}^s$ $ 0.2 \% $ $ 0.7 \% $ $ 1.9 \% $ $ ---$
$B_{R_1}^s $ $ 0.1 \% $ $ 0.5 \% $ $ 0.8 \% $ $ ---$
$\gamma $ $ < 0.1 \% $ $ 0.0 \% $ $ 0.0 \% $ $ 0.1 \%$
$|V_{ub}/V_{cb}|$ $ < 0.1 \% $ $ 0.0 \% $ $ 0.0 \% $ $ 0.1 \%$
$V_{cb} $ $ < 0.1 \% $ $ 0.0 \% $ $ 0.0 \% $ $ 0.0 \%$
Total $ 13.4 \% $ $ 17.3 \% $ $ 20.1 \% $ $ 18.9 \%$
-----------------------------------------------------------------------------------------------------------------------------------------------------
: List of the individual contributions to the theoretical error of the ratio $\Delta \Gamma_s$/$\Delta M_s$ within the Standard Model and comparison with the values obtained in 2015 [@Artuso:2015swg], in 2011 [@Lenz:2011ti] and in 2006 [@Lenz:2006hd]. We have used equations of motion in the current analysis to get rid of the operator $\tilde{R}_3$.[]{data-label="error4"}
For the corresponding experimental values we use the HFLAV averages $$\begin{aligned}
\Delta \Gamma^{\rm Exp}_s&=&\Bigl(8.8 \pm 0.6\Bigl)\cdot 10^{-2}~\hbox{ps}^{-1}, \hbox{\cite{Amhis:2016xyh}}\nonumber\\
\Delta \Gamma^{\rm Exp}_d&=&\Bigl(-1.32 \pm 6.58\Bigl)\cdot 10^{-3}\hbox{~ps}^{-1},
\label{eq:DGammaExp}\end{aligned}$$ where $\Delta \Gamma^{Exp}_d$ was obtained using [@Amhis:2016xyh] $$\begin{aligned}
\Bigl(\Delta \Gamma_d/\Gamma_d\Bigl)^{\rm Exp}=-0.002\pm 0.010, &&
\tau^{\rm Exp}_{B^0_d}=\Bigl(1.520 \pm 0.004\Bigl) \hbox{~ps}.\end{aligned}$$ The resulting regions for $\Delta C^{s, cc}_1(M_W)$ and $\Delta C^{s, cc}_2(M_W)$ allowed by $\Delta \Gamma_s$ are presented in Fig. \[fig:dGammas\].
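The experimental number for $\Delta \Gamma_d$ quoted in Eq. (\[eq:DGammaExp\]) follows from these two inputs by a simple division; the short Python snippet below reproduces it.

```python
# Numerical check of Eq. (eq:DGammaExp): Delta Gamma_d^Exp from the measured
# (Delta Gamma_d / Gamma_d)^Exp and the B0_d lifetime.
dG_over_G, dG_over_G_err = -0.002, 0.010
tau_Bd = 1.520   # ps

print(dG_over_G / tau_Bd, dG_over_G_err / tau_Bd)   # ~ -1.32e-3, 6.58e-3 ps^-1
```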
![ Potential regions for the NP contributions in $\Delta C^{s, cc}_1(M_W)$ and $\Delta C^{s, cc}_2(M_W)$ allowed by the observable $\Delta \Gamma_s$ at $90\%$ C.L.. The black point corresponds to the SM value. []{data-label="fig:dGammas"}](dGammas_dC1.pdf "fig:"){height="5cm"} ![ Potential regions for the NP contributions in $\Delta C^{s, cc}_1(M_W)$ and $\Delta C^{s, cc}_2(M_W)$ allowed by the observable $\Delta \Gamma_s$ at $90\%$ C.L.. The black point corresponds to the SM value. []{data-label="fig:dGammas"}](dGammas_dC2.pdf "fig:"){height="5cm"}
### $S_{J/\psi \phi}$ {#subsub:SJPsiPhi}
The mixing induced CP asymmetry for the decay $\bar{B}_s\rightarrow J/\psi \phi$, given as $$\begin{aligned}
\label{eq:SJPsiPhi}
S_{J/\psi \phi}&=&\frac{2\,\mathcal{I}m\Bigl(\lambda^s_{J/\psi \phi}\Bigl)}{1+\Bigl|\lambda^s_{J/\psi \phi}\Bigl|^2}
=\sin(2\beta_s),\end{aligned}$$ can be used to constrain $\Delta C^{s, cc}_{1}(M_W)$. In Eq. (\[eq:SJPsiPhi\]), $\lambda^s_{J/\psi \phi}$ is determined according to Eq. (\[eq:lambdaf\]) considering the amplitudes $\bar{\mathcal{A}}_{J/\psi \phi}$ and $\mathcal{A}_{J/\psi \phi}$ for the decays $\bar{B}^0_s\rightarrow J/\psi \phi$ and $B^0_s\rightarrow J/\psi \phi$ respectively. The required theoretical expressions have been calculated explicitly within the QCDF formalism in [@Cheng:2001ez]. The equation for the decay amplitude obeys the structure $$\begin{aligned}
\mathcal{A}^{h}_{J/\psi \phi}&\propto& \alpha^{J/\psi \phi, h}_{1} + \alpha^{J/\psi \phi, h}_{3} + \alpha^{J/\psi \phi, h}_{5} +
\alpha^{J/\psi \phi, h}_{7} + \alpha^{J/\psi \phi, h}_{9},
\label{eq:masterAmpphi}\end{aligned}$$ where the proportionality constant has been omitted since it cancels in the ratio $\lambda^s_{J/\psi \phi}$. The amplitudes $\alpha^{J/\psi \phi}_i$ appearing in Eq. (\[eq:masterAmpphi\]) obey the structure given in Eq. (\[eq:alphaGen0\]). The required expressions for the vertices and hard-scattering functions can be found in the appendix. The index $h=0,\pm$ indicated in Eq. (\[eq:masterAmpphi\]) refers to the helicity of the particles in the final state. During our analysis we average over the different helicity contributions. Therefore we take $$\begin{aligned}
S_{J/\psi\phi}=\frac{S^0_{J/\psi\phi} + S^+_{J/\psi\phi} + S^-_{J/\psi\phi}}{3},\end{aligned}$$ where each of the asymmetries $S^h_{J/\psi\phi}$ is determined individually from the corresponding amplitude $\mathcal{A}^h_{J/\psi \phi}$ for $h=0,\pm$.\
Neglecting penguin contributions our theoretical evaluation leads to $$\begin{aligned}
\sin(2\beta^{\rm SM}_s)=0.037 \pm 0.001,\end{aligned}$$ which numerically coincides with $2\beta^{\rm SM}_s$ within the precision under consideration. The error budget is shown in Table \[tab:sinBetas\]. On the experimental side we use the average [@Amhis:2016xyh] $$\begin{aligned}
2\beta^{\rm Exp}_s=0.021 \pm 0.031.\end{aligned}$$ The effect of $S_{J/\psi\phi}$ on the allowed values for $\Delta C^{s, cc}_{1}(M_W)$ is not as strong as the results derived from other observables. However we included it in our analysis for completeness. For this reason we do not show the individual constraints from $S_{J/\psi\phi}$ and present only its effect in the global $\chi^2$-fit described in Section \[sec:Universal\_fit\].
Parameter Relative error
------------------- ---------------- --
$|V_{ub}/V_{cb}|$ $2.44\%$
$\gamma$ $1.39\%$
$|V_{us}|$ $0.07\%$
Total $2.81\%$
: Error budget for the observable $\sin(2\beta_s)$.[]{data-label="tab:sinBetas"}
### $\tau_{B_s}/\tau_{B_d}$
The lifetime ratio $\tau_{B_s}/\tau_{B_d}$ gives us sensitivity to $\Delta C^{s,cc}_1(M_W)$ and $\Delta C^{s,cc}_2(M_W)$ via the weak exchange diagram, which contributes to the $B_s$-lifetime as the CKM-leading part. We assume here that no new effects arise in the $B_d$-lifetime, where the CKM-leading part is given by a $b \to c \bar{u} d$ transition. Allowing new effects in both $b \to c \bar{c} s$ and $b \to c \bar{u} d$, the individually large effects would largely cancel. We also neglect the currently unknown contribution of the Darwin term. Using the results presented in [@Jager:2017gal] we write $$\begin{aligned}
\frac{\tau_{B_s}}{\tau_{B_d}}=\left(\frac{\tau_{B_s}}{\tau_{B_d}}\right)^{\rm SM}+\left(\frac{\tau_{B_s}}{\tau_{B_d}}\right)^{\rm NP} \, ,
\label{eq:TausTauSM+NP}\end{aligned}$$ for the SM value we take [@Kirk:2017juj] $$\begin{aligned}
\left(\frac{\tau_{B_s}}{\tau_{B_d}}\right)^{\rm SM}&=&1.0006\pm 0.0020.
\label{eq:TausTauSM}\end{aligned}$$ The experimental result for the ratio is [@Amhis:2016xyh] $$\begin{aligned}
\left(\frac{\tau_{B_s}}{\tau_{B_d}}\right)^{\rm Exp}&=& 0.994\pm 0.004.
\label{eq:TausTauExp}\end{aligned}$$ To estimate the NP contribution $(\tau_{B_s}/\tau_{B_d})^{\rm NP}$ we consider the following function [@Jager:2017gal] $$\begin{aligned}
F_{\tau_{B_s}/\tau_{B_d}}(C_1,C_2)
&=&
G^2_F |V_{cb} V_{cs}|^2 m^2_b M_{B_s} f^2_{B_s} \tau_{B_s}\frac{\sqrt{1-4 x^2_c}}{144\pi}\Biggl\{ (1-x^2_c)\Biggl[4 |C'|^2 B_1\nonumber\\
&& + 24 |C_{2}|^2 \epsilon_1 \Biggl] - \frac{M^2_{B_s} (1 + 2 x^2_c)}{(m_b+m_s)^2}\Biggl[4 |C'|^2 B_2+ 24 |C_{2}|^2 \epsilon_2 \Biggl]\Biggl\},\nonumber\\\end{aligned}$$ where $x_c = m_c/m_b$ and $C'$ denotes the following combination of tree-level Wilson coefficients $$\begin{aligned}
C'\equiv3 C_1 +C_2.\end{aligned}$$ The non-perturbative matrix elements of the arising four-quark $\Delta B = 0$ operators are parameterised in terms of the decay constant $f_{B_s}$ and the bag parameters $B_1$, $B_2$, $\epsilon_1$ and $\epsilon_2$, which we take from the recent evaluation in [@Kirk:2017juj]. The numerical values used are listed in Appendix \[Sec:Inputs\]. The NP contribution to the lifetime ratio can be written as $$\begin{aligned}
\Bigl(\frac{\tau_{B_s}}{\tau_{B_d}}\Bigl)^{\rm NP}&=&
F_{\tau_{B_s}/\tau_{B_d}}(C^{s, cc}_1(\mu),C^{s, cc}_2(\mu))\nonumber\\
&&-F_{\tau_{B_s}/\tau_{B_d}}(C^{s, cc}_1(\mu),C^{s, cc}_2(\mu))\Biggl|_{\rm SM},
\label{eq:LTRatio} \end{aligned}$$ where in the second term in Eq. (\[eq:LTRatio\]) we have dropped the NP contributions $
\Delta C^{s, cc}_1(\mu)$ and $\Delta C^{s, cc}_2(\mu)$.
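The structure of Eq. (\[eq:LTRatio\]) is simple to encode: the NP shift is the difference of the same function evaluated with and without the shifts $\Delta C^{s,cc}_{1,2}$. The sketch below is illustrative Python only; the toy function `F_toy` keeps the $|3C_1+C_2|^2$ and $|C_2|^2$ structure but absorbs all prefactors, masses and bag parameters into two hypothetical constants, and the numerical Wilson coefficients and shifts are placeholders.

```python
# Illustrative sketch of Eq. (eq:LTRatio): NP shift of the lifetime ratio as a
# difference of F with and without Delta C_{1,2}; F_toy is a stand-in, not the
# full F_{tau_Bs/tau_Bd} of the text.
def F_toy(c1: complex, c2: complex, k1: float = 1e-3, k2: float = 5e-4) -> float:
    c_prime = 3.0 * c1 + c2                     # C' = 3 C_1 + C_2
    return k1 * abs(c_prime) ** 2 + k2 * abs(c2) ** 2

def lifetime_ratio_np(c1_sm: complex, c2_sm: complex,
                      dc1: complex, dc2: complex) -> float:
    return F_toy(c1_sm + dc1, c2_sm + dc2) - F_toy(c1_sm, c2_sm)

# Placeholder Wilson coefficients and NP shifts (for illustration only):
print(lifetime_ratio_np(c1_sm=1.0, c2_sm=-0.2, dc1=0.0, dc2=0.05 + 0.02j))
```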
![ Potential regions for the NP contributions in $\Delta C^{s, cc}_1(M_W)$ and $\Delta C^{s, cc}_2(M_W)$ allowed by the life-time ratio $\tau_{B_s}/\tau_{B_d}$ at $90\%$ C.L.. Here we assumed only BSM contributions to the decay channel $b \to c \bar{c}s$, but none to $b \to c \bar{u} d$. The black point corresponds to the SM value.[]{data-label="Fig:TausTau"}](Life_Time_Ratio_C1_float.pdf){height="5cm"}
Our bounds for $\Delta C^{s, cc}_1(M_W)$ are shown in Fig. \[Fig:TausTau\]; the corresponding results for $\Delta C^{s, cc}_2(M_W)$ turn out to be weak, and we therefore do not display them. We would like to highlight the consistency between our regions and those presented in [@Jager:2019bgk], which were calculated at $68\%$ C.L.
Observables constraining $b\rightarrow c\bar{c}d$ transitions {#sec:bccd}
-------------------------------------------------------------
We devote this section to the derivation of bounds on $\Delta C^{d,cc}_{1}(M_W)$ and $\Delta C^{d,cc}_{2}(M_W)$ from $\sin(2\beta_d)$ and $B\rightarrow X_d \gamma$. In our final analysis we also included contributions from $a_{sl}^d$ which will be described in more detail in Section \[sec:multiple\_channels\].
### $\sin(2\beta_d)$ and SM update of $\Delta M_q$ {#sec:sin2betaM12}
In our BSM framework mixing induced CP asymmetries can be modified by changes in the tree-level decay or by changes to the neutral $B$-meson mixing. The first effect was studied in Section \[subsub:SJPsiPhi\] for the case of $B_s \to J/ \Psi \phi$ and found to give very weak bounds. Thus we will not consider it here. The second effect is also expected to give relatively weak bounds, but given the lack of strong bounds on new contributions to $b \to c \bar{c}d$ we consider it here - in the $b \to c \bar{c} s$ case we neglected it, because of the much stronger constraints from other observables.\
We can constrain $\Delta C^{d, cc}_{2}(M_W)$ with the observable $$\begin{aligned}
\sin(2\beta_d)&=&-S_{J/\psi K_S}\end{aligned}$$ which can be evaluated by applying the generic definition of the CP asymmetry shown in Eq. (\[eq:Sf\]) and using $$\begin{aligned}
\lambda^d_{J/\psi K_S}&=&\frac{q}{p}\Bigl|_{B_d} \frac{\bar{\mathcal{A}}_{J/\psi K_S} }{\mathcal{A}_{J/\psi K_S}}.
\label{eq:lambdaJPsi}\end{aligned}$$ In Eq. (\[eq:lambdaJPsi\]), $\mathcal{A}_{J/\psi K_S}$ and $\bar{\mathcal{A}}_{J/\psi K_S}$ correspond to the amplitudes for the processes $B^0\rightarrow J/\psi K_S$ and $\bar{B}^0\rightarrow J/\psi K_S$ respectively.\
We study here modifications of $q/p|_{B_d}$, while we neglect the change of the amplitudes $\mathcal{A}_{J/\psi K_S}$ and $\bar{\mathcal{A}}_{J/\psi K_S}$ - since an exploratory study found much weaker bounds. The definition of $q/p|_{B_d}$ in terms of the $B_d$ matrix element $M^d_{12}$ is given in Eq. (\[eq:qop\]).\
In the SM we have $$\begin{aligned}
M^{d, \rm{SM}}_{12}=\frac{\langle B^0_d| \hat{\mathcal{H}}^{|\Delta B|=2,\rm{SM}}_d|\bar{B}^0_d \rangle}{2M_{B^0_d}},
\label{eq:M12}\end{aligned}$$ with $$\begin{aligned}
\label{eq:EffDelta2}
\mathcal{\hat{H}}^{|\Delta B| = 2, \rm{SM}}_d
&=&\frac{G^2_F}{16\pi^2}(\lambda_t^{(d)})^2 C^{|\Delta B| =2}(m_t, M_W, \mu) \hat{Q}^d_1+ h.c.\end{aligned}$$ The dimension six effective ${|\Delta B| =2}$ operator $\hat{Q}^d_1$ in Eq. (\[eq:EffDelta2\]) is given by $$\begin{aligned}
\hat{Q}^d_1&=& \Bigl(\bar{\hat{d}} \hat{b} \Bigl)_{V-A} \Bigl( \bar{\hat{d}}\hat{b}\Bigl)_{V-A},
\label{eq:DeltaB2}\end{aligned}$$ and the Wilson coefficient $C^{|\Delta B| =2}(m_t, M_W, \mu)$ corresponds to $$\begin{aligned}
C^{|\Delta B| =2}(m_t, M_W, \mu)&=&\hat{\eta} M^2_W S_0(x_t),\end{aligned}$$ where the factor $\hat{\eta}$ accounts for the renormalization group evolution from the scale $m_t$ down to the renormalization scale $\mu \sim m_b$ [@Buras:1990fn] and $S_0(x_t)$ is the Inami-Lim function [@Inami:1980fz] $$\begin{aligned}
S_0(x_t)=\frac{x_t}{(1-x_t)^2}\Bigl(1-\frac{11}{4} x_t +\frac{x^2_t}{4} -\frac{3 x^2_t \ln x_t}{2(1-x_t)}\Bigl).\end{aligned}$$ Using the new averages presented in [@DiLuzio:2019jyq] for the hadronic matrix elements (based on the non-perturbative calculations in [@Grozin:2016uqy; @Kirk:2017juj; @King:2019lal] and [@Christ:2014uea; @Bussone:2016iua; @Hughes:2017spc; @Bazavov:2017lyh]) we get the new updated SM results $$\begin{aligned}
\Delta M_s^{\rm SM} & = & \left( 18.77 \pm 0.86 \right) \mbox{ps}^{-1} \, ,
\\
\Delta M_d^{\rm SM} & = & \left( 0.543 \pm 0.029 \right) \mbox{ps}^{-1} \, ,\end{aligned}$$ where we observe a huge reduction of the theoretical uncertainty, see Tables \[error1\] and \[error7\]. Our numbers agree with the ones quoted in [@DiLuzio:2019jyq] - a tiny difference stems from a different treatment of the top quark mass, the CKM input and the symmetrisation of the error we have performed here.
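For completeness, the Inami-Lim function entering $C^{|\Delta B|=2}$ is easily evaluated numerically; the Python sketch below uses illustrative values for $\overline{m}_t$ and $M_W$, not necessarily those of Appendix \[Sec:Inputs\].

```python
# Illustrative sketch of the Inami-Lim function S_0(x_t).
import math

def inami_lim_S0(x: float) -> float:
    return (x / (1.0 - x) ** 2) * (1.0 - 2.75 * x + 0.25 * x ** 2
                                   - 1.5 * x ** 2 * math.log(x) / (1.0 - x))

x_t = (163.0 / 80.4) ** 2   # illustrative m_t and M_W in GeV, not our inputs
print(inami_lim_S0(x_t))    # ~ 2.3
```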
$\Delta M_s^{\rm SM} $ $\mbox{This work} $ $\mbox{ABL 2015}$ $\mbox{LN 2011} $ $ {\mbox{LN 2006}} $
--------------------------- ---------------------------- --------------------------- ----------------------------- ----------------------------
$\mbox{Central Value} $ $18.77 \, \mbox{ps}^{-1} $ $18.3 \, \mbox{ps}^{-1} $ $ 17.3 \, \mbox{ps}^{-1 } $ $ 19.3 \, \mbox{ps}^{-1} $
$f_{B_s} \sqrt{B_1^s} $ $3.1 \%$ $ 13.9\% $ $ 13.5 \% $ $ 34.1 \% $
$V_{cb} $ $3.4 \%$ $ 4.9 \% $ $ 3.4 \% $ $ 4.9 \% $
$m_t $ $0.3 \%$ $ 0.7 \% $ $ 1.1 \% $ $ 1.8 \% $
$ \Lambda_5^{\rm QCD} $ $0.2 \%$ $ 0.1 \% $ $ 0.4 \% $ $ 2.0 \% $
$\gamma $ $0.1 \%$ $0.1 \% $ $ 0.3 \% $ $ 1.0 \% $
$|V_{ub}/V_{cb}| $ $<0.1 \%$ $0.1 \% $ $ 0.2 \% $ $0.5 \% $
$\overline{m}_b $ $<0.1 \%$ $<0.1 \% $ $ 0.1 \% $ $ --- $
Total $ 4.6 \% $ $ 14.8 \% $ $14.0 \% $ $34.6 \% $
: List of the individual contributions to the theoretical error of the mass difference $\Delta M_s$ within the Standard Model and comparison with the values obtained in 2015 [@Artuso:2015swg], in 2011 [@Lenz:2011ti] and in 2006 [@Lenz:2006hd].[]{data-label="error1"}
$ \Delta M_d^{\rm SM} $         This work                      ABL 2015
--------------------------- ------------------------------ ------------------------------
$\mbox{Central Value}$ $ 0.543 \, \mbox{ps}^{-1} $ $ 0.528 \, \mbox{ps}^{-1} $
$f_{B_d} \sqrt{B_1^d} $ $3.6\%$ $13.7\% $
$V_{cb} $ $ 3.4\%$ $ 4.9\% $
$m_b $ $ 0.1\%$ $ 0.1\% $
$\gamma $ $ 0.2\%$ $ 0.2 \% $
$\Lambda_5^{\rm QCD} $ $ 0.2\%$ $ 0.1 \% $
$|V_{ub}/V_{cb}| $ $ 0.1\%$ $ 0.1 \% $
$\bar{m}_t(\bar{m}_t)$    $ 0.3\%$                       $ 0.1 \%$
Total $5.3 \%$ $14.8\%$
: List of the individual contributions to the theoretical error of the mixing quantity $\Delta M_d$ and comparison with the values obtained in 2015 [@Artuso:2015swg].[]{data-label="error7"}
HFLAV [@Amhis:2016xyh] gives for the experimental values $$\begin{aligned}
\Delta M_s^{\rm Exp} & = & \left( 17.757 \pm 0.021 \right) \mbox{ps}^{-1} \, ,
\\
\Delta M_d^{\rm exp} & = & \left( 0.5064 \pm 0.0019 \right) \mbox{ps}^{-1} \, .\end{aligned}$$ We introduce BSM effects to Eq. (\[eq:M12\]) by adding to the SM expression in Eq. (\[eq:EffDelta2\]) the double insertion of the effective Hamiltonian $$\begin{aligned}
\hat{\mathcal{H}}_{eff}^{|\Delta B|=1}&=&\frac{G_F}{\sqrt{2}}\Bigl(\sum_{p,p'=u,c}
\lambda^{(d)}_{pp'} C^{pp', d}_2 \hat{Q}^{pp', d}_2 + h.c. \Bigl).\end{aligned}$$ Following [@Boos:2004xp] we evaluate the full combination at the scale $\mu_c=m_c$, where the extra contribution to the SM $|\Delta B|=2$ Hamiltonian in Eq. (\[eq:EffDelta2\]) is given by $$\begin{aligned}
\hat{\mathcal{H}}^{|\Delta B|=2}_{extra} &\approx&
\frac{G^2_F}{16\pi^2} \Biggl\{ C'_1(\mu_c) \hat{P}_1 + C'_2(\mu_c) \hat{P}_2
\nonumber\\
&&+ \Biggl[ \Bigl(2\lambda^{(d)}_c\lambda^{(d)}_t C_3(x_t^2)
+ (\lambda^{(d)}_c)^2\Bigl)
+ C'_3(\mu_c) \Biggl] \hat{P}_3 \Biggl\},\nonumber\\
\label{eq:H2extra}\end{aligned}$$ with $$\begin{aligned}
C'_{1}(\mu_c)&=&- \frac{2}{3} \hbox{ ln}\Bigl[\frac{\mu_c^2}{M^2_W}\Bigl] \Bigl\{\frac{(\lambda^{(d)}_c)^2}{2} \Bigl(C^{d, cc}_2\Bigl)^2 -
(\lambda^{(d)}_c)^2 C^{d, cu}_2 C_2^{d, uc} - \lambda^{(d)}_c \lambda^{(d)}_t C_2^{d, cu} C_2^{d, uc}\nonumber\\
&&+ \frac{(\lambda^{(d)}_c)^2}{2} \Bigl(C_2^{d, u u}\Bigl)^2 + \lambda^{(d)}_c \lambda^{(d)}_t \Bigl(C_2^{d, uu}\Bigl)^2 + \frac{(\lambda^{(d)}_t)^2}{2}
\Bigl(C_2^{d, uu} \Bigl)^2\Bigl\},\nonumber\\
C'_{2}(\mu_c)&=&\frac{2}{3} \hbox{ ln}\Bigl[\frac{\mu_c^2}{M^2_W}\Bigl]
\Bigl\{(\lambda^{(d)}_c)^2 \Bigl(C_2^{d, cc}\Bigl)^2 - 2 (\lambda^{(d)}_c)^2 C_2^{d, cu} C_2^{d, uc} - 2 \lambda^{(d)}_c \lambda^{(d)}_t C_2^{d, cu} C_2^{d, uc} \nonumber\\
&&+ (\lambda^{(d)}_c)^2 \Bigl(C_2^{d, uu}\Bigl)^2 + 2 \lambda_c \lambda^{(d)}_t \Bigl(C_2^{d, uu}\Bigl)^2 + (\lambda^{(d)}_t)^2 \Bigl(C_2^{d, uu}\Bigl)^2\Bigl\},\nonumber\\
C'_{3}(\mu_c)&=&\frac{2}{3} \hbox{ ln}\Bigl[\frac{\mu_c^2}{M^2_W}\Bigl]
\Bigl\{ 3 (\lambda^{(d)}_c)^2 \Bigl(C_2^{d, cc}\Bigl)^2 - 3 (\lambda^{(d)}_c)^2 C_2^{d, c u} C_2^{d, uc} - 3 \lambda^{(d)}_c \lambda^{(d)}_t C_2^{d, cu} C_2^{d, uc}\Bigl\}.
\nonumber\\\end{aligned}$$ The set of HQET operators required in Eq. (\[eq:H2extra\]) are $$\begin{aligned}
\hat{P}_0=(\bar{\hat{h}}^{(+)}\hat{d})_{V-A}(\bar{\hat{h}}^{(-)} \hat{d})_{V-A},&&
\hat{P}_1=m^2_b \hat{P}_0,\nonumber\\
&&\nonumber\\
\hat{P}_2=m^2_b\Bigl(\bar{\hat{h}}^{(+)}_{v}\Bigl[1-\gamma_5 \Bigl] \hat{d}\Bigl)\Bigl(\bar{\hat{h}}^{(-)}_{v} \Bigl[1-\gamma_5 \Bigl]\hat{d}\Bigl),&&\hat{P}_3= m^2_c\hat{P}_0.\end{aligned}$$ Thus, our full determination of $M^d_{12}$ is given by $$\begin{aligned}
M^d_{12}&=&\frac{\langle B^0_d|\hat{\mathcal{H}}^{|\Delta B|=2 ,SM}_d
+\hat{\mathcal{H}}_{extra}^{|\Delta B|=2}
|\bar{B^0_d}\rangle}{2 M_{B^0_d}},\end{aligned}$$ where the $|\Delta B|=2$ operator $\hat{Q}^d_1$ is matched at the scale $\mu_c=m_c$ into $\hat{P}_0$ [@Boos:2004xp]. The required matrix elements for the numerical evaluations are [@Kirk:2017juj] $$\begin{aligned}
\label{eq:matP0}
\langle B^0_d| \hat{P}_0 |\bar{B}^0_d \rangle &=&\frac{8}{3}f^2_{B_d}M^2_{B_d}B^d_1(\mu_c),\nonumber\\
\langle B^0_d| \hat{P}_2 |\bar{B}^0_d \rangle&=& -\frac{5}{3} m^2_b \Bigl(\frac{M_{B_d}}{m_b + m_d}\Bigl)^2f^2_{B_d}M^2_{B_d}B^{d}_2(\mu_c),\end{aligned}$$ with the values for the Bag parameters as indicated in Appendix \[Sec:Inputs\]. Our theoretical result - neglecting contributions from penguins - is $$\begin{aligned}
\sin(2\beta^{\rm SM}_d)=0.707\pm 0.030,
\label{eq:Sin2betadfitdir}\end{aligned}$$ the full error budget in the SM can be found in Table \[tab:MdMd\]. Notice that the contributions from double insertions of the $|\Delta B|=1$ effective Hamiltonian are relevant only when $\Delta C^{d, cc}_{2}(M_W)\neq 0$; hence they do not appear in Table \[tab:MdMd\]. On the experimental side we use the average from direct measurements [@Amhis:2016xyh] $$\begin{aligned}
\sin(2\beta^{\rm Exp}_d)=0.699\pm 0.017.\end{aligned}$$ Our results for the allowed regions on $\Delta C^{d, cc}_2(M_W)$ are shown in Fig. \[fig:M12d\].
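A minimal sketch of the matrix elements of Eq. (\[eq:matP0\]), as they would enter a numerical evaluation of $M^d_{12}$, is shown below; every argument is a placeholder to be replaced by the inputs of Appendix \[Sec:Inputs\].

```python
# Illustrative sketch of the matrix elements in Eq. (eq:matP0); every argument
# is a placeholder to be replaced by the inputs of Appendix Sec:Inputs.
def me_P0(f_Bd: float, M_Bd: float, B1: float) -> float:
    """<B0_d| P_0 |B0bar_d> = (8/3) f_Bd^2 M_Bd^2 B1."""
    return (8.0 / 3.0) * f_Bd ** 2 * M_Bd ** 2 * B1

def me_P2(f_Bd: float, M_Bd: float, B2: float, m_b: float, m_d: float) -> float:
    """<B0_d| P_2 |B0bar_d> = -(5/3) m_b^2 (M_Bd/(m_b+m_d))^2 f_Bd^2 M_Bd^2 B2."""
    return -(5.0 / 3.0) * m_b ** 2 * (M_Bd / (m_b + m_d)) ** 2 \
        * f_Bd ** 2 * M_Bd ** 2 * B2

# Example call with placeholder values (GeV units):
print(me_P0(f_Bd=0.19, M_Bd=5.28, B1=0.85))
```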
Parameter Relative error
------------------- ---------------- --
$|V_{ub}/V_{cb}|$ $4.22\%$
$|V_{us}|$ $0.20\%$
$\gamma$ $0.04\%$
$\mu_c$ $0.02\%$
$|V_{cb}|$ $0.01\%$
Total $4.22\%$
: Error budget for the observable $\sin(2\beta_d)$.[]{data-label="tab:MdMd"}
![ Potential regions for the NP contributions in $\Delta C^{d, cc}_2(M_W)$ allowed by the observable $\sin(2\beta_d)$ from modifications in $M^d_{12}$ through double insertions of the $\Delta B=1$ effective Hamiltonian at $90\%$ C.L.. Due to the weakness of the current bounds, penguin pollution has been neglected in the analysis. The black point corresponds to the SM value. []{data-label="fig:M12d"}](M12_d_old_dC2.pdf){height="5cm"}
### $\bar{B}\rightarrow X_d \gamma$ {#sec:Bdgamma}
The branching ratio of the process $\bar{B}\rightarrow X_d \gamma$ allows us to impose further constraints on the NP contribution $\Delta C^{d, cc}_2(M_W)$. For the theoretical determination, we used the NNLO branching ratio for the transition $\bar{B}\rightarrow X_d \gamma$ given in [@Misiak:2015xwa] $$\begin{aligned}
\mathcal{B}^{\hbox{NNLO}}_r(\bar{B}\rightarrow X_d \gamma)=(1.73^{+0.12}_{-0.22})\cdot 10^{-5}\quad\hbox{ for }E_{\gamma}>1.6~\hbox{ GeV}.\end{aligned}$$ On the experimental side we consider [@Crivellin:2011ba; @delAmoSanchez:2010ae; @Wang:2011sn] $$\begin{aligned}
\mathcal{B}_{r}^{\rm Exp}(\bar{B}\rightarrow X_d \gamma)&=&\Bigl(1.41 \pm 0.57 \Bigl)\cdot 10^{-5}.
\label{eq:BdgammaExp}\end{aligned}$$ The NP regions on $\Delta C^{d, cc}_2(M_W)$ derived from $\mathcal{B}^{\hbox{NNLO}}_r(\bar{B}\rightarrow X_d \gamma)$ are shown in Fig. \[fig:Bd\_gamma\]. Our treatment of $\bar{B}\rightarrow X_d \gamma$ is analogous to the one followed for $\bar{B}\rightarrow X_s \gamma$; therefore our discussion here is rather short and we refer the reader to the details provided in Section \[sec:Bsgamma\].
![ Potential regions for the NP contributions in $\Delta C^{d, cc}_2(M_W)$ allowed by the observable $\mathcal{B}r(\bar{B}\rightarrow X_d \gamma)$ at $90\%$ C.L.. The black point corresponds to the SM value.[]{data-label="fig:Bd_gamma"}](B_d_gamma_C2_full_float.pdf){height="5cm"}
Observables constraining multiple channels {#sec:multiple_channels}
------------------------------------------
Several observables like $\Delta \Gamma_q$, $\tau (B_s) / \tau (B_d)$ and the semi-leptonic CP asymmetries are affected by different decay channels. We have shown that $\Delta \Gamma_s$ is by far dominated by the $ b \to c \bar{c} s$ transition, while $\Delta \Gamma_d$ has not yet been measured. In $\tau (B_s) / \tau (B_d)$ a new effect in the $ b \to c \bar{c} s$ transition roughly cancels a similar-size effect in a $ b \to c \bar{u} d$ transition; thus we have assumed for this observable only BSM effects in the $ b \to c \bar{c} s$ transition. Below we will study constraints stemming from $a_{sl}^q$, which is affected by the decay channels $b \to c \bar{c} q$, $b \to c \bar{u} q$, $b \to u \bar{c} q$ and $b \to u \bar{u} q$.
### $a^s_{sl}$ and $a^d_{sl}$: Bounds and SM update
The theoretical description of the semi-leptonic CP asymmetries was already presented in detail in Section \[subsec:DGs\]. Our SM predictions for the semileptonic asymmetries $a^s_{sl}$ and $a^d_{sl}$ are $$\begin{aligned}
a^{s,\rm SM}_{sl} &=& \Bigl(2.06 \pm 0.18 \Bigl)\cdot 10^{-5},
\\
a^{d,\rm SM}_{sl} &=& \Bigl(-4.73\pm 0.42 \Bigl)\cdot 10^{-4}.
\label{eq:aslasldSM}\end{aligned}$$ The error budgets of the mixing observables $a^s_{sl}$ and $a^d_{sl}$ within the SM are presented in Tabs. \[tab:asls\] and \[tab:asld\] respectively.
$a_{\rm sl}^{s,{\rm SM}} $ $ \mbox{this work} $ $ \mbox{ABL 2015} $ $ \mbox{LN 2011} $ $ {\mbox{LN 2006}} $
---------------------------- ------------------------ ----------------------------- -------------------------------- -------------------------
$\mbox{Central Value} $ $2.06 \cdot 10^{-5} $ $2.22 \cdot 10^{-5} $ $ 1.90\cdot 10^{-5} $ $ 2.06 \cdot 10^{-5} $
$\mu $ $6.7 \%$ $9.5 \% $ $ 8.9 \% $ $ 12.7 \% $
$\bar z $ $4.0 \%$ $4.6 \% $ $ 7.9 \% $ $ 9.3 \% $
$|V_{ub}/V_{cb}|$ $2.6 \%$ $5.0 \% $ $ 11.6 \% $ $ 19.5 \% $
$B_{R_3}^s $ $2.3 \%$ $ 1.1 \% $ $ 1.2 \% $ $ 1.1 \% $
$B_{\tilde{R}_3}^s$ - $2.6 \% $ $ 2.8 \% $ $ 2.5 \% $
$m_b $ $1.3 \%$ $ 1.0 \% $ $ 2.0 \% $ $ 3.7 \% $
$\gamma $ $1.1 \%$ $ 1.3 \% $ $ 3.1 \% $ $ 11.3 \% $
${ B}_{R_2}^s $ $0.8 \%$ $ 0.1 \% $ $ 0.1 \% $ $ --- $
$\Lambda_5^{\rm QCD} $ $0.6 \%$ $ 0.5 \% $ $ 1.8 \% $ $ 0.7 \% $
$m_t $ $0.3 \%$ $ 0.7 \% $ $ 1.1 \% $ $ 1.8 \% $
$B_{3}^s $ $0.3 \%$ $ 0.3 \% $ $ 0.6 \% $ $ 0.4 \% $
${B}_{R_0}^s $ $0.3 \%$ $ 0.2 \% $ $ 0.3 \% $ $ --- $
$m_s $ $<0.1 \%$ $ 0.1 \% $ $ 0.1 \% $ $ 0.1 \% $
$B_{\tilde{R}_1}^s$ $<0.1 \%$ $0.5 \% $ $ 0.2 \% $ $ --- $
${ B}_{R_1}^s $ $<0.1 \%$ $ <0.1 \% $ $ 0.0 \% $ $ --- $
$V_{cb} $ $<0.1 \%$ $ 0.0 \% $ $ 0.0 \% $ $ 0.0 \% $
Total $8.8 \%$ $ 12.2 \% $ $ 17.3 \% $ $ 27.9 \% $
: List of the individual contributions to the theoretical error of the semileptonic CP asymmetries $a_{sl}^s$ within the Standard Model and comparison with the values obtained in 2015 [@Artuso:2015swg], in 2011 [@Lenz:2011ti] and in 2006 [@Lenz:2006hd]. We have used equations of motion in the current analysis to get rid of the operator $\tilde{R}_3$.[]{data-label="tab:asls"}
$a_{\rm sl}^{d, \rm SM}$   this work                 ABL 2015
-------------------------- ------------------------- -------------------------
$\mbox{Central Value}$ $ -4.7 \cdot 10^{-4} $ $ -4.7 \cdot 10^{-4} $
$B_{\widetilde R_2}^d $ $0.8 \%$ $0.1 \%$
$\mu $ $ 6.7 \%$ $ 9.4 \%$
$V_{cb} $ $ 0.0 \%$ $ 0.0 \%$
$B_3^d $ $ 0.4 \%$ $ 0.6 \%$
$B_{R_0}^d $ $ 0.3 \%$ $ 0.2 \%$
$\bar z $ $ 4.1 \%$ $ 4.9 \%$
$m_b $ $ 1.3 \%$ $ 1.3 \%$
$B_{\tilde{R}_3}^d $       $-$                       $ 2.7 \%$
$B_{R_3}^d $ $ 2.3 \%$ $ 1.2 \%$
$\gamma $ $ 1.0 \%$ $ 1.1 \%$
$\Lambda_5^{\rm QCD} $ $ 0.8 \%$ $ 0.5 \%$
$|V_{ub}/V_{cb}| $ $ 2.7 \%$ $ 5.2 \%$
$\bar{m}_t(\bar{m}_t)$ $ 0.3 \%$ $ 0.7 \%$
Total $8.8 \%$ $12.3 \%$
: List of the individual contributions to the theoretical error of the mixing quantity $a_{\rm sl}^{d, \rm SM}$ in the $B^0$-sector and comparison with the values obtained in 2015 [@Artuso:2015swg]. We have used equations of motion in the current analysis to get rid of the operator $\tilde{R}_3$.[]{data-label="tab:asld"}
The current experimental bounds [@Amhis:2016xyh] are far above the SM predictions $$\begin{aligned}
a^{s,\rm Exp}_{sl}&=&\Bigl(60 \pm 280\Bigl)\cdot 10^{-5},\nonumber\\
a^{d,\rm Exp}_{sl}&=&\Bigl(-21 \pm 17 \Bigl)\cdot 10^{-4}.\end{aligned}$$ Nevertheless, these observables already yield, with the current experimental precision, strong bounds on $C_1$ and $C_2$ due to the pronounced sensitivity of Im$(\Gamma_{12}^q / M_{12}^q)$ to the imaginary components of the $\Delta B = 1$ Wilson coefficients. The regions for $\Delta C_1(M_W)$ and $\Delta C_2(M_W)$ allowed by the observables $a^s_{sl}$ and $a^d_{sl}$ are presented in Figs. \[fig:asls\] and \[fig:asld\] respectively, where for simplicity we have assumed the universal behaviour $$\begin{aligned}
\Delta C^{q, uu}_{j}(M_W)=\Delta C^{q, uc}_{j}(M_W)=\Delta C^{q, cc}_{j}(M_W).
\label{eq:asl_universal}\end{aligned}$$ for $j=1,2$. As discussed in Section \[subsec:DGs\], different BSM effects in individual decay channels could lift the severe GIM suppression and lead to large effects, while the scenario given in Eq. (\[eq:asl\_universal\]) is dominated by $b \to c \bar{c} q$ transitions. However, in Secs. \[sec:buudfit\], \[sec:bcudfit\] and \[sec:bccdfit\] we will also study the constraints from $a_{sl}^d$ on the different $b$-quark decay channels $b\rightarrow u \bar{u} d$, $b\rightarrow c \bar{u} d$, and $b\rightarrow c \bar{c} d$ independently.
![ Potential regions for the NP contributions in $\Delta C^{s}_1(M_W)$ and $\Delta C^{s}_2(M_W)$ allowed by the semileptonic asymmetry $a^s_{sl}$ at $90\%$ C.L.. The black point corresponds to the SM value. For the purposes of illustration we have made the universality assumptions: $\Delta C^{s, uu}_{1}(M_W)=\Delta C^{s, cu}_{1}(M_W)=\Delta C^{s, uc}_1(M_W)=\Delta C^{s, cc}_{1}(M_W)=\Delta C^s_{1}(M_W)$ and similarly for $\Delta C^s_{2}(M_W)$.[]{data-label="fig:asls"}](asls_dC1.pdf "fig:"){height="5cm"} ![ Potential regions for the NP contributions in $\Delta C^{s}_1(M_W)$ and $\Delta C^{s}_2(M_W)$ allowed by the semileptonic asymmetry $a^s_{sl}$ at $90\%$ C.L.. The black point corresponds to the SM value. For the purposes of illustration we have made the universality assumptions: $\Delta C^{s, uu}_{1}(M_W)=\Delta C^{s, cu}_{1}(M_W)=\Delta C^{s, uc}_1(M_W)=\Delta C^{s, cc}_{1}(M_W)=\Delta C^s_{1}(M_W)$ and similarly for $\Delta C^s_{2}(M_W)$.[]{data-label="fig:asls"}](asls_dC2.pdf "fig:"){height="5cm"}
![ Potential regions for the NP contributions in $\Delta C^{d}_1(M_W)$ and $\Delta C^{d}_2(M_W)$ allowed by the semileptonic asymmetry $a^d_{sl}$ at $90\%$ C.L.. The black point corresponds to the SM value. For the purposes of illustration we have made the universality assumptions: $\Delta C^{d, uu}_{1}(M_W)=\Delta C^{d, cu}_{1}(M_W)=\Delta C^{d, uc}_1(M_W)=\Delta C^{d, cc}_{1}(M_W)=\Delta C^d_{1}(M_W)$ and similarly for $\Delta C^d_{2}(M_W)$.[]{data-label="fig:asld"}](asld_dC1.pdf "fig:"){height="5cm"} ![ Potential regions for the NP contributions in $\Delta C^{d}_1(M_W)$ and $\Delta C^{d}_2(M_W)$ allowed by the semileptonic asymmetry $a^d_{sl}$ at $90\%$ C.L.. The black point corresponds to the SM value. For the purposes of illustration we have made the universality assumptions: $\Delta C^{d, uu}_{1}(M_W)=\Delta C^{d, cu}_{1}(M_W)=\Delta C^{d, uc}_1(M_W)=\Delta C^{d, cc}_{1}(M_W)=\Delta C^d_{1}(M_W)$ and similarly for $\Delta C^d_{2}(M_W)$.[]{data-label="fig:asld"}](asld_dC2.pdf "fig:"){height="5cm"}
Global $\chi^2$-fit results {#sec:Globalfitresults}
===========================
So far, we have limited our discussion to constraints derived from individual observables. In this section, we present, as the main result of this work, the resulting regions for $\Delta C_1(M_W)$ and $\Delta C_2(M_W)$ obtained after combining observables for the different exclusive $b$ quark transitions. We will investigate three consequences of BSM effects in non-leptonic tree-level decays.
1. The allowed size of BSM contributions to the Wilson coefficients $C_1$ and $C_2$, governing the leading tree-level decays.
2. The impact of these new effects on the possible size of the observable $\Delta \Gamma_d$, which has not been measured yet (a short numerical sketch of the interval arithmetic quoted below is given after this list). Notice that, if one-sigma deviations are considered, the current experimental uncertainty associated with $\Delta \Gamma_d$, see Eq. (\[eq:DGammaExp\]), allows enhancement factors within the interval $$\begin{aligned}
-3.40<\Delta \Gamma^{\rm Exp}_d/\Delta \Gamma^{\rm SM}_d<2.27.
\label{eq:enhacementsonDGammadonesigma}
\end{aligned}$$ On the other hand, if the confidence interval is increased up to 1.65 sigmas, i.e. $90\%$ C.L., then the potential effects in $\Delta \Gamma_d$ become $$\begin{aligned}
-5.97<\Delta \Gamma^{\rm Exp}_d/\Delta \Gamma^{\rm SM}_d<4.67.
\label{eq:enhacementsonDGammadonesixsigma}
\end{aligned}$$ The measured value of the dimuon asymmetry by the D0-collaboration [@Abazov:2010hv; @Abazov:2010hj; @Abazov:2011yk; @Abazov:2013uma] seems to be in conflict with the current experimental bounds on $a_{sl}^d$ and $a_{sl}^s$; see e.g. the discussion in [@Lenz:2014nka]. An enhanced value of $\Delta \Gamma_d$ could solve this experimental discrepancy [@Borissov:2013wwa], at the expense of introducing new physics in $\Delta \Gamma_d$ and potentially also in $a_{sl}^s$ and $a_{sl}^d$. If all BSM effects in the dimuon asymmetry are due to $\Delta \Gamma_d$, then an enhancement factor of 6 with respect to its SM value is required. On the other hand, if there are also BSM contributions in $a_{sl}^s$ and $a_{sl}^d$, then the BSM enhancement factor in $\Delta \Gamma_d$ can be smaller.
3. The impact of these new effects on the determination of the CKM angle $\gamma$. Within the SM, this quantity can be extracted with negligible uncertainties from $B \to DK$ tree-level decays [@Bigi:1981qs; @Gronau:1990ra; @Gronau:1991dp; @Atwood:1996ci; @Atwood:2000ck; @Giri:2003ty]. This quantity is currently extensively tested by experiments, see e.g. [@Kenzie:2018oob; @Amhis:2019ckw], and future measurements will dramatically improve its precision to the one-degree level [@Bediaga:2018lhg]. This observable is particularly interesting since direct measurements, e.g. by LHCb [@Kenzie:2018oob], seem to be larger than the bounds from B-mixing [@King:2019rvk][^3]. $$\begin{aligned}
\gamma^{\rm LHCb}\hspace{0.5cm} & = & \left( 74.0^{+5.0}_{-5.8} \right)^\circ \, ,
\\
\gamma^{\rm B-mixing} & \leq & 66.9^\circ \, .
\end{aligned}$$
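Returning to point 2 above: the interval arithmetic behind the quoted enhancement ranges can be reproduced with a few lines of code. This is a minimal sketch only: it assumes a naive Gaussian combination of the experimental and theoretical uncertainties, and the numerical inputs below are placeholders, not the values of Eq. (\[eq:DGammaExp\]) and the SM prediction actually used in the fit.

```python
# Minimal sketch: allowed range of Delta Gamma_d^Exp / Delta Gamma_d^SM at a
# given confidence level, combining experimental and SM errors in quadrature.
# All numerical inputs are placeholders (illustrative only).

def enhancement_interval(exp_central, exp_sigma, sm_central, sm_sigma, n_sigma):
    """Return (lower, upper) bounds on Delta Gamma_d / Delta Gamma_d^SM."""
    sigma_tot = (exp_sigma**2 + sm_sigma**2) ** 0.5   # naive Gaussian combination
    lower = (exp_central - n_sigma * sigma_tot) / sm_central
    upper = (exp_central + n_sigma * sigma_tot) / sm_central
    return lower, upper

if __name__ == "__main__":
    sm_central, sm_sigma = 2.6e-3, 0.4e-3     # ps^-1, SM-like placeholder
    exp_central, exp_sigma = -1.5e-3, 7.0e-3  # ps^-1, hypothetical measurement
    for n in (1.0, 1.65):                     # one sigma and ~90% C.L.
        lo, hi = enhancement_interval(exp_central, exp_sigma, sm_central, sm_sigma, n)
        print(f"{n} sigma: {lo:+.2f} < DG_d/DG_d^SM < {hi:+.2f}")
```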
Therefore, in Sections \[sec:buudfit\] to \[sec:bccdfit\] we combine our bounds from the $b\rightarrow u \bar{u} d$, $b\rightarrow c \bar{u} d$ and $b\rightarrow c \bar{c} d$ transitions, and evaluate the corresponding potential enhancement in $\Delta \Gamma_d$. We do not present the allowed regions for the NP contributions related to the channel $b\rightarrow u\bar{c} d$, since the bounds are expected to be rather weak considering that our only bound would arise from $a^d_{sl}$. In Section \[sec:Universal\_fit\] we report the maximal bounds on $\Delta C_1(M_W)$ and $\Delta C_2(M_W)$, assuming universal BSM contributions to all different quark level decays. Hence, we combine all our possible bounds regardless of the quark level transition and assess the implications for the measurement of the CKM angle $\gamma$. The target of this part of the analysis is to update the investigations reported in [@Brod:2014bfa] in the light of a far more detailed study of BSM effects in non-leptonic tree-level decays. In particular, we account here for uncertainties neglected in the former study and we also make a very careful choice of reliable observables.
$\chi^2$-fit for the $b\rightarrow u\bar{u}d$ channel and bounds on $\Delta \Gamma_d$ {#sec:buudfit}
-------------------------------------------------------------------------------------
We perform a combined $\chi^2$-fit including $R_{\pi\pi}$, $S_{\pi\pi}$, $S_{\rho\pi}$, $R_{\rho\rho}$ and $a^d_{sl}$ with the aim of constraining $\Delta C^{d, uu}_1(M_W)$ and $\Delta C^{d, uu}_2(M_W)$. The resulting regions are shown in Fig. \[fig:Global\_fit\_uu\].
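A minimal sketch of how such a two-parameter $\chi^2$ scan can be organised is shown below. It is purely illustrative: the observable predictions are linear placeholders, not the QCD-factorization expressions collected in Appendix \[Sec:QCDFact\], and the measurements and uncertainties are hypothetical. The $90\%$ C.L. region is defined here by the conventional $\Delta\chi^2<4.61$ cut for two fitted parameters.

```python
# Illustrative chi^2 scan over the complex plane of Delta C_1(M_W).
# The "observables" below are placeholders, NOT the actual theory predictions.
import numpy as np

def predict_obs(dC1_re, dC1_im):
    # linearized placeholder responses of three observables to Delta C_1
    return np.array([1.0 + 0.8 * dC1_re, 0.7 * dC1_im, -0.1 * dC1_im])

measurements = np.array([1.1, 0.05, -0.02])   # hypothetical central values
uncertainties = np.array([0.15, 0.10, 0.08])  # hypothetical total errors

def chi2(dC1_re, dC1_im):
    pulls = (predict_obs(dC1_re, dC1_im) - measurements) / uncertainties
    return np.sum(pulls**2)

re_grid = np.linspace(-3.0, 1.0, 201)
im_grid = np.linspace(-2.0, 2.0, 201)
grid = np.array([[chi2(r, i) for r in re_grid] for i in im_grid])

# 90% C.L. region for two fitted parameters: Delta chi^2 < 4.61 above the minimum
allowed = grid - grid.min() < 4.61
print("fraction of scanned plane allowed at 90% C.L.:", allowed.mean())
```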
![Global $\chi^2$-fit including observables constraining the inclusive transition $b\rightarrow u \bar{u}d$. The $90\%$ C.L. allowed regions correspond to the areas contained within the black contours. The colored curves indicate the possible enhancements on $\Delta \Gamma_d$ with respect to the SM value. The black dot corresponds to the SM result. []{data-label="fig:Global_fit_uu"}](dC1_uu_comb_new.pdf "fig:"){height="5cm"} ![Global $\chi^2$-fit including observables constraining the inclusive transition $b\rightarrow u \bar{u}d$. The $90\%$ C.L. allowed regions correspond to the areas contained within the black contours. The colored curves indicate the possible enhancements on $\Delta \Gamma_d$ with respect to the SM value. The black dot corresponds to the SM result. []{data-label="fig:Global_fit_uu"}](dC2_uu_comb_new.pdf "fig:"){height="5cm"}
$\Delta C^{d, uu}_2(M_W)$ is considerably more strongly constrained than $\Delta C^{d, uu}_1(M_W)$, but sizeable deviations can still not be excluded. Due to the irregular shape of the regions for $\Delta C^{d, uu}_1(M_W)$ and $\Delta C^{d, uu}_2(M_W)$, expressing the possible NP values for the tree-level contributions in terms of simple inequalities is not possible. Instead, we limit ourselves to quoting the minimum and maximum bounds for the real and the imaginary components of our NP regions. For $\Delta C^{d, uu}_1(M_W)$ we have $$\begin{aligned}
{\rm Re}~\Bigl[\Delta C^{d, uu}_1(M_W)\Bigl]\Biggl|_{\rm min}=-2.23,&&
{\rm Im}~\Bigl[\Delta C^{d, uu}_1(M_W)\Bigl]\Biggl|_{\rm min}=-1.27,\nonumber\\
{\rm Re}~\Bigl[\Delta C^{d, uu}_1(M_W)\Bigl]\Biggl|_{\rm max}=~~0.32,&&
{\rm Im}~\Bigl[\Delta C^{d, uu}_1(M_W)\Bigl]\Biggl|_{\rm max}=~1.40.\nonumber\\\end{aligned}$$ On the other hand for $\Delta C^{d, uu}_2(M_W)$ we get $$\begin{aligned}
{\rm Re}~\Bigl[\Delta C^{d, uu}_2(M_W)\Bigl]\Biggl|_{\rm min}=-2.5,&&
{\rm Im}~\Bigl[\Delta C^{d, uu}_2(M_W)\Bigl]\Biggl|_{\rm min}=-0.44,\nonumber\\
{\rm Re}~\Bigl[\Delta C^{d, uu}_2(M_W)\Bigl]\Biggl|_{\rm max}=~~0.28,&&
{\rm Im}~\Bigl[\Delta C^{d, uu}_2(M_W)\Bigl]\Biggl|_{\rm max}=~1.00.\nonumber\\\end{aligned}$$ We have also included the contour lines showing the potential enhancement of the observable $\Delta \Gamma_d$. Accounting for the uncertainties in theory and experiment we find the following $90\%$ C.L. intervals for $\Delta \Gamma_d$ due to NP at tree level: $$\begin{aligned}
\hbox{for $\Delta C^{d, uu}_1(M_W)$:}&&-0.39<\Delta \Gamma_d/\Delta \Gamma^{\rm SM}_d<1.30,\nonumber\\
\hbox{for $\Delta C^{d, uu}_2(M_W)$:}&&~~0.70<\Delta \Gamma_d/\Delta \Gamma^{\rm SM}_d<1.48. \end{aligned}$$ Thus only moderate enhancements of $\Delta \Gamma_d$ seem to be possible, while a reduction down to $-39 \%$ of its SM value is still allowed. This scenario could thus not be a solution for the dimuon asymmetry.
$\chi^2$-fit for the $b\rightarrow c\bar{u}d$ channel and bounds on $\Delta \Gamma_d$ {#sec:bcudfit}
-------------------------------------------------------------------------------------
To establish constraints on $\Delta C^{d, cu}_{1}(M_W)$ and $\Delta C^{d, cu}_{2}(M_W)$ we combine $R_{D^{*}\pi}$ together with $a^d_{sl}$. Our results are presented in Fig. \[fig:Global\_fit\_cu\].
![Global $\chi^2$-fit including observables constraining the inclusive transition $b\rightarrow c \bar{u}d$. The $90\%$ C.L. allowed regions correspond to the areas contained within the black contours. The colored curves indicate the possible enhancements on $\Delta \Gamma_d$ with respect to the SM value. The black dot corresponds to the SM result.[]{data-label="fig:Global_fit_cu"}](dC1_cu_comb_new_v2.pdf "fig:"){height="5cm"} ![Global $\chi^2$-fit including observables constraining the inclusive transition $b\rightarrow c \bar{u}d$. The $90\%$ C.L. allowed regions correspond to the areas contained within the black contours. The colored curves indicate the possible enhancements on $\Delta \Gamma_d$ with respect to the SM value. The black dot corresponds to the SM result.[]{data-label="fig:Global_fit_cu"}](dC2_cu_comb_new.pdf "fig:"){height="5cm"}
At the $90\%$ C.L. we find the possibility of huge enhancements/reductions of $\Delta \Gamma_d$: $$\begin{aligned}
\hbox{for $\Delta C^{d, cu}_1(M_W)$:}&&~ -5.97 <\Delta \Gamma_d/\Delta \Gamma^{\rm SM}_d<4.67,\nonumber\\
\hbox{for $\Delta C^{d, cu}_2(M_W)$:}&&~~-1.5<\Delta \Gamma_d/\Delta \Gamma^{\rm SM}_d<2.50.
\label{eq:CdcuDeltaGammad}\end{aligned}$$ Based on the bounds shown in Eq. (\[eq:CdcuDeltaGammad\]), we find that this scenario could solve the dimuon asymmetry. Since the experimental bounds for $\Delta \Gamma_d$ are saturated in the case of $\Delta C^{d, cu}_1(M_W)$ in Eq. (\[eq:CdcuDeltaGammad\]), it turns out that $\Delta \Gamma_d$ acts as a constraint in itself. Using this additional information we establish the following bounds for $\Delta C^{d, cu}_1(M_W)$ $$\begin{aligned}
{\rm Re}~\Bigl[\Delta C^{d, cu}_1(M_W)\Bigl]\Biggl|_{\rm min}=-1.40,&&
{\rm Im}~\Bigl[\Delta C^{d, cu}_1(M_W)\Bigl]\Biggl|_{\rm min}=-2.17,\nonumber\\
{\rm Re}~\Bigl[\Delta C^{d, cu}_1(M_W)\Bigl]\Biggl|_{\rm max}=~~0.32,&&
{\rm Im}~\Bigl[\Delta C^{d, cu}_1(M_W)\Bigl]\Biggl|_{\rm max}=~1.15.\nonumber\\\end{aligned}$$ The corresponding bounds for $\Delta C^{d, cu}_2(M_W)$ read $$\begin{aligned}
{\rm Re}~\Bigl[\Delta C^{d, cu}_2(M_W)\Bigl]\Biggl|_{\rm min}=-2.14,&&
{\rm Im}~\Bigl[\Delta C^{d, cu}_2(M_W)\Bigl]\Biggl|_{\rm min}=-0.75,\nonumber\\
{\rm Re}~\Bigl[\Delta C^{d, cu}_2(M_W)\Bigl]\Biggl|_{\rm max}=~~0.04,&&
{\rm Im}~\Bigl[\Delta C^{d, cu}_2(M_W)\Bigl]\Biggl|_{\rm max}=~0.53.\nonumber\\\end{aligned}$$
$\chi^2$-fit for the $b\rightarrow c\bar{c}d$ channel and bounds on $\Delta \Gamma_d$ {#sec:bccdfit}
-------------------------------------------------------------------------------------
Next we perform a $\chi^2$-fit including $\mathcal{B}r(B\rightarrow X_d \gamma)$, $a^d_{sl}$ and $\sin(2\beta_d)$. These observables give strong constraints for $\Delta C^{d, cc}_2(M_W)$ (see Fig. (\[fig:Global\_fit\_cc\])), which turn out to saturate the current experimental bounds on $\Delta \Gamma_d$.
![Global $\chi^2$-fit including observables constraining the inclusive transition $b\rightarrow c \bar{c}d$. The $90\%$ C.L. allowed regions correspond to the areas contained within the black contours. The colored curves indicate the possible enhancements on $\Delta \Gamma_d$ with respect to the SM value. The black dot corresponds to the SM result.[]{data-label="fig:Global_fit_cc"}](b_to_cc_d.pdf "fig:"){height="5cm"} ![Global $\chi^2$-fit including observables constraining the inclusive transition $b\rightarrow c \bar{c}d$. The $90\%$ C.L. allowed regions correspond to the areas contained within the black contours. The colored curves indicate the possible enhancements on $\Delta \Gamma_d$ with respect to the SM value. The black dot corresponds to the SM result.[]{data-label="fig:Global_fit_cc"}](dC2_cc_test_v2.pdf "fig:"){height="5cm"}
At the $90\%$ C.L. we find $$\begin{aligned}
\hbox{for $\Delta C^{d, cc}_1(M_W)$ and $\Delta C^{d, cc}_2(M_W)$:}&&-5.97<\Delta \Gamma_d/\Delta \Gamma^{\rm SM}_d<4.67.\nonumber\\
\label{eq:CdccDeltaGammad}\end{aligned}$$ We find again that this scenario could solve the tension between theory and experiment found in the measurement of the dimuon asymmetry. Considering the results shown in Fig. (\[fig:Global\_fit\_cc\]) we see that $\Delta \Gamma_d$ is indeed a powerful constraint for $\Delta C^{d, cc}_1(M_W)$ and $\Delta C^{d, cc}_2(M_W)$, which together with $\mathcal{B}r(B\rightarrow X_d \gamma)$, $a^d_{sl}$ and $\sin(2\beta_d)$ defines the following limits
$$\begin{aligned}
{\rm Re}~\Bigl[\Delta C^{d, cc}_1(M_W)\Bigl]\Biggl|_{\rm min}=-1.66,&&
{\rm Im}~\Bigl[\Delta C^{d, cc}_1(M_W)\Bigl]\Biggl|_{\rm min}=-2.80,\nonumber\\
{\rm Re}~\Bigl[\Delta C^{d, cc}_1(M_W)\Bigl]\Biggl|_{\rm max}=~~2.36,&&
{\rm Im}~\Bigl[\Delta C^{d, cc}_1(M_W)\Bigl]\Biggl|_{\rm max}=~2.74,\nonumber\\\end{aligned}$$
and
$$\begin{aligned}
{\rm Re}~\Bigl[\Delta C^{d, cc}_2(M_W)\Bigl]\Biggl|_{\rm min}=-2.70,&&
{\rm Im}~\Bigl[\Delta C^{d, cc}_2(M_W)\Bigl]\Biggl|_{\rm min}=-1.46,\nonumber\\
{\rm Re}~\Bigl[\Delta C^{d, cc}_2(M_W)\Bigl]\Biggl|_{\rm max}=~~0.58,&&
{\rm Im}~\Bigl[\Delta C^{d, cc}_2(M_W)\Bigl]\Biggl|_{\rm max}=~1.65.\nonumber\\\end{aligned}$$
As can be seen on the l.h.s. of Fig. \[fig:Global\_fit\_cc\], $C_1$ is only weakly constrained by the semi-leptonic CP asymmetries; here additional information stemming from $\Delta \Gamma_d$ will be important to shrink the allowed regions.
Universal fit on $\Delta C_1(M_W)$ and $\Delta C_2(M_W)$ {#sec:Universal_fit}
--------------------------------------------------------
In this section we work under the assumptions $$\begin{aligned}
\Delta C^{s, ab}_{1}(M_W)=\Delta C^{d, ab}_{1}(M_W)=\Delta C_{1}(M_W)\\
\Delta C^{s, ab}_{2}(M_W)=\Delta C^{d, ab}_{2}(M_W)=\Delta C_{2}(M_W)\end{aligned}$$ for $a=u,~c$ and $b= u,~c$. This procedure allows us to obtain the maximal constraints for our NP contributions. Making a combined $\chi^2$-fit is time and resource consuming; consequently we select the set of observables that give the strongest possible bounds. For $\Delta C_1(M_W)$ this includes: $R_{D^{*}\pi}$, $S_{\rho\pi}$, $\Delta \Gamma_s$, $\mathcal{B}r(\bar{B} \rightarrow X_s \gamma)$ and $a^d_{sl}$, and for $\Delta C_2(M_W)$ we use: $R_{D^{*}\pi}$, $R_{\pi\pi}$, $\Delta \Gamma_s$, $S_{J/\psi \phi}$ and $\tau_{B_s}/\tau_{B_d}$. We show in Fig. \[fig:Global\_fit\]
![ Potential regions for the NP contributions $\Delta C_{1}(M_W)$ and $\Delta C_{2}(M_W)$ allowed by the observables used in our analysis at $90\%$ C.L. assuming universal NP contributions.[]{data-label="fig:Global_fit"}](Combination_dC1.pdf "fig:"){height="5cm"} ![ Potential regions for the NP contributions $\Delta C_{1}(M_W)$ and $\Delta C_{2}(M_W)$ allowed by the observables used in our analysis at $90\%$ C.L. assuming universal NP contributions.[]{data-label="fig:Global_fit"}](Combination_dC2.pdf "fig:"){height="5cm"}
our resulting regions from which we extract $$\begin{aligned}
{\rm Re}~\Bigl[\Delta C_1(M_W)\Bigl]\Biggl|_{\rm min}=-0.36,&&
{\rm Im}~\Bigl[\Delta C_1(M_W)\Bigl]\Biggl|_{\rm min}=-0.47,\nonumber\\
{\rm Re}~\Bigl[\Delta C_1(M_W)\Bigl]\Biggl|_{\rm max}=~~0.26,&&
{\rm Im}~\Bigl[\Delta C_1(M_W)\Bigl]\Biggl|_{\rm max}=~~~0.45,\nonumber\\
\label{eq:dC1}\end{aligned}$$ and $$\begin{aligned}
{\rm Re}~\Bigl[\Delta C_2(M_W)\Bigl]\Biggl|_{\rm min}=-0.11,&&
{\rm Im}~\Bigl[\Delta C_2(M_W)\Bigl]\Biggl|_{\rm min}=-0.04,\nonumber\\
{\rm Re}~\Bigl[\Delta C_2(M_W)\Bigl]\Biggl|_{\rm max}=~~0.02,&&
{\rm Im}~\Bigl[\Delta C_2(M_W)\Bigl]\Biggl|_{\rm max}=0.02.\nonumber\\
\label{eq:dC2}\end{aligned}$$ We can see from Eqs. (\[eq:dC1\]) and (\[eq:dC2\]) how severely constrained $\Delta C_2(M_W)$ is, allowing deviations with respect to the SM point of a few percent at most. This behaviour is clearly in contrast with the results obtained for $\Delta C_1(M_W)$, where effects of almost up to $\pm 0.5$ are still possible. For completeness we present the implications of universal NP in $\Delta C_1(M_W)$ on $\Delta \Gamma_d$ in Fig. (\[fig:Global\_fit\_Delta\_Gammad\]). We find that at $90\%$ C.L. only $\mathcal{O}(20\%)$ deviations in $\Delta \Gamma_d$ with respect to its SM value can be induced, which is in a similar ballpark as the SM uncertainties of $\Delta \Gamma_d$ and can clearly not explain the D0 measurement of the dimuon asymmetry.
![Enhancements on $\Delta \Gamma_d$ when assuming universal NP effects in $C_1(M_W)$.[]{data-label="fig:Global_fit_Delta_Gammad"}](dGamma_d_enhancement_universal.pdf){height="5cm"}
NP in non-leptonic tree-level decays and its interplay with the CKM angle $\gamma$ {#sec:CKMgamma}
---------------------------------------------------------------------
As is well known [@Bigi:1981qs; @Gronau:1990ra; @Gronau:1991dp; @Atwood:1996ci; @Atwood:2000ck; @Giri:2003ty] the CKM phase $\gamma$ can be determined from the interference of the transition amplitudes associated with the quark tree level decays $b\rightarrow c\bar{u}s$ and $b\rightarrow u\bar{c}s$ with negligible theory uncertainty within the SM [@Brod:2013sga][^4]. At the exclusive level, this can be done with the decay channels $B^-\rightarrow D^0 K^-$ and $B^-\rightarrow \bar{D}^0K^-$. The ratio of the two corresponding decay amplitudes can be written as $$\label{eq:gamma_theoretical}
r_B e^{i (\delta_B - \gamma)} = \frac{\mathcal{A} (B^- \to \bar{D}^0 K^-)}
{\mathcal{A} (B^- \to D^0 K^-)} \;,$$ where $r_B$ stands for the ratio of the moduli of the relevant amplitudes. The resulting phase has a strong component, denoted as $\delta_B$, and a weak one, which is precisely the CKM angle $\gamma$. New effects in $C_1$ and $C_2$ can lead to huge shifts in $\gamma$: the left-hand side of Eq. (\[eq:gamma\_theoretical\]) will be modified according to [@Brod:2014bfa] $$\begin{aligned}
r_B e^{i (\delta_B - \gamma)} &\to&
r_B e^{i (\delta_B - \gamma)} \cdot
\Biggl[
\frac{C_2 + \Delta C_2 + r_{A'} ( C_1 + \Delta C_1)}{C_2 + r_{A'} C_1}\nonumber\\
&&~~~~~~~~~~~~~~~\cdot
\frac{C_2 + r_A C_1}{C_2 + \Delta C_2 + r_A (C_1 +\Delta C_1)}
\Biggl] \; ,
\label{eq:exact}\end{aligned}$$ where $$\label{eq:rA}
\begin{split}
r_{A'} = \frac{\langle \bar{D}^0 K^-| Q_1^{\bar{u}cs} | B^- \rangle}
{\langle \bar{D}^0 K^-| Q_2^{\bar{u}cs} | B^- \rangle} \; ,
\quad r_A = \frac{\langle D^0 K^-| Q_1^{\bar{c}us} | B^- \rangle}
{\langle D^0 K^-| Q_2^{\bar{c}us} | B^- \rangle} \; .
\end{split}$$ The ratios of matrix elements in Eq. (\[eq:rA\]) have not been determined from first principles; to provide an estimate we use naive factorization arguments and colour counting to obtain [@Brod:2014bfa; @Brod:2014qwa] $$\begin{aligned}
r_{A} = 0.4,&& r_A-r_A'=-0.6.
\label{eq:initial_values_matrix_elements}\end{aligned}$$ Eq.(\[eq:exact\]) gives a particularly strong dependence of the shift in $\gamma$ on the imaginary part of $C_1$; approximately we get [@Brod:2014bfa] $$\delta \gamma = \left(r_A - r_{A'}\right) \frac{\rm Im \left[\Delta C_1 \right] }{C_2} \, .
\label{eq:deltaCKMgamma}$$ We are now ready to update the study presented in [@Brod:2014bfa] on the effects of NP in $C_1$ and $C_2$ on the precision of the determination of the CKM angle $\gamma$; our results are presented graphically in Fig. (\[fig:CKM\_gamma\]).
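For illustration, Eqs. (\[eq:exact\]) and (\[eq:deltaCKMgamma\]) can be compared numerically with a short script. This is a sketch under simplifying assumptions: it sets $C_1(M_W)=0$ and $C_2(M_W)=1$ (the tree-level matching values) and uses $r_A=0.4$, $r_{A'}=1$ from Eq. (\[eq:initial\_values\_matrix\_elements\]); the Wilson-coefficient values used in the actual analysis may differ.

```python
# Sketch: shift of the extracted CKM angle gamma from a complex shift in C_1,
# comparing the exact rescaling of Eq. (eq:exact) with the linearized formula.
import cmath, math

def delta_gamma_exact(dC1, dC2, C1=0.0, C2=1.0, rA=0.4, rAp=1.0):
    """Shift delta(gamma) in degrees from the exact amplitude-ratio rescaling."""
    factor = ((C2 + dC2 + rAp * (C1 + dC1)) / (C2 + rAp * C1)) \
           * ((C2 + rA * C1) / (C2 + dC2 + rA * (C1 + dC1)))
    # the factor multiplies exp(i(delta_B - gamma)), so gamma shifts by -arg(factor)
    return -math.degrees(cmath.phase(factor))

def delta_gamma_linear(dC1, C2=1.0, rA=0.4, rAp=1.0):
    """Linearized shift of Eq. (eq:deltaCKMgamma), in degrees."""
    return math.degrees((rA - rAp) * dC1.imag / C2)

for dC1 in (0.1j, 0.3j, 0.45j):   # purely imaginary shifts in C_1
    print(dC1, delta_gamma_exact(dC1, 0.0), delta_gamma_linear(dC1))
```

The two expressions agree for small $\Delta C_1$ and start to deviate for imaginary parts of order $0.5$, illustrating why the full rescaling factor is used for the figures.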
![Possible deviations on the CKM phase $\gamma$ due to NP at tree level in $C_1(M_W)$ assuming $r_A=0.4$ and $r'_{A}=1$. The black dot corresponds to the SM result.[]{data-label="fig:CKM_gamma"}](delta_CKM_gamma_C1_v2.pdf){height="7cm"}
The current uncertainties in our knowledge of the value of $C_1$ seem to indicate an uncertainty in the extraction of the CKM angle $\gamma$ of considerably more than $10^{\circ}$, thus much higher than the current experimental uncertainty of around five degrees [@Kenzie:2018oob; @Amhis:2019ckw]. Interestingly, direct measurements typically give larger values than the ones obtained by CKM fits [@Charles:2004jd; @Bona:2006ah] or extracted from B-mixing [@King:2019rvk]. Even more interestingly, future measurements will dramatically improve the precision of $\gamma$ to the one-degree level [@Bediaga:2018lhg], and our BSM approach would offer a possibility of explaining large deviations in the extraction of the CKM angle $\gamma$. We would like, however, to add some words of caution: for a quantitatively reliable relation between the deviations of $C_1$ and the shifts in the CKM angle $\gamma$, the non-perturbative parameters $r_A$ and $r_{A'}$ have to be known more precisely. The values proposed in Eq. (\[eq:initial\_values\_matrix\_elements\]) correspond to an educated ansatz. We can explore the effects of modifying these values on CKM-$\gamma$. For instance, consider an alternative scenario where $r_{A}$ is twice the value presented in Eq. (\[eq:initial\_values\_matrix\_elements\]), while $r'_A$ remains fixed. This is equivalent to assigning an uncertainty of $100\%$ to $r_A$ and taking the upper limit. The results for this new scenario are presented in Fig. \[fig:CKM\_gammaalt\], where the shifts $\delta \gamma_{CKM}$ have been halved with respect to those found in Fig. \[fig:CKM\_gamma\]; however, the absolute numerical values of about $\pm 5^{\circ}$ still represent huge effects on the CKM angle $\gamma$ itself.
![Possible deviations on the CKM phase $\gamma$ due to NP at tree level in $C_1(M_W)$ assuming $r_A=0.8$ and $r'_{A}=1$. The black dot corresponds to the SM result.[]{data-label="fig:CKM_gammaalt"}](delta_CKM_gamma_C1_v2_play_rA_up.pdf){height="7cm"}
Here clearly more theoretical work leading to a more precise understanding of $r_A$ and $r_{A'}$ is highly desirable.
Future prospects {#sec:future}
================
In this section we will present projections for observables that are particularly promising for further shrinking the allowed regions of NP contributions to non-leptonic tree-level decays. We have already studied the impact of BSM effects in non-leptonic tree-level decays on the observables $\Delta \Gamma_d$ and the CKM angle $\gamma$ in detail. More precise experimental data on $\Delta \Gamma_d$ will immediately lead to stronger bounds on the $\Delta B = 1$ Wilson coefficients; they could also exclude the possibility of solving the D0 dimuon asymmetry with an enlarged value of $\Delta \Gamma_d$. Alternatively, if the measured value of $\Delta \Gamma_d$ turns out not to be SM-like, we could get an intriguing hint of BSM physics. In order to make use of the extreme sensitivity of the CKM angle $\gamma$ to the imaginary part of $C_1$, more theory work is required to make this relation quantitatively reliable. If this is available, then already the current experimental uncertainty on $\gamma$ will exclude a large part of the allowed region on $\Delta C_1$ - or it will indicate the existence of NP effects. Below we will show projections for improved experimental values on the lifetime ratio $\tau_{B_s}/\tau_{B_d}$ and the semi-leptonic CP asymmetries, as well as comment on the consequences of our BSM approach for the recently observed flavour anomalies.
$\tau_{B_s}/\tau_{B_d}$
------------------------
As already explained, the lifetime ratio $\tau_{B_s}/\tau_{B_d}$ can pose very strong constraints on the Wilson coefficients $C_1$ and $C_2$, if we e.g. assume that BSM effects are only acting in the $b \to c \bar{c} s$ channel.
![Future scenarios concerning the behaviour of $\tau_{B_s}/\tau_{B_d}$. In the left panel the central experimental value of the lifetime ratio is assumed to remain unchanged in the future whereas the uncertainties will be reduced. In the right panel, the theoretical and experimental values for the lifetime ratio are supposed to become equal.[]{data-label="fig:Life_time_future"}](Life_time_uncertainty_reduced_central_values_unchanged.pdf "fig:"){height="5.0cm"} ![Future scenarios concerning the behaviour of $\tau_{B_s}/\tau_{B_d}$. In the left panel the central experimental value of the lifetime ratio is assumed to remain unchanged in the future whereas the uncertainties will be reduced. In the right panel, the theoretical and experimental values for the lifetime ratio are supposed to become equal.[]{data-label="fig:Life_time_future"}](Life_time_Theory_Equal_to_EXperiment.pdf "fig:"){height="5.0cm"}
In Fig. \[fig:Life\_time\_future\] we show future projections, assuming the errors will go down to two per mille or even one per mille. On the l.h.s. of Fig. \[fig:Life\_time\_future\] we assume that the current central experimental value will persist - in this case a tension between the SM value and the experimental measurement will emerge. On the r.h.s. of Fig. \[fig:Life\_time\_future\] we assume that the future experimental value perfectly agrees with the SM prediction. In this case, ${\rm Im}\, \Delta C_1$ will be considerably constrained; this is a very interesting possibility since, according to Eq. (\[eq:deltaCKMgamma\]), ${\rm Im}\, \Delta C_1$ is precisely the driving force for large deviations in the CKM angle $\gamma$.
Semi-leptonic CP asymmetries
----------------------------
The experimental uncertainties on the semi-leptonic CP asymmetries are still much larger than the tiny SM values of these quantities. Nevertheless, already at this stage the $a_{sl}^q$ provide important bounds on possible BSM effects in the Wilson coefficients. The experimental precision on the semi-leptonic CP asymmetries will improve considerably in the near future; see e.g. Table 1 of [@Cerri:2018ypt], from where we take: $$\begin{aligned}
\delta \left( a_{sl}^s \right) & = & 1 \cdot 10^{-3} \hspace{1cm} \mbox{LHCb 2025}
\\
\delta \left( a_{sl}^s \right) & = & 3 \cdot 10^{-4} \hspace{1cm} \mbox{Upgrade II}
\end{aligned}$$ We show the dramatic impact of these future projections on the BSM bounds on the Wilson coefficients in Fig. \[fig:asls\_future\].
![Future scenarios for the precision in the observable $a_{sl}^s$ and resulting constraints on $\Delta C_1$ and $\Delta C_2$. The current uncertainty is expected to be reduced down to 1 per mille and later even to 0.3 per mille.[]{data-label="fig:asls_future"}](asls_css_evolution_dC1.pdf "fig:"){height="5cm"} ![Future scenarios for the precision in the observable $a_{sl}^s$ and resulting constraints on $\Delta C_1$ and $\Delta C_2$. The current uncertainty is expected to be reduced down to 1 per mille and later even to 0.3 per mille.[]{data-label="fig:asls_future"}](asls_css_evolution_dC2.pdf "fig:"){height="5cm"}
Rare decays
-----------
As discussed in [@Jager:2017gal; @Jager:2019bgk] NP effects in the $b\rightarrow c\bar{c} s$ transitions can induce shifts in the Wilson coefficient of the operator $$\begin{aligned}
\hat{Q}_{9 V}=\frac{\alpha}{4\pi}(\bar{\hat{s}}_L \gamma_{\mu} \hat{b}_L)
(\bar{\hat{\ell}} \gamma^{\mu} \hat{\ell}), \end{aligned}$$ leading to $$\begin{aligned}
\Delta C^{\rm eff}_9\Bigl |_{\mu=m_b} =\Bigl[8.48~\Delta C_1 + 1.96~ \Delta C_2 \Bigl]\Bigl |_{\mu=M_W}. \end{aligned}$$ This result offers an interesting link with the anomalous deviations in observables associated with the decay $B\rightarrow K^{(*)}\mu^+\mu^-$, where model-independent explanations with new physics only in $C_9$ require $\Delta C^{\rm eff}_9\Bigl |_{\mu=m_b} =-\mathcal{O}(1)$. In order to account for NP phases we use the results presented in [@Alok:2017jgr], where $\Delta C_9$ is allowed to take complex values, leading to the constraints shown in Fig. \[Fig:dC9\]. Here both $C_1$ and $C_2$ get a shift towards negative values.
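As a simple numerical illustration of the relation above (and nothing more), the following lines map a given shift in the tree-level coefficients onto $\Delta C^{\rm eff}_9$ at $\mu=m_b$; the example input $\Delta C_1=-0.12$ is a hypothetical value chosen to land in the $\Delta C^{\rm eff}_9\sim-\mathcal{O}(1)$ ballpark.

```python
# Sketch: tree-level shift mapped onto C9^eff(m_b) using the coefficients
# quoted in the text; the input values are illustrative only.
def delta_C9_eff(dC1, dC2):
    return 8.48 * dC1 + 1.96 * dC2

print(delta_C9_eff(-0.12, 0.0))   # ~ -1.0
```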
![Regions for NP, at $90\%$ C. L., in the ${\rm Re}~\Delta C^{s, cc}_1$ - ${\rm Im}~\Delta C^{s, cc}_1$ (left) and ${\rm Re}~\Delta C^{s, cc}_2$ - ${\rm Im}~\Delta C^{s, cc}_2$ (right) planes allowed by the B physics anomalies related with the decay $B\rightarrow K^{(*)}\mu^+\mu^-$. The black dot corresponds to the SM result.[]{data-label="Fig:dC9"}](dC9_C1_full_float.pdf "fig:"){height="5cm"} ![Regions for NP, at $90\%$ C. L., in the ${\rm Re}~\Delta C^{s, cc}_1$ - ${\rm Im}~\Delta C^{s, cc}_1$ (left) and ${\rm Re}~\Delta C^{s, cc}_2$ - ${\rm Im}~\Delta C^{s, cc}_2$ (right) planes allowed by the B physics anomalies related with the decay $B\rightarrow K^{(*)}\mu^+\mu^-$. The black dot corresponds to the SM result.[]{data-label="Fig:dC9"}](dC9_C2_full_float.pdf "fig:"){height="5cm"}
BSM effects in non-leptonic tree-level decays can in principle explain the deviations seen in lepton-flavour universal observables, like the branching ratios or $P_5'$; they cannot, however, explain the anomalous values of lepton-flavour universality violating observables like $R_K$. Future measurements will show whether the bounds obtained in Fig. \[Fig:dC9\] should be included in our full fit.
Conclusions and outlook {#sec:conclusion}
=======================
In this work we have questioned the well-accepted assumption of having no NP in tree-level decays; in particular, we explored possible deviations with respect to the SM values of the dimension-six current-current operators $\hat{Q}_1$ (colour suppressed) and $\hat{Q}_2$ (colour allowed) associated with the quark level transitions $b\rightarrow q \bar{q}' s$ and $b\rightarrow q \bar{q}' d$ ($q, q'=u,c$). We evaluated the size of the NP effects by modifying the corresponding Wilson coefficients according to $C_1 \rightarrow C_1 + \Delta C_1$, $C_2 \rightarrow C_2 +\Delta C_2$, for $\Delta C_{1,2}\in \mathbb{C}$; we found that sizeable deviations in $\Delta C_{1,2}$ are not ruled out by the recent experimental data.\
Our analysis was based on a $\chi^2$-fit where we included different B-physics observables involving the decay processes: $\bar{B}^0_d\rightarrow D^{*}\pi$, $\bar{B}^0_d\rightarrow \pi\pi$, $\bar{B}^0_d\rightarrow \pi \rho$, $\bar{B}^0_d\rightarrow \rho\rho$, $\bar{B}\rightarrow X_s \gamma$, $\bar{B}_s\rightarrow J/\psi \phi$ and $\bar{B}\rightarrow X_d \gamma$. We also considered neutral B mixing observables: the semi-leptonic asymmetries $a^s_{sl}$ and $a^d_{sl}$ as well as the decay width difference $\Delta \Gamma_s$ of $B^0_s$ oscillations and the lifetime ratio of $B_s$ and $B_d$ mesons. Finally we also studied the CKM angles $\beta$, $\beta_s$ and $\gamma$.\
For the amplitudes of the hadronic transitions $\bar{B}_d^0\rightarrow D^{*}\pi$, $\bar{B}^0_d\rightarrow \pi\pi$, $\bar{B}^0_d\rightarrow \pi\rho$ and $\bar{B}^0_d\rightarrow \rho\rho$ and $\bar{B}_s\rightarrow J/\psi \phi$ we used the formulas calculated within the QCD-factorization framework. We have identified a high sensitivity of the bounds on $\Delta C_{1,2}$ to the power corrections arising in the annihilation topologies and, in some cases, in the hard-spectator scattering as well. It is also important to mention that the uncertainty in the parameter $\lambda_B$, which describes the inverse moment of the light-cone distribution of the neutral B mesons, is of special importance in defining the size of $\Delta C_1$ and $\Delta C_2$. For the mixing observables and the lifetime ratios we have benefited from the enormous progress achieved in the precision of the hadronic input parameters; thus we have also updated the corresponding SM predictions: $$\begin{aligned}
\Delta M_s = (18.77 \pm 0.86 ) \, \mbox{ps}^{-1},
&&
\Delta M_d = (0.543 \pm 0.029) \, \mbox{ps}^{-1},
\nonumber
\\
\Delta \Gamma_s = (9.1 \pm 1.3 ) \cdot 10^{-2} \, \mbox{ps}^{-1},
&&
\Delta \Gamma_d = (2.6 \pm 0.4 ) \cdot 10^{-3} \, \mbox{ps}^{-1},
\nonumber
\\
a_{sl}^s = (2.06 \pm 0.18) \cdot 10^{-5},
&&
a_{sl}^d = (-4.73 \pm 0.42) \cdot 10^{-4}. \end{aligned}$$ We have made a channel-by-channel study by combining different constraints for the decay chains $b\rightarrow u\bar{u}d$, $b\rightarrow c\bar{u}d$, $b\rightarrow c\bar{c}s$ and $b\rightarrow c\bar{c}d$; we also performed a universal $\chi^2$-fit where we have included observables mediated by $b\rightarrow q \bar{q}' s$ decays as well. The universal $\chi^2$-fit provides the strongest bounds on the NP deviations; we found that $$\begin{aligned}
|\hbox{Re} (\Delta C_1)|\leq \mathcal{O}(0.4),
&&
|\hbox{Re} (\Delta C_2)|\leq \mathcal{O}(0.1),
\\
|\hbox{Im} (\Delta C_1)|\leq \mathcal{O}(0.5),
&&
|\hbox{Im} (\Delta C_2)|\leq \mathcal{O}(0.04),\end{aligned}$$ whereas for the independent channel analyses the corresponding deviations can be much larger.\
We have analysed the implications of having NP in tree-level b-quark transitions on the decay width difference of neutral $B^0_d$ mixing, $\Delta \Gamma_d$ - note that the most recent experimental average is still consistent with zero. We found that enhancements in $\Delta \Gamma_d$ with respect to its SM value of up to a factor of five are consistent with the current experimental data. Such a huge enhancement could solve the tension between experiment and theory in the D0 measurement of the dimuon asymmetry. Thus we strongly encourage further experimental efforts to measure $\Delta \Gamma_d$, see also [@Gershon:2010wx].\
Next we evaluated the impact of our allowed NP regions for $\Delta C_1$ and $\Delta C_2$ on the determination of the CKM phase $\gamma$, where the absence of penguins leads in principle to an exceptional theoretical cleanness. We found that $\gamma$ is highly sensitive to the imaginary components of $\Delta C_1$ and $\Delta C_2$ and our BSM effects could lead to deviations in this quantity by up to $10^{\circ}$. It has to be stressed, however, that for quantitative statements about the size of the shift $\delta \gamma$ the ratios of the matrix elements $\langle \bar{D}^0 K^-| Q_1^{\bar{u}cs} | B^- \rangle/
\langle \bar{D}^0 K^-| Q_2^{\bar{u}cs} | B^- \rangle$ and $\langle D^0 K^-| Q_1^{\bar{c}us} | B^- \rangle/
\langle D^0 K^-| Q_2^{\bar{c}us} | B^- \rangle$ have to be determined in future with more reliable methods. So far only naive estimates are available for these ratios.\
Finally we studied future projections for observables that will shrink the allowed region for NP effects - or identify a BSM region - in non-leptonic tree-level decays. Here $\tau(B_s) / \tau(B_d)$ and the semi-leptonic CP asymmetries seem to be very promising.
Acknowledgements {#acknowledgements .unnumbered}
================
We would like to thank Martin Wiebusch for collaborating at the early stages of the project. This work benefited from physics discussions with Christoph Bobeth, Tobias Huber, Joachim Brod, Michael Spannowsky, Marco Gersabeck, Thomas Rauh, Marcel Merk, Niels Tuning, Patrick Koppenburg and Laurent Dufour. We acknowledge Luiz Vale for his help with CKM-Fitter Live. We thank Bert Schellekens for his help and support in accessing the Nikhef computing cluster “Stoomboot” during the development of this project and Patrick Koppenburg for supporting the access in the final stages of the project as well. GTX acknowledges support from the NWO program 156, “Higgs as Probe and Portal”, and from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under grant 396021762 - TRR 257. The work of AL was supported by STFC via the IPPP grant.
Numerical Inputs {#Sec:Inputs}
================
In this section we collect the numerical values of the input parameters used in this work.
Using the PDG value for the strong coupling $$\alpha_s(M_Z) = 0.1181 \pm 0.0011$$ we derive with $M_Z = 91.1876 \pm 0.0021$ GeV at NLO-QCD $$\Lambda_{QCD}^{(5)} = 228 \pm 14 \; \mbox{MeV}\; ,$$ while the PDG gives $$\Lambda_{QCD}^{(5)} = 210 \pm 14 \; \mbox{MeV}\; ,$$ using 4-loop running and 3-loop matching. We decided to use the latter value; the effects on $\alpha_s(m_b)$ are very small.\
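For illustration, the NLO number quoted above can be reproduced with the standard two-loop $\Lambda$-parameter formula for $n_f=5$; the short sketch below inverts it numerically. Conventions (loop order, matching) may differ from the code actually used for the analysis, so this serves only as a consistency check.

```python
# Sketch: extract Lambda_QCD^(5) from alpha_s(M_Z) with the two-loop (NLO)
# Lambda-parameter formula for nf = 5 and a simple bisection.
import math

def alpha_s_nlo(mu, Lambda, nf=5):
    b0 = (33.0 - 2.0 * nf) / (12.0 * math.pi)
    b1 = (153.0 - 19.0 * nf) / (24.0 * math.pi**2)
    L = math.log(mu**2 / Lambda**2)
    return (1.0 / (b0 * L)) * (1.0 - b1 * math.log(L) / (b0**2 * L))

def lambda_from_alphas(alphas_mz, mz=91.1876, lo=0.05, hi=0.5):
    # bisection in Lambda (GeV); alpha_s grows monotonically with Lambda
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if alpha_s_nlo(mz, mid) < alphas_mz:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(lambda_from_alphas(0.1181))   # ~ 0.228 GeV
```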
For quark masses we use the PDG values in the MSbar definition, except for the b-quark, where we use a more conservative determination. The PDG value reads for comparison $$\begin{aligned}
m_b(m_b) & = & 4.18^{+0.04}_{-0.03} \; \mbox{GeV}\end{aligned}$$ The PDG value for $m_c(m_c)$ corresponds to $m_c(m_b)= 0.947514~\mbox{GeV}$, which will be used [^5] for the analysis of the mixing quantities $\Delta \Gamma_q$ and $a_{sl}^q$.\
For the top quark pole mass we use the result obtained from cross-section measurements given in [@Tanabashi:2018oca] $$\begin{aligned}
m_t^{\rm Pole} & = & 173.1 \pm 0.9 \; \mbox{GeV}
\label{eq:mtpole}\end{aligned}$$ which is an average including measurements from D0, ATLAS and CMS.\
Entering Eq. (\[eq:mtpole\]) into version 3 of the software [RunDec]{} [@Herren:2017osy] we obtain $$\begin{aligned}
\bar{m}_t (\bar{m}_t) & = & 163.3 \pm 0.9~\mbox{GeV}, \end{aligned}$$ and $$\begin{aligned}
m_t (M_W) & = & 172.6 \pm 1.0~\mbox{GeV}.\end{aligned}$$ We use the averages of the $B$ mixing bag parameters obtained in [@DiLuzio:2019jyq] based on the HQET sum rule calculations in [@King:2019lal; @Kirk:2017juj; @Grozin:2016uqy; @Grozin:2008nu] and the corresponding lattice studies in [@Carrasco:2013zta; @Aoki:2014nga; @Bazavov:2016nty; @Boyle:2018knm; @Dowdall:2019bea]: $$\begin{array}{ll}
B_1^s(\mu_b) = 0.849\pm0.023\,, &\hspace{1cm}B_1^d(\mu_b) = 0.835\pm0.028\,,\nonumber\\
B_2^s(\mu_b) = 0.835\pm0.032\,, &\hspace{1cm}B_2^d(\mu_b) = 0.791\pm0.034\,,\nonumber\\
B_3^s(\mu_b) = 0.854\pm0.051\,, &\hspace{1cm}B_3^d(\mu_b) = 0.775\pm0.054\,,\nonumber\\
B_4^s(\mu_b) = 1.031\pm0.035\,, &\hspace{1cm}B_4^d(\mu_b) = 1.063\pm0.041\,,\nonumber\\
B_5^s(\mu_b) = 0.959\pm0.031\,, &\hspace{1cm}B_5^d(\mu_b) = 0.994\pm0.037\,,
\end{array}
\label{eq:AberageBagParameters}$$ at the scale $\mu_b=\bar{m}_b(\bar{m}_b)$. For the first time we do not have to rely on the vacuum insertion approximation for the dimension-seven operators; instead we can now use the values obtained in [@Davies:2019gnp; @Dowdall:2019bea]. $$\begin{aligned}
B^q_{R_0} & = & 0.32 \pm 0.13 \, ,
\nonumber
\\
B^q_{R_1} & = & 1.031 \pm 0.035 \, ,
\nonumber
\\
B^q_{\tilde{R}_1} & = & 0.959 \pm 0.031 \, ,
\nonumber
\\
B^q_{R_2} & = & 0.27 \pm 0.10 \, ,
\nonumber
\\
B^q_{R_3} & = & 0.33 \pm 0.11 \, .\end{aligned}$$ Note that our notation for the dimension-seven bag parameters $B^q_{R_2}$ and $B^q_{R_3}$ corresponds to the primed bag parameters of [@Davies:2019gnp]. For the remaining two operators we use the equations of motion [@Beneke:1996gn] $$\begin{aligned}
B^q_{\tilde{R_2}} & = & -B^q_{R_2}
\nonumber
\\
B^q_{\tilde{R_3}} & = & \frac75 B^q_{R_3} - \frac25 B^q_{R_2} \, .\end{aligned}$$ For the determination of the uncertainties of the ratios of bag parameters, we first symmetrized the errors of the individual bag parameters. Based on the updated value for the bag parameter $B^q_1$ given above and the lattice average ($N_f = 2+1+1$) for $f_{B_q}$ presented in [@Aoki:2019cca] - based on [@Christ:2014uea; @Bussone:2016iua; @Hughes:2017spc; @Bazavov:2017lyh] - $$\begin{aligned}
f_{B_s}&=& (230.3 \pm 1.3)~\hbox{MeV},
\nonumber
\\
f_{B_d}&=&(190.0\pm 1.3)~\hbox{MeV},\end{aligned}$$ we obtain after symmetrizing the uncertainties $$\begin{aligned}
f_{B_s}^2 B^s_1&=&(0.0452 \pm 0.0014)~\hbox{GeV}^2 ,
\nonumber
\\
f_{B_d}^2 B^d_1&=&(0.0305 \pm 0.0011)~\hbox{GeV}^2.\end{aligned}$$ Additionally, for the determination of the contributions of the double insertion of the $\Delta B=1$ effective Hamiltonians to $M^d_{12}$ we require the following Bag parameters at the scale $\mu_c=1.5~\hbox{GeV}$ (see [@Kirk:2017juj]) $$\begin{aligned}
B^d_1(1.5~\hbox{GeV})=0.910^{+0.023}_{-0.031},\quad\quad B^d_2(1.5~\hbox{GeV})=0.923^{+0.029}_{-0.035}.\end{aligned}$$ To calculate the CKM-elements in Eq. (\[eq:CKMElements\]) we require the renormalization-group-invariant bag parameter $\hat{B}^s_1$, which in the $\overline{\text{MS}}$-NDR scheme is related to $B^s_1$ via (see e.g. [@Buchalla:1995vs])
$$\begin{aligned}
\hat{B}^s_1&=&\alpha_s(\mu)^{-\gamma_0/(2\beta_0)}\Bigl[1 + \frac{\alpha_s(\mu)}{4\pi}\Bigl(\frac{\beta_1\gamma_0-\beta_0\gamma_1}{2\beta^2_0}\Bigl)\Bigl]B^s_1
\\
&=&\alpha_s(\mu)^{-\frac{6}{23}}\Bigl[1 + \frac{\alpha_s(\mu)}{4\pi} \frac{5165}{3174} \Bigl]B^s_1 = 1.52734 \, B^s_1,\end{aligned}$$
where we have used $$\begin{aligned}
C_F & = & \frac{N_c^2 - 1}{2 N_c},
\\
\beta_0 & = & \frac{11 N_c - 2 n_f}{3}, \hspace{0.35cm}
\beta_1 = \frac{34}{3} N_c^2 - \frac{10}{3} N_c n_f - 2 C_F n_f,
\\
\gamma_0& = & 6 \frac{N_c-1}{N_c} ,\hspace{1.cm}
\gamma_1= \frac{N_c-1}{2 N_c} \left(-21 +\frac{57}{N_c} -\frac{19}{3} N_c +\frac{4}{3} n_f \right) . \end{aligned}$$ Finally, we take the lifetime bag parameters from the recent HQET sum rule evaluation in [@Kirk:2017juj] - here no corresponding up-to-date lattice evaluation exists: $$\begin{aligned}
B_1(\mu=m_b)=1.028^{+0.064}_{-0.056},&&
B_2(\mu=m_b)=0.988^{+0.087}_{-0.079},\nonumber\\
~\epsilon_1(\mu=m_b)=-0.107^{+0.028}_{-0.029},&&
~\epsilon_2(\mu=m_b)=-0.033^{+0.021}_{-0.021}.\end{aligned}$$ Using CKMfitter-Live [@Charles:2004jd] online, we perform a fit to the CKM elements $|V_{us}|$, $|V_{ub}|$, $|V_{cb}|$ and the CKM angle $\gamma$, excluding in all cases the direct determination of the CKM angle $\gamma$ itself. Our inputs coincide mostly with the CKMfitter Summer 2018 analysis; however, in order to be consistent with our main study we modify the following entries: $\bar{m}_t(\bar{m}_t)$, $\bar{m}_c(\bar{m}_c)$, $\hat{B}^s_1$ and the ratios $$\begin{aligned}
\frac{\hat{B}^s_1}{\hat{B}^d_1}=0.987\pm 0.008 \hbox{\cite{King:2019lal}},&\quad&\frac{f_{B_s}}{f_{B_d}}=1.212\pm 0.011.\end{aligned}$$ Our results are $$\begin{aligned}
\label{eq:CKMElements}
|V_{us}|=0.224746^{+0.000253}_{-0.000058}, && |V_{ub}|=0.003741^{+0.000082}_{-0.000061}\nonumber \\
|V_{cb}|=0.04243^{+0.00036}_{-0.00088}, && \gamma=(65.17^{+0.26}_{-3.05})^{\circ},\end{aligned}$$ from which we obtain $$\begin{aligned}
\frac{|V_{ub}|}{|V_{cb}|}=0.08833\pm 0.00218.\end{aligned}$$ The full set of CKM matrix elements is then calculated under the assumption of the unitarity of the $3\times 3$ CKM matrix.
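As an illustration of this last step, the sketch below reconstructs the remaining CKM elements from the four fitted inputs using the approximate Wolfenstein parameterization (valid up to corrections of order $\lambda^4$). The exact parameterization employed with CKMfitter differs at higher orders, so the numbers serve only as an order-of-magnitude cross-check.

```python
# Sketch: approximate CKM matrix from |Vus|, |Vub|, |Vcb| and gamma,
# assuming 3x3 unitarity via the leading-order Wolfenstein parameterization.
import math

Vus, Vub, Vcb, gamma = 0.224746, 0.003741, 0.04243, math.radians(65.17)

lam = Vus
A = Vcb / lam**2
# |Vub| = A lam^3 sqrt(rho^2 + eta^2) and gamma ~ arg(rho + i eta) at this order
r = Vub / (A * lam**3)
rho, eta = r * math.cos(gamma), r * math.sin(gamma)

V = [[1 - lam**2 / 2, lam, A * lam**3 * (rho - 1j * eta)],
     [-lam, 1 - lam**2 / 2, A * lam**2],
     [A * lam**3 * (1 - rho - 1j * eta), -A * lam**2, 1.0]]

for row in V:
    print(["%.4f" % abs(v) for v in row])
```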
QCD-Factorization formulas {#Sec:QCDFact}
==========================
Generic parameters
------------------
$$\begin{aligned}
f^{\perp}_{\rho}(\mu)=f^{\perp}_{\rho}(\mu_0)\Bigl(\frac{\alpha_s(\mu)}{\alpha_s(\mu_0)}\Bigl)^{\frac{C_F}{\beta_0}},
&& r^{\pi}_{\chi}(\mu)=\frac{2m^2_{\pi}}{m_b(\mu)2m_q(\mu)},\nonumber\\
r^{\rho}_{\chi}(\mu)=\frac{2m_{\rho}}{m_b(\mu)}\frac{f^{\perp}_{\rho}(\mu)}{f_{\rho}},
&&r^{D^*}_{\perp}(\mu)=\frac{2m_{D^*}}{m_b(\mu)}\frac{f^{\perp}_{D^*}(\mu)}{f_{D^*}},\nonumber\\
r^{K}_{\chi}(\mu)=\frac{2m^2_K}{m_b(\mu)\Bigl(m_q(\mu) + m_s(\mu)\Bigl)},
&&A_{\pi\pi}=i\frac{G_F}{\sqrt{2}}m^2_BF^{B\rightarrow \pi}_{0}(0)f_{\pi},\nonumber\\
A_{\pi\rho}=-i\frac{G_F}{\sqrt{2}}m^2_{B}F^{B\rightarrow \pi}_{0}(0)f_{\rho},
&&A_{\rho\pi}=-i\frac{G_{F}}{\sqrt{2}}m^2_BA_{0}^{B\rightarrow \rho}(0)f_{\pi},\nonumber\\
A_{\rho\rho}=i\frac{G_F}{\sqrt{2}}m^2_BA^{B\rightarrow \rho}_{0}(0)f_{\rho},
&&B_{\pi\pi}=i\frac{G_F}{\sqrt{2}}f_Bf_{\pi}f_{\pi},\nonumber\\
B_{\pi\rho}=B_{\rho\pi}=-i\frac{G_F}{\sqrt{2}}f_Bf_{\pi}f_{\rho},
&&B_{\rho\rho}=i\frac{G_F}{\sqrt{2}}f_{B}f_{\rho}f_{\rho}\nonumber,\\
\tilde{\alpha}_4^{p,\pi\pi/\pi\rho}=\alpha_4^{p,\pi\pi/\pi\rho} + r^{\pi/\rho}_{\chi}\alpha^{p, \pi\pi/\pi\rho}_{6},
&&\tilde{\alpha}_4^{p,\rho\pi}=\alpha_4^{p,\rho\pi} - r^{\pi}_{\chi}\alpha^{p, \rho\pi}_{6},\nonumber\\
\tilde{\alpha}_{4,EW}^{\pi\pi/\pi\rho}=\alpha^{p, \pi\pi /\pi\rho}_{10} + r^{\pi/\rho}_{\chi}\alpha^{p, \pi\pi/\pi\rho}_{8},
&&\tilde{\alpha}_{4,EW}^{\rho\pi}=\alpha^{p, \rho\pi}_{10}-r^{\pi}_{\chi}\alpha^{p, \rho\pi}_{8}.
\label{eq:GenPar}\end{aligned}$$
Following [@Beneke:2001ev; @Beneke:2003zv] we take
$$\begin{aligned}
m_q(\mu)=\frac{m^2_{\pi}}{(2 m^2_K - m^2_{\pi})} m_s(\mu),\end{aligned}$$
which leads to the condition $r^{\pi}_{\chi}(\mu)=r^{K}_{\chi}(\mu)$.
### Vertices for the $B\rightarrow \pi\pi, \rho\pi, \pi\rho, \rho \rho$ decays
$$\begin{aligned}
V^{\pi}_{1, 2, 4, 10}&=12 \hbox{ln} \frac{m_b}{\mu}-18 +\Bigl[-\frac{1}{2} - 3 i \pi + \Bigl(\frac{11}{2} - 3i\pi \Bigl)a^\pi_1
-\frac{21}{20}a^\pi_2\Bigl],\nonumber\\
V^{\pi}_{6,8}&=-6,\nonumber\\
V^{\rho}_{1,2,3,9}&= V^{\rho}= 12 \hbox{ln} \frac{m_b}{\mu}-18 +\Bigl[-\frac{1}{2} - 3 i \pi + \Bigl(\frac{11}{2} - 3i\pi \Bigl)a^{\rho}_1
-\frac{21}{20}a^{\rho}_2\Bigl],\nonumber\\
V^{\rho}_{4} &=
\begin{cases}
V^{\rho} &\text{ for } \bar{B}^0\rightarrow \pi^{+}\rho^{-}, \\
\\
V^{\rho}-\frac{C_{5}}{C_{3}}r^{\rho}_{\chi}V^{\rho}_{\perp} &\text{ for } B\rightarrow \rho\rho, \\
\end{cases}\nonumber\\
V^{\rho}_{\perp}&= 9 - 6 i\pi +\Bigl(\frac{19}{6} - i\pi\Bigl) a^{\rho}_{2,\perp}, \nonumber\\
V^{\rho}_{7}&=-12 \hbox{ln} \frac{m_b}{\mu}+6 -\Bigl[-\frac{1}{2} - 3 i \pi - \Bigl(\frac{11}{2} - 3 i \pi\Bigl) a^{\rho}_1 - \frac{21}{20} a^{\rho}_{2} \Bigl], \nonumber\\
V^{\rho}_{6,8}&=9-6i\pi + \Bigl(\frac{19}{6}-i\pi\Bigl)a^{\rho}_{2,\perp},\nonumber\\
V^{\rho}_{10} &=
\begin{cases}
V^{\rho} &\text{ for } \bar{B}^0\rightarrow \pi^{+}\rho^{-}, \\
\\
V^{\rho}-\frac{C_7}{C_9}r^{\rho}_{\chi}V^{\rho}_{\perp} &\text{ for } B\rightarrow \rho\rho. \\
\end{cases}\nonumber\\
\label{eq:Vertices}\end{aligned}$$
### Vertices for the $B\rightarrow J/\psi \phi$ decay. {#eq:vertexJPsiPhi}
$$\begin{aligned}
V^i_{J/\psi \phi}&=
\begin{cases}
-18 -12 \ln\frac{\mu}{m_b}+ f^h_{I}& \text{ for $i=1,3,9$}\\
\\
-6 -12 \ln\frac{\mu}{m_b}+ f^h_{I}& \text{ for $i=5, 7$}\\
\end{cases}\nonumber\\\end{aligned}$$
$$\begin{aligned}
f^h_I&=&
\begin{cases}
f_I + g_I\cdot (1-\tilde{z})\frac{A^{BK*}_0}{A^{BK*}_3}& \text{ for $h=0$}\\
\\
f_I& \text{ for $h=\pm$}\\
\end{cases}\nonumber\\\end{aligned}$$
$$\begin{aligned}
f_{I}&=&\int^{1}_{0}d\xi \phi_{||}^{J/\psi}(\xi)\Bigl\{\frac{2 \tilde{z} \xi}{1-\tilde{z}(1-\xi)}+ (3-2\xi)\frac{\hbox{ln}\xi}{1-\xi} \nonumber\\
&& + \Bigl( -\frac{3}{1-\tilde{z}\xi} + \frac{1}{1-\tilde{z}(1-\xi)} - \frac{2 \tilde{z} \xi}{[1-\tilde{z}(1-\xi)]^2}\Bigl)\cdot
\tilde{z}\xi \ln[\tilde{z} \xi] \nonumber\\
&&+ \Bigl(3(1-\tilde{z}) + 2 \tilde{z} \xi + \frac{2 \tilde{z}^2 \xi^2}{1-\tilde{z}(1-\xi)}\Bigl)\cdot \frac{\ln(1-\tilde{z})-i \pi }{1-\tilde{z}(1-\xi)}\Bigl\},\nonumber\\
&&+\int^{1}_{0}d\xi \phi_{\perp}^{J/\psi}(\xi)\Bigl\{-4r\frac{\ln\xi}{1-\xi}+\frac{4\tilde{z} r\ln[\tilde{z}\xi]}{1-\tilde{z}(1-\xi)}\nonumber\\
&&-4\tilde{z}r\frac{\hbox{ln}(1-\tilde{z})-i\pi}{1-\tilde{z}(1-\xi)}\Bigl\}
\label{eq:VIh}\end{aligned}$$
$$\begin{aligned}
g_I&=&\int^1_0 \Phi^{J/\Psi}_{||}(\xi)\Bigl\{
\frac{-4\xi}{(1-\tilde{z})(1-\xi)}\ln\xi + \frac{\tilde{z}\xi}{(1-\tilde{z}(1-\xi))^2}\ln(1-\tilde{z})\nonumber\\
&&+\Biggl(\frac{1}{(1-\tilde{z}\xi)^2} - \frac{1}{(1-\tilde{z}(1-\xi))^2}
+ \frac{2(1+\tilde{z}-2\tilde{z}\xi)}{(1-\tilde{z})(1-\tilde{z}\xi)^2}\Biggl)\cdot\tilde{z}\xi
\ln[\tilde{z}\xi]\nonumber\\
&&-i\pi\frac{\tilde{z}\xi}{(1-\tilde{z}(1-\xi))^2}\Bigl\}
+\int^1_0 d\xi \Phi^{J/\Psi}_{\perp}(\xi)\Bigl\{\frac{4r}{(1-\tilde{z})(1-\xi)}\ln\xi\nonumber\\
&&-\frac{4r\tilde{z}}{(1-\tilde{z})(1-\tilde{z}\xi)}\ln[\tilde{z}\xi]\Bigl\}
\label{eq:VIh2}\end{aligned}$$
for
$$\begin{aligned}
\tilde{z}= \frac{m^2_{J/\Psi}}{m^2_B}, && r=2\cdot \Bigl(\frac{m_c}{m_{J/\Psi}}\Bigl)^2.\end{aligned}$$
### Penguin functions
To simplify the following equations we have denoted $M=\pi, \rho$ when the corresponding expressions apply to both $\pi$ and $\rho$ mesons. In addition we have used
$$\begin{aligned}
s_p=\Bigl(\frac{m_p}{m_b}\Bigl)^2,\end{aligned}$$
for $p=u,c$, although in practice we consider $s_u=0$.
$$\begin{aligned}
P^{p,M}_{1, 2, 3}&=P^{M}_{1, 2, 3} =0,\nonumber\\
P^{p,\pi}_{4}&=\frac{C_F \alpha_s}{4\pi N_c}\Bigl\{C_2\Bigl[\frac{4}{3}\hbox{ln}\frac{m_b}{\mu}+\frac{2}{3} - G_{\pi}(s_p) \Bigl] + C_{3}\Bigl[\frac{8}{3}\hbox{ln}\frac{m_b}{\mu} + \frac{4}{3} - G_{\pi}(0) -
G_{\pi}(1) \Bigl] \nonumber\\
&+ \Bigl(C_4 + C_6 \Bigl)\Bigl[\frac{4 n_f}{3}\hbox{ln}\frac{m_b}{\mu} - (n_f -2) G_{\pi}(0) -G_{\pi}(s_c) - G_{\pi}(1) \Bigl] \nonumber\\
&- 6C^{eff}_{8g}\Bigl( 1 +\alpha^{\pi}_{1} +\alpha^{\pi}_2 \Bigl)\Bigl\},\nonumber\\
P^{p,M}_{6}&= \frac{C_F \alpha_s}{4\pi N_c}\Bigl\{ C_2\Bigl[\frac{4}{3}\hbox{ln}\frac{m_b}{\mu}+\frac{2}{3} - \hat{G}_{M}(s_p) \Bigl] + C_{3}\Bigl[\frac{8}{3}\hbox{ln}\frac{m_b}{\mu} + \frac{4}{3} - \hat{G}_{M}(0) -
\hat{G}_{M}(1) \Bigl] \nonumber\\
&+ \Bigl(C_4 + C_6 \Bigl)\Bigl[\frac{4 n_f}{3}\hbox{ln}\frac{m_b}{\mu} - (n_f - 2) \hat{G}_{M}(0) -\hat{G}_{M}(s_c) - \hat{G}_{M}(1) \Bigl]
- 2C^{eff}_{8g}\Bigl\},\nonumber\\
P^{p,\pi}_{8}&= \frac{\alpha}{9\pi N_c}\Bigl\{\Bigl(N_c C_1 + C_2\Bigl) \Bigl[\frac{4}{3}\hbox{ln}\frac{m_b}{\mu} + \frac{2}{3}-
\hat{G}_{\pi}(s_p) \Bigl] - 3C^{eff}_{7} \Bigl\},\nonumber\\
P^{p,M}_{10}&= \frac{\alpha}{9\pi N_c}\Bigl\{\Bigl(N_c C_1 + C_2\Bigl)\Bigl[\frac{4}{3}\hbox{ln}\frac{m_b}{\mu} + \frac{2}{3}-
G_{M}(s_p)\Bigl] - 9C^{eff}_{7} \Bigl(1 + \alpha^{M}_1 + \alpha^{M}_2 \Bigl)\Bigl\},\nonumber\\
P^{p, \rho}_{4} &=
\begin{cases}
P'^{p,\rho}_{4} &\text{ for } \bar{B}^0\rightarrow \pi^{+}\rho^{-}, \\
\\
P'^{p,\rho}_{4} - r^{\rho}_{\chi}P''^{p,\rho}_{4} &\text{ for } B\rightarrow \rho\rho, \\
\end{cases}\nonumber\\
P'^{p,\rho}_{4}&=\frac{C_F \alpha_s}{4\pi N_c}\Bigl\{C_2\Bigl[\frac{4}{3}\hbox{ln}\frac{m_b}{\mu}+\frac{2}{3} - G_{\rho}(s_p) \Bigl] + C_{3}\Bigl[\frac{8}{3}\hbox{ln}\frac{m_b}{\mu} + \frac{4}{3} - G_{\rho}(0) -
G_{\rho}(1) \Bigl]\nonumber\\
&+ \Bigl(C_4 + C_6 \Bigl)\Bigl[\frac{4 n_f}{3}\hbox{ln}\frac{m_b}{\mu} - (n_f -2) G_{\rho}(0) -G_{\rho}(s_c) - G_{\rho}(1) \Bigl] \nonumber\\
&- 6C^{eff}_{8g}\Bigl( 1 +\alpha^{\rho}_{1} +\alpha^{\rho}_2 \Bigl)\Bigl\},\nonumber\\P''^{p,\rho}_{4}&=-\Bigl[C_2 \hat{G}_{\rho}(s_p) + C_{3}\Bigl(\hat{G}_{\rho}(0)+\hat{G}_{\rho}(1)\Bigl) \nonumber\\
&+ \Bigl(C_4 + C_6\Bigl)\Bigl(3\hat{G}_{\rho}(0) + \hat{G}_{\rho}(s_p) + \hat{G}_{\rho}(1)\Bigl) \Bigl],\nonumber\\
P_{7,9}^{u,\rho}&=\frac{\alpha}{9\pi}\Bigl\{\Bigl(N_c C_1 + C_2\Bigl) \Bigl[\frac{4}{3}\hbox{ln}\frac{m_b}{\mu} - \frac{10}{9} +\frac{4\pi^2}{3}\sum\limits_{r=\rho,\omega}\frac{f^2_r }{m^2_{\rho} - m^2_{r} + i m_r \Gamma_r }\nonumber\\
& -\frac{2\pi}{3}\frac{m^2_{\rho}}{ t_c} i + \frac{2}{ 3}\hbox{ln}\frac{m^2_{\rho}}{m^2_{b}} + \frac{2}{3}\frac{t_c - m^2_{\rho} }{t_c}\hbox{ln}\frac{t_c -m_{\rho}^2}{m^2_{\rho}}\Bigl] - 3C^{eff}_{7, \gamma}\Bigl\},\nonumber
\end{aligned}$$
$$\begin{aligned}
P_{7,9}^{c,\rho}&= \frac{\alpha}{9\pi}\Bigl\{ \Bigl(N_c C_1 + C_2\Bigl)\Bigl[\frac{4}{3}\hbox{ln}\frac{m_b}{\mu} +\frac{2}{3} + \frac{4}{3}\hbox{ln}\frac{m_c}{m_b}\Bigl]-3C^{eff}_{\gamma}\Bigl\},\nonumber\\
P^{p,\rho}_{8}&=-\frac{\alpha}{9\pi N_c}\Bigl( N_c C_1 + C_2\Bigl) \hat{G}_{\rho}(s_p), \nonumber\\
P^{p,\rho}_{10}&=\frac{\alpha}{9\pi N_c }\Bigl( P'^{p, \rho}_{10} + r^{\rho}_{\chi}P''^{p,\rho}_{10} \Bigl),\nonumber\\
P'^{p, \rho}_{10}&=\Bigl(N_c C_1 + C_2 \Bigl)\Bigl[\frac{4}{3}\hbox{ln}\frac{m_b}{\mu} + \frac{2}{3} -G_{\rho}(s_p)\Bigl]-9C^{eff}_{7,\gamma}\Bigl( 1 +\alpha^{\rho}_{1}
+ \alpha^{\rho}_{2}\Bigl),\nonumber\\
P''^{p,\rho }_{10}&=\Bigl( N_c C_1 + C_2\Bigl)\hat{G}_{\rho}(s_p).
\label{eq:Penguins}\end{aligned}$$
For the calculation of $P_{7,9}^{u,\rho}$ above the symbol $t_c$ denotes
$$\begin{aligned}
t_c=4\pi^2 (f^2_{\rho} + f^2_{\omega}).\end{aligned}$$
Extra functions required for the evaluation of the penguin contributions
$$\begin{aligned}
G_{M}(s_c)&=&\frac{5}{3}-\frac{2}{3}\hbox{ln}(s_c) +\frac{\alpha^{M}_1}{2}+\frac{\alpha_{2}^{M}}{5} + \frac{4}{3}\Bigl( 8 +9\alpha^{M}_{1}+9\alpha^{M}_{2}\Bigl)s_c\nonumber\\
&&+2\Bigl(8 + 63\alpha^{M}_1 + 214 \alpha^{M}_{2} \Bigl)s_c^2 - 24 \Bigl( 9\alpha^{M}_{1}+80\alpha^{M}_{2}\Bigl)s_c^3 \nonumber\\
&&+ 2880\alpha^{M}_{2}s_c^4-\frac{2}{3}\sqrt{1-4s_c}\Bigl(2\hbox{arctanh}\sqrt{1-4s_c} -i \pi \Bigl) \Bigl[1+2s_c \nonumber\\
&&+ 6\Bigl(4 + 27 \alpha^{M}_{1} + 78 \alpha^{M}_{2}\Bigl)s_c^2- 36 \Bigl( 9 \alpha_1^{M}+ 70 \alpha^{M}_{2}\Bigl)s^3_{c} + 4320 \alpha_2^{M}s_c^4\Bigl] \nonumber\\
&& + 12 s^2_c \Bigl(2\hbox{arctanh}\sqrt{1-4s_c} -i \pi \Bigl)^2 \Bigl[1+3\alpha^{M}_{1}+6\alpha_2^{M}-\frac{4}{3}\Bigl(1+9\alpha^{M}_{1} \nonumber\\
&&+ 36\alpha^{M}_2\Bigl)s_c + 18 \Bigl(\alpha_1^{M} + 10 \alpha^{M}_{2}\Bigl)s^2_c -240 \alpha^{M}_2s_c^3\Bigl],\nonumber\\
G_{M}(0)&=&\frac{5}{3}+\frac{2 i \pi}{3}+\frac{\alpha^{M}_{1}}{2}+\frac{\alpha^{M}_2}{5},\nonumber\\
G_{M}(1)&=&\frac{85}{3}- 6\sqrt{3}\pi+\frac{4\pi^2}{9}-\Bigl( \frac{155}{2}-36\sqrt{3}\pi +12\pi^2\Bigl)\alpha^{M}_1+\Bigl(\frac{7001}{5} \nonumber\\
&& -504\sqrt{3}\pi + 136\pi^2\Bigl)\alpha^{M}_2,\nonumber\\
\hat{G}^{p}_{\pi}(s_c)&=& \frac{16}{9} \Bigl( 1 - 3 s_c\Bigl) - \frac{2}{3}\Bigl[\hbox{ln}(s_c) + \Bigl(1-4s_c\Bigl)^{3/2}\Bigl(2\hbox{arctanh}\sqrt{1-4s_c}-i\pi \Bigl)\Bigl],\nonumber\\\end{aligned}$$
$$\begin{aligned}
\hat{G}^{p}_{\pi}(0)&=& \frac{16}{9} + \frac{2\pi i}{3}, \nonumber\\
\hat{G}^{p}_{\pi}(1)&=& \frac{2\pi}{\sqrt{3}}-\frac{32}{9},\nonumber\\
\hat{G}_{\rho}(s_c) &=& 1 + \frac{\alpha^{\rho}_{1,\perp}}{3} + \frac{\alpha^{\rho}_{2,\perp}}{6} - 4 s_c \Bigl( 9 + 12 \alpha^{\rho}_{1,\perp} +
14 \alpha^{\rho}_{2,\perp} \Bigl)\nonumber -6 s^2_c \Bigl(8 \alpha^{\rho}_{1,\perp} \nonumber\\
&&+ 35 \alpha^{\rho}_{2,\perp} \Bigl) + 360 s^3_c \alpha^{\rho}_{2,\perp} + 12 s_c \sqrt{1-4s_c}\Bigl(1 +\Bigl[1 + 4 s_c\Bigl] \alpha^{\rho}_{1,\perp}\nonumber\\
&&+ \Bigl[1+ 15 s_c - 30 s^2_c \Bigl] \alpha^{\rho}_{2,\perp} \Bigl) \Bigl(2\hbox{arctanh}\sqrt{1-4s_c} -i \pi \Bigl)\nonumber\\
&&- 12 s^2_c \Bigl(1 + \Bigl[3-4 s_c\Bigl]\alpha^{\rho}_{1,\perp}+ 2 \Bigl[3-10 s_c + 15 s^2_c \Bigl] \alpha^{\rho}_{2,\perp} \Bigl)\cdot \nonumber\\
&&\Bigl(2\hbox{arctanh}\sqrt{1-4s_c} -i \pi \Bigl)^2, \nonumber\\
\hat{G}_{\rho}(0)&=&1 + \frac{1}{3}\alpha^{\rho}_{1,\perp} + \frac{1}{6}\alpha^{\rho}_{2,\perp},\nonumber\\
\hat{G}_{\rho}(1)&=&-35 + 4\sqrt{3}\pi + \frac{4\pi^2}{3} +\Bigl(-\frac{287}{3} + 20\sqrt{3}\pi - \frac{4\pi^2}{3}\Bigl)\alpha^{\rho}_{1,\perp}\nonumber\\
&&+\Bigl(\frac{565}{6} - 56\sqrt{3}\pi + \frac{64\pi^2}{3} \Bigl)\alpha^{\rho}_{2,\perp}.\end{aligned}$$
### Hard Scattering functions for the $B\rightarrow \pi\pi, \rho\pi, \pi\rho, \rho \rho$ decays.
$$\begin{aligned}
H_{1, 2, 4, 10}^{\pi\pi}(\mu)&=&\frac{B_{\pi\pi}}{A_{\pi\pi}}\frac{m_B}{\lambda_B}\Bigl( 9 \Bigl[1+ a^{\pi}_1 + a^{\pi}_2\Bigl]^2 + 3r^{\pi}_{\chi}(\mu)\Bigl[1-a^{\pi}_1+a^{\pi}_{2}\Bigl]X_H \Bigl),\nonumber\\
H_{6,8}^{\pi\pi}(\mu)&=&0,\nonumber\\
H_{2, 4, 10}^{\pi\rho}(\mu)&=& \frac{B_{\pi\rho}}{A_{\pi\rho}}\frac{m_B}{\lambda_B}\Bigl( 9\Bigl[1+a^{\pi}_1 + a^{\pi}_2\Bigl]\Bigl[1+a^{\rho}_1 + a^{\rho}_2\Bigl] + 3 r^{\pi}_{\chi}(\mu) \Bigl[1 - a^{\rho}_1\nonumber\\
&& + a^{\rho}_2\Bigl]X_H\Bigl),\nonumber\\
H_{6,8}^{\pi\rho}(\mu)&=&0,\nonumber\\
H_{2, 4, 10}^{\rho\pi}&=& \frac{B_{\rho\pi}}{A_{\rho\pi} }\frac{m_B}{\lambda_B}\Bigl( 9\Bigl[1+a^{\pi}_1 + a^{\pi}_2\Bigl]\Bigl[1+a^{\rho}_1 + a^{\rho}_2\Bigl]
+ 3 r^{\rho}_{\chi}(\mu) \Bigl[1 - a^{\pi}_1 \nonumber\\
&&+ a^{\pi}_2\Bigl]\Bigl[3(1 + a^{\rho}_{1,\bot} + a^{\rho}_{2,\bot})X_H -(6 +9 a^{\rho}_{1,\bot} + 11 a^{\rho}_{2,\bot})\Bigl]\Bigl),\label{eq:Hi_rhopi}\nonumber\end{aligned}$$
$$\begin{aligned}
H_{6,8}^{\rho\pi}(\mu)&=&0,\nonumber\\
H_{1,2,4,9,10}^{\rho\rho}(\mu)&=&\frac{B_{\rho\rho}}{A_{\rho\rho}}\Bigl[\frac{m_{B_d}}{\lambda_B}\Bigl]\Bigl[9\Bigl(1+a_1^{\rho}+a^{\rho}_2\Bigl)^2 + 9r^{\rho}_{\chi}(\mu)\Bigl(1-a^{\rho}_{1}+a_{2}^{\rho}\Bigl)\cdot\nonumber\\
&&\Bigl(X_{H}-2\Bigl)\Bigl],\nonumber\\
H_{7}^{\rho\rho}(\mu)&=&-\frac{B_{\rho\rho}}{A_{\rho\rho}}\Bigl[\frac{m_{B_d}}{\lambda_B}\Bigl]\Bigl[9\Bigl(1+a_1^{\rho}+a^{\rho}_2\Bigl)\Bigl(1-a_1^{\rho}+a^{\rho}_2\Bigl) + 9r^{\rho}_{\chi}(\mu)\cdot\nonumber\\
&&\Bigl(1+a^{\rho}_{1} +a_{2}^{\rho}\Bigl)\Bigl(X_{H}-2\Bigl)\Bigl].
\label{eq:HardScattering}\end{aligned}$$
### Hard scattering functions for the $B\rightarrow J/\psi \phi$ decay.
For the amplitudes of the decay $B\rightarrow J/\psi \phi$, the spectator interaction functions depend on the polarization of the final state; for $h=0,\pm$ we have
$$\begin{aligned}
\label{eq:HJPsiphi}
H^{J/\psi \phi, 0}_{1,3,9}&=&\frac{f_B f_{J/\psi} f_{\phi}}{\tilde{h}^0}
\int^1_0d\xi
\frac{\Phi^B_1(\xi)}{\xi}\int^1_0 d\tilde{\xi}\frac{\Phi^{J/\Psi}(\tilde{\xi})}{\tilde{\xi}}\int^1_0 d\bar{\eta}\frac{\Phi^{\phi}(\bar{\eta})}{\bar{\eta}},\nonumber\\
H^{J/\psi \phi, \pm}_{1,3,9}&=&
\frac{2f_B f_{J/\Psi} f_{\phi} m_{J/\Psi}m_{\phi}}{m^2_B \tilde{h}^{\pm}(1-\tilde{z}) }\int^1_0d\xi
\frac{\Phi^B_1(\xi)}{\xi}\int^1_0 d\tilde{\xi}\frac{\Phi^{J/\Psi}(\tilde{\xi})}{\tilde{\xi}}\cdot\nonumber\\
&&\int^1_0 d\bar{\eta}\Biggl[\frac{\Phi^{\phi, v}_{\perp}(\bar{\eta})}{\bar{\eta}} \pm \frac{\Phi^{\phi, a}_{\perp}(\bar{\eta})}{4\bar{\eta}^2}\Biggl],\nonumber\\
H^{J/\psi \phi, h}_{5,7}&=&-H^{J/\psi \phi, h}_{1,3,9}. \end{aligned}$$
The helicity functions in the denominators of Eqs. (\[eq:HJPsiphi\]) are
$$\begin{aligned}
\tilde{h}^0&=&\frac{f_{J/\psi}}{2m_{\phi}}\Biggl[\Bigl(m^2_{B} - m^2_{J/\Psi} -m^2_{\phi}\Bigl)\Bigl(m_{B} + m_{\phi}\Bigl)
A^{B\rightarrow \phi}_{1}(m^2_{J/\psi}) - \frac{4 m^2_{B} p^2_c}{m_B + m_{\phi}} A^{B\rightarrow \phi}_{2}(m^2_{J/\psi}) \Biggl],\nonumber\\
\tilde{h}^{\pm}&=&m_{J/\psi}f_{J/\psi}\Biggl[ \Bigl(m_B + m_{\phi} \Bigl)A^{B\rightarrow \phi}_{1} (m^2_{J/\psi})
\pm \frac{2m_B p_c}{m_B + m_{\phi}} V^{B\rightarrow \phi}(m^2_{J/\psi}) \Biggl],\end{aligned}$$
with
$$\begin{aligned}
p_c=\frac{\sqrt{\Bigl(m^2_{\phi}-m^2_{J/\psi}\Bigl)^2 + m^2_B\Bigl(m^2_B-2\Bigl[m^2_{J/\psi} + m^2_{\phi}\Bigl]\Bigl)}}{2m_B}. \end{aligned}$$
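One may note that this is just the usual two-body centre-of-mass momentum of the $J/\psi\,\phi$ pair; written for illustration in terms of the Källén function $\lambda(a,b,c) = a^{2}+b^{2}+c^{2}-2ab-2bc-2ca$ (not to be confused with $\lambda_B$), it reads
$$p_c = \frac{1}{2 m_B}\,\lambda^{1/2}\Bigl(m^2_B, m^2_{J/\psi}, m^2_{\phi}\Bigl),$$
which may serve as a cross-check of the expression above.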
The form factors $A^{B\rightarrow \phi}_{1,2}(m^2_{J/\psi})$ and $V^{B\rightarrow \phi}(m^2_{J/\psi})$ used for the evaluation of the functions $\tilde{h}^0$ and $\tilde{h}^{\pm}$ were calculated based on [@Bharucha:2012wy]; the corresponding numerical values can be found in Appendix \[Sec:Inputs\].\
The twist-3 distribution amplitudes of the $\phi$ meson in Eqs. (\[eq:HJPsiphi\]) have been denoted by $\Phi^{\phi, a}_{\perp}(x)$ and $\Phi^{\phi, v}_{\perp}(x)$; they are given explicitly by
$$\begin{aligned}
\Phi^{\phi, a}_{\perp}(x)&=&6x(1-x)\Biggl[1 + a^{||}_1 \Bigl[2x-1\Bigl] +\Biggl\{\frac{1}{4}a^{||}_2 +
\frac{5}{3} \zeta_3 \Bigl(1-\frac{3}{16}\omega^{A,\phi}_3\nonumber\\
&&+ \frac{9}{16}\omega^{V,\phi}_3 \Bigl)\Biggl\}\Biggl(5\Bigl[2x-1\Bigl]^2 - 1\Biggl) + 6\delta_+\Biggl\{3x(1-x)\nonumber\\
&&+ (1-x)\ln(1-x)
+ x\ln x\Biggl\} + 6\delta_{-}\Biggl\{(1-x)\ln(1-x) -x\ln x \Biggl\}\Biggl],\nonumber\\
\Phi^{\phi, v}_{\perp}(x)&=&\frac{3}{4}\Biggl\{1+\Bigl[2x-1\Bigl]^2 \Biggl\} + \frac{3}{2}a^{||}_{1}\Bigl[2x-1\Bigl]^3
+ \Biggl\{\frac{3}{7}a^{||}_{2} + 5\zeta_3\Biggl\}\Biggl\{3 \Bigl[2x-1\Bigl]^2 - 1\Biggl\}\nonumber\\
&& + \Biggl\{\frac{9}{112}a^{||}_{2} + \frac{15}{64}\zeta_3\Biggl[3\omega^V_3 - \omega^A_3\Biggl]\Biggl\}
\Biggl\{3 - 30 \Bigl[2x-1\Bigl]^2 + 35 \Bigl[2x-1\Bigl]^4\Biggl\} \nonumber\\
&& + \frac{3}{2}\delta_{+}\Biggl\{2 + \ln x + \ln[1-x]\Biggl\} +\frac{3}{2}\delta_{-}\Biggl\{ 2 \Bigl[2x-1\Bigl] +\ln(1-x)
-\ln x \Biggl\}.\nonumber\\
\label{eq:gphi}\end{aligned}$$
For the remaining light-cone distribution amplitudes of the vector mesons $J/\psi$ and $\phi$ in Eqs. (\[eq:VIh\]), (\[eq:VIh2\]) and (\[eq:HJPsiphi\]) we use the leading term in the Gegenbauer expansion
$$\begin{aligned}
\Phi^V(\xi)&=&6\xi(1-\xi).\end{aligned}$$
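Note that this leading term is normalized to unity,
$$\int_{0}^{1} \! d\xi \; 6\xi(1-\xi) = 1 .$$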
For the hadronic parameters required for the numerical evaluation of Eq. (\[eq:gphi\]) we use [@Ball:1998fj]
$$\begin{aligned}
\zeta_3=0.023,\quad \quad\omega^{A}_{3}=0,\quad\quad\omega^V_3=3.7,\quad \quad \delta_{+}=0.41,\quad \quad \delta_{-}=0. \end{aligned}$$
The divergences encountered when integrating the twist-3 distribution amplitudes in Eqs. (\[eq:HJPsiphi\]) are parameterized following the model in Eq. (\[eq:XH\]).
Annihilation coefficients
-------------------------
$$\begin{aligned}
\beta_{i}^{p, M_1 M_2 }&=&\frac{B_{M_1M_2}}{A_{M_1M_2}}b^{p, M_1 M_2}_{i}\nonumber\\
b^{M_1M_2}_{1}&=&\frac{C_F}{N^2_c}C_{1}A^{i, M_1 M_2}_1 \nonumber\\
b^{M_1M_2}_{2}&=&\frac{C_F}{N^2_c}C_{2}A^{i, M_1 M_2}_1 \nonumber\\
b^{p, M_1 M_2}_{3}&=&\frac{C_F}{N^2_c}\Bigl[C_3 A^{i,M_1 M_2}_1 + C_5\Bigl(A^{i, M_1 M_2}_3+A^{f, M_1 M_2}_3\Bigl)+N_cC_6A^{f, M_1 M_2}_3\Bigl]\nonumber\\
b^{p, M_1 M_2}_{4}&=&\frac{C_F}{N^2_c}\Bigl[C_4A^{i, M_1 M_2}_1+C_6A^{i, M_1 M_2}_2\Bigl]\nonumber\\
b^{p, M_1 M_2}_{3,EW}&=&\frac{C_F}{N^2_c}\Bigl[C_9A^{i, M_1 M_2}_1+C_7\Bigl(A^{i, M_1 M_2}_3+A^{f, M_1 M_2}_3\Bigl)+N_c C_8 A^{f, M_1 M_2}_3\Bigl]\nonumber\\
b^{p, M_1 M_2}_{4,EW}&=&\frac{C_F}{N^2_c}\Bigl[C_{10}A^{i, M_1 M_2}_1+C_8A^{i, M_1 M_2}_2\Bigl]
\label{eq:annihilation}\end{aligned}$$
Annihilation kernels
--------------------
$$\begin{aligned}
A^{i,\pi\pi}_{1}&\approx& A^{i, \pi\pi}_{2}\approx 2\pi\alpha_s(\mu_h)\Bigl[9\Bigl(X_A-4+\frac{\pi^2}{3}\Bigl) + r^{\pi}_{\chi}r^{\pi}_{\chi}X^2_{A}\Bigl]\nonumber\\
A^{i, \pi\rho}_{1}&=&A^{i, \rho\pi}_{1}\approx6\pi\alpha_s\Bigl[3\Bigl(X_A-4+\frac{\pi^2}{3}\Bigl)+
r^{\rho}_{\chi}r^{\pi}_{\chi}\Bigl(X_A^2-X_A\Bigl)\Bigl]\nonumber\\
A^{i, \pi\rho}_{2}&=&A^{i,\rho\pi}_{2}\approx-A^{i, \pi\rho}_{1} \nonumber\\
A^{i,\pi\pi}_{3}&\approx& 0\nonumber\\
A^{i,\pi\rho}_{3}&=& A^{i,\rho\pi}_{3}\approx 6\pi\alpha_s\Bigl[-3r^{\rho}_{\chi}\Bigl(X^2_A-2X_A-\frac{\pi^2}{3}+4\Bigl)+
r^{\pi}_{\chi}\Bigl(X^2_A-2X_A+\frac{\pi^2}{3}\Bigl)\Bigl]\nonumber\\
A^{f,\pi\rho}_{1}&=&A^{f,\pi\rho}_{2}= A^{f,\rho\pi}_{1}=A^{f,\rho\pi}_{2}= 0\nonumber\\
A^{f,\pi\pi}_{3}&\approx& 12\pi\alpha_sr^{\pi}_{\chi}\Bigl(2X^2_A-X_A\Bigl)\nonumber\\
A^{f,\pi\rho}_{3}&\approx&-6\pi\alpha_s\Bigl[3r^{\pi}_{\chi}\Bigl(2X_A-1\Bigl)\Bigl(X_A-2\Bigl)+r^{\rho}_{\chi}\Bigl(2X^2_A-X_A\Bigl)\Bigl]\nonumber\\
A^{f,\rho\pi}_{3}&=&-A^{f,\pi\rho}_{3}\approx 6\pi\alpha_s\Bigl[3r^{\rho}_{\chi}\Bigl(2X_A-1\Bigl)\Bigl(2 - X_A\Bigl) -r^{\pi}_{\chi}\Bigl(2X^2_A-X_A\Bigl)\Bigl]\nonumber\\
A^{i,\rho\rho}_{1}&=& A^{i,\rho\rho}_{2} \approx 18\pi\alpha_s\Bigl[\Bigl(X_A -4 + \frac{\pi^2}{3} \Bigl) + (r^{\rho}_{\chi})^2(X_A-2)^2 \Bigl]\nonumber\\
A^{i,\rho\rho}_{3}&=&0 \nonumber\\
A^{f,\rho \rho}_{3}&\approx& -36\pi \alpha_s r^{\rho}_{\chi}\Bigl(2X^2_A -5 X_A + 2 \Bigl)\nonumber\\
\label{eq:annihilation2}\end{aligned}$$
[^1]: A meson with mass $m$ is considered “heavy” if $m$ scales with $m_b$ in the heavy quark limit such that $m/m_b$ remains fixed. On the other hand, a meson is regarded as “light” if its mass remains finite in the heavy quark limit; for a light meson $m\sim\mathcal{O}(\Lambda_{QCD})$ [@Beneke:2000ry].
[^2]: T. Huber, private communication.
[^3]: Similar observations were made in e.g. [@Blanke:2016bhf; @Blanke:2018cya].
[^4]: Due to the absence of penguins and the fact that the relevant hadronic matrix elements cancel, the extraction of CKM $\gamma$ is extremely clean. The irreducible theoretical uncertainty is due to higher-order electroweak corrections and has been found to be negligible. For instance, when the modes $B \to D K$ are used the correction effect is $|\delta \gamma/\gamma| <\mathcal{O}(10^{-7})$ [@Brod:2013sga]. On the other hand, if CKM $\gamma$ is obtained using $B \to D \pi$ decays instead, then $|\delta \gamma/\gamma| <\mathcal{O}(10^{-4})$ [@Brod:2014qwa].
[^5]: Actually $\bar{z} := m_c^2(m_b)/m_b^2(m_b) = 0.0505571$ is used.
---
abstract: 'In this paper we develop a polymer expansion with large/small field conditions for the mean resolvent of a weakly disordered system. Then we show that we can apply our result to a two-dimensional model, for energies outside the unperturbed spectrum or in the free spectrum provided the potential has an infra-red cut-off. This leads to an asymptotic expansion for the density of states. We believe this is an important first step towards a rigorous analysis of the density of states in the free spectrum of a random Schr[ö]{}dinger operator at weak disorder.'
author:
- |
Gilles Poirot\
Centre de Physique Th[é]{}orique, Ecole Polytechnique\
91128 Palaiseau Cedex, FRANCE\
poirot@cpth.polytechnique.fr
title: |
Mean Green’s function of the Anderson model\
at weak disorder with an infra-red cut-off
---
Introduction
============
In the one-body approximation, the study of disordered systems amounts to the study of random Schr[ö]{}dinger operators of the form $$H = H_{0} + \lambda V$$ where $H_{0}$ is a kinetic term ([*i.e.*]{} a self-adjoint or essentially self-adjoint operator corresponding to some dispersion relation, typically a regularized version of $-\Delta$) and $V$ is a real random potential (in the simplest case, $V$ is a white noise). We work on an ultra-violet regular subspace of ${\cal L}^{2} (\mbox{I}\!\mbox{R}^{d})$ and we restrict ourselves to $\lambda$ small so as to see $\lambda V$ as a kind of perturbation of the free Hamiltonian.
The properties of $H$ are usually established through the behavior of the kernel of the resolvent operator or Green’s function ([@Tho], [@Aiz], [@Fra]) $$G_{E}(x, y) = \, < \! x | \frac{1}{H-E} | y \! >$$ For instance, the density of states is given by $$\rho(E) = \frac{1}{\pi} \lim_{\varepsilon \rightarrow 0} \mbox{Im }
G_{E+i \varepsilon}(x, x)$$
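As an elementary check of this formula, for the free operator ($\lambda = 0$) in dimension $d=2$, and disregarding the ultra-violet regularization, one finds $$\mbox{Im } G^{0}_{E+i\varepsilon}(x,x) = \int \! \frac{d^{2}p}{(2\pi)^{2}} \, \frac{\varepsilon}{(p^{2}-E)^{2} + \varepsilon^{2}} \; \longrightarrow \; \pi \int \! \frac{d^{2}p}{(2\pi)^{2}} \, \delta(p^{2}-E) = \frac{1}{4} \qquad (E>0)$$ as $\varepsilon \rightarrow 0$, so that $\rho_{0}(E) = 1/4\pi$, the constant density of states of the free two-dimensional Laplacian.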
The important point is that, in the thermodynamic limit, the system is self-averaging, [*i.e.*]{} mean properties are often almost sure ones. Thus the problem can be seen as a statistical field theory with respect to the random field $V$. In Statistical Mechanics, functional integrals in the weakly coupled regime are controlled through a cluster expansion (or polymer expansion) with small field versus large field conditions, the problem being then to control a Boltzmann weight ([@Bry], [@Riv1]).
In the first part of this paper, we derive a resolvent cluster expansion with large field versus small field conditions assuming that $V$ satisfies some large deviation estimates. This allows us to prove the existence and the regularity of the mean Green’s function (theorem \[thRE\]) and to obtain an asymptotic expansion for the density of states.
In the second part, we show that the hypotheses of theorem \[thRE\] are satisfied in the case of a two-dimensional model with a rotation invariant dispersion relation and an infra-red cut-off on the potential. From the point of view of [*Renormalization Group*]{} analysis, our result allows us to control the model away from the singularity, [*i.e.*]{} to perform the first renormalization group steps and therefore to generate a fraction of the expected “mass”.
Model and results
=================
The model
---------
In $\mbox{I}\!\mbox{R}^{d}$ we consider $$H = H_{0} + \lambda V$$ where $V$ is a gaussian random field with covariance $\xi$ whose smooth translation invariant kernel is rapidly decaying (we will note the associated measure $d\mu_{\xi}$). Because $\xi$ is smooth, $d\mu_{\xi}$ as a measure on tempered distributions is in fact supported on ${\cal C}^{\infty}$ functions. We suppose also that $\hat{H}_{0}^{-1}$ has compact support so that we do not have to deal with ultra-violet problems. We construct the finite volume model in $\mbox{I}\!\mbox{R}^{d}/_{\!
\displaystyle \Lambda \mbox{\sf Z\hspace{-5pt}Z}^{\scriptstyle d}}$ by replacing $\xi$ and $H_{0}$ by their “$\Lambda$-periodization” $$\begin{aligned}
\xi_{\Lambda}(x,y) &=& \frac{1}{\Lambda^{d}} \sum_{p \in
\frac{2\pi}{\Lambda}\mbox{\scriptsize \sf Z\hspace{-3.5pt}Z}^{d}}
e^{ip(x-y)} \hat{\xi}(p) = \sum_{z \in \Lambda \mbox{\scriptsize \sf
Z\hspace{-3.5pt}Z}^{d}} \xi(x-y+z) \\
H_{0}^{(\Lambda)}(x,y) &=& \ldots = \sum_{z \in \Lambda \mbox{\scriptsize \sf
Z\hspace{-3.5pt}Z}^{d}} H_{0}(x-y+z) \end{aligned}$$
Then we define $$\begin{aligned}
G_{\Lambda, \varepsilon} (E, \lambda, V) &=&
\frac{1}{H_{0}^{(\Lambda)} + \lambda V - (E + i\varepsilon)} \\
G_{\Lambda, \varepsilon} (E, \lambda) &=& \int \! d\mu_{\xi_{\Lambda}}(V) \,
G_{\Lambda, \varepsilon} (E, \lambda, V)\end{aligned}$$ where $d\mu_{\xi_{\Lambda}}$ can be considered either as a measure on ${\cal C}^{\infty}\! \left(\mbox{I}\!\mbox{R}^{d}/_{\!
\displaystyle \Lambda \mbox{\sf Z\hspace{-5pt}Z}^{\scriptstyle d}} \right)$ or as a measure on ${\cal C}^{\infty} (\mbox{I}\!\mbox{R}^{d})$ which is supported by the space of $\Lambda$-periodic functions. In the same way, $G_{\Lambda, \varepsilon}$ will be considered as an operator either on ${\cal L}^{2} \! \left(\mbox{I}\!\mbox{R}^{d}/_{\! \displaystyle \Lambda
\mbox{\sf Z\hspace{-5pt}Z}^{\scriptstyle d}} \right)$ or on ${\cal
L}^{2}(\mbox{I}\!\mbox{R}^{d})$. One can note that in momentum space, because of the cut-off, the problem reduces to a finite dimensional one.
Because $V$ is almost surely regular, its operator norm as a multiplicative operator is equal to its ${\cal L}^{\infty}$ norm (it is easy to see that $\|
V\| \leqslant \| V\|_{\infty}$, equality can be obtained by taking test functions $f_{n, x}$ such that $f_{n, x}^{2} \rightarrow \delta_{x}$). Therefore $V$ is bounded and self-adjoint. Then $G_{\Lambda, \varepsilon} (E, \lambda, V)$ is almost surely an analytic operator-valued function of $\lambda$ in a small domain (depending on $V$) around the origin. This domain can be extended to a $V$-dependent neighborhood of the real axis thanks to the identity (for $|\lambda - \mu|$ small enough) $$G_{\Lambda, \varepsilon} (E, \mu, V) = G_{\Lambda, \varepsilon} (E, \lambda,
V) \left\{ I + \sum_{n=1}^{\infty} (\lambda- \mu)^{n} \left[V G_{\Lambda,
\varepsilon} (E, \lambda, V) \right]^{n} \right\}$$ In the same way, $G_{\Lambda, \varepsilon}(E, \lambda, V)$ is analytic in $E$. One can also check that $G_{\Lambda, \varepsilon}(E, \lambda, V)$ has a smooth kernel and is integrable with respect to $d\mu_{\xi_{\Lambda}}$. Furthermore, $G_{\Lambda, \varepsilon}(E, \lambda)$ will have a translation invariant kernel because $d\mu_{\xi_{\Lambda}}$ is translation invariant.
Main result
-----------
We introduce a function $\theta$ which satisfies
- $\theta$ is an odd ${\cal C}^{\infty}$ function, increasing and bounded
- for any $x$, $|\theta(x)| \leqslant |x|$
- for any $|x| \leqslant 1$, $\theta(x) = x$
- the ${\cal L}^{\infty}$ norm of its derivatives does not grow too fast
Then for $\mu >0$, we define the operators $C_{\Lambda, \mu}$, $D_{\Lambda, \mu}$ and $U_{\Lambda, \mu}$ through the Fourier transform of their kernel $$\begin{aligned}
\hat{C}_{\Lambda, \mu}^{-1}(p) &=& \hat{H}_{0}^{(\Lambda)}(p)- E -i\mu \\
\hat{D}_{\Lambda, \mu}(p) &=& \frac{1}{\left|\theta
[\hat{H}_{0}^{(\Lambda)}(p)-E] - i\mu \right|^{1/2}} \\
\hat{U}_{\Lambda, \mu}^{-1}(p) &=& \hat{D}_{\Lambda, \mu}^{2}(p)
\hat{C}_{\Lambda, \mu}^{-1}(p) \end{aligned}$$
Given any characteristic length $L$ we can divide the space into cubes $\Delta$ of side $L$ and construct an associated ${\cal C}^{\infty}_{0}$ partition of unity $$1 = \sum_{\Delta} \chi_{\Delta}$$ where $\chi_{\Delta}$ has support in a close neighborhood of the cube $\Delta$ ([*e.g.*]{} on $\Delta$ and its nearest neighbors). This decomposition induces an orthogonal decomposition of $V$ into a sum of fields $V_{\Delta}$ with covariance $$\xi_{\Lambda}^{\Delta} (x,y) = \int \! dz \, \xi_{\Lambda}^{1/2}(x-z)
\chi_{\Delta}(z) \xi_{\Lambda}^{1/2}(z-y)$$
For simplicity we will pretend that $\xi$ and $\xi^{1/2}$ have compact support, so that $V_{\Delta}$ is almost surely supported on a close neighborhood of $\Delta$; moreover, we will assume that it is restricted to $\Delta$ and its nearest neighbors. The generalization to a fast decaying $\xi$ can easily be obtained by decomposing each $V_{\Delta}$ over the various cubes and writing more complicated small/large field conditions that test the size of $V_{\Delta}$ in the various cubes. This leads to lengthy expressions that we want to avoid.
Finally, we denote by $d_{\Lambda}$ the distance in $\mbox{I}\!\mbox{R}^{d}/_{\!
\displaystyle \Lambda \mbox{\sf Z\hspace{-5pt}Z}^{\scriptstyle d}}$ $$d_{\Lambda}(x,y) = \min_{z \in \Lambda \mbox{\scriptsize \sf
Z\hspace{-3.5pt}Z}^{d}} |x-y+z|$$
In the following, $C$ or $O(1)$ will stand as generic names for constants in order to avoid keeping track of the numerous constants that will appear. Furthermore we will not always make the distinction between a function and its Fourier transform but we will use $x$, $y$ and $z$ as space variables and $p$ and $q$ as momentum variables.
\[thRE\] \
Suppose that
- $\xi$ is smooth and has fast decay
- $\displaystyle C_{\xi} = \sup_{\Lambda} \frac{1}{2
\Lambda^{d}} \int_{[0, \Lambda]^{d}} \! \xi^{-1}_{\Lambda}(x, y) \, dx \, dy$ exists
- for all $E \in [E_{1}, E_{2}]$ and all $\mu$, $C_{\mu}$, $D_{\mu}$ and $U_{\mu}$ have smooth kernels with fast decay over a length scale $L$.
- for all $n_{1}$, we have $C_{n_{1}}$ such that for all $\Lambda$ and all triplets $(\Delta_{1}, \Delta_{2}, \Delta_{3})$ $$\| \chi_{\Delta_{1}} D_{\Lambda, \mu} V_{\Delta_{2}} D_{\Lambda,
\mu} \chi_{\Delta_{3}} \| \leqslant \frac{C_{n_{1}} \|D_{\Lambda, \mu}
V_{\Delta_{2}} D_{\Lambda, \mu}\|}{\left[1 + L^{-1}
d_{\Lambda}(\Delta_{1}, \Delta_{2})\right]^{n_{1}} \left[1 + L^{-1}
d_{\Lambda}(\Delta_{2}, \Delta_{3})\right]^{n_{1}}}$$
- there are constants $C_{0}$, $C_{1}$, $\kappa > 0$ and $\alpha >0$ such that $$\label{largedev}
\forall \Lambda \leqslant \infty, \, \forall a>1, \,
\forall \Delta, \quad I\!\!P_{\Lambda} \left(\| D_{\Lambda, \mu}
V_{\Delta} D_{\Lambda, \mu}\| \geqslant
a C_{0} \right) \leqslant C_{1} e^{-\kappa a^{2} L^{\alpha}}$$ where $I\!\!P_{\Lambda} (.)$ denote the probability with respect to the measure $d\mu_{\xi_{\Lambda}} \equiv \otimes d\mu_{\xi_{\Lambda}^{\Delta}}$ ($\xi_{\infty} \equiv \xi$) $$I\!\!P_{\Lambda} (X) = \int \! d\mu_{\xi_{\Lambda}}(V) \, \bbbone_{X}(V)
= \mu_{\xi_{\Lambda}}(X)$$
Then let $\mu_{0} = L^{-d/2} C_{\xi}^{1/2}$, $\mu = \lambda \mu_{0}$ and $$T_{\Lambda, \varepsilon} = D_{\Lambda, \mu}^{-1} G_{\Lambda, \varepsilon}
D_{\Lambda, \mu}^{-1}$$
For all $\lambda \leqslant \lambda_{0} = O(1)$ and for all $\varepsilon$ small enough (in a $\lambda$-dependent way), $T_{\Lambda, \varepsilon}(E, \lambda)$ is uniformly bounded in $\Lambda$ and admits the following development (in the operator norm sense) $$\label{Lambdadev}
1_{\Omega_{\Lambda}} T_{\Lambda, \varepsilon}(E, \lambda)
1_{\Omega_{\Lambda}} =
1_{\Omega_{\Lambda}} T(E+i\varepsilon, \lambda) 1_{\Omega_{\Lambda}}
+ \mbox{O}\left(\frac{1}{\Lambda}\right)$$ where $\Omega_{\Lambda} = [-\Lambda^{1/2}; \Lambda^{1/2}]^{d}$, and $1_{\Omega_{\Lambda}}$ is the characteristic function of $\Omega_{\Lambda}$.
Furthermore we have the following properties
- $T$ has a smooth, translation invariant kernel
- $T_{\Lambda, \varepsilon}$ and $T$ have high power decay $$\exists n_{0} \mbox{ large, } \exists C_{T}(n_{0}) \mbox{ such that }
\forall (\Delta, \Delta'), \quad
\| 1_{\Delta} T_{\Lambda, \varepsilon} 1_{\Delta'}\|
\leqslant \frac{C_{T}(n_{0})}{\left[1 + L^{-1} d_{\Lambda}(\Delta,
\Delta')\right]^{n_{0}}}$$ and a similar relation for $T$ with $d_{\Lambda}$ being replaced by $d$.
- $T(E, \lambda)$ is an analytic operator valued function of $E$ for all $E$ in $]E_{1}, E_{2}[$ with a small $\lambda$-dependent radius of analyticity.
- $T(E, \lambda)$ is a ${\cal C}^{\infty}$ operator-valued function of $\lambda$ and admits an asymptotic expansion to all orders in $\lambda$, which is the formal perturbative expansion of $$\int \! d\mu_{\xi}(V) \, e^{\frac{\mu_{0}^{2}}{2} <1, \xi^{-1} 1>}
\, e^{i \mu_{0} <V, \xi^{-1} 1>} \, \frac{1}{H_{0} -E +
\lambda V -i (\mu+0^{+})}$$ ($<>$ denotes the scalar product, [*i.e.*]{} $<f, Af> = \int \! \bar{f}(x) A(x, y) f(y)\, dx \, dy$)
This theorem is formulated in a rather general way so as to apply with minimum transformation to various situations (lattice or continuous models) and in any dimension. Then we construct a concrete example with a two-dimensional model. One can also refer to [@MPR] for a $d=3$ case.
Anderson model with an infra-red cut-off in dimension d=2
---------------------------------------------------------
We consider $$H = -\Delta_{\eta} + \lambda \eta_{E} V \eta_{E}$$ where
- $\Delta_{\eta}^{-1}$ is a ultra-violet regularized inverse Laplacian, [*i.e.*]{} there is a ${\cal C}_{0}^{\infty}$ function $\eta_{\scriptscriptstyle UV}$ equal to 1 on “low” momenta such that $$\Delta_{\eta}^{-1}(p) = \frac{\eta_{\scriptscriptstyle UV}(p)}{p^{2}}$$ We will note $p^{2}$ instead of $- \Delta_{\eta}$, the UV-cutoff being then implicit.
- we are interested in the mean Green’s function for an energy $E=O(1)$
- $\eta_{E}$ is an infra-red cut-off which enforces $$|p^{2} - E| \geqslant A \lambda^{2} |\log \lambda|^{2}$$ for some large constant $A$
- $V$ has covariance $\xi$ which is a ${\cal C}^{\infty}_{0}$ approximation of a $\delta$-function
This corresponds to the model away from the singularity $p^{2} = E$ in a multi-scale renormalization group analysis; we will show that it generates a small fraction of the expected imaginary part, which is $O(\lambda^{2})$.
Let $M^{1/2}$ be an even integer greater than 2, we define $j_{0}
\in \NN$ such that $$M^{-j_{0}} \leqslant \inf_{\mbox{\scriptsize Supp}(\eta_{E})} |p^{2} - E|
\leqslant M^{-(j_{0}-1)}$$
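Note that, if the infra-red cut-off is saturated, [*i.e.*]{} if $\inf_{\mbox{\scriptsize Supp}(\eta_{E})} |p^{2} - E|$ is of order $A \lambda^{2} |\log \lambda|^{2}$, then up to a factor $M$ $$M^{j_{0}} \sim \frac{1}{A \lambda^{2} |\log \lambda|^{2}}, \qquad M^{j_{0}/2} \sim \frac{1}{\sqrt{A}\, \lambda\, |\log \lambda|}, \qquad j_{0} \sim 2 \log_{M} \frac{1}{\lambda}$$ for small $\lambda$; this is the length scale that enters the probabilistic estimate of theorem \[thprob2d\] below.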
Next, we construct a smooth partition of unity into cubes of side $M^{j_{0}}$ (they form a lattice $I\!\! D_{j_{0}}$) and we construct the fields $V_{\Delta}$’s accordingly.
\[thprob2d\] \
There exist constants $C_{0}$ and $C_{1}$ such that for any $\Lambda$, $a \geqslant 1$ and $\Delta \in I\!\! D_{j_{0}}$ we have $$I\!\!P_{\Lambda} \left( \|D_{\Lambda, \mu} \eta_{E} V_{\Delta} \eta_{E}
D_{\Lambda, \mu}\| \geqslant a C_{0} j_{0} M^{j_{0}/2}\right)
\leqslant C_{1} e^{-\frac{1}{2} a^{2} M^{j_{0}/6}}$$ Furthermore theorem \[thRE\] applies and $G_{E}$ is asymptotic to its perturbative expansion so that it behaves more or less like $$G_{E} \sim \frac{1}{p^{2} - E - i \eta_{E} O(\lambda^{2} |\log
\lambda|^{-2}) \eta_{E}}$$
It is easy to extend this result to the case of a rotation invariant dispersion relation and for energies outside the free spectrum not too close to the band edge. In this case, the cut-off is no longer needed so that the result applies to the full model.
Resolvent polymer expansion with large field versus small field conditions
==========================================================================
Sketch of proof for theorem [\[thRE\]]{}
----------------------------------------
We give here the global strategy for proving theorem \[thRE\], the main ingredient being the polymer expansion that we will detail in the following.
First we recall (without proving them) some quite standard properties of gaussian measures.
\[lemtrans\] Complex translation\
Let $X$ be a gaussian random field with covariance $C$ and let $d\mu_{C}$ be the associated measure. For any regular functional ${\cal F}(X)$ and any function $f \in \mbox{Ran } C$, we have the following identity $$\int \! d\mu_{C}(X) \, {\cal F}(X) = e^{\frac{1}{2} <f, C^{-1} f>} \int \!
d\mu_{C}(X) \, {\cal F}(X-if) \, e^{i<X, C^{-1} f>}$$
\[lemipp\] Integration by part\
With the same notations than above we have $$\int \! d\mu_{C}(X) \, X(x) {\cal F}(X) = \int \! dy \, C(x, y) \int \!
d\mu_{C}(X) \frac{\delta}{\delta X(y)} {\cal F}(X)$$
Those lemmas could for instance be easily proved for polynomial functionals and extended through a density argument to a wide class of functionals. $\blacksquare$
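As a quick consistency check, one may verify both lemmas on the exponential functional ${\cal F}(X) = e^{i<X, g>}$ with $f$ and $g$ real, using $\int \! d\mu_{C}(X) \, e^{i<X, g>} = e^{-\frac{1}{2}<g, Cg>}$. For lemma \[lemtrans\], the right-hand side becomes $$e^{\frac{1}{2} <f, C^{-1} f>} \, e^{<f, g>} \int \! d\mu_{C}(X) \, e^{i<X, g + C^{-1}f>} = e^{\frac{1}{2} <f, C^{-1} f> + <f, g> - \frac{1}{2}<g + C^{-1}f, \, C(g + C^{-1}f)>} = e^{-\frac{1}{2}<g, Cg>},$$ as it should, while for lemma \[lemipp\] both sides are equal to $i (Cg)(x) \, e^{-\frac{1}{2}<g, Cg>}$.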
Our starting point is obtained by applying lemma \[lemtrans\] with $\displaystyle f = \mu_{0} 1$. $$\begin{aligned}
\label{complextrans}
G_{\Lambda, \varepsilon}(E+z, \lambda) &=&
\int \! d\mu_{\xi_{\Lambda}}(V) \,
e^{\frac{\mu_{0}^{2}}{2} <1, \xi_{\Lambda}^{-1} 1> +i \mu_{0}
<V, \xi_{\Lambda}^{-1} 1>} \,
\frac{1}{H_{0}^{(\Lambda)} - (E+ i\mu) +
\lambda V - i\varepsilon - z} \\
T_{\Lambda, \varepsilon}(E+z, \lambda) &=&
\int \! d\mu_{\xi_{\Lambda}}(V) \,
e^{\frac{\mu_{0}^{2}}{2} <1, \xi_{\Lambda}^{-1} 1> +i \mu_{0}
<V, \xi_{\Lambda}^{-1} 1>} \,
\frac{1}{U_{\Lambda, \mu}^{-1} +
\lambda D_{\Lambda, \mu} V D_{\Lambda, \mu} - (z + i\varepsilon)
D_{\Lambda, \mu}^{2}} \end{aligned}$$
On the one hand we have gained something, because the resolvent operator in the integral is now bounded in norm independently of $\varepsilon$ (in the following we will write $z$ instead of $z+ i\varepsilon$ and show convergence for any $z$ such that $|z| \ll \mu$, which allows us to prove analyticity in $z$). On the other hand, we have a huge normalization factor to pay. However, we can remark that this normalization factor is in fact equivalent to a factor $e$ per $L$-cube.
Most of the demonstration amounts to a polymer expansion of $T_{\Lambda,
\varepsilon}$, [*i.e.*]{} we write $T_{\Lambda, \varepsilon}$ as a sum over polymers of polymer activities $$\begin{aligned}
T_{out, in} &=& \chi_{\Delta_{out}} T_{\Lambda, \varepsilon}
\chi_{\Delta_{in}} \\
T_{out, in} &=& \chi_{\Delta_{out}} \left[ U_{\Lambda, \mu} +
\frac{\lambda^{c_{1}}}{[1 + L^{-1}
d_{\Lambda}(\Delta_{in}, \Delta_{out})]^{n_{0}}}
\sum_{Y \in {\cal A}} \lambda^{c_{2}|Y|} \Gamma_{Y} \, T(Y) \right]
\chi_{\Delta_{in}}\end{aligned}$$ where $c_{1}$ and $c_{2}$ are small constants, $\Gamma_{Y}$ has decay in the spatial extension of $Y$ and $\displaystyle \|\sum_{Y \in {\cal A}} T(Y)\|$ is bounded. Furthermore, $T(Y)$ is given by a functional integration over the fields $V_{\Delta}$ corresponding to cubes in the support of the polymer $Y$. This shows that $T_{\Lambda, \varepsilon}$ is bounded and has a high power decay uniformly in $\Lambda$.
Next, when we consider $1_{\Omega_{\Lambda}} T_{\Lambda, \varepsilon}
1_{\Omega_{\Lambda}}$ we can divide the sum over polymers into a sum over polymers with a large spatial extension (say $\Lambda^{2/3}$) and a sum over “small” polymers. The large polymers will have a total contribution as small as $\Lambda^{-1}$ to some large power. For the small polymers, since we are far away from the boundaries, their contribution calculated with $d\mu_{\xi_{\Lambda}}$ will be equal to their contribution calculated with $d\mu_{\xi}$ up to a factor $\Lambda^{-n}$. In this way we can prove the development (\[Lambdadev\]). Smoothness of the kernel will be obtained because we will show that we can write $$T(Y) = U_{\Lambda, \mu} \tilde{T}(Y) U_{\Lambda, \mu}$$ The convergence for any $|z| \ll \mu$ allows us to show analyticity (we write $z$-derivatives as Cauchy integrals so that we can show that they all exist and do not grow too fast). Then an asymptotic expansion can be generated through the repeated use of the resolvent identity.
Finally, for the density of states, we just need to remark that $$G(0, 0) = \int \! dp \, dq \, G(p, q) = <\tilde{\delta}, G \,
\tilde{\delta}>$$ where $\tilde{\delta}$ is a regularized $\delta$-function because of the presence of the ultra-violet cut-off. Thus an asymptotic expansion for $G$ with respect to the operator norm will yield an asymptotic expansion for the density of states.
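Indeed, since the ultra-violet cut-off makes $\tilde{\delta}$ a genuine ${\cal L}^{2}$ function, one has, for any operator $G_{n}$ (for instance a truncation of the asymptotic expansion), $$\bigl| <\tilde{\delta}, (G - G_{n}) \tilde{\delta}> \bigr| \leqslant \|\tilde{\delta}\|^{2}_{2} \, \|G - G_{n}\|,$$ so that norm estimates on the remainder translate directly into estimates on the density of states.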
Improved polymer expansions
---------------------------
Cluster expansions in constructive field theory rely heavily on a clever application of the Taylor formula with integral remainder. Writing the full Taylor series would amount to completely expanding the perturbation series, which most often diverges, and therefore should be avoided. A rather instructive example of a minimal convergent expansion is the Brydges-Kennedy forest formula: one has a function defined on a set of links between pairs of cubes and expands it not over all possible graphs but only over forests (cf. [@Bry]).
For more complex objects a way to generalize such a formula can be found in [@AR], and we refer the reader to it for a more careful treatment and for various proofs. Let us assume that we have a set of objects that we call monomers. A sequence of monomers will be called a polymer; we will then expand a function defined on a set of monomers into a sum over allowed polymers.
To be more precise, let ${\cal X}$ be a set of monomers, we define the set ${\cal Y}$ of polymers on ${\cal X}$ as the set of all finite sequences (possibly empty) of elements of ${\cal X}$. Then a monomer can be identified to a polymer of length 1. The empty sequence or empty polymer will be noted $\emptyset$. We define on ${\cal Y}$
- a concatenation operator: for $Y=(X_{1}, \ldots, X_{n})$ and $Y'=(X'_{1}, \ldots, X'_{n'})$, we define $$Y \cup Y' = (X_{1}, \ldots, X_{n}, X'_{1}, \ldots, X'_{n'})$$
- the notion of starting sequence: we say that $Y_{1}$ is a starting sequence of $Y$ (equivalently that $Y$ is a continuation of $Y_{1}$) and we note $Y_{1} \subset Y$ iff there exists $Y_{2}$ such that $Y = Y_{1} \cup
Y_{2}$
Then we call allowed set (of polymers) any finite subset ${\cal A} \subset
{\cal Y}$ such that
- $\forall Y, Y' \quad Y' \subset Y \mbox{ and } Y \in {\cal A}
\Rightarrow Y' \in {\cal A}$
- $\forall X, Y, Y' \quad Y \subset Y' \mbox{ and } Y\cup X \not \in
{\cal A} \Rightarrow Y'\cup X \not \in {\cal A}$
The first condition implies that $\emptyset \in {\cal A}$ whenever ${\cal A}$ is non-empty. Finally, for $Y$ belonging to some allowed set ${\cal A}$, a monomer $X$ is said to be admissible for $Y$ (according to ${\cal A}$) iff $Y
\cup X \in {\cal A}$.
\[lemclust\] \
Let ${\cal X} = \{ X \}$ be a set of $N$ monomers and ${\cal Y}$ the set of polymers on ${\cal X}$. We assume that we have an indexation of $\mbox{I}\!
\mbox{R}^{N}$ by ${\cal X}$, [*i.e.*]{} a bijection from ${\cal X}$ to $\{ 1,
\ldots, N \}$ so that an element of $\mbox{I}\! \mbox{R}^{N}$ can be noted $\vec{z} = (z_{X})_{X \in {\cal X}}$.
For ${\cal F}$ a regular function from $\mbox{I}\! \mbox{R}^{N}$ to some Banach space ${\cal B}$ and an allowed set ${\cal A} \subset {\cal Y}$, the polymer expansion of ${\cal F}$ according to ${\cal A}$ is given through the following identity $$\begin{aligned}
\label{eqclust0}
{\cal F}(\vec{1}) &\!\!\! \equiv& \!\!\! {\cal F}(1, \ldots, 1) \nonumber \\
&\!\!\!=& \!\!\! \sum_{n \geqslant 0} \,
\sum_{Y=(X_{1}, \ldots, X_{n}) \in
{\cal A}} \int_{\lefteqn{\scriptstyle 1>h_{1}> \ldots h_{n} >0}}
\quad dh_{1} \ldots dh_{n}
\left( \prod_{X \in Y} \frac{\partial}{\partial z_{X}} \right)
{\cal F}\left[\vec{z}(Y, \{h_{i}\})\right]\end{aligned}$$ where $\vec{z}(Y,\{h_{i}\})$ is given by $$z_{X}(Y,\{h_{i}\}) = \left\{
\begin{array}{ll}
0&\mbox{if $X$ is admissible for $Y$} \\
1&\mbox{if $X$ is not admissible for $\emptyset$} \\
h_{i} &\mbox{if $X$ not admissible for $Y$ and $X=X_{j}$ for some $j$,}
\\
& \mbox{in which case } i = \max \{j / X = X_{j} \} \\
h_{i}&\mbox{with } i = \min \{j / X \mbox{ not admissible for }
(X_{1}, \ldots X_{j}) \}, \mbox{ otherwise}
\end{array}
\right.$$
[*Proof*]{}\
The proof is made through an inductive iteration of a first order Taylor formula. We start with ${\cal F}(\vec{1})$ and put a common interpolating parameter $h_{1}$ on all admissible monomers for the empty set, i.e. we make a first order Taylor expansion with integral remainder of ${\cal F} \left[ h_{1}
\vec{z}_{1} + (\vec{1} - \vec{z}_{1}) \right]$ between 0 and 1, with $\vec{z}_{1}$ being the vector with entries 1 or 0 according to whether the corresponding monomer is admissible or not. Then each partial derivative acting on ${\cal F}$ can be seen as taking down the corresponding monomer so that terms can be seen as growing polymers. The iteration goes as follows: for a term of order $n$ corresponding to a given polymer $Y$ and having $n$ interpolating parameters $1> h_{1} > \ldots > h_{n} >0$ we put a common parameter $h_{n+1}$ interpolating between $0$ and $h_{n}$ on all monomers admissible for $Y$. It is easy to check that the process is finite since ${\cal
A}$ is finite and that one obtains the desired formula. $\blacksquare$
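In the simplest instance, ${\cal X} = \{X\}$ contains a single monomer and ${\cal A} = \{\emptyset, (X)\}$. The only allowed polymers are then $\emptyset$, for which $z_{X} = 0$ ($X$ is admissible for $\emptyset$), and $(X)$, for which $z_{X} = h_{1}$, so that (\[eqclust0\]) reduces to the fundamental theorem of calculus $${\cal F}(1) = {\cal F}(0) + \int_{0}^{1} \! dh_{1} \, {\cal F}'(h_{1}).$$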
In the following our monomers are sets of cubes (that we call the support of the monomer) and links between those cubes. When we take down a polymer, we connect all the cubes in its support and maybe some more cubes. Thus a polymer is made of several connected regions, we will say that it is connected if it has a single connected component. The rules of admissibility will be to never take down a monomer whose support is totally contained in a connected region.
In this case, one can show that the interpolating parameters depend only on the connected component to which the corresponding monomer belongs, so that one can think of “factorizing” the connected components. We define generalized polymers as sets of connected polymers. Then a generalized polymer $Y = \{
Y_{1}, \ldots , Y_{p} \}$ is allowed if the polymer $Y_{1} \cup \ldots \cup
Y_{p}$ is allowed (this does not depend on the order of the $Y_{i}$'s). Equation (\[eqclust0\]) becomes $$\label{eqclust}
{\cal F}(\vec{1}) = \sum_{{Y=\{ Y_{1}, \ldots, Y_{p} \}} \atop
{Y_{i} = (X_{i}^{1}, \ldots X_{i}^{n_{i}})}} \,
\left( \prod_{i=1}^{p} \int_{\lefteqn{\scriptstyle 1>h_{i}^{1}> \ldots
h_{i}^{n_{i}} >0}} \quad dh^{1}_{i} \ldots dh^{n_{i}}_{i} \right) \,
\left( \prod_{X \in Y} \frac{\partial}{\partial z_{X}} \right)
{\cal F}\left[\vec{z}(Y, \{ h_{i}^{j} \})\right]$$ where the sum extends on all allowed generalized polymers, and $\vec{z}(Y,\{ h_{i}^{j} \})$ is given by $$\label{paramclust}
z_{X}(Y,\{ h_{i}^{j} \}) = \left\{
\begin{array}{ll}
0&\mbox{if $X$ is admissible for $Y$, {\it i.e.} for } Y_{1} \cup \ldots
\cup Y_{p} \\
1&\mbox{if $X$ is not admissible for $\emptyset$} \\
h_{i}^{j} &\mbox{if $X=X_{i}^{j}$ for some $i$ and $j$} \\
h_{i}^{j} &\mbox{where $X$ is not admissible for $Y_{i}$ and } \nonumber \\
& j= \min \{k / X \mbox{ not admissible for } (X_{i}^{1},
\ldots, X_{i}^{k})\}, \mbox{ otherwise}
\end{array}
\right.$$
Large/small field decomposition
-------------------------------
Semi-perturbative expansions (like cluster expansions) are convergent only when the “perturbation” is small (in our case, the operators $V_{\Delta}$). Thus it is very important to distinguish between the so-called [*small field regions*]{}, where perturbations will work, and the [*large field regions*]{}, where we must find other estimates (they will come mostly from the exponentially small probabilistic factor attached to those regions).
We take a ${\cal C}^{\infty}_{0}$ function $\varepsilon$ such that
- $0 \leqslant \varepsilon \leqslant 1$
- $\mbox{Supp}(\varepsilon) \subset [0, 2]$
- $\varepsilon_{|_{[0, 1]}} = 1$
Then for each $\Delta$ we define $$\varepsilon_{\Delta} (V_{\Delta}) = \varepsilon \left(
\frac{\|D_{\Lambda, \mu} V_{\Delta} D_{\Lambda, \mu}\|}{a
\lambda^{-1/4} C_{0}} \right) \quad
\mbox{and} \quad \eta_{\Delta}= 1-\varepsilon_{\Delta}$$ where $a = O(1)$. Then we can expand $$\label{lsdecomp}
1 = \prod_{\Delta} (\varepsilon_{\Delta} + \eta_{\Delta}) = \sum_{N
\geqslant 0} \, \sum_{\Omega = \{ \Delta_{1}, \ldots, \Delta_{N}\}}
\left(\prod_{\Delta \in \Omega} \eta_{\Delta}\right)
\left(\prod_{\Delta \not \in \Omega} \varepsilon_{\Delta}\right)$$ where $\Omega$ is the large field region whose contribution will be isolated through the following lemma.
\[lemlargef\] \
Let $\Omega$ be a large field region made of $N$ cubes $\Delta_{1}$, …, $\Delta_{N}$ and $A$ any operator such that $$\forall {D} \subset \{1, \ldots N \}, \quad \mbox{$\displaystyle A +
\sum_{i \in {D}} B_{i}$ is invertible}$$ ($B_{i}$ stands for $B_{\Delta_{i}} \equiv \lambda D_{\Lambda, \mu}
V_{\Delta_{i}} D_{\Lambda, \mu}$).
We have the following identity $$\label{eqlargef}
\frac{1}{\displaystyle A + \sum B_{i}} = \sum_{n=0}^{N} \, (-1)^{n}
\hspace{-1em}
\sum_{i_{1} \in \{1 \ldots N\}}
\sum_{{i_{2} \in \{1 \ldots N\}} \atop {i_{2} \not \in \{i_{1}\}}} \ldots
\sum_{{i_{n} \in \{1 \ldots N\}} \atop {i_{n} \not \in \{i_{1} \ldots
i_{n-1}\}}} \frac{1}{A} O_{n} \frac{1}{A} \ldots O_{1} \frac{1}{A}$$ where $$O_{p} = B_{p} - \left(\sum_{i \in \{1 \ldots p\}} B_{i} \right)
\frac{1}{\displaystyle A + \sum_{i \in \{1 \ldots p\}} B_{i}} B_{p}$$
[*Proof*]{}\
The proof relies on resolvent expansion identities $$\frac{1}{A+B} = \frac{1}{A} \left(I - B \frac{1}{A+B}\right) =
\left(I - \frac{1}{A+B} B\right) \frac{1}{A}$$ We show by induction that for all $m \in \{1, \ldots, N\}$ we have $$\begin{aligned}
\frac{1}{A + \sum B_{i}} &=& \sum_{n=0}^{m-1} \, (-1)^{n} \hspace{-1em}
\sum_{{(i_{1}, \ldots i_{n})} \atop {i_{k} \not \in
\{i_{1} \ldots i_{k-1}\}}} \frac{1}{A} O_{n} \frac{1}{A} \ldots O_{1}
\frac{1}{A} + (-1)^{m} R_{m} \\
R_{m} &=& \hspace{-1em} \sum_{{(i_{1}, \ldots i_{m})} \atop {i_{k} \not \in
\{i_{1} \ldots i_{k-1}\}}} \frac{1}{A + \sum B_{i}} B_{i_{m}} \frac{1}{A}
O_{m-1} \frac{1}{A} \ldots O_{1} \frac{1}{A} \end{aligned}$$
The case $m=1$ is obtained by a resolvent expansion $$\frac{1}{A + \sum B_{i}} = \frac{1}{A} - \sum_{i_{1}} \frac{1}{A + \sum B_{i}}
B_{i_{1}} \frac{1}{A}$$
Then we go from $m$ to $m+1$ with 2 steps of resolvent expansion. We write $$\begin{aligned}
\frac{1}{A + \sum B_{i}} &=& \left(I - \hspace{-1em} \sum_{i_{m+1} \not \in
\{i_{1} \ldots i_{m}\}} \frac{1}{A + \sum B_{i}} B_{i_{m+1}}\right)
\frac{1}{\displaystyle A + \sum_{k=1}^{m} B_{i_{k}}} \\
&=& \left(I - \hspace{-1em} \sum_{i_{m+1} \not \in \{i_{1} \ldots i_{m}\}}
\frac{1}{A + \sum B_{i}} B_{i_{m+1}} \right) \, \frac{1}{A}
\left(I - \sum_{k=1}^{m} B_{i_{k}}
\frac{1}{\displaystyle A + \sum_{l=1}^{m} B_{i_{l}}} \right)\end{aligned}$$
Finally, for $m=N$ we make a last resolvent expansion on the rest term $R_{N}$ by writing $$\frac{1}{A + \sum B_{i}} = \frac{1}{A} \left(I - \sum B_{i}
\frac{1}{A+\sum B_{i}} \right)$$ $\blacksquare$
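As a quick check of (\[eqlargef\]) in the case $N=1$, the formula reads $$\frac{1}{A+B_{1}} = \frac{1}{A} - \frac{1}{A} \left[ B_{1} - B_{1} \frac{1}{A+B_{1}} B_{1} \right] \frac{1}{A},$$ which is obtained by applying the resolvent expansion twice: $$\frac{1}{A+B_{1}} = \frac{1}{A} - \frac{1}{A} B_{1} \frac{1}{A+B_{1}} = \frac{1}{A} - \frac{1}{A} B_{1} \frac{1}{A} + \frac{1}{A} B_{1} \frac{1}{A+B_{1}} B_{1} \frac{1}{A}.$$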
If we look at $$\chi_{\Delta_{out}} \frac{1}{A+\sum B_{i}} \chi_{\Delta_{in}}$$ and fix $\{\Delta_{i_{1}}, \ldots \Delta_{i_{n}}\}$, we can see that summing over the sequences $(i_{1}, \ldots i_{n})$ and choosing a particular term for each $O_{p}$ amounts to constructing a tree on $\{ \Delta_{in}, \Delta_{out}, \Delta_{i_{1}}, \ldots \Delta_{i_{n}}\}$.
We define an oriented link $l$ as a couple of cubes that we note $(l.y, l.x)$, then $\vec{\cal L}$ is the set of oriented links. Given two cubes $\Delta_{in}$ and $\Delta_{out}$ and a set of cubes $\Omega = \{ \Delta_{1},
\ldots \Delta_{n}\}$ we construct the set ${\cal T}_{R}(\Delta_{in},
\Delta_{out}, \Omega)$ of oriented trees going from $\Delta_{in}$ to $\Delta_{out}$ through $\Omega$ as the sequences $(l_{1}, \ldots l_{n+1}) \in \vec{\cal L}^{n+1}$ which satisfy
- $l_{1}.x = \Delta_{in}$
- $l_{n+1}.y = \Delta_{out}$
- $\forall k \in \{1, \ldots n\}, \, l_{k}.y \in \Omega $
- $\forall k \in \{2, \ldots n+1\}, \, l_{k}.x \in \{ l_{1}.y, \ldots
l_{k-1}.y \} $
- $\forall k \in \{2, \ldots n\}, \, l_{k}.y \not \in \{ l_{1}.y, \ldots
l_{k-1}.y \} $
Then we have the following equivalent formulation of lemma \[lemlargef\].
\[ldecoupling\] \
Let $\Omega$ be a large field region made of $N$ cubes $\Delta_{1}$, …, $\Delta_{N}$ and $A$ any operator such that $$\forall {D} \subset \{1, \ldots N \}, \quad \mbox{$\displaystyle A +
\sum_{i \in {D}} B_{i}$ is invertible}$$
We have the following identity $$\begin{aligned}
\label{eqldecoupling}
\chi_{\Delta'} \frac{1}{\displaystyle A + \sum B_{i}}
\chi_{\Delta} &=& \sum_{n=0}^{N} \, (-1)^{n} \hspace{-1.5em}
\sum_{{\Omega' \subset \Omega} \atop {\Omega' = \{\Delta'_{1}, \ldots
\Delta'_{n}\}}}
\sum_{{{\cal T} \in {\cal T}_{R}(\Delta, \Delta', \Omega')}
\atop {{\cal T} = (l_{1}, \ldots l_{n+1})}} \nonumber \\
&& \quad \chi_{\Delta'} \frac{1}{A}
O_{n}(l_{n+1}.x, l_{n}.y) \frac{1}{A} \ldots O_{1}(l_{2}.x, l_{1}.y)
\frac{1}{A} \chi_{\Delta}\end{aligned}$$ where $$O_{p}(\Delta_{j}, \Delta_{i}) = B_{\Delta_{i}} \delta_{\Delta_{i}
\Delta_{j}} -
B_{\Delta_{j}} \frac{1}{\displaystyle A + \sum_{i \in \{1 \ldots p\}} B_{i}}
B_{\Delta_{i}}$$
The proof, being just a rewriting of lemma \[lemlargef\], is immediate. $\blacksquare$
Thanks to this lemma we can factorize out the contribution of the large field region; we then need to extract spatial decay for the resolvent in the small field region. However, a kind of Combes-Thomas estimate ([@CT]) would not be enough because of the normalization factor that we must pay. For this reason, we will make a polymer expansion to determine which region really contributes to the resolvent.
Polymer expansion for the resolvent in the small field region
-------------------------------------------------------------
For some large field region $\Omega$, we want to prove the decay of $$\frac{1}{\displaystyle U_{\Lambda, \mu}^{-1} + \lambda \sum_{\Delta
\not \in \Omega} D_{\Lambda, \mu} V_{\Delta} D_{\Lambda, \mu} - z
D_{\Lambda, \mu}^{2}} \equiv R U_{\Lambda, \mu}$$ and get something to pay for the normalization factor.
We define the set ${\cal L}$ of links as the set of pairs of cubes, and ${\cal L}(\Omega)$ as the set of links which do not connect two cubes of $\Omega$. Then for $l = \{\Delta, \Delta'\}$ we define $$\begin{aligned}
Q_{l} &=& \lambda (\chi_{\Delta} U_{\Lambda, \mu} D_{\Lambda, \mu}
V_{\Delta'} D_{\Lambda, \mu}+
\chi_{\Delta'} U_{\Lambda, \mu} D_{\Lambda, \mu} V_{\Delta}
D_{\Lambda, \mu})
- z (\chi_{\Delta} C_{\Lambda, \mu} \chi_{\Delta'}
+ \chi_{\Delta'} C_{\Lambda, \mu} \chi_{\Delta} ) \\
R_{l} &=& \chi_{\Delta} R \chi_{\Delta'} + \chi_{\Delta'} R
\chi_{\Delta} \\
U_{l} &=& \chi_{\Delta} U_{\Lambda, \mu} \chi_{\Delta'} +
\chi_{\Delta'} U_{\Lambda, \mu} \chi_{\Delta} \end{aligned}$$ with the convention that $V_{\Delta} = 0$ if $\Delta \in \Omega$.
For any fixed $l_{0} = (\Delta_{0}, \Delta_{0}')$ we expand $R_{l_{0}}$ on ${\cal L}(\Omega)$ with the rule that for any growing polymer
- if we have two adjacent connected components $Y_{1}$ and $Y_{2}$ (such that $d_{\Lambda}(Y_{1}, Y_{2}) = 0$) we connect the two components
- we connect $\Delta_{0}$ (resp. $\Delta_{0}'$) to any adjacent polymer component
This allows us to take into account the fact that the operators localized on a pair of cubes have their support extending to the neighboring cubes.
Let us notice that if $A$ and $B$ have disjoint support, we have $$\frac{1}{I+A+B} = \frac{1}{I+A} \, \frac{1}{I+B}$$
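Indeed, if $A$ and $B$ have kernels supported in disjoint regions then $AB = BA = 0$, so that $$(I+A)(I+B) = (I+B)(I+A) = I+A+B$$ and the two factors in the identity above commute.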
Then it is easy to see that the expansion of $R_{l_{0}}$ involves only totally connected polymers which connect $\Delta_{0}$ to $\Delta_{0}'$, because the other terms necessarily contain a product of two operators with disjoint supports, which gives zero. We denote by ${\cal A}(\Omega, l_{0})$ the corresponding set of polymers, which is a decreasing function of $\Omega$, [*i.e.*]{} $$\Omega' \subset \Omega \Rightarrow {\cal A}(\Omega, l_{0}) \subset
{\cal A}(\Omega', l_{0})$$ Then, according to (\[eqclust\]), our expansion looks like
$$\begin{aligned}
R_{l_{0}}(\Omega) &=& \sum_{n \geqslant 0} \, \sum_{{Y \in {\cal A}(\Omega,
l_{0})} \atop {Y = (X_{1}, \ldots, X_{n})}}
\int_{1>h_{1}> \ldots h_{n}>0} \hspace{-4em} dh_{1} \, \ldots \,
dh_{n} \left(\prod_{X \in Y} \frac{\partial }{\partial z_{X}}\right)
\frac{1}{\displaystyle I + \sum_{X \in {\cal L}(\Omega)} z_{X} Q_{X}}
\left[\vec{z}(Y, \{h_{i}\})\right] \\
&=& \sum_{Y \in {\cal A}(\Omega, l_{0})} \int \prod_{i} dh_{i}
\left(\prod_{X \in Y} \frac{\partial }{\partial u_{X}}\right)
\frac{1}{\displaystyle I + \sum_{X \in {\cal L}(\Omega)} z_{X} Q_{X}
+ \sum_{X \in Y} u_{X} Q_{X}}
\left[\vec{z}(Y, \{h_{i}\}), \vec{0}\right] \end{aligned}$$
Then in the second expression, we rewrite the derivatives as Cauchy integrals so that $$\begin{aligned}
\label{eqresexp}
R_{l_{0}}(\Omega) &=& \sum_{Y \in {\cal A}(\Omega, l_{0})} \int \prod_{i}
dh_{i} \, \left(\prod_{X \in Y} \oint \frac{du_{X} }{2i\pi
u_{X}^{2}}\right)
\frac{1}{\displaystyle I + \sum_{X \in {\cal L}(\Omega)}
z_{X}(Y, \{h_{i}\}) Q_{X}
+\sum_{X \in Y} u_{X} Q_{X}} \\
\label{Yexpan}
&& \equiv \sum_{Y \in {\cal A}(\Omega, l_{0})} R(Y) = I +
\sum_{Y \in {\cal A}^{*}(\Omega, l_{0})} R(Y)\end{aligned}$$ where ${\cal A}^{*}(\Omega, l_{0}) = {\cal A}(\Omega, l_{0}) / \{
\emptyset\}$
We suppose that we have fixed $n_{1}$, the power of decay in $d_{\Lambda}(\Delta_{1}, \Delta_{3})$ of $\|\chi_{\Delta_{1}} D_{\Lambda, \mu} V_{\Delta_{2}} D_{\Lambda, \mu}
\chi_{\Delta_{3}}\|$ and $\| \chi_{\Delta_{1}} C_{\Lambda, \mu}
\chi_{\Delta_{3}}\|$, then we have the following lemma
\[lemresexp\] \
For $n_{2} = n_{1} - 3(d+1)$ and $\lambda$ small enough, we have $$\begin{aligned}
&& \forall l_{0}=\{\Delta_{0}, \Delta_{0}'\}, \, \forall Y \in
{\cal A}^{*}(\emptyset, l_{0}) \equiv {\cal A}^{*}(l_{0}), \nonumber \\
&& \quad \quad \quad
\| R(Y) \| \leqslant \frac{\lambda^{|Y|/4}}{[1 + L^{-1}
d_{\Lambda}(\Delta_{0}, \Delta_{0'})]^{n_{2}}} \Gamma(Y)
\mbox{ with} \sum_{Y \in {\cal A}^{*}(l_{0})} \Gamma(Y) \leqslant 1\end{aligned}$$ where $|Y|$ is the number of monomers in $Y$.
[*Proof*]{}\
Since we are in the small field region $$\forall l=\{\Delta, \Delta'\}, \quad \| Q_{l}\| \leqslant \frac{ O(1) \,
\lambda^{3/4}}{[1+L^{-1} d_{\Lambda}(\Delta, \Delta')]^{n_{1}-(d+1)}}$$ Then in (\[eqresexp\]) we can integrate each $u_{l}$ on a circle of radius $$R_{l} = \lambda^{-1/2} [1+L^{-1} d_{\Lambda}(\Delta, \Delta')]^{n_{1} -
2 (d+1)}$$ while staying in the domain of analyticity for $u_{l}$ and have a resolvent bounded in norm by say $2$ (if $\lambda$ is small enough). Thus $$\begin{aligned}
\| R(Y) \| &\leqslant& \int_{1>h_{1}> \ldots h_{|Y|}>0} \hspace{-4em} dh_{1}
\, \ldots \, dh_{|Y|} \, \left(\prod_{{X \in Y} \atop {X=\{\Delta_{X},
\Delta_{X}'\}}} \frac{O(1) \, \lambda^{1/2}}{[1+L^{-1}
d_{\Lambda}(\Delta_{X}, \Delta_{X}')]^{n_{1} - 2(d+1)}} \right) \\
&\leqslant& \frac{\lambda^{|Y|/4}}{[1 + L^{-1}
d_{\Lambda}(\Delta_{0}, \Delta_{0'})]^{n_{2}}} \, \frac{\left[O(1)
\lambda^{1/4}\right]^{|Y|}}{|Y|!}
\left(\prod_{{X \in Y} \atop {X=\{\Delta_{X}, \Delta_{X}'\}}}
\frac{1}{[1+L^{-1} d_{\Lambda}(\Delta_{X}, \Delta_{X}')]^{d+1}} \right) \end{aligned}$$ This demonstrates the first part of the lemma, with $$\Gamma(Y) = \frac{\left[O(1) \lambda^{1/4}\right]^{|Y|}}{|Y|!}
\left(\prod_{X \in Y} \Gamma_{X} \right) \quad \mbox{ and } \sum_{X \ni
\Delta} \Gamma_{X} = O(1)$$
A link $l=\{\Delta, \Delta'\}$ can either be a true link when $\Delta \neq
\Delta'$ or a tadpole when both cubes collapse. Our expansion rules ensure that there is at most 1 tadpole per cube of $\mbox{Supp}(Y)$, the support of $Y$. If we forget about proximity links (the fact that we also connect adjacent cubes) then a polymer with $m$ true links and $p$ tadpoles has a support of $m+1$ cubes (2 of them being $\Delta_{0}$ and $\Delta_{0}'$) and the true links make a tree on the support of $Y$. If we take into account the proximity links then two connected links in the tree on $Y$ are adjacent instead of sharing a common cube; we will forget about this since it would induce at most a factor $O(1)^{|Y|}$. The links in $Y$ are ordered, but we can take them unordered by using up the $1/|Y|!$ we have in $\Gamma(Y)$.
Then the sum over $Y$ can be decomposed as
- choose $m \geqslant 1$
- choose $m-1$ cubes $\{ \Delta_{1}, \ldots, \Delta_{m-1}\}$
- choose a tree ${\cal T}$ on $\{ \Delta_{0}, \Delta_{0}', \Delta_{1}, \ldots, \Delta_{m-1}\}$
- choose $0 \leqslant p \leqslant m+1$
- place $p$ tadpoles on $\{ \Delta_{0}, \Delta_{0}', \Delta_{1},
\ldots, \Delta_{m-1}\}$
We can perform the sum on tadpole configurations because for $p$ tadpoles, we have a factor $\left[O(1) \lambda^{1/4}\right]^{p}$ coming from the tadpoles and at most $\left({m+1} \atop {p}\right)$ configurations. Thus $$\begin{aligned}
\sum_{Y \in {\cal A}^{*}(l_{0})} \!\!\!\! \Gamma(Y) &\leqslant&
\sum_{m\geqslant 1} \sum_{\{ \Delta_{1}, \ldots \Delta_{m-1}\}}
\sum_{{\cal T}} \left[1 + O(1) \lambda^{1/4}\right]^{m+1} \nonumber \\
&& \quad
\left[O(1) \lambda^{1/4}\right]^{m} \left(
\prod_{X \in {\cal T}} \Gamma_{X}\right)\end{aligned}$$ Then we first fix the form of ${\cal T}$ and then sum over the positions of $\Delta_{1}$, …, $\Delta_{m-1}$. But since the cubes are now labeled we get $(m-1)!$ times the desired sum. $$\sum_{Y \in {\cal A}^{*}(l_{0})} \Gamma(Y) \leqslant \sum_{m\geqslant 1}
\left[O(1) \lambda^{1/4}\right]^{m} \frac{1}{(m-1)!}
\sum_{\cal T} \sum_{(\Delta_{1}, \ldots, \Delta_{m-1})}
\left( \prod_{X \in {\cal T}} \Gamma_{X}\right)$$ We choose $\Delta_{0}$ as the root of our tree and suppose that the position of $\Delta_{0}'$ is not fixed. Then the sum over the positions of the cubes is made starting from the leaves, thanks to the decaying factors $\Gamma_{X}$ (cf. [@Riv2]); this costs a factor $O(1)^{m}$.
Finally, the sum over ${\cal T}$, which is a sum over unordered trees, is performed using Cayley’s theorem which states that there are $(m+1)^{m-1}$ such trees. $$\sum_{Y \in {\cal A}^{*}(l_{0})} \Gamma(Y) \leqslant \sum_{m\geqslant 1}
\left[O(1) \lambda^{1/4}\right]^{m} \frac{(m+1)^{m-1}}{(m-1)!} \leqslant O(1)
\lambda^{1/4} \leqslant 1$$ for $\lambda$ small enough. $\blacksquare$
We note that we can perform the same expansion on $$R'= U_{\Lambda, \mu}^{-1} \frac{1}{\displaystyle U_{\Lambda, \mu}^{-1} +
\lambda \sum_{\Delta
\not \in \Omega} D_{\Lambda, \mu} V_{\Delta} D_{\Lambda, \mu} - z
D_{\Lambda, \mu}^{2}}$$
Summation and bounds on $T$
--------------------------
We define $$T_{out, in} = \chi_{\Delta_{out}} T_{\Lambda, \varepsilon}
\chi_{\Delta_{in}}$$ We can combine equations (\[complextrans\]), (\[lsdecomp\]), (\[eqldecoupling\]) and (\[Yexpan\]) to write $$\begin{aligned}
\lefteqn{T_{out, in}} && \quad \quad = \int \! \otimes
d\mu_{\xi_{\Lambda}^{\Delta}}
(V_{\Delta}) \, e^{ \frac{\mu_{0}^{2}}{2} < 1, \xi_{\Lambda}^{-1} 1> + i
\mu_{0} <V, \xi_{\Lambda}^{-1} 1>} \,
\sum_{N \geqslant 0} \, \sum_{\Omega = \{ \Delta_{1}, \ldots \Delta_{N} \}}
\left(\prod_{\Delta \in \Omega} \eta_{\Delta}\right) \,
\left(\prod_{\Delta \not \in \Omega} \varepsilon_{\Delta} \right)
\nonumber \\
&& \quad \quad \quad \sum_{n=0}^{N} (-1)^{n}
\sum_{{\Omega' \subset \Omega} \atop {|\Omega'| = n}}
\sum_{{{\cal T} \in {\cal T}_{R}(\Delta_{in}, \Delta_{out}, \Omega')}
\atop {{\cal T} = (l_{1}, \ldots l_{n+1})}}
\sum_{{(\Delta_{2}^{x}, \ldots \Delta_{n+1}^{x})} \atop {(\Delta_{1}^{y},
\ldots \Delta_{n}^{y})}}
\sum_{(\Delta_{1}^{z}, \ldots \Delta_{n+1}^{z})}
\sum_{{(Y_{1}, \ldots Y_{n+1})} \atop {Y_{i} \in {\cal A}(\Omega,
\{\Delta_{i}^{y}, \Delta_{i}^{z}\})}} \nonumber \\
&& U_{\Delta_{out}, \Delta_{n+1}^{z}} R'_{\Delta_{n+1}^{z},
\Delta_{n+1}^{x}} T_{n}(l_{n+1}.x, l_{n}.y)
R_{\Delta_{n}^{y}, \Delta_{n}^{z}} U_{\Delta_{n}^{z}, \Delta_{n}^{x}}
\ldots T_{1}(l_{2}.x, l_{1}.y) R_{\Delta_{1}^{y}, \Delta_{1}^{z}}
U_{\Delta_{1}^{z}, \Delta_{in}}\end{aligned}$$ where we pretend that the $\chi_{\Delta}$’s are sharp; otherwise we would have to deal with adjacent cubes, but this is an irrelevant complication. Furthermore, for the leftmost term we made a polymer expansion of $U_{\Lambda, \mu} R'$ instead of $RU_{\Lambda, \mu}$ so that we can write $T_{out, in}$ as $$T_{out, in} = \chi_{\Delta_{out}} \left(U_{\Lambda, \mu} + U_{\Lambda, \mu}
\tilde{T} U_{\Lambda, \mu} \right) \chi_{\Delta_{in}}$$
The crucial point here is to notice that for any cube $\Delta$, each term where $\Delta$ appears in $\Omega$ but not in $\Omega'$ pairs with a corresponding term where $\Delta \not \in \Omega$ and $\Delta \not \in
\bigcup \mbox{Supp}(Y_{i})$ ([*i.e.*]{} $\Delta$ has been killed in every polymer expansion). Then the corresponding $\varepsilon_{\Delta}$ and $\eta_{\Delta}$ add up back to 1 so that
$$\begin{aligned}
\lefteqn{T_{out, in}} && \quad \quad = \sum_{n \geqslant 0} (-1)^{n}
\sum_{\Omega = \{ \Delta_{1}, \ldots \Delta_{n} \}}
\sum_{{{\cal T} \in {\cal T}_{R}(\Delta_{in}, \Delta_{out}, \Omega)}
\atop {{\cal T} = (l_{1}, \ldots l_{n+1})}}
\sum_{{(\Delta_{2}^{x}, \ldots \Delta_{n+1}^{x})} \atop {(\Delta_{1}^{y},
\ldots \Delta_{n}^{y})}} \sum_{(\Delta_{1}^{z}, \ldots \Delta_{n+1}^{z})}
\sum_{{(Y_{1}, \ldots Y_{n+1})} \atop {Y_{i} \in {\cal A}(\Omega,
\{\Delta_{i}^{y}, \Delta_{i}^{z}\})}}
\nonumber \\
&& \quad \quad \int \! \otimes d\mu_{\xi_{\Lambda}^{\Delta}} (V_{\Delta})
\, e^{ \frac{\mu_{0}^{2}}{2} < 1, \xi_{\Lambda}^{-1} 1> + i \mu_{0} <V,
\xi_{\Lambda}^{-1} 1>} \,
\left(\prod_{\Delta \in \Omega} \eta_{\Delta}\right)
\left(\prod_{\Delta \in \cup \mbox{\footnotesize Supp}(Y_{i})}
\varepsilon_{\Delta} \right)
\nonumber \\
&& U_{\Delta_{out}, \Delta_{n+1}^{z}} R'_{\Delta_{n+1}^{z},
\Delta_{n+1}^{x}} T_{n}(l_{n+1}.x, l_{n}.y)
R_{\Delta_{n}^{y}, \Delta_{n}^{z}}
U_{\Delta_{n}^{z}, \Delta_{n}^{x}}
\ldots T_{1}(l_{2}.x, l_{1}.y) R_{\Delta_{1}^{y}, \Delta_{1}^{z}}
U_{\Delta_{1}^{z}, \Delta_{in}}\end{aligned}$$
The factor $e^{ \frac{\mu_{0}^{2}}{2} < 1, \xi_{\Lambda}^{-1} 1> + i \mu_{0}
<V, \xi_{\Lambda}^{-1} 1>}$ corresponds to the translation of $V$ by $-i\mu_{0}$; this is equivalent to having translated all the $V_{\Delta}$’s by $-i \mu_{0} \chi_{\Delta}$, therefore we can write it as $$\prod_{\Delta} e^{ \frac{\mu_{0}^{2}}{2} < \chi_{\Delta},
(\xi_{\Lambda}^{\Delta})^{-1} \chi_{\Delta}> + i \mu_{0}
<V_{\Delta}, (\xi_{\Lambda}^{\Delta})^{-1} \chi_{\Delta}>}$$ then we can perform the integration over all $V_{\Delta}$ with $\Delta \not \in \Omega \cup
(\bigcup \mbox{Supp}(Y_{i}))$, so that the normalization factor reduces to $$\prod_{\Delta \in \Omega \cup (\cup \mbox{\footnotesize Supp}(Y_{i}))}
e^{ \frac{\mu_{0}^{2}}{2} < \chi_{\Delta},
(\xi_{\Lambda}^{\Delta})^{-1} \chi_{\Delta}> + i \mu_{0}
<V_{\Delta}, (\xi_{\Lambda}^{\Delta})^{-1} \chi_{\Delta}>}$$ This amounts to paying a constant per cube of $\Omega \cup (\bigcup \mbox{Supp}(Y_{i}))$; this is done in $\Omega$ with a fraction of the probabilistic factor coming from the large field condition, and in $\bigcup \mbox{Supp}(Y_{i})$ with a fraction of the factor $\lambda^{\sum |Y_{i}|/4}$ coming from the $R$’s.
The sums over the various $Y_{i}$’s are controlled by lemma \[lemresexp\] and we are left with a sum over a tree that we perform much in the same way as in lemma \[lemresexp\]. Indeed one can check that spatial decay appears through factors of the form $$\sum_{\Delta_{i}^{x}, \Delta_{i}^{y}, \Delta_{i}^{z}}
V_{\Delta_{l_{i}.y}} D_{\Lambda, \mu} \chi_{\Delta_{i}^{y}} R
\chi_{\Delta_{i}^{z}} U_{\Lambda, \mu} \chi_{\Delta_{i}^{x}} D_{\Lambda,
\mu} V_{\Delta_{l_{i}.x}}$$ thus we can extract decay in $d_{\Lambda}(l_{i}.y, l_{i}.x)$ times a bound in $\prod \|D_{\Lambda, \mu} V_{\Delta_{l_{i}.y}} D_{\Lambda, \mu}\|$ when we combine all these factors.
Yet we need some extra features to deal with the product of $O_{i}$’s, each of them being bounded in norm by $O(1) \, \mu^{-1} \|D_{\Lambda, \mu}
V_{i} D_{\Lambda, \mu}\| \|D_{\Lambda, \mu} V_{k_{i}} D_{\Lambda, \mu}\|$ for some $k_{i}$.
The factor $\mu^{-1}$ can be controlled with a small fraction of the probabilistic factor attached to the cube $\Delta_{i}$.
If a given $D_{\Lambda, \mu} V_{i} D_{\Lambda, \mu}$ appears at a large power, it necessarily has a large number of links attached to it. Because of the tree structure, the links must go further and further, so that the decay of the links together with the Gaussian measure allows us to control the factorial coming from the accumulation of fields. This is quite standard and the reader can refer to [@Riv2] for instance.
Finally we can write $T_{out, in}$ as a sum over polymers of the form $$\begin{aligned}
T_{out, in} &=& \chi_{\Delta_{out}} (U_{\Lambda, \mu} + \delta T)
\chi_{\Delta_{in}} \\
\delta T &=& \frac{\lambda^{c_{1}}}{[1 +
L^{-1} d_{\Lambda}(\Delta_{in}, \Delta_{out})]^{n_{3}}}
\sum_{Y \in {\cal A}^{*}(\Delta_{in}, \Delta_{out})} \lambda^{c_{2}|Y|}
\Gamma_{Y} T(Y) \end{aligned}$$ where $c_{1}$ and $c_{2}$ are small constants, $\Gamma_{Y}$ has decay in the spatial extension of $Y$ and $\displaystyle
\|\sum_{Y \in {\cal A}^{*}(\Delta_{in}, \Delta_{out})} T(Y)\|$ is bounded. $\blacksquare$
Anderson model with an infra-red cut-off in dimension $d=2$
===========================================================
We are interested now in the particular case $$H = -\Delta_{\eta} + \lambda \eta_{E} V \eta_{E}$$ where
- $\Delta_{\eta}^{-1}$ is an ultra-violet regularized inverse Laplacian; we will write $\Delta_{\eta}$ as $-p^{2}$
- $\eta_{E}$ is an infra-red cut-off that forces $|p^{2} - E| \geqslant A \lambda^{2} |\log \lambda|^{2}$
- $V$ has covariance $\xi$ which is a ${\cal C}^{\infty}_{0}$ approximation of a $\delta$-function
- $M^{1/2}$ is an even integer greater than 2, and $j_{0} \in \NN$ is such that $$M^{-j_{0}} \leqslant \inf_{\mbox{\scriptsize Supp}(\eta_{E})} |p^{2} - E|
\leqslant M^{-(j_{0}-1)}$$
For each $0\leqslant j \leqslant j_{0}$, we construct a smooth partition of unity into cubes of side $M^{j}$ which form a lattice $I\!\! D_{j}$. This yields a decomposition of $V$ into fields $V_{\Delta_{j}}$, and we will assume for simplicity that for $j<k$ and $\Delta_{k} \in I\!\! D_{k}$ $$V_{\Delta_{k}} = \sum_{{\Delta_{j} \in I\!\! D_{j}} \atop { \Delta_{j}
\subset \Delta_{k}}} V_{\Delta_{j}}$$ even though this is not strictly true because of irrelevant border effects.
The matrix model
----------------
We make a partition of unity according to the size of $p^{2}-E$ thanks to a function $\hat{\eta}$ which satisfies
- $\hat{\eta}$ is in ${\cal C}^{\infty}_{0}( \RR_{+})$ with value in $[0, 1]$
- $\hat{\eta}$ has its support inside $[0, 2]$ and is equal to 1 on $[0, 1]$
- the ${\cal L}^{\infty}$ norm of the derivatives of $\hat{\eta}$ does not grow too fast
Then we construct $$\left\{
\begin{array}{rcl}
\hat{\eta}_{0}(p)&=&1-\hat{\eta} \left[M^{2} (p^{2}-E)^{2}
\right] \\
\hat{\eta}_{j}(p)&=&\hat{\eta} \left[M^{2j}(p^{2}-E)^{2}\right] -
\hat{\eta} \left[M^{2(j+1)} (p^{2}-E)^{2} \right]
\quad \mbox{for } j > 0 \\
\end{array}
\right.$$ In order to shorten expressions, we assume that $$\eta_{E} = \sum_{j=0}^{j_{0}} \eta_{j}$$
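The identification $\eta_{E} = \sum_{j} \eta_{j}$ rests on the telescoping structure of the slice decomposition, $\sum_{j=0}^{j_{0}} \hat{\eta}_{j} = 1 - \hat{\eta}\left[M^{2(j_{0}+1)}(p^{2}-E)^{2}\right]$. This can be checked numerically; the following sketch is only an illustration, assuming a standard bump-function realization of $\hat{\eta}$ and illustrative values of $M$, $E$ and $j_{0}$ (none of which are taken from the text).

```python
import numpy as np

def eta_hat(s):
    """Smooth cutoff in C^infty_0(R_+): equal to 1 on [0, 1], supported in [0, 2]."""
    s = np.asarray(s, dtype=float)
    def g(t):
        out = np.zeros_like(t)
        pos = t > 0
        out[pos] = np.exp(-1.0 / t[pos])
        return out
    num = g(2.0 - s)
    return num / np.maximum(num + g(s - 1.0), 1e-300)

M, E, j0 = 4.0, 1.0, 6                       # illustrative values only
p = np.linspace(0.0, 2.0, 4001)
x = (p**2 - E)**2

eta_j = [1.0 - eta_hat(M**2 * x)]            # slice j = 0
for j in range(1, j0 + 1):
    eta_j.append(eta_hat(M**(2*j) * x) - eta_hat(M**(2*(j+1)) * x))

# Telescoping: everything cancels except the innermost cutoff, which is the
# infra-red cut-off keeping |p^2 - E| away from 0.
assert np.allclose(sum(eta_j), 1.0 - eta_hat(M**(2*(j0+1)) * x), atol=1e-12)
```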
We expect that most of the physics will come from the neighborhood of the singularity $p^{2}=E$ of the free propagator. As an operator in momentum space, $V$ has a kernel $\hat{V}(p,q) \equiv \hat{V}(p-q)$. But since $p$ and $q$ have more or less the same norm, there are only two configurations which give the sum $p-q$ (cf [@FMRT2]).
We can see this in another way. If we make perturbations and integrate on $V$ we will get Feynman graphs with four-legged vertices where the incoming momenta have a fixed norm and must add to zero (or almost zero) because of (approximate) translation invariance. Then the four momenta approximately form a closed parallelogram which, since all four sides have the same length, is a rhombus. This implies that they must be more or less pairwise opposite. Thus the problem looks like a vectorial model because the angular direction of the momentum is preserved.
In order to have this feature more explicit, we decompose the slice $\Sigma^{j} \equiv \mbox{Supp}(\hat{\eta}_{j})$ into $M^{j/2}$ angular sectors. We introduce $\hat{\eta}_{S}$ with
- $\hat{\eta}_{S}$ is an even function in ${\cal C}^{\infty}_{0}(\RR)$ with value in $[0, 1]$
- $\hat{\eta}_{S}$ has its support inside $[-1-\frac{1}{M},
1+\frac{1}{M}]$ and is equal to 1 on $[-1,1]$
- $\hat{\eta}_{S} (1+x) = 1 - \hat{\eta} \left(1+\frac{1}{M}-x \right)$ for $|x| \leqslant \frac{1}{M}$
- the ${\cal L}^{\infty}$ norm of the derivatives of $\hat{\eta}_{S}$ does not grow too fast
Then we define $\theta_{j} = \pi M^{-j/2}$ and construct sectors $S_{\alpha}^{j}$ of angular width $\theta_{j} (1+\frac{1}{M})$ centered around $k_{\alpha} \equiv e^{i\alpha}$ (identifying $\RR^{2}$ and $\CC$), with $\alpha \in \theta_{j} \, \ZZ /_{\textstyle \! M^{\scriptstyle j/2}
\ZZ}$. $$\hat{\eta}_{j} = \sum_{\alpha} (\hat{\eta}_{\alpha}^{j})^{2} \quad
\mbox{where} \quad \left(\hat{\eta}_{\alpha}^{j}\right)^{2}
\left( |p| e^{i\theta}\right)
\equiv \hat{\eta}_{j}(|p|) \;
\hat{\eta}_{S}\! \left(\frac{\theta-\alpha}{2 \theta_{j}}\right)$$
Afterwards, we define the operators $\eta_{\alpha}^{j}$’s by their kernel $$\eta_{\alpha}^{j} (x, y) = \int \! dp \, e^{ip(x-y)}
\hat{\eta}_{\alpha}^{j}(p)$$ They form a positive, self-adjoint partition of identity. $$I = \sum_{j, \alpha} \left(\eta_{\alpha}^{j}\right)^{2}$$
We will map our problem to an operator-valued matrix problem with the following lemma whose proof is quite obvious.
\
Let ${\cal H}$ be a Hilbert space and suppose that we have a set of indices ${\cal I}$ and a partition of unity $$I = \sum_{i \in {\cal I}} \eta_{i}^{2}$$ where $I$ is the identity in ${\cal L}({\cal H})$ and the $\eta_{i}$’s are self-adjoint positive operators.
For all $i \in {\cal I}$, we define $${\cal H}_{i} =\eta_{i} ({\cal H})$$ then ${\cal H}$ and ${\cal L}({\cal H})$ are naturally isomorphic to $\displaystyle \bigoplus_{i \in {\cal I}} {\cal H}_{i}$ and $\displaystyle
{\cal L}\left(\bigoplus_{i \in {\cal I}} {\cal H}_{i} \right)$ thanks to $$\begin{aligned}
x \in {\cal H} &\mapsto& (x_{i})_{i \in {\cal I}} \quad \mbox{ where } x_{i}
= \eta_{i} x \\
A \in {\cal L}({\cal H}) &\mapsto& (A_{ij})_{i, j \in {\cal I}} \quad
\mbox{ where } A_{ij} = \eta_{i} A \eta_{j}\end{aligned}$$
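A minimal finite-dimensional illustration of this correspondence (not part of the original text, with diagonal $\eta_{i}$’s chosen for simplicity): since $\sum_{k} \eta_{k}^{2} = I$, the block map $A \mapsto (\eta_{i} A \eta_{j})$ sends operator products to products of operator-valued matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_idx = 8, 3                              # dim(H) and number of indices in I

# Self-adjoint positive eta_i with sum_i eta_i^2 = identity (diagonal for simplicity)
w = rng.uniform(0.1, 1.0, size=(n_idx, d))
w /= np.sqrt((w**2).sum(axis=0))
eta = [np.diag(w[i]) for i in range(n_idx)]
assert np.allclose(sum(e @ e for e in eta), np.eye(d))

def blocks(X):
    """Operator-valued matrix X_{ij} = eta_i X eta_j."""
    return [[eta[i] @ X @ eta[j] for j in range(n_idx)] for i in range(n_idx)]

A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
A_ij, B_ij, AB_ij = blocks(A), blocks(B), blocks(A @ B)

# (AB)_{ij} = sum_k A_{ik} B_{kj}, because sum_k eta_k^2 = I
for i in range(n_idx):
    for j in range(n_idx):
        assert np.allclose(AB_ij[i][j],
                           sum(A_ij[i][k] @ B_ij[k][j] for k in range(n_idx)))
```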
In our case, we define ${\cal I}_{j}$ as the set of sectors in the slice $j$ and ${\cal I} = \cup {\cal I}_{j}$ so that we can construct the operator-valued matrices $\mbox{\bf V}_{\Delta}$’s as $$\left(\mbox{\bf V}_{\Delta}\right)_{\alpha \beta}^{jk} =
\eta_{\alpha}^{j} V_{\Delta} \eta_{\beta}^{k}$$
For a slice $\Sigma^{l}$, we define the enlarged slice $$\bar{\Sigma}^{l} = \bigcup_{m \geqslant l} \Sigma^{m}$$ Then an angular sector $S_{\alpha}^{l}$ of $\Sigma^{l}$ has a natural extension into an angular sector $\bar{S}_{\alpha}^{l}$ of $\bar{\Sigma}^{l}$ and we have the corresponding operator $\bar{\eta}_{\alpha}^{l}$.
Size of the V$^{jk}_{\Delta}$’s
-------------------------------
Let $\mbox{\bf V}_{\Delta}^{jk}$ be defined by $$\left(\mbox{\bf V}_{\Delta}^{jk}\right)_{\alpha \beta}^{lm} =
\delta_{jl} \delta_{km} \eta_{\alpha}^{l} V_{\Delta} \eta_{\beta}^{m}$$ where the $\delta$’s are Kronecker deltas. We can remark that $$\left(V^{jk}_{\Delta}\right)^{\dag}=V^{kj}_{\Delta} \quad \Rightarrow \quad
\|V^{jk}_{\Delta}\| = \|V^{kj}_{\Delta}\|$$
Then we have the following large deviation result
\[thprobV\] \
There exist constants $C$ and $C_{\varepsilon}(\varepsilon)$ such that for all $\Lambda$, $j \leqslant k$, $a\geqslant 1$ and $\Delta \in \mbox{I}\! \mbox{D}_{k}$ $$I\!\!P_{\Lambda} \left(\| \mbox{\bf V}_{\Delta}^{jk}\| \geqslant
a C M^{-j/2}\right) \leqslant C_{\varepsilon}
e^{-(1-\varepsilon) a^{2} M^{(\frac{k}{2}- \frac{j}{3})}}$$ where $C_{\varepsilon}$ behaves like $1/\varepsilon$.
[*Proof*]{}\
We use the bound $$\| \mbox{\bf V}^{jk}_{\Delta} \|^{2m_{0}} \leqslant
\mbox{Tr} \left[\left(\mbox{\bf V}^{jk}_{\Delta}\right)
\left(\mbox{\bf V}^{jk}_{\Delta}\right)^{\dag} \right]^{m_{0}}$$ where $$\begin{aligned}
\left(\mbox{\bf A}^{\dag}\right)_{\alpha \beta}^{lm} &=&
\left(\mbox{\bf A}_{\beta \alpha}^{ml} \right)^{\dag} \\
\mbox{Tr \bf A} &=& \sum_{l, \alpha}
\mbox{tr \bf A}_{\alpha \alpha}^{ll}
= \sum_{l, \alpha} \int \!
\mbox{\bf A}_{\alpha \alpha}^{ll} (x,x)
\, dx\end{aligned}$$ Thus for any $m_{0}$ $$\begin{aligned}
I\!\!P \left(\| \mbox{\bf V}^{jk}_{\Delta} \|
\geqslant a C M^{-j/2} \right) &=&
\int \bbbone_{\left(\| \mbox{\bf V}^{jk}_{\Delta} \| \geqslant a
C M^{-j/2}
\right)} d\mu_{\xi_{\Lambda}} (V) \\
&\leqslant& \frac{1}{(a C M^{j/2})^{2m_{0}}} \int \|
\mbox{\bf V}^{jk}_{\Delta} \|^{2m_{0}} d\mu_{\xi_{\Lambda}} (V) \\
&\leqslant& \frac{1}{(a C M^{-j/2})^{2m_{0}}} \!
\int \mbox{Tr} \left[\left(\mbox{\bf V}^{jk}_{\Delta}\right)
\left(\mbox{\bf V}^{jk}_{\Delta}\right)^{\dag} \right]^{m_{0}}
d\mu_{\xi_{\Lambda}} (V)\end{aligned}$$
Let us note $${\cal I}_{m_{0}} \equiv
\mbox{Tr} \left[\left(\mbox{\bf V}^{jk}_{\Delta}\right)
\left(\mbox{\bf V}^{jk}_{\Delta}\right)^{\dag} \right]^{m_{0}}
\!\!\!\! =
\mbox{tr} \left[\sum_{\alpha} (\eta_{\alpha}^{j})^{2} \,
V_{\Delta} \sum_{\beta} (\eta_{\beta}^{k})^{2} \,
V_{\Delta} \right]^{m_{0}}$$ We have the following lemma
\[lemproba\] \
There exists a constant $C$ such that for all $m_{0}$ we have the following bound $$< {\cal I}_{m_{0}}> \leqslant C^{2 m_{0}} M^{-jm_{0}} \left[1 +
M^{-m_{0}(\frac{k}{2}-\frac{j}{3})} m_{0}! \right]$$
This lemma is the core of the demonstration, but its proof is quite long, so we postpone it until the end of this part. It leads to $$I\!\!P \left(\| \mbox{\bf V}^{jk}_{\Delta} \| \geqslant a C M^{-j/2}
\right) \leqslant a^{-2m_{0}} \left[1 + M^{-m_{0}(\frac{k}{2} -
\frac{j}{3})} m_{0}!\right]$$ We take $m_{0}= a^{2} M^{\frac{k}{2} - \frac{j}{3}}$ and use the rough bound $$n! \leqslant n^{(n+1)} e^{-(n-1)}$$ to get the desired estimate. $\blacksquare$
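The rough factorial bound invoked in the last step is elementary and can be checked directly (a simple sanity check, not needed for the argument):

```python
import math

# n! <= n^{n+1} e^{-(n-1)}, checked in logarithmic form to avoid overflow
for n in range(1, 301):
    assert math.lgamma(n + 1) <= (n + 1) * math.log(n) - (n - 1) + 1e-9
```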
In fact, in the proof of lemma \[lemproba\] it is easy to see that $\eta_{j}$ and $\eta_{k}$ can be replaced by $\bar{\eta}_{j}$ and $\bar{\eta}_{k}$ with the same result. Furthermore, thanks to the locality of $V$ and to the decay of the $\eta$’s , the sum of several $V_{\Delta}^{jk}$’s is more or less an orthogonal sum. More precisely, for any cube $\Delta_{0}$ we define $D_{m}(\Delta_{0})$ as the set of cubes of $I\!\!D_{m}$ which are contained in $\Delta_{0}$. Then given two sets $\Omega_{1}$ and $\Omega_{2}$ and their smoothed characteristic functions $\chi_{\Omega_{1}}$ and $\chi_{\Omega_{2}}$ we have
\[lemortho\] \
For any $n$ and $C$ there is a constant $C_{n}$ such that for any $j \leqslant k$ and $\Delta_{0} \in I\!\!D_{k}$ $$\begin{aligned}
\|\chi_{\Omega_{1}} \bar{\eta}_{j} V_{\Delta_{0}} \bar{\eta}_{k}
\chi_{\Omega_{2}} \|
&\leqslant&
\frac{C_{n}}{\left[1 + M^{-nj} d(\Omega_{1},\Delta_{0})^{n}
\right]\left[1 + M^{-nk} d(\Omega_{2},\Delta_{0})^{n} \right]}
\nonumber \\
&& \quad \max \left[ \|\bar{\eta}_{j}
V_{\Delta_{0}} \bar{\eta}_{k}\|,
\sup_{{m<j} \atop {n<k}} \sup_{\Delta \in D_{m \wedge n}(\Delta_{0})}
\!\! M^{-C(j-m+k-n)} \|\eta_{m} V_{\Delta} \eta_{n}\| \right] \end{aligned}$$ where $m \wedge n = \min(m, n)$.
[*Proof*]{}\
We introduce $\chi_{\bar{\Delta}_{0}}$ a ${\cal C}^{\infty}_{0}$ function equal to 1 on the support of $V_{\Delta_{0}}$ then we write $$\chi_{\Omega_{1}} \bar{\eta}_{j} V_{\Delta_{0}} \bar{\eta}_{k}
\chi_{\Omega_{2}} = \chi_{\Omega_{1}} \bar{\eta}_{j}
\chi_{\bar{\Delta}_{0}} \bar{\eta}_{j} V_{\Delta_{0}} \bar{\eta}_{k}
\chi_{\bar{\Delta}_{0}} \bar{\eta}_{k} \chi_{\Omega_{2}}
+ \sum_{{m<j} \atop {n<k}} \chi_{\Omega_{1}} \bar{\eta}_{j}
\chi_{\bar{\Delta}_{0}} \eta_{m} V_{\Delta_{0}} \eta_{n}
\chi_{\bar{\Delta}_{0}} \bar{\eta}_{k} \chi_{\Omega_{2}}$$
Afterwards we introduce the sectors and the matrix formulation and we notice that when we want to compute for instance the norm of the function $\eta^{n}_{\gamma} \chi_{\bar{\Delta}_{0}} \bar{\chi}^{k}_{\alpha}
\chi_{\Omega_{2}}$, momentum conservation tells us that we can convolve $\chi_{\bar{\Delta}_{0}}$ by a function which is restricted in momentum space to the neighborhood of $S_{\gamma}^{n} - \bar{S}^{k}_{\alpha}$. In this way it is quite easy to see that we can extract at the same time spatial decay and momentum conservation decay. $\blacksquare$
Proof of theorem [\[thprob2d\]]{}
---------------------------------
Let $\Delta_{0} \in I\!\! D_{j_{0}}$, we call $X_{C_{x}, a}$ and $Y_{C_{y}, a}$ the events $$\begin{aligned}
X_{C_{x}, a} &=& \left[ \exists j \leqslant k, \exists \Delta
\in D_{k}(\Delta_{0})
\mbox{ s. t. } \| \bar{\eta}_{j} V_{\Delta} \bar{\eta}_{k} \|
\geqslant a C_{x} M^{-j/2} M^{\frac{j_{0}-k}{4}} \right] \\
Y_{C_{y}, a} &=& \left[\| D_{\Lambda, \mu} \eta_{E} V_{\Delta_{0}} \eta_{E}
D_{\Lambda, \mu}\| \geqslant a C_{y} j_{0} M^{j_{0}/2} \right]\end{aligned}$$ We will denote by $\bar{Z}$ the complementary event of $Z$.
Theorem \[thprobV\] tells us that $$\begin{aligned}
I\!\! P(X_{C, a}) &\leqslant& \sum_{k} \sum_{j \leqslant k}
\sum_{\Delta \in D_{k}(\Delta_{0})} C' e^{-\frac{3}{4} a^{2}
M^{\frac{j_{0} -k}{2}} M^{(\frac{k}{2} - \frac{j}{3})}} \\
&\leqslant& \sum_{k} O(1) M^{2(j_{0}-k)} e^{-\frac{3}{4} a^{2}
M^{\frac{j_{0}}{6}} M^{\frac{j_{0}-k}{3}}} \\
&\leqslant& C_{1} e^{-\frac{1}{2} a^{2} M^{\frac{j_{0}}{6}}}\end{aligned}$$ One can see that thanks to lemma \[lemortho\], $\bar{X}_{C, a}$ implies $\bar{Y}_{O(1) C, a}$. Thus if we call $C_{0} = O(1)
C$ $$I\!\! P(Y_{C_{0}, a}) \leqslant I\!\! P(X_{C, a}) \leqslant
C_{1} e^{-\frac{1}{2} a^{2} M^{\frac{j_{0}}{6}}}$$
Furthermore, if we work with respect to $\bar{X}_{C, a}$ which is stronger than $\bar{Y}_{C_{0}, a}$ everything goes as if one had $$\| \chi_{\Delta_{1}} D_{\Lambda, \mu} V_{\Delta_{2}} D_{\Lambda,
\mu} \chi_{\Delta_{3}} \| \leqslant \frac{C_{n_{1}} \|D_{\Lambda, \mu}
V_{\Delta_{2}} D_{\Lambda, \mu}\|}{\left[1 + L^{-1}
d_{\Lambda}(\Delta_{1}, \Delta_{2})\right]^{n_{1}} \left[1 + L^{-1}
d_{\Lambda}(\Delta_{2}, \Delta_{3})\right]^{n_{1}}}$$ Thus we will be able to apply theorem \[thRE\] with an effective coupling constant $$\lambda_{\mbox{\scriptsize eff}} = \lambda j_{0} M^{j_{0}/2}$$ and a length scale $L = M^{j_{0}}$.
If we want to make perturbations it is clever to perturb around the expected Green’s function without cut-off, [*i.e.*]{} we write $$\frac{1}{p^{2}-E - i \mu +\lambda V} = \frac{1}{p^{2}-E - i \mu_{0} +\lambda
V + i \delta \mu}$$ where $\mu_{0}$ is the expected contribution of the tadpole given by the self-consistent condition $$\mu_{0} = \lambda^{2} \mbox{Im} \int \frac{1}{p^{2}-E-i \mu_{0}} dp$$
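For orientation only (this is not part of the argument), the self-consistent condition can be solved by a simple fixed-point iteration; in $d=2$ with an ultra-violet cut-off one finds $\mu_{0}$ of order $\lambda^{2}$, consistent with the scale of the expected imaginary part. All numerical values below (coupling, energy, cut-off) are illustrative assumptions.

```python
import numpy as np

lam, E, p_uv = 0.05, 1.0, 10.0               # coupling, energy, UV cut-off (assumed values)
p = np.linspace(0.0, p_uv, 200001)
dp = p[1] - p[0]

mu = lam**2                                  # starting guess of the expected order
for _ in range(100):
    integrand = 2.0 * np.pi * p / (p**2 - E - 1j * mu)   # d^2p = 2 pi p dp
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1])) * dp
    mu_new = lam**2 * integral.imag
    if abs(mu_new - mu) < 1e-12:
        break
    mu = mu_new

print(mu)                                    # of order lambda^2 (here roughly pi^2 lambda^2)
```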
Afterwards, when we compute the perturbative expansion, the tadpole with cut-off will eat up a fraction $\lambda^{2} M^{j} \sim O(|\log \lambda|^{-2})$ of the counter-term so that $$G \sim \frac{1}{p^{2}-E-i \eta_{E} O(\lambda^{2} |\log \lambda|^{-2})
\eta_{E}}$$ $\blacksquare$
In fact since the tadpole has a real part, it implies that we should also renormalize the energy by a shift $$\delta E = O \left( \lambda^{2} \log [\mbox{UV cut-off scale}] \right)$$
Proof of lemma [\[lemproba\]]{}
-------------------------------
We will write $J_{\alpha} \equiv (\eta_{\alpha}^{j})^{2}$, $K_{\beta} \equiv (\eta_{\beta}^{k})^{2}$, and use $X$ to stand for either $J$ or $K$.
We can perform the integration on $V_{\Delta}$ so that $\left<{\cal I}_{m_{0}}
\right>$ appears as a sum of Feynman graphs.
$$\left< {\cal I}_{m_{0}} \right> = \sum_{{\alpha_{1}
\ldots \alpha_{m_{0}}} \atop {\beta_{1} \ldots \beta_{m_{0}}}}
\raisebox{-2cm}{\psfig{figure=figtracevm.eps,height=3cm}}
= \sum_{\cal G} {\cal A}({\cal G})$$
where a solid line stands for a $J_{\alpha_{i}}$, a dashed line stands for a $K_{\beta_{i}}$ and a wavy line represents the insertion of a $V_{\Delta}$. In the following, we will prove the theorem in infinite volume with $V$ having a covariance $\delta$ in order to have shorter expressions. The proof can then easily be extended to short-range covariances and finite volume, except for the first few slices where one must pay attention to the ultra-violet cut-off; but this is irrelevant because it will cost only a factor $O(1)$.
The integration on $V_{\Delta}$ consists in contracting the wavy lines together, then both ends are identified and bear an extra $\chi_{\Delta}$ which restricts their position.
The $X$’s will stand as propagators and the contraction of the $V_{\Delta}$’s will give birth to 4-legged vertices.
### Momentum conservation at vertices
First, we notice that if we denote by $\bar{\alpha}$ the sector opposite to $\alpha$, then $$X_{\alpha}(x,y) = X_{\bar{\alpha}}(y,x)$$
Then we put an orientation on each propagator, so that if a $X_{\alpha}$ goes from a vertex at $z$ to a vertex at $z'$ it gives a $X_{\alpha}(z, z') = X_{\bar{\alpha}}(z', z)$, [*i.e.*]{} it is equivalent to have an incoming $X_{\bar{\alpha}}$ at $z$ and an incoming $X_{\alpha}$ at $z'$. Now, for a given vertex with incoming propagators $X_{\alpha_{1}}$, $X_{\alpha_{2}}$, $X_{\alpha_{3}}$, $X_{\alpha_{4}}$, the spatial integration over its position gives a term of the form $$\Gamma_{\alpha_{1} \ldots \alpha_{4}} (x_{1}, x_{2}, x_{3}, x_{4}) = \int
\! X_{\alpha_{1}}(x_{1},z) X_{\alpha_{2}}(x_{2},z)
X_{\alpha_{3}}(x_{3},z) X_{\alpha_{4}}(x_{4},z)
\chi_{\Delta}(z) \, dz$$
In momentum space, it becomes $$\Gamma_{\alpha_{1} \ldots \alpha_{4}} (p_{1}, \ldots, p_{4}) =
X_{\alpha_{1}}(p_{1}) \ldots X_{\alpha_{4}}(p_{4})
\int \! \chi_{\Delta}(k) \,
\delta(p_{1}+\ldots +p_{4}-k) \, dk$$ where we use the same notation for a function and its Fourier transform.
In $x$-space, $\chi_{\Delta}$ is a ${\cal C}^{\infty}_{0}$ function with support inside a box of side $O(1) M^{j_{0}}$; this means that in momentum space it is a ${\cal C}^{\infty}$ function with fast decay over a scale $M^{-j_{0}}$. Thus for all $n$ there exists $C_{\chi}(n)$ such that $$\left| \chi_{\Delta}(k)\right| \leqslant
\frac{C_{\chi}(n) M^{2j_{0}}}{\left(1+M^{2j_{0}} |k|^{2}\right)^{n+1}}$$
We make a decomposition of $\chi_{\Delta}$ $$\chi_{\Delta}(k) = \sum_{s=0}^{j_{0}/2}
\chi_{s}(k)$$ where $\chi_{0}$ has its support inside the ball of radius $2M M^{-j_{0}}$, $\chi_{j_{0}/2} (\equiv \chi_{\infty})$ has its support outside the ball of radius $M^{j_{0}/2} M^{-j_{0}}$ and $\chi_{s}$ forces $|k|$ to be in the interval $[M^{s} M^{-j_{0}}; 2M M^{s} M^{-j_{0}}]$. In this way, we can decompose each vertex $v$ into a sum of vertices $v_{s}$, where a vertex $v_{s}$ forces momentum conservation up to $O(1) M^{s} M^{-j_{0}}$ and has a factor coming from $$\left| \chi_{s}(k) \right| \leqslant C'_{\chi}(n)M^{-sn} \times M^{-sn}$$ We split the factor in order to have a small factor per vertex and yet retain some decay to perform the sum on $s$.
### Tadpole elimination
A graph will present tadpoles when two neighboring $V_{\Delta}$’s contract together, thus yielding an $X(z,z)$. Suppose that we have a $j$-tadpole; then at the corresponding vertex we will have something of the form $$\int \! \! dz \, \chi_{u}(z) \, K_{\beta} (x,z) \, J_{\alpha}(z,z)
\, K_{\beta'}(z,y)$$ Between the two $K$’s, momentum will be preserved up to $2M \, M^{u} M^{-j_{0}}$, which in most cases is much smaller than $M^{-j_{0}/2}$, so that $\beta'$ is very close to $\beta$. Then we would like to forget about $J_{\alpha}$ by summing over $\alpha$ and see the whole thing as a kind of new $K_{\beta}$. Now if by chance the new $K_{\beta}$ makes a tadpole we will erase it, and so on recursively.
First, we define the propagators as propagators (or links) of order $0$ $$\begin{aligned}
^{0}\! J^{(0,0)}_{\alpha \alpha'} &=& \delta_{\alpha \alpha'}
J_{\alpha} \\
^{0}\! K^{(0,0)}_{\beta \beta'} &=& \delta_{\beta \beta'}
K_{\beta} \end{aligned}$$
Then we define links of order $1$ $$\begin{aligned}
^{1}\! K^{(p,0)}_{\beta \beta'}(x,y) &=& \sum_{\beta_{1} \ldots
\beta_{p-1}} \sum_{{\alpha_{1} \ldots \alpha_{p}} \atop
{\alpha'_{1} \ldots \alpha'_{p}}} \int \! dz_{1} \ldots dz_{p} \,
K_{\beta}(x, z_{1}) \ ^{0}\! J^{(0,0)}_{\alpha_{1} \alpha'_{1}}
(z_{1}) \chi_{u_{1}}(z_{1}) \nonumber \\
&& \quad \quad K_{\beta_{1}}(z_{1}, z_{2}) \ldots \ ^{0} \!
J_{\alpha_{p} \alpha'_{p}}^{(0,0)}
(z_{p}) \chi_{u_{p}}(z_{p}) K_{\beta'}(z_{p}, y)\end{aligned}$$ where we do not write the momentum conservation indices for brevity. We have a similar definition for $^{1}\! J^{(0,q)}_{\alpha \alpha'}$ (obtained by erasing $q$ $K$-tadpoles of order 0).
We will write $X^{(),t}$ to indicate that momentum is preserved up to $t M^{-j_{0}}$ between the leftmost and the rightmost $X$’s, or $X^{(),\infty}$ if momentum conservation is worse than $M^{-j_{0}/2}$.
Now, we can iterate the process in an obvious way. Yet, we must add an important restriction: we will erase an $X^{(),\infty}$ tadpole only if it is attached to a $v_{\infty}$ vertex.
\
There exist constants $C_{1}$ and $C_{2}$ (independent of $j$ and $k$) such that for any tadpole obtained by erasing a total of $p$ $J$-tadpoles and $q$ $K$-tadpoles we have the following bound $$\left|X_{\gamma \gamma'}^{(p,q)}(z, z)\right| \leqslant C_{1} C_{2}^{p+q}
M^{-pj-qk} M^{-3x/2} {\cal F}(X) \mbox{, } x = \left\{
\begin{array}{l}
\mbox{$j$ if $X=J$} \\
\mbox{$k$ if $X=K$}
\end{array}
\right.$$
where ${\cal F}(X)$ is a small factor coming from the various $\chi_{s}$ that appear in the expression of $X$. Thus, ${\cal F}$ gets smaller as momentum conservation gets worse.
[*Proof*]{}\
First we will prove this result when momentum is well preserved, [*i.e.*]{} up to $M^{-j_{0}/2}$ at worst; then we will see what has to be adapted when momentum conservation is bad.
The proof is by induction on the order of the tadpole. We define $C_{1}$, $C_{2}$ and $C_{3}$ such that $$\begin{aligned}
|X_{\gamma}(x, y)| &\leqslant& C_{1} M^{-3x/2} \\
\sup_{x} \int \! dy \, \left| X_{\gamma}(x,y) \right| &\leqslant& C_{3} \\
C_{2} &=& 9 C_{1} C_{3}\end{aligned}$$ It is easy to see that for level 0 tadpoles $$\left|^{0}\! X_{\gamma \gamma'} (z, z)\right| \leqslant C_{1} M^{-3x/2}
{\cal F}(X)$$
Now, consider $^{m}\! J_{\alpha \alpha'}^{(p,q)}$ a $J$-tadpole of order $m$ and weight $(p, q)$ obtained by erasing $n$ $K$-tadpoles of order $m-1$ and weights $(p_{1}, q_{1})$, …, $(p_{n}, q_{n})$. We have $$p = p_{1}+ \ldots +p_{n} \quad \quad q = q_{1}+ \ldots + q_{n} + n$$ The expression of $^{m}\! J$ will be of the form $$\begin{aligned}
^{m}\! J_{\alpha \alpha'}^{(p,q)} (z, z) &=& \sum_{\alpha_{1} \ldots
\alpha_{n-1}} \sum_{{\beta_{1} \ldots \beta_{n}} \atop
{\beta'_{1} \ldots \beta'_{n}}} \int \! dz_{1} \ldots dz_{n} \,
J_{\alpha}(z, z_{1}) \ ^{m-1}\! K_{\beta_{1} \beta'_{1}}^{(p_{1},
q_{1})}(z_{1}, z_{1}) \nonumber \\
&& \hspace{-2em} \chi_{u_{1}}(z_{1}) \, J_{\alpha_{1}}(z_{1}, z_{2})
\ldots K_{\beta_{n} \beta'_{n}}^{(p_{n}, q_{n})}(z_{n}, z_{n}) \,
\chi_{u_{n}}(z_{n}) J_{\alpha'}(z_{n}, z)\end{aligned}$$
Since we supposed that we have momentum conservation up to $M^{-j_{0}/2}$, the $\alpha_{i}$’s will be either $\alpha_{i-1}$ or one of its neighbors and $\beta'_{i}$ will be either $\beta_{i}$ or one of its neighbors. Thus the sum on sector attribution will give a factor $3^{2n-1} M^{nk/2} \leqslant
9^{n} M^{nk/2}$.
We have $n+1$ $J$’s but only $n$ spatial integrations because we have a tadpole. This gives a factor $C_{3}^{n} C_{1} M^{-3j/2}$ (we forget about the momentum conservation factor for the moment).
Finally the $^{m-1}\! K$’s bring their factor so that $$\begin{aligned}
\left|^{m}\! J_{\alpha \alpha'}^{(p,q)} (z, z) \right| &\leqslant &
9^{n} M^{nk/2} C_{3}^{n} C_{1} M^{-3j/2} \nonumber \\
&& \left(9C_{1} C_{3}\right)^{\sum p_{i}+ \sum q_{i}}
M^{-j \sum p_{i} -k \sum q_{i}}
\left(C_{1} M^{-3k/2}\right)^{n} {\cal F}(X) \nonumber \\
&\leqslant& C_{1} \left(9 C_{1} C_{3}\right)^{\sum p_{i}+ \sum q_{i} + n}
M^{-j \sum p_{i} -k (\sum q_{i} +n)} M^{-3j/2} {\cal F}(X)\end{aligned}$$ which is precisely what we want. Then we can do the same for the $^{m}\! K$’s.
Now we must consider the cases with bad momentum conservation. First, let us suppose that momentum conservation is bad overall for $^{m}\! J$ but was good for the $^{m-1}\! K$’s; then the previous argument will work, except if there are some $v_{\infty}$ vertices. In this case we will have to pay a factor $M^{j/2}$ to find the following $\alpha_{i}$ instead of a factor 3. But from the corresponding $\chi_{\infty}$ we have a small factor $$\frac{1}{1+ M^{j_{0}N'/2}}$$ from which we can take a fraction to pay the $M^{j/2}$ and retain a small factor for ${\cal F}(X)$.
Finally, if a $^{m-1}\! K$ has a bad momentum conservation it is necessarily attached to a $v_{\infty}$ vertex (otherwise we would not erase it). In this case we must pay a factor $M^{j/2} M^{k/2}$ (to find $\beta'_{i}$ and $\alpha_{i}$) but again we can take a fraction of the factor of $\chi_{\infty}$ to do so.
When tadpole elimination has been completed, we have erased $t_{j}$ $J$-tadpoles and $t_{k}$ $K$-tadpoles and we are left with $m_{0}' = m_{0} - t_{j} - t_{k}$ vertices linked together by $m'_{0}$ $J$’s and $m'_{0}$ $K$’s (a tadpole which has not been erased being seen as a propagator).
For a $X_{\alpha \alpha'}^{(p, q)}(x,y)$, it is quite easy to see that to integrate on $y$ with fixed $x$ amounts more or less to the same problem for $O(1)^{p+q} X_{\alpha}$ and that to find $\alpha'$ knowing $\alpha$ costs a factor $O(1)^{p+q}$.
### Sector conservation at the vertex
\[sconserv\] \
*Let ($\bar{S}_{\alpha_{1}}^{l}$, …, $\bar{S}_{\alpha_{4}}^{l}$) be a quadruplet of sectors of the enlarged slice $\bar{\Sigma}^{l}$ and $0 \leqslant r \leqslant O(1)M^{l/2}$ such that there are $p_{1} \in \bar{S}^{l}_{\alpha_{1}}$, …, $p_{4} \in \bar{S}^{l}_{\alpha_{4}}$ verifying $$|p_{1}+ \ldots + p_{4}| \leqslant r M^{-l}$$*
Then we can find $\{\alpha, \alpha', \beta, \beta'\} =
\{\alpha_{1}, \ldots, \alpha_{4}\}$ satisfying $$\left\{
\begin{array}{lcr}
|\alpha' - \bar{\alpha}| &\leqslant& (a \sqrt{r}+ b) \, M^{-l/2} \\
|\beta' - \bar{\beta}| &\leqslant& (a \sqrt{r} + b) \, M^{-l/2}
\end{array}
\right.$$ where $a$ and $b$ are some constants independent of $l$ and $r$.
[*Proof*]{}\
If we can prove the result for $l \geqslant O(1)$, then we will be able to extend it to any $l$, provided perhaps we take slightly bigger $a$ and $b$. Therefore we assume that this is the case in the following.
We define $(\alpha, \alpha', \beta, \beta')$ by
- $\{\alpha, \alpha', \beta, \beta'\} = \{ \alpha_{1}, \ldots,
\alpha_{4} \}$
- $\alpha = \alpha_{1}$
- $\displaystyle |\alpha -\beta| = \min_{i \in \{2, 3, 4\}}
|\alpha -\alpha_{i}|$
- $|\bar{\alpha} - \alpha'| \leqslant |\bar{\alpha} - \beta'|$
Then, if $|\alpha -\beta| \leqslant |\alpha' - \beta'|$ we exchange $(\alpha, \beta)$ and $(\alpha', \beta')$.
A sector $\bar{S}^{l}_{\gamma}$ is included in a tube of center $k_{\gamma} \equiv e^{i\gamma}$, whose direction is orthogonal to the direction $\gamma$, and of size $$\left\{
\begin{array}{lcl}
\mbox{length}&:& L= \pi M^{-l/2} (1+{\textstyle \frac{2}{M}}) \\
\mbox{width}&:&2 M^{-l}
\end{array}
\right.$$
We define $$\begin{aligned}
k_{\alpha \beta} &=& k_{\alpha} + k_{\beta} = 2 \cos \left(
\frac{\alpha-\beta}{2}\right) e^{i\frac{\alpha+\beta}{2}} \equiv
2 \cos x \, e^{i\theta} \equiv r e^{i\theta} \\
k_{\bar{\alpha}' \bar{\beta}'} &=& - k_{\alpha' \beta'} \equiv 2 \cos
\bar{x}' \, e^{i\bar{\theta}'}\end{aligned}$$
If we can prove that $$\left\{
\begin{array}{lcr}
|\bar{x}' - x| &\leqslant& (a' \sqrt{r}+ b') \, M^{-l/2} \\
|\bar{\theta}' - \theta| &\leqslant& (a' \sqrt{r} + b') \, M^{-l/2}
\end{array}
\right.$$ then we will be able to conclude, with $a=2a'$ and $b=2b'$.
It is easy to check that by construction, we have
- $0 \leqslant \bar{x}' \leqslant x$
- $|\alpha - \beta| \leqslant \frac{2 \pi}{3} \Rightarrow \cos x
\geqslant \frac{1}{2}$
We have a trivial bound $$|k_{\alpha \beta} - k_{\bar{\alpha}' \bar{\beta}'}| \leqslant 2 \tan
\left|\frac{\bar{\theta}' - \theta}{2}\right| \leqslant r M^{-l} + 2L
+ 4 M^{-l} \equiv R$$
Therefore $$|\bar{\theta}' - \theta| \leqslant 2 \tan \left|\frac{\bar{\theta}' -
\theta}{2}\right| \leqslant R \leqslant O(1) M^{-l/2}$$
We can see that $\theta$ is very well conserved.
If $\sin x \leqslant (a_{1} \sqrt{r} + b_{1}) M^{-l/2}$ then $|x- \bar{x}'| \leqslant x \leqslant (a_{2} \sqrt{r} + b_{2}) M^{-l/2}$.
Otherwise, let us remark that $\bar{S}^{l}_{\alpha} + \bar{S}^{l}_{\beta}$ is at a distance at most $2 M^{-l}$ from a rhombus $R_{\alpha \beta}$ of center $k_{\alpha \beta}$ and of diagonals $$\left\{
\begin{array}{l}
2 L \sin x \quad \mbox{in the direction {\bf u}}_{r} \equiv
\frac{\alpha + \beta}{2} \\
2 L \cos x \quad \mbox{in the direction {\bf u}}_{\theta} \equiv
\frac{\alpha + \beta}{2} + \frac{\pi}{2}
\end{array}
\right.$$
Then, $R_{\alpha \beta} - R_{\bar{\alpha}' \bar{\beta}'}$ is at a distance at most $4 M^{-l}$ from a rectangle ${\cal R}$ of center $k_{\alpha \beta} -
k_{\bar{\alpha}' \bar{\beta}'}$ and of sides $$\left\{
\begin{array}{l}
L_{r} = 2L \left( \sin x + \sin \bar{x}' \cos |\bar{\theta}' - \theta| +
\cos \bar{x}' \sin |\bar{\theta}' - \theta| \right)
\mbox{ in {\bf u}}_{r} \\
L_{\theta} = 2L \left( \cos x +\cos \bar{x}' \cos |\bar{\theta}' - \theta|
+ \sin \bar{x}' \sin |\bar{\theta}' -\theta| \right)
\mbox{ in {\bf u}}_{\theta}
\end{array}
\right.$$
Since $|\bar{\theta}' - \theta| \leqslant O(1) M^{-l/2}$, we have $|\cos
(\bar{\theta}' - \theta) -1| \leqslant O(1) M^{-l}$. We define a $z$ axis in the direction $\mbox{\bf u}_{r}$. $(k_{\bar{\alpha}' \bar{\beta}'} -
k_{\alpha \beta})$ has a $z$ coordinate $2 (\cos \bar{x}' - \cos x) +
O(1) M^{-l}$. This leads to the condition $$\begin{aligned}
2 |\cos \bar{x}' - \cos x| &\leqslant& r M^{-l } +
2L (\sin x + \sin \bar{x}') +b_{3} M^{-l} \\
&\leqslant& (r + b_{3}) M^{-l} + 4 L \sin x \end{aligned}$$
Let us note that $|\cos (x-u) - \cos x|$ is an increasing function of $u$ and that we have $$\cos (x-u) - \cos x = \sin x \, u - \frac{1}{2} \cos x \, u^{2} +
u^{3} \varepsilon(u) \quad \mbox{with } |\varepsilon(u)| \leqslant
\frac{1}{6}$$ We take $u = (\sqrt{r} +b_{4}) M^{-l/2} + 2L \equiv (\sqrt{r} + b'_{4})
M^{-l/2} \leqslant O(M^{-l/4})$. $$\begin{aligned}
|\cos (x - u) - \cos x| &\geqslant&
\sin x \, u - \left(\frac{1}{2} + \frac{u}{6} \right) u^{2} \\
&\geqslant& 2L \sin x + \sin x (\sqrt{r} +b_{4}) M^{-l/2}
- (\sqrt{r} +b'_{4})^{2} M^{-l} \end{aligned}$$ Since we are in the case $\sin x \geqslant (a_{1} \sqrt{r} +b_{1})
M^{-l/2}$, for $a_{1}$ and $b_{1}$ large enough we will have $$2 |\cos (x-u) - \cos x| \geqslant (r + b_{3}) M^{-l} + 4L \sin x$$
Therefore we must have $|\bar{x}' -x| \leqslant u \leqslant (\sqrt{r}+b'_{4})
M^{-l/2}$ which allows us to conclude. $\blacksquare$
### Size of a graph
The previous section shows that, at each vertex, momenta come approximately by pairs of opposite sectors. Thus, for all the vertices which haven’t been erased by the tadpole elimination process, we can choose, at the cost of a factor $3$ per vertex, how to pair the sectors. Then we split each vertex into two half-vertices according to this pairing. We represent this graphically as
This gives $3^{m'_{0}}$ (split) graphs that we will consider as our basic graphs in the following.
A graph is decomposed into a number of [*momentum cycles*]{} connected together by wavy lines. We will follow those cycles to fix momentum sectors. Finding the enlarged sectors (of level $j$) will cost a factor $M^{j/2}$ per cycle times a constant per vertex. Then we will pay an extra $M^{(k-j)/2}$ for each $K$ propagator to find its sector.
We define $c$ as the total number of momentum cycles, which we decompose into $t$ tadpoles, $b$ bubbles (with 2 vertices) and $l$ large cycles (with 3 or more vertices). We have $$\begin{aligned}
t + b + l &=& c \\
\label{momcycle} t + 2b + 3l &\leqslant& 2 m'_{0}\end{aligned}$$ and the sector attribution costs $$\label{Cpainting}
{\cal A}_{1} = C^{m_{0}} M^{cj/2} M^{m'_{0} (k-j)/2}$$ Notice that the constant has an exponent $m_{0}$ because of the tadpole elimination process.
The spatial integration of the vertices will be made with the short $J$ links whenever possible. We can decompose each graph into $J$-cycles linked together by $K$ links (because there are $2$ incoming $J$’s at each vertex); this allows us to integrate all the vertices but one per cycle with a $J$ link. The total cost is (noticing that the last vertex is integrated in the whole cube $\Delta$) $$\label{Cintegrate}
{\cal A}_{2} = C^{m_{0}} M^{3(m'_{0} - c')j/2}
M^{3 (c'-1) k/2} M^{2k}$$ where $c'$ is the total number of short cycles that we decompose into $t'$ short tadpoles, $b'$ short bubbles and $l'$ large short cycles. $$\begin{aligned}
t' + b' + l' &=& c' \\
\label{shortcycle}t'+ 2b' + 3 l' &\leqslant& m'_{0}\end{aligned}$$
The scaling of the tadpoles and the propagators give a factor $$\label{Cscaling}
{\cal A}_{3} = M^{-jt_{j} - k t_{k}} M^{-3 m_{0}' (j+k)/2}$$
Tadpoles that have been obtained by erasing a few vertices (say $O(M^{j_{0}/4})$ for instance) will have an extra small factor because they strongly violate momentum conservation; we can take it to be a power of $M^{-j_{0}/4}$. Tadpoles with higher weights will not have this good factor, but we will see that they bring a better combinatoric. The $t$ momentum tadpoles will consist of $t_{1}$ low-weight ones and $t_{2}$ others, while the $t'$ short tadpoles split into $t'_{1}$ low-weight ones and $t'_{2}$ others. We can manage to have a factor $${\cal A}_{4} = M^{-2 t_{1} j_{0}} M^{-2 t'_{1} j_{0}}$$
If we have a short bubble we will have four incoming long propagators whose momenta must add up to zero up to $x M^{-j_{0}}$. If we apply lemma \[sconserv\], we can see that knowing 3 of these momenta, it costs only a factor $O(1) \sqrt{x}$ to find the fourth one, sparing us the factor $M^{(k-j)/2}$ that would be obtained by naïvely fixing the enlarged sector at slice $j$ first. If the bubble has a weight $p$, [*i.e.*]{} the two short propagators have been obtained after erasing $p$ vertices, and a momentum conservation worse than $O(1) p M^{-j_{0}}$, then the small factor of bad momentum conservation will pay for the $O(1) \sqrt{p}$. We will have $b'_{1}$ such good bubbles, each of them bringing a factor $M^{-(k-j)/2}$. In addition, we will have $b'_{3}$ bad bubbles of weight greater than $M^{(k-j)}$ for which we earn nothing, and $b'_{2}$ bad bubbles of weight $p_{i}$ bringing a factor $C \sqrt{p_{i}} M^{-(k-j)/2}$. This gives a factor $${\cal A}_{5} = M^{-(b'_{1}+b'_{3})(k-j)/2} C^{b'_{3}} \prod_{i} \sqrt{p_{i}}$$
Finally we have the following bound for the contribution of a graph $$\begin{aligned}
|{\cal A}({\cal G})| &\leqslant& {\cal A}_{1} \ldots {\cal A}_{5} \nonumber
\\
& \leqslant & C^{m_{0}} M^{cj/2} M^{m'_{0}(k-j)/2}
M^{3(m'_{0}-c')j/2} M^{3(c'-1)k/2} M^{2k}
M^{-jt_{j} - kt_{k}} \nonumber \\
&& M^{-3m'_{0}(j+k)/2} M^{-2 j_{0}(t_{1} +t'_{1})}
M^{-(b'_{1}+b'_{2})(k-j)/2} \prod_{i} \sqrt{p_{i}} \\
&\leqslant& C^{m_{0}} M^{k/2} M^{cj/2} M^{3c'(k-j)/2}
M^{-m'_{0}(k+j/2)} M^{-jt_{j} -kt_{k}} M^{-2j_{0}(t_{1}+t_{1}')}
\nonumber \\
&& M^{-(b'_{1}+b'_{2})(k-j)/2} \prod_{i} \sqrt{p_{i}} \end{aligned}$$
If we use equations (\[momcycle\]) and (\[shortcycle\]) we obtain $$\begin{aligned}
|{\cal A}({\cal G})| &\leqslant& C^{m_{0}} M^{k/2}
M^{-m'_{0}(k+j)/2} M^{-j t_{j} -k t_{k}}
M^{-m'_{0}j/6} M^{bj/6} \nonumber \\
\label{graphbond} && M^{t_{2}j/3} M^{t'_{2} (k-j)}
M^{b'_{3}(k-j)/2} M^{-t_{1}(2j_{0}-j/3)} M^{-t'_{1}(2j_{0}+j-k)}
\prod_{i} \sqrt{p_{i}} \end{aligned}$$
We will take $m_{0} \geqslant M^{k/6}$ so that $M^{k/2} \leqslant
C^{m_{0}}$. Furthermore, $t$ and $t'$ are at most equal to $M^{-j_{0}/4} m_{0}$, thus $M^{kt} \leqslant C^{m_{0}}$. This allows us to rewrite the bound $$|{\cal A}({\cal G})| \leqslant C^{m_{0}} M^{-m'_{0}(k+j)/2} M^{-j t_{j} -k
t_{k}} M^{-(m'_{0}-b) j/6} M^{b'_{3}(k-j)/2} \prod_{i} \sqrt{p_{i}}$$
Graph counting
--------------
\
Let $T(p)$ be the number of ways to contract $2p$ adjacent $V$’s so as to make only generalized tadpoles. We have $$\label{Ntadpole} T(p) = \frac{(2p) !}{p! (p+1)!}$$
[*Proof*]{}\
It is easy to see that a good contraction scheme, [*i.e.*]{} one that gives only generalized tadpoles, corresponds to having no crossing contractions. This means that if we label the fields $V_{0} \ldots V_{2p-1}$ according to their order and if $V_{i}$ and $V_{j}$ contract respectively to $V_{k}$ and $V_{l}$ then $$i < j \Rightarrow k<j \mbox{ or } k>l$$ We have $T(1)=1$. For $p>1$, we contract first $V_{0}$ to some $V_{i}$. $V_{1}, \ldots, V_{i-1}$ will necessarily contract among themselves making only generalized tadpoles, and so will $V_{i+1}, \ldots, V_{2p-1}$. Thus $i$ is necessarily odd and we have $$\label{Trecursion} T(p) = \sum_{k=0}^{p-1} T(k) T(p-1-k)$$ where by convention $T(0)=1$.
We introduce the generating function $$t(z) = \sum_{p = 0}^{\infty} T(p) z^{p}$$ The recursion formula (\[Trecursion\]) can be translated into an equation for $t$ which is $$t(z) = z t^{2}(z) + 1$$ whose resolution yields $$t(z) = \frac{1 + \sqrt{1-4z}}{2z} \mbox{ or } \frac{1 - \sqrt{1-4z}}{2z}$$ Since the second solution is analytic around $z=0$, we can take it as $t(z)$ and the coefficients of its power expansion will give us $T(p)$. An easy computation leads to the desired formula. $\blacksquare$
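As a cross-check (purely illustrative, not part of the proof), the recursion and the closed form can be verified numerically; the $T(p)$ are the Catalan numbers.

```python
from math import comb

def T_closed(p):
    """T(p) = (2p)! / (p! (p+1)!), the p-th Catalan number."""
    return comb(2 * p, p) // (p + 1)

# Recursion T(p) = sum_{k=0}^{p-1} T(k) T(p-1-k), with T(0) = 1
T = [1]
for p in range(1, 16):
    T.append(sum(T[k] * T[p - 1 - k] for k in range(p)))

assert all(T[p] == T_closed(p) for p in range(16))
print(T[:8])   # 1, 1, 2, 5, 14, 42, 132, 429
```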
\
The number ${\cal N}_{M}(B)$ of graphs with $B$ possible momentum bubbles obtained in the contraction of a cycle of $2M$ $V$’s has the following bound $${\cal N}_{M}(B) \leqslant C^{M} (M-B)!$$
[*Proof*]{}\
First let us remark that bubbles come in chains (possibly with a tadpole at one end) of two possible types
- type 1:
- type 2:
where a solid line stands here either for a $J$ or a $K$.
We have two special cases
- which can be seen as a type 1 chain
- which can generate only one momentum bubble so that we can see it as a type 2 chain of length 1.
Having chosen the $V$’s there are only two contraction schemes that yield a type 1 chain and a unique contraction scheme for a type 2 chain. If we fix explicitly the subgraphs corresponding to $B$ bubbles and contract the remaining $V$’s in any way we will get all the desired graphs plus some extra ones so that we can bound ${\cal N}_{M}(B)$.
We construct $r_{1}$ type 1 chains of lengths $\beta_{1}$, …, $\beta_{r_{1}}$ and $r_{2}$ type 2 chains of lengths $\gamma_{1}$, …, $\gamma_{r_{2}}$. We set $$\begin{aligned}
B_{1} &=& \sum_{i} \beta_{i} \\
B_{2} &=& \sum_{i} \gamma_{i} \\
B &=& B_{1} + B_{2}\end{aligned}$$
To count the contraction schemes, first we cut the cycle of $2M$ $V$’s into a sequence of $V$’s (there are $2M$ ways to do so). Then it is easy to check that in order to build a type 1 chain we must choose two sets ${\cal B}_{i}$ and $\bar{\cal B}_{i}$ of $\beta_{i}+1$ adjacent $V$’s while for a type 2 we need a set ${\cal D}_{i}$ of $2 \gamma_{i}+1$ adjacent $V$’s. We distribute those $2 r_{1}+r_{2}$ objects in $(2M-2B_{1}-2r_{1}-2B_{2}-r_{2})+2r_{1}+r_{2}$ boxes in an ordered way, and for the $\mbox{i}^{th}$ type 1 chain the respective order of ${\cal B}_{i}$ and $\bar{\cal B}_{i}$ will fix the contraction scheme. Then, there remain $2M -2B-2r_{1}$ $V$’s to contract so that we have the following number of configurations $${\cal N}_{M}(B) \leqslant 2M \!\! \sum_{B_{1}+B_{2}=B} \sum_{{r_{1}
\leqslant B_{1}}
\atop {r_{2}\leqslant B_{2}}} \sum_{{\beta_{1}+ \ldots \beta_{r_{1}}=B_{1},
\beta_{i} \geqslant 1}
\atop {\gamma_{1}+ \ldots \gamma_{r_{2}}=B_{2}, \gamma_{i}
\geqslant 1}} \frac{1}{r_{1}!}
\frac{1}{r_{2}!} \frac{(2M-2B)!}{(2M-2B-2r_{1}-r_{2})!}
(2M - 2B -2r_{1}-1)!!$$ We can compute this $$\begin{aligned}
{\cal N}_{M}(B) &\leqslant& 2M \!\! \sum_{B_{1}+B_{2}=B}
\sum_{{r_{1}\leqslant
B_{1}} \atop {r_{2}\leqslant B_{2}}}
{B_{1}-1 \choose r_{1}-1} {B_{2}-1 \choose r_{2}-1}
\frac{[2(M-B)]!}{[2(M-B)-2r_{1}-r_{2}]! (2r_{1})! r_{2}!} \nonumber \\
&&\frac{(2 r_{1})!}{r_{1}!} \frac{[2(M-B-r_{1})]!}{2^{M-B-r_{1}}
(M-B-r_{1})!} \\
&\leqslant& 2M \!\! \sum_{B_{1}+B_{2}=B} \sum_{{r_{1}\leqslant
B_{1}} \atop {r_{2}\leqslant B_{2}}}
{B_{1}-1 \choose r_{1}-1} {B_{2}-1 \choose r_{2}-1} 3^{2(M-B)}
2^{2r_{1}} r_{1}! 2^{M-B-r_{1}} (M-B-r_{1})! \\
&\leqslant& 2M \!\! \sum_{B_{1}+B_{2}=B} 2^{B_{1}-1} 2^{B_{2}-1} 9^{M-B}
2^{M} (M-B)! \\
&\leqslant& 2M (B+1) 18^{M} (M-B)! \end{aligned}$$ $\blacksquare$
Bounds
------
Now we can complete the proof of lemma \[lemproba\] by bounding $$< {\cal I}_{m_{0}}> \leqslant \sum_{\cal G} |{\cal A}({\cal G})|$$
In order to compute this sum, we first fix $t_{j}$, $t_{k}$ and $\bar{b}$, where $\bar{b}$ is the number of possible momentum bubbles and therefore is greater than $b$.
Then we define the set $\Omega(t_{j}, t_{k}, \bar{b}, n, q_{1}, \ldots,
q_{n})$ as the set of graphs with the corresponding $t_{j}$, $t_{k}$ and $\bar{b}$ and for which the erased tadpoles form $n$ sets of $2 q_{i}$ adjacent $V$’s. We can write $$< {\cal I}_{m_{0}}> \leqslant \sum_{t_{j}, t_{k}, \bar{b}}
\sum_{n=1}^{t_{j}+t_{k}} \frac{1}{n!}
\sum_{{q_{1}, \ldots q_{n}} \atop {q_{i} \geqslant 1}}
\sum_{{\cal G} \in \Omega(t_{j}, t_{k}, \bar{b}, n, q_{1}, \ldots q_{n})}
\prod (q_{i}+1) \prod \frac{1}{q_{i}+1} |{\cal A}({\cal G})|$$ To bound $|{\cal A}({\cal G})|/ \prod(q_{i}+1)$ we notice that when a graph has a bad bubble of weight $p_{i}$ it means that we have erased two sets of generalized tadpoles $q_{1_{i}}$ and $q_{2_{i}}$ on the two propagators of the bubble with $q_{1_{i}} + q_{2_{i}}=p_{i}$. Thus we have a corresponding factor $(q_{1_{i}}+1)^{-1} (q_{2_{i}}+1)^{-1}$ which controls the bad factor $\sqrt{p_{i}}$ of the bad bubble so that $$< {\cal I}_{m_{0}}> \leqslant \sum_{t_{j}, t_{k}, \bar{b}}
\sum_{n=1}^{t_{j}+t_{k}} \frac{1}{n!}
\sum_{{q_{1}, \ldots q_{n}} \atop {q_{i} \geqslant 1}}
\sum_{{\cal G} \in \Omega(\ldots)}
\prod (q_{i}+1) C^{m_{0}} M^{-m'_{0}(k+j)/2} M^{-jt_{j}-kt_{k}}
M^{-(m'_{0}-\bar{b})j/6}$$
The number of graphs in $\Omega(\ldots)$ has the following bound $${\cal N}\left[\Omega(\ldots)\right] \leqslant 2^{m_{0}} \prod T(q_{i})
3^{m'_{0}} C^{m'_{0}} (m'_{0}-\bar{b})! \leqslant C^{m_{0}} \prod
\frac{2^{2q_{i}}}{q_{i}+1} (m'_{0}-\bar{b})!$$
This leads to $$\begin{aligned}
< {\cal I}_{m_{0}}> &\leqslant& \sum_{t_{j}, t_{k}, \bar{b}}
C^{m_{0}} M^{-m'_{0}(k+j)/2} M^{-jt_{j}-kt_{k}}
M^{-(m'_{0}-\bar{b})j/6} (m'_{0}-\bar{b})! \nonumber \\
&& \quad \quad \sum_{n=1}^{t_{j}+t_{k}}
\frac{1}{n!} \sum_{{q_{1}+ \ldots q_{n} = t_{j}+t_{k}}
\atop {q_{i} \geqslant 1}} 1 \\
&\leqslant& \sum_{t_{j}, t_{k}, \bar{b}}
C^{m_{0}} M^{-m'_{0}(k+j)/2} M^{-jt_{j}-kt_{k}}
M^{-(m'_{0}-\bar{b})j/6} (m'_{0}-\bar{b})! \sum_{n=1}^{t_{j}+t_{k}}
{t_{j}+t_{k}-1 \choose n-1} \\
&\leqslant& \sum_{t_{j}, t_{k}, \bar{b}}
C^{m_{0}} M^{-m'_{0}\frac{(k+j)}{2}} M^{-jt_{j}-kt_{k}}
M^{-(m'_{0}-\bar{b}) \frac{j}{6}} (m'_{0}-\bar{b})!\end{aligned}$$
Summing over $t_{j}$ and $t_{k}$ is equivalent to summing over $m'_{0}$ and $t_{k}$ with $t_{j}=m_{0}-t_{k}-m'_{0}$. The sum over $\bar{b}$ is roughly evaluated by taking the supremum over $\bar{b}$; the result depends on whether $m'_{0}$ is greater than $M^{j/6}$ or not. $$\begin{aligned}
< {\cal I}_{m_{0}}> &\leqslant& \sum_{m'_{0}\leqslant M^{j/6}}
\sum_{t_{k}}
C^{m_{0}} M^{-m'_{0}(k+j)/2} M^{-jt_{j}-kt_{k}} \nonumber \\
&& \quad + \sum_{m'_{0}> M^{j/6}} \sum_{t_{k}}
C^{m_{0}} M^{-m'_{0}(k+j)/2} M^{-jt_{j}-kt_{k}}
\max \left[1, M^{-m'_{0}j/6} (m'_{0}-\bar{b})! \right] \\
&\leqslant& C^{m_{0}} M^{-m_{0}j} \sum_{m'_{0}, t_{k}}
M^{-m'_{0}(k-j)/2} M^{-t_{k}(k-j)} \left[ 1 + 1_{(m'_{0}> M^{j/6})}
M^{-m'_{0}j/6} (m'_{0}-\bar{b})! \right]\end{aligned}$$
Finally, the sum over $t_{k}$ is easy and we bound the sum over $m'_{0}$ by finding the supremum. One can check that it gives the announced result. $\blacksquare$
Conclusion
==========
Understanding the effect of perturbations on the free spectrum of Hamiltonian operators is an outstanding challenge. We think that, for the two-dimensional case, this paper can help to control the model down to scales quite close to that of the expected “mass” ([*i.e.*]{} imaginary part), so that in studying the full model one can focus on the thin momentum slice $p^{2}-E \sim \lambda^{2}$. We then think that a key to the problem lies in the fact that in this case the potential is very close (in momentum space) to a large Hermitian random matrix with almost independent entries, and therefore we can connect our problem to the much better understood domain of random matrices, or equivalently use the vector-model picture of the problem.
One can note that in dimension $d=3$, the interpretation in terms of random matrices can also be helpful (cf. [@MPR]), but then one has to deal with constrained matrices whose entries are no longer independent.
Acknowledgments
===============
This paper is part of a larger program in collaboration with J. Magnen and V. Rivasseau to apply rigorous renormalization group methods to the study of disordered systems, the ultimate goal being to devise tools to construct extended states in weakly disordered systems in dimension $d \geqslant 3$. They played an important part in getting these results and I thank them very much for their help.
M. Aizenman, in [*The State of Matter*]{}, World Scientific (1994) 367

A. Abdesselam and V. Rivasseau, “An explicit large versus small field multiscale cluster expansion” (hep-th/9605094), to appear in [*Reviews in Math. Phys.*]{}

D. Brydges, “A short course on cluster expansion”, in [*Critical phenomena, random systems, gauge theories*]{}, Les Houches session XLIII, 1984, Elsevier Science Publishers (1986)

J.M. Combes and L. Thomas, “Asymptotic behaviour of eigenfunctions for multiparticle Schrödinger operators”, [*Comm. Math. Phys.*]{} [**34**]{} (1973) 251-270

J. Feldman, J. Magnen, V. Rivasseau and E. Trubowitz, in [*The State of Matter*]{}, World Scientific (1994) 293

J. Feldman, J. Magnen, V. Rivasseau and E. Trubowitz, [*Europhys. Lett.*]{} [**24**]{} (1993) 521

E. Fradkin, in [*Développement Récents en Théorie des Champs et Physique Statistique*]{}, Les Houches XXXIX, North Holland (1984)

M.L. Mehta, [*Random matrices*]{}, Academic Press, Boston (1991)

J. Magnen, G. Poirot and V. Rivasseau, “The Anderson Model as a Matrix Model” (cond-mat/9611236), to appear in [*Advanced Quantum Field Theory*]{}, La Londe Les Maures (in Memory of C. Itzykson), Elsevier (1996)

G. Parisi, in [*Développement Récents en Théorie des Champs et Physique Statistique*]{}, Les Houches XXXIX, North Holland (1984)

V. Rivasseau, “Cluster expansions with large/small field conditions”, minicourse at the Vancouver summer school (1993)

V. Rivasseau, [*From Perturbative to constructive renormalization*]{}, Princeton University Press (1991)

D.J. Thouless, [*Phys. Report*]{} [**13**]{} (1974) 93
---
abstract: 'Scattering of ultraintense short laser pulses off relativistic electrons allows one to generate a large number of X- or gamma-ray photons at the expense of the spectral width—temporal pulsing of the laser inevitably leads to considerable spectral broadening. In this Letter, we describe a simple method to generate optimized laser pulses that compensate the nonlinear spectrum broadening, and can be thought of as a superposition of two oppositely linearly chirped pulses delayed with respect to each other. We develop a simple analytical model that allows us to predict the optimal parameters of such a two-pulse—the delay, amount of chirp and relative phase—for the generation of a narrowband $\gamma$-ray spectrum. Our predictions are confirmed by numerical [[optimization and simulations including 3D effects.]{}]{}'
author:
- Daniel Seipt
- 'Vasily Yu. Kharin'
- 'Sergey G. Rykovanov'
title: 'Optimizing Laser Pulses for Narrowband Inverse Compton Sources in the High-Intensity Regime'
---
The Inverse Compton Scattering (ICS) of laser light off high-energy electron beams is a well-established source of X- and gamma-rays for applications in medical, biological, nuclear, and material sciences [@HIGS; @Bertozzi:PRC2008; @Albert:PRSTAB2010; @Albert:PRSTAB2011; @Quiter:NIMB2011; @Rykovanov:JPB2014; @Albert:PPCF2016; @ELI-NP; @Kramer:SciRep2018]. [[One of the main advantages of ICS photon sources is the possibility to generate narrowband MeV photon beams, as opposed to the broad continuum of Bremsstrahlung sources, for instance. The radiation from 3rd and 4th generation light-sources, on the other hand, typically has its highest brightness at much lower photon energies [@Albert:PRSTAB2010], giving ICS sources their unique scope [@Nedorezov:PhysUsp2004; @Carpinelli:NIMA2008; @Albert:PPCF2014].]{}]{}
With laser plasma accelerators (LPA), stable GeV-level electron beams have been produced with very high peak currents, small source size, intrinsic short duration (few 10s fs) and temporal correlation with the laser driver [@Mangles:Nature2004; @Leemans:Nature2006; @Esarey:RevModPhys2009], which is beneficial for using them in [[compact all-optical]{}]{} ICS. In particular the latter properties allow for the generation of femtosecond gamma-rays that can be useful in time-resolved (pump-probe) studies. For designing narrowband sources, it is important to understand in detail the different contributions to the scattered radiation bandwidth [@Rykovanov:JPB2014; @Curatolo:PRSTAB2017; @Kramer:SciRep2018; @Ranjan:PRSTAB2018].
For an intense source the number of electron-photon scattering events needs to be maximized [@Rykovanov:JPB2014], which can be achieved in the most straightforward way by increasing the intensity of the scattering laser, $I[\watt\per\centi\metre^2] = 1.37\times 10^{18}\, a_0^2 / \lambda^2[\micro\metre]$, where $a_0=e A_0/m$, with $e$ and $m$ the electron charge magnitude and mass, respectively, $A_0$ the [[peak]{}]{} amplitude of the laser pulse vector potential in Gaussian CGS units (with $\hbar=c=1$), and $\lambda$ the laser wavelength. However, this leads to an unfortunate consequence when the laser pulse normalized amplitude reaches $a_0\sim 1$: ponderomotive spectral broadening of the scattered radiation on the order of $\Delta\omega' /\omega' \sim a_0^2/\left(1+a_0^2 \right)$ [@Hartemann:PRE1996; @Krafft:PRL2004; @Hartemann:PRL2010; @Maroli:PRSTAB2013; @Rykovanov:JPB2014; @Curatolo:PRSTAB2017].
The ponderomotive broadening is caused by the [[$\vec v\times \vec B$ force, which effectively slows down the forward motion of electrons near the peak of the laser pulse where the intensity is high]{}]{} [@Krafft:PRL2004; @Hartemann:PRL2010; @Kharin:PRA2016]. Hence, the photons scattered near the peak of the laser pulse are red-shifted compared to the photons scattered near the wings of the pulse and the spectrum of gamma rays becomes broad. Broadband gamma-ray spectra from ICS using laser-accelerated electrons and scattering pulses with $a_0>1$ have been observed experimentally [@Chen:PRL2013; @Sarri:PRL2014; @Khrennikov:PRL2015; @Cole:PRX2018].
To overcome this fundamental limit it was proposed to use temporal laser pulse chirping to compensate the nonlinear spectrum broadening [@Ghebregziabher:PRSTAB2013; @Terzic:PRL2014; @Seipt:PRA2015; @Rykovanov:PRSTAB2016; @Kharin:PRL2018]. By performing a stationary phase analysis of the non-linear Compton S-matrix elements [@Seipt:PRA2015] or the corresponding classical Liénard-Wiechert amplitudes [@Terzic:PRL2014; @Rykovanov:PRSTAB2016] one finds that the fundamental frequency of on-axis ICS photons behaves as (off-axis emission and higher harmonics can easily be accounted for [@Seipt:PRA2015]) $$\begin{aligned}
\omega'(\varphi)
=
\frac{4 \gamma^2 \omega_L }{1 + a(\varphi)^2 + \frac{2\gamma\omega_L}{m}} \,,
\end{aligned}$$ where the last term in the denominator is the quantum recoil and missing in a classical approach. Here $a(\varphi)$ is the envelope of the normalized laser pulse vector potential, with peak value $a_0$, $\omega_L$ as the laser frequency, and $\varphi = t - z$. If the instantaneous laser frequency follows the laser pulse envelope according to $$\begin{aligned}
\label{eq:optimum-chirp}
\omega_L(\varphi)
=
\omega_{0} [ 1 + a(\varphi)^2 ] \,,
\end{aligned}$$ with a reference frequency $\omega_{0}$, then the non-linear broadening is perfectly compensated, $\omega' = const. \approx 4\gamma^2 \omega_0$ in the recoil free limit. This is the basic idea behind the promising approach for ponderomotive broadening compensation in ICS using chirped laser pulses. However, realizing such a highly nonlinear temporal chirp experimentally seems challenging because the laser frequency needs to precisely sweep up and down again within a few femtoseconds [@Terzic:PRL2014; @Kharin:PRL2018].
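A minimal numerical illustration of this cancellation (classical, recoil-free, on-axis; all values below are assumed, in normalized units) is the following sketch:

```python
import numpy as np

a0, omega0, gamma = 2.0, 1.0, 100.0          # illustrative normalized values
phi = np.linspace(-6.0, 6.0, 2001)
a_env = a0 * np.exp(-phi**2 / 2.0)           # Gaussian envelope a(phi)

# Unchirped laser: the emitted fundamental sweeps with the envelope
omega_unchirped = 4 * gamma**2 * omega0 / (1 + a_env**2)

# Compensating chirp omega_L(phi) = omega0 * (1 + a(phi)^2)
omega_L = omega0 * (1 + a_env**2)
omega_chirped = 4 * gamma**2 * omega_L / (1 + a_env**2)

print(omega_unchirped.min() / omega_unchirped.max())  # ~ 1/(1 + a0^2) = 0.2
print(np.ptp(omega_chirped) / omega_chirped.mean())   # 0: broadening compensated
```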
In this Letter, we propose a simple method to generate optimized laser pulses for the compensation of the nonlinear broadening of ICS photon sources and, thus, enhance photon yield in a narrow frequency band. We propose to synthesize an optimized laser spectrum using standard optical dispersive elements. By working in frequency space, both the temporal pulse shape and the local laser frequency are adjusted simultaneously to fulfill the compensation condition, Eq. , where the frequency first rises until the peak of the pulse intensity and then drops again. We develop an analytic model to predict the optimal dispersion needed for generating a narrowband ICS spectrum, and we compare with numerical optimization of the peak spectral brightness of the compensated nonlinear Compton source. [[We performed simulations for realistic electron beams, taking into account laser focusing effects.]{}]{}
In order to produce the optimized laser spectra we propose a two-pulse scheme: An initially unchirped broadband laser pulse with the spectral amplitude $\tilde a_\mathrm{in}(\omega)$ is split into two identical pulses, e.g. using a beam splitter. Each of these pulses is sent to the arms of an interferometer where a spectral phase $\tilde \Phi(\omega)$ is applied to one of the pulses and the conjugate spectral phase $-\tilde \Phi (\omega)$ is imposed onto the other one, using, for example, acousto-optical dispersive filters, diffraction gratings, or spatial light modulators. The two pulses are coherently recombined causing a spectral modulation. In the time-domain this translates to the coherent superposition of two linearly and oppositely chirped laser pulses which are delayed with respect to each other, see Fig. \[fig:wigner\](a).
We model the initial laser pulse by a Gaussian spectral amplitude, $$\begin{aligned}
\tilde a_\mathrm{in}(\omega)
&= \frac{a_0}{\Delta \omega_L} \sqrt{2\pi}
\: e^{-\frac{(\omega-\omega_{0})^2}{2 \Delta\omega_L^2}} \,,
\end{aligned}$$ with bandwidth $\Delta\omega_L$. The inverse Fourier transform of this spectrum gives complex amplitude, $$\begin{aligned}
\label{eq:fourier-limited}
a_\mathrm{in}(\varphi) = a_0
\, e^{- \frac{\varphi^2 \Delta \omega_L^2}{2}}
\, e^{- i\omega_0\varphi} = a(\varphi) \, e^{-i\omega_0\varphi} \,,
\end{aligned}$$ which determines the real vector potential of a circularly polarized laser pulse propagating in the $z$ direction via $\vec a_\perp = \Re [ a_\perp \vec \epsilon ]$ with $\vec \epsilon = \vec e_x + i \vec e_y$, and with $\varphi = t+z$.
After the spectral phases $\pm \tilde \Phi(\omega)$ are applied to the two split pulses and the pulses have been recombined, the spectrum of the recombined two-pulse is modulated as $$\begin{aligned}
\label{in_spec}
\tilde a(\omega) = \tilde a_\mathrm{in} (\omega) \cos \tilde \Phi(\omega) \,.
\end{aligned}$$ The spectral phase is parametrized as $\tilde \Phi(\omega) = \sum_{k=0}^2 B_k \,\left(\omega-\omega_{0}\right)^k /(k!\Delta\omega_L^k)$. When including higher-order terms we did not find significant improvements, so we keep only terms up to quadratic order here. The dimensionless parameters $B_0$, $B_1$ and $B_2$ determine the relative phase, relative delay and amount of linear chirp, respectively, and they are collated into a vector $\vec{B}=\left\{B_0, B_1, B_2 \right\}$ for short-hand notation.
![ [[Schematic sketch of the two-pulse model for generating optimized laser pulses for narrowband nonlinear ICS (a). Wigner function of the recombined two-pulse for $\vec B = \{1.4,4.5,-4.6\}$ (b). The black curve is the analytical expression for the instantaneous frequency $\omega_L(\varphi)$, the green dashed curve is the analytical instantaneous $1+a^2(\varphi)$.]{}]{}[]{data-label="fig:wigner"}](figure1a.pdf "fig:"){width="0.70\columnwidth"} ![ [[Schematic sketch of the two-pulse model for generating optimized laser pulses for narrowband nonlinear ICS (a). Wigner function of the recombined two-pulse for $\vec B = \{1.4,4.5,-4.6\}$ (b). The black curve is the analytical expression for the instantaneous frequency $\omega_L(\varphi)$, the green dashed curve is the analytical instantaneous $1+a^2(\varphi)$.]{}]{}[]{data-label="fig:wigner"}](figure1b.pdf "fig:"){width="0.95\columnwidth"}
The temporal structure of the recombined pulse can be found analytically by the inverse Fourier transform of Eq. , yielding the complex amplitude $a_\perp(\varphi)=a(\varphi)e^{-i\Phi(\varphi)}$, where $a(\varphi)$ is the time-dependent envelope, and $\Phi(\varphi)$ is the temporal phase of the two-pulse. The instantaneous frequency in the recombined pulse is given by $\omega_L(\varphi) = {\mathrm{d}}\Phi(\varphi)/{\mathrm{d}}\varphi$, which is a slowly varying function [@Maroli:JApplPhys2018]. Explicit analytical expressions are lengthy and are given in the Supplementary Material. In the limit $\vec{B}=\vec{0}$, we re-obtain the Fourier-limited Gaussian pulse with the amplitude $a_0$ and temporal r.m.s. duration $\Delta \varphi = 1/\Delta\omega_L$, Eq. . For nonzero values of $\vec B$ the resulting laser pulse will be stretched, with a non-Gaussian envelope in general, and with a lower effective peak amplitude than the Fourier-limited pulse. It can be thought of as a superposition of two interfering linearly chirped Gaussian pulses with a delay of $\delta=2B_1/\Delta\omega_L$, each with a full width at $1/e$ duration $\varphi_p=2\sqrt{1+B_2^2}/\Delta\omega_L$, see Fig. \[fig:wigner\] (a). The Wigner function characterizing such a recombined two-pulse is shown in Fig. \[fig:wigner\](b), together with the instantaneous frequency $\omega_L(\varphi)$ (dashed black curve) and the instantaneous $1+a^2(\varphi)$ (dashed green curve). The goal is to optimize $\vec B$ such that the compensation condition is fulfilled as well as possible, see the green and black curves in Fig. \[fig:wigner\](b). The optimal values of $\vec B$ depend on $a_0$ and the available laser bandwidth. We mention here that the stretching of the pulse alone already leads to some improvement of the spectral brightness, since the effective $a_0$ is lowered. However, as we shall see later when comparing with an unchirped matched Gaussian pulse, the optimal pulse chirping improves the spectral brightness much further.
The on-axis ICS gamma photon spectral density of an optimized laser pulse scattered from a counterpropagating electron with $\gamma\gg1$, assuming that quantum recoil and radiation reaction can still be neglected [^1], can then be written as [@Supplement] $$\begin{aligned}
\left.
\frac{{\mathrm{d}}^2 N}{{\mathrm{d}}y {\mathrm{d}}\Omega}
\right|_{\theta=\pi}
=
\mathcal N
\left|\intop_{-\infty}^{+\infty} \! {\mathrm{d}}\varphi \:
a_\perp(\varphi)\,
e^{i \omega_0 y \left(\varphi+\int_{-\infty}^{\varphi} a(\varphi')^2 {\mathrm{d}}\varphi' \right)} \right|^2 \,,
\label{photon_spec_eq}
\end{aligned}$$ where $y=\omega'/(4\gamma^2 \omega_{0})$ is the normalized frequency, $\theta$ is the polar angle, and $\mathcal N = e^2 y \gamma^2 \omega_0^2/\pi^2$ is a normalization factor. The on-axis scattered photon spectrum depends on the values of $\vec{B}$, as illustrated in Fig. \[spectra\_fig\](a), where the case of $a_0=2$ and $\Delta\omega_L/\omega_{0} = 0.1 $ is presented. The black dashed curve shows the spectrum for the unchirped laser pulse with $\vec B =\vec{ 0 }$, while the blue solid line shows the spectrum for the optimized values $\vec{B}=\{1.4, 4.5, -4.6\}$ with rms bandwidth $\Delta y = 0.0178$. The corresponding normalized laser pulse vector potential shapes are shown in Fig. \[spectra\_fig\](b) with the same color code.
For the chirped pulse (blue solid line), both the envelope and the waveform are presented, and one can see that the frequency in the wings of the laser pulse is lower than in the center, where the intensity is higher. For comparison, we also employ the notion of the matched Gaussian pulse, i.e., a pulse that has the same amplitude as the chirped pulse ($a_\mathrm{eff} = 0.58$) and the same energy content, but whose frequency is constant and equal to $\omega_{0}$ [@Supplement]. The envelope of the matched Gaussian pulse and the corresponding ICS photon spectrum are shown by red curves in Fig. \[spectra\_fig\](b) and (a), respectively. One can see that, by choosing the optimal chirp parameters, the peak of the scattered photon spectrum is in this case 4 times higher, and the bandwidth is significantly narrower, than for the matched unchirped Gaussian pulse.
![(a) On-axis scattered photon spectra for $a_0=2$, $\Delta\omega_L/\omega_{0}=0.1$ for different values of the parameters $\vec{ B }$. The spectrum for the unchirped case with $\vec{B} = \vec{ 0 }$ is drawn with a dashed black line. Note that, for visibility, this spectrum has been multiplied by 5. The case of the optimal set of parameters $\vec{ B }=\{1.4, 4.5, -4.6\}$ is shown with a blue curve. The red curve shows the on-axis scattered photon spectrum for the matched unchirped Gaussian pulse (described in the text). (b) Corresponding amplitudes $a(\varphi)$ of the normalized laser vector potential. []{data-label="spectra_fig"}](figure2.pdf){width="\columnwidth"}
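The on-axis spectra shown above follow from a direct numerical evaluation of the phase integral in the spectral density formula. A minimal sketch (assuming Python with NumPy; the function name is illustrative, and the complex amplitude `a_perp` could, for instance, be the array `a_time` from the spectral-synthesis sketch above):

```python
import numpy as np

def on_axis_spectrum(phi, a_perp, omega0, y_values):
    """On-axis ICS spectral density, up to the overall normalization factor N,
    for a complex laser amplitude a_perp sampled on the temporal-phase grid phi:
    |int dphi a_perp exp[i omega0 y (phi + int_{-inf}^{phi} a^2 dphi')]|^2."""
    a2 = np.abs(a_perp) ** 2
    dphi = np.diff(phi)
    # cumulative integral int_{-inf}^{phi} a^2(phi') dphi' (trapezoidal rule)
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (a2[1:] + a2[:-1]) * dphi)))
    spectrum = np.zeros(len(y_values))
    for k, y in enumerate(y_values):
        integrand = a_perp * np.exp(1j * omega0 * y * (phi + cum))
        # outer phase integral (trapezoidal rule), squared modulus
        spectrum[k] = np.abs(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * dphi)) ** 2
    return spectrum

# Example usage with the two-pulse amplitude from the sketch above:
# y = np.linspace(0.2, 0.5, 400)
# S = on_axis_spectrum(phi, a_time, omega0=1.0, y_values=y)
```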
We now develop a model to predict the optimal chirp parameters $\vec B$ for generation of a narrowband ICS spectrum. As discussed above, the recombined optimized laser pulse can be seen as the coherent superposition of two delayed oppositely linearly chirped Gaussian pulses. If the delay is too large this will result in two separate pulses and the optimization condition cannot be fulfilled.
First, for optimal pulse overlap the delay between the two pulses $\delta$ should roughly equal the duration of each of the pulses $\varphi_p$, hence $B_1 = \chi \sqrt{1+B_2^2}$, with $\chi=\mathcal O(1)$ a factor of proportionality [^2]. Second, the interference of the two pulses should be mostly constructive and the two-pulse should contain the maximum possible number of photons, which determines $B_0(B_1,B_2)$, see Eq. below, by requiring that the argument of the cosine in the analytical expression for $a^2(\varphi)$ [@Supplement] is $2\pi n$ with integer $n$.
Finally, but most importantly, we match the linear chirp given by the parameter $B_2$ to the change of the envelope. To do this, one only needs two values of the instantaneous frequency $\omega_L(\varphi)$ [@Supplement]: one at the center of each Gaussian pulse, $\omega_1=\omega_{0}$, and the other in the middle of the two-pulse, $\omega_2 \simeq \omega_{0} - {\chi B_2 \Delta\omega_L}/{\sqrt{1+B_2^2}} $. This is outlined in Fig. \[fig:wigner\] (a), where the instantaneous frequency is schematically drawn with the dashed red line, and the two points used for slope matching are shown with red dots. The compensation condition for narrowband emission now turns into $$\begin{aligned}
\frac{\omega_1}{1+a^2(-\delta/2) } = \frac{\omega_2}{1+a^2(0)}\,,
\end{aligned}$$ which provides an equation for $B_2$. In the limit $|B_2|\gg 1$, after straightforward algebraic calculations, we find expressions for all three parameters as functions of laser bandwidth and $a_0$, $$\begin{aligned}
\begin{split}
B_2 &= -\frac{a_0^2}{4\chi}\frac{\omega_{0}}{\Delta\omega_L}
\left(4 e^{-\chi^2} - 1 - \chi \frac{\Delta\omega_L}{\omega_{0}}\right) \,, \\
B_1 &= \chi \sqrt{1+B_2^2} \,, \\
B_0 &= \frac{\chi^2 B_2}{2} - \frac{1}{4} \arctan B_2 + n\pi \,, \end{split}
\label{B_model}
\end{aligned}$$ that produce laser pulses which compensate the nonlinear spectrum broadening and significantly reduce the bandwidth of backscattered X-rays.
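A minimal sketch of these closed-form expressions (assuming Python with NumPy; the function name and the default values of $\chi$ and $n$ are illustrative) that can be compared directly with the numerically optimized values quoted above:

```python
import numpy as np

def model_chirp_parameters(a0, dwL_over_w0, chi=1.0, n=0):
    """Chirp parameters {B0, B1, B2} from the analytic model (valid for |B2| >> 1);
    chi ~ 1 is the pulse-overlap factor and n is the integer of the phase condition."""
    r = dwL_over_w0
    B2 = -(a0 ** 2 / (4.0 * chi)) * (1.0 / r) * (4.0 * np.exp(-chi ** 2) - 1.0 - chi * r)
    B1 = chi * np.sqrt(1.0 + B2 ** 2)
    B0 = 0.5 * chi ** 2 * B2 - 0.25 * np.arctan(B2) + n * np.pi
    return B0, B1, B2

# a0 = 2, Delta-omega_L/omega_0 = 0.1: the result lies in the shaded model bands shown
# in the figure below and is of the same order as the numerically optimized set
# {1.4, 4.5, -4.6}; residual differences reflect the |B2| >> 1 approximation and the
# choice of chi and n
print(model_chirp_parameters(2.0, 0.1))
```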
![ (a) Optimal chirp parameters $B_0$ (red diamonds), $B_1$ (green squares) and $B_2$ (blue circles), obtained from simulations, that lead to the narrowest on-axis scattered photon spectrum, as functions of $a_0$ for $\Delta \omega_L/\omega_0=0.1$. Shaded areas of corresponding colors are the model predictions, Eqns. . (b) Relative rms bandwidth of the on-axis photon spectrum for the optimally chirped pulse. (c) Peak spectral brightness (psb) of the on-axis scattered photons as a function of $a_0$ for the optimally chirped pulse (purple diamonds), compared to the matched Gaussian (orange squares) and unoptimized cases (cyan circles). The black dotted curve represents a fit $f(a_0)=637\,a_0^{2.8}$ to the optimized case; the shaded area corresponds to the peak spectral brightness predicted using the model parameters as in (a). []{data-label="sim_fig"}](figure3.pdf){width="0.95\columnwidth"}
Numerical optimization was carried out in order to find the optimal sets of parameters $\vec{ B }$ that yield the narrowest rms ICS spectrum for various $a_0$ and a laser bandwidth $\Delta\omega_L/\omega_{0}=0.1$. The results are shown in Fig. \[sim\_fig\], and compared to the analytical model predictions, Eqns. . The symbols in Fig. \[sim\_fig\] (a) refer to the numerically optimized parameters $\vec B$, while the shaded areas correspond to the model predictions with $\chi$ varying between $0.95$ and $1$, with the dotted curves at $\chi=1$. Note that there are some discrepancies between the numerical optimization and the analytical model for small $a_0$, because we assumed $|B_2|\gg1$ in order to solve the equations leading to Eqns. .
Fig. \[sim\_fig\] (b) shows the relative rms bandwidth of the ICS photons, which is well below 4% throughout. It is evident that, for the given laser bandwidth, there is an optimal range of $a_0$ around $a_0=1.5$ where the bandwidth of the ICS photons is smallest. This optimal region shifts to higher $a_0$ for larger laser bandwidth.
For large values of $a_0 > 3.5$, the quality of the photon spectrum somewhat decreases. We attribute this to the fact that, for high values of $a_0$, the amount of chirp stretches the pulses to very long durations, and the delay between the pulses leads to a beating of the two Gaussians such that the envelope of the two-pulse is no longer smooth. According to our model, for the optimally chirped pulses the effective amplitude is $a_\mathrm{eff} \approx 1.76 \times \sqrt{ \Delta \omega_L / \omega_0} $ for $|B_2|\gg1$, independent of the initial $a_0$. We have performed studies for different values of $\Delta \omega_L/\omega_0$ and found a similar optimization behavior, extending to larger $a_0$ for larger bandwidth.
Fig. \[sim\_fig\](c) shows the on-axis peak spectral brightness of the ICS photons as a function of $a_0$ for the numerically optimized chirped pulses (purple diamonds). The purple shaded area corresponds to the model parameters Eqns. with the same variation as in Fig. \[sim\_fig\] (a), showing very good agreement and robustness. The case of $a_0=2$ is the same as shown in Fig. \[spectra\_fig\]. Our simulations indicate that the peak spectral brightness for optimally chirped pulses grows $\propto a_0^{2.8}$ (black dotted curve). For comparison, we also show the completely unchirped pulses (cyan circles) and the matched Gaussian pulses (orange squares). The peak spectral brightness for the optimally chirped pulses exceeds that of the matched Gaussian pulse by a factor of $4\ldots5$.
We have presented a simple two-pulse scheme for the compensation of non-linear broadening effects in inverse Compton scattering gamma ray sources, based on the spectral synthesis of optimized chirped laser pulses. We developed a model to predict the required spectral phase as a function of the laser intensity and bandwidth. To verify the robustness of our scheme with regard to 3D effects, we numerically simulated electron trajectories and the radiation emission using Liénard-Wiechert potentials in the far field [@book:Jackson] for a realistic scenario (Figure \[fig:simulation\]): A $\unit{270}{\mega\electronvolt}$ LPA electron beam with $\unit{2.2}{\%}$ energy spread and $\unit{0.2}{\milli\metre\,\milli\rad}$ normalized emittance, beam size $\unit{1.8}{\micro\metre}$ and duration $\unit{10}{\femto\second}$ [@Lundh:NatPhys2011; @Plateau:PRL2012; @Weingartner:PRSTAB2012], collides with a Gaussian laser pulse of waist $w_0=\unit{20}{\micro\metre}$ within the paraxial approximation [@Harvey:PRSTAB2016; @Maroli:JApplPhys2018], with $a_0=2$, $\omega_0=\unit{1.55}{\electronvolt}$, and the temporal chirping taken from the numerical optimization in Fig. \[sim\_fig\].
![[[Simulated spectral-angular Compton photon distribution for a $\unit{50}{\pico\coulomb}$ realistic electron beam interacting with a focused laser pulse. The green curve is an on-axis lineout.]{}]{}[]{data-label="fig:simulation"}](figure4.pdf){width="0.95\columnwidth"}
Our model predictions can serve as initial conditions for active feedback optimization of inverse Compton sources, e.g. using machine learning techniques. The optimal control of the temporal laser pulse structure has already been demonstrated successfully, e.g. for the optimization of laser accelerated electron beams and x-ray production [@Streeter:APL2018; @Dann:2018]. The optimal chirping of the scattering pulse could be included in an overall feedback loop. Depending on the desired bandwidth of the ICS source, the other contributions to the total bandwidth, such as the electron energy spread and emittance, could be optimized alongside the temporal spectral shape of the scattering laser pulse by choosing proper goal functions for the optimization.
D. S. acknowledges fruitful discussions with A. G. R. Thomas and S. Dann. This work was funded in part by the US ARO grant no. W911NF-16-1-0044 and by the Helmholtz Association (Young Investigators Group VH-NG-1037).
Analytic Results for the Two-Pulse
==================================
Here we provide explicit formulas for the vector potential of the recombined two-pulse $a_\perp(\varphi)$, its envelope $a(\varphi)$, and local frequency $\omega_L(\varphi)$. First, by calculating the inverse Fourier transformation of the modulated spectrum, Eq. (5) of the main text, we immediately find the complex scalar amplitude $$\begin{aligned}
a_\perp & = \frac{a_0}{2 \sqrt[4]{1+B_2^2}}
e^{-i \omega_0 \varphi }
\left(
e^{- \frac{c_-}{2} (1+iB_2) + i B_0 + \frac{i}{2} \arctan B_2 }
+
e^{- \frac{c_+}{2} (1-iB_2) - iB_0 - \frac{i}{2} \arctan B_2}
\right)\end{aligned}$$ with $$\begin{aligned}
c_\pm &= \frac{(B_1 \pm \varphi \Delta \omega_L)^2}{1+B_2^2} \,,\end{aligned}$$ and $\varphi = t-z $. The real vector potential of a circularly polarized laser pulse is related to $a_\perp$ by $\vec a_\perp = \Re [ a_\perp \vec \epsilon ]$ with $\vec \epsilon = \vec e_x + i \vec e_y$.
The squared envelope of the laser pulse is given by $$\begin{aligned}
\label{eq:a2}
a^2(\varphi) &= \vec a_\perp^2 = | a_\perp |^2
= \frac{a_0^2}{2\sqrt{1+B_2^2}} e^{-\xi}
\left[ \cosh 2d_0\varphi
+ \cos \zeta
\right]\end{aligned}$$ where we introduced the following abbreviations $$\begin{aligned}
d_0 & = \frac{B_1 \Delta \omega_L}{1+B_2^2}\,, \qquad
\xi = \frac{B_1^2 + \varphi^2 \Delta\omega^2_L}{1+B_2^2} \,, \qquad
\zeta = 2 B_0 - B_2 \xi + \arctan B_2 \,.\end{aligned}$$
An important quantity is the infinite integral over the squared vector potential envelope, $$\begin{aligned}
\label{total_energy_eq}
\intop_{-\infty}^\infty \! {\mathrm{d}}\varphi \: a^2(\varphi) =
\frac{a_0^2\sqrt{\pi}}{2}\frac{1}{\Delta\omega_L}
\left[
1
+
\frac{e^{-\frac{B_1^2}{1+B_2^2}}}{(1+B_2^2)^\frac{1}{4}}
\,
\cos \left(
{2B_0 - \frac{B_1^2B_2}{1+B_2^2} + \frac{1}{2} \arctan B_2 } \right)
\right] \,.\end{aligned}$$ When the cosine term is maximised, i.e. when its argument is a multiple of $2\pi$, the interference between the two sub-pulses is mostly constructive.
Because $a_\perp = a \, e^{- i \Phi(\varphi)}$ we can write $\Phi = i \log \frac{a_\perp}{a}$, and $$\begin{aligned}
\label{eq:omega-analytic}
\omega_L(\varphi) &= \frac{{\mathrm{d}}\Phi }{{\mathrm{d}}\varphi}
= - \frac{{\rm Im}\,[ a_\perp'a_\perp^* ] }{a^2}
= \omega_{0} - d_0 B_2 + d_0 \: \frac{\varphi \frac{\Delta \omega_L B_2}{B_1} \sinh 2d_0 \varphi - \sin \zeta}{\cosh 2d_0\varphi + \cos \zeta} \,.\end{aligned}$$
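These closed-form expressions can be checked numerically against the explicit two-pulse amplitude given above. A minimal sketch (assuming Python with NumPy; function names and parameter values are illustrative):

```python
import numpy as np

def two_pulse_amplitude(phi, a0, w0, dwL, B0, B1, B2):
    """Explicit complex amplitude a_perp(phi) of the recombined two-pulse."""
    s = np.sqrt(1.0 + B2 ** 2)
    c_m = (B1 - phi * dwL) ** 2 / (1.0 + B2 ** 2)
    c_p = (B1 + phi * dwL) ** 2 / (1.0 + B2 ** 2)
    pre = a0 / (2.0 * np.sqrt(s)) * np.exp(-1j * w0 * phi)
    t1 = np.exp(-0.5 * c_m * (1.0 + 1j * B2) + 1j * B0 + 0.5j * np.arctan(B2))
    t2 = np.exp(-0.5 * c_p * (1.0 - 1j * B2) - 1j * B0 - 0.5j * np.arctan(B2))
    return pre * (t1 + t2)

def envelope_sq(phi, a0, dwL, B0, B1, B2):
    """Closed-form squared envelope a^2(phi) in terms of d0, xi and zeta."""
    d0 = B1 * dwL / (1.0 + B2 ** 2)
    xi = (B1 ** 2 + phi ** 2 * dwL ** 2) / (1.0 + B2 ** 2)
    zeta = 2.0 * B0 - B2 * xi + np.arctan(B2)
    return a0 ** 2 / (2.0 * np.sqrt(1.0 + B2 ** 2)) * np.exp(-xi) \
        * (np.cosh(2.0 * d0 * phi) + np.cos(zeta))

# Illustrative parameters (in units of omega_0); compare |a_perp|^2 with the closed form
pars = dict(a0=2.0, dwL=0.1, B0=1.4, B1=4.5, B2=-4.6)
phi = np.linspace(-150.0, 150.0, 4001)
a = two_pulse_amplitude(phi, w0=1.0, **pars)
assert np.allclose(np.abs(a) ** 2, envelope_sq(phi, **pars))

# Instantaneous frequency omega_L = dPhi/dphi from the numerical phase derivative
# (a_perp = a e^{-i Phi}); reliable where the envelope is not negligible
omega_L_num = -np.gradient(np.unwrap(np.angle(a)), phi)
```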
Defining the Matched Gaussian Pulse
===================================
We match a Gaussian pulse with constant frequency $\omega_0$ to the two-pulse, requiring the same effective peak amplitude and the same total energy in the pulse, $$\begin{aligned}
a_\mathrm{matched}(\varphi) = a_\mathrm{eff} \, e^{-i\omega_0\varphi} \, e^{-\varphi^2/2\Delta \varphi_\mathrm{eff}^2}\,.\end{aligned}$$ First, the amplitude is matched by evaluating Eq. at $\varphi=0$ with the approximation $\zeta \to 0$, yielding $$\begin{aligned}
a_\mathrm{eff}^2 &= a_0^2 \: \frac{e^{- \frac{B_1^2}{1+B_2^2}}}{\sqrt{1+B^2_2}} \,.\end{aligned}$$
Second, the pulse duration is matched by the requirement that both the chirped two-pulse and the matched Gaussian have the same energy, $$\begin{aligned}
w = \int {\mathrm{d}}\varphi \left| \frac{ {\mathrm{d}}a_\perp }{{\mathrm{d}}\varphi } \right|^2 \,,\end{aligned}$$ i.e. $$\begin{aligned}
\Delta\varphi_\mathrm{eff} = \frac{w}{\sqrt{\pi} \omega_0^2 a_\mathrm{eff}^2} \,.\end{aligned}$$
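A minimal sketch of these matching conditions (assuming Python with NumPy; the function name is illustrative, and the energy $w$ is approximated here by $\omega_0^2\int a^2\,{\mathrm{d}}\varphi$ using the analytic two-pulse integral above, which assumes a slowly varying envelope, $\Delta\omega_L \ll \omega_0$):

```python
import numpy as np

def matched_gaussian(a0, dwL, B0, B1, B2):
    """Effective amplitude a_eff and duration dphi_eff of the matched Gaussian pulse.
    The pulse energy is approximated by w ~ w0^2 * int a^2 dphi (slowly varying
    envelope), with the integral taken from the analytic two-pulse expression."""
    s2 = 1.0 + B2 ** 2
    a_eff2 = a0 ** 2 * np.exp(-B1 ** 2 / s2) / np.sqrt(s2)
    arg = 2.0 * B0 - B1 ** 2 * B2 / s2 + 0.5 * np.arctan(B2)
    a2_integral = 0.5 * a0 ** 2 * np.sqrt(np.pi) / dwL \
        * (1.0 + np.exp(-B1 ** 2 / s2) / s2 ** 0.25 * np.cos(arg))
    dphi_eff = a2_integral / (np.sqrt(np.pi) * a_eff2)
    return np.sqrt(a_eff2), dphi_eff

# Illustrative values from the example discussed in the main text: a_eff ~ 0.58
print(matched_gaussian(2.0, 0.1, 1.4, 4.5, -4.6))
```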
Derivation of the Formula for the on-axis Spectrum
==================================================
Under the assumption that we can use classical electrodynamics to calculate the radiation spectrum, the energy and angular differential photon distribution is given by [@book:Jackson] $$\begin{aligned}
\label{N1}
\frac{{\mathrm{d}}N}{{\mathrm{d}}\omega' {\mathrm{d}}\Omega} = \frac{\omega'}{4\pi^2} | \vec n' \times (\vec n' \times \vec j)|^2 \,,\end{aligned}$$ with the Fourier transformed electron current $$\begin{aligned}
\vec j (\omega',\vec n') = - e \int \! {\mathrm{d}}s \, \vec u(s) \, e^{i \omega' [ t(s) - \vec n' \cdot \vec x(s)]} \,,\end{aligned}$$ and $\vec n'$ the direction under which the radiation is observed. Here, $s$ denotes the electron’s proper time, parametrizing the electron orbits $t(s),\vec x(s)$ and four-velocity components $\gamma(s) = {\mathrm{d}}t/{\mathrm{d}}s = \sqrt{1+\vec u^2}$ and $\vec u = {\mathrm{d}}\vec x/{\mathrm{d}}s$, which is a solution of the Lorentz force equation $$\begin{aligned}
\frac{{\mathrm{d}}\vec u}{{\mathrm{d}}s} = \frac{e}{m} ( \gamma \vec E + \vec u\times \vec B ) \,.\end{aligned}$$ For on-axis radiation, $\vec n' = - \vec e_z$, the double vector product can be simplified to $$\begin{aligned}
| \vec n' \times (\vec n' \times \vec j)|^2 = | \vec j_\perp |^2 \,,\end{aligned}$$ and, with $\vec u_\perp = -\vec a_\perp$ and changing integration variables from proper time to the laser phase $\varphi = t-z$ via $ {\mathrm{d}}\varphi / {\mathrm{d}}s \approx 2\gamma $ (for an initial $\gamma\gg1$), the photon distribution, Eq. (\[N1\]), turns into $$\begin{aligned}
\left.\frac{{\mathrm{d}}N}{{\mathrm{d}}\omega' {\mathrm{d}}\Omega}\right|_\mathrm{on-axis}
= \frac{e^2 \omega'}{4\pi^2 (2 \gamma)^2} \: \left| \int \! {\mathrm{d}}\varphi \, \vec a_\perp \: e^{i\omega' [ t(\varphi) + z(\varphi) ] } \right|^2 \,.\end{aligned}$$ By noting that $t(\varphi) + z(\varphi) = (2\gamma)^{-1} \int \! {\mathrm{d}}\varphi [ \gamma(\varphi) + u_z(\varphi) ] $, with $\gamma(\varphi) + u_z(\varphi) = ( 1 + \vec a_\perp^2)/(2\gamma)$, and by using the definition of the normalized frequency of the emitted photon, $y = \omega' / (4\gamma^2 \omega_{0})$, we eventually arrive at Eq. (6) of the main text. In Fig. \[fig:onaxis\] we show the on-axis spectra for optimized laser pulses of various intensities and bandwidth $\Delta\omega_L/\omega_0=0.1$.
![On-axis photon spectra from optimized laser pulses with the chirping parameters $\vec B$ calculated using the model Eqns. (8) with $\chi=1$.[]{data-label="fig:onaxis"}](figureS1.pdf){width="0.6\columnwidth"}
[^1]: Quantum recoil and spin effects can be neglected for electron energies below 200 MeV, but could easily be included and do not change any qualitative findings on the optimal chirp [@Seipt:PRA2015]. Radiation reaction effects and recoil broadening effects due to multi-photon emission can be neglected when the number of scatterings per electron is less than one, $N_\mathrm{sc} \simeq \frac{2\sqrt{\pi} \alpha}{3} a_0^2 \frac{\omega_0}{\Delta \omega_L} <1$, i.e. $a_0 < 3.4$ for $\Delta \omega_L/\omega_0=0.1$ [@Rykovanov:JPB2014; @Terzic:PRL2014]. Otherwise the recoil contribution to the bandwidth is proportional to $(N_\mathrm{sc}-1) \frac{2\gamma\omega_0}{m}$.
[^2]: Due to symmetry reasons, $-\vec B$ produces the same laser pulse as $\vec B$. Moreover, $B_1$ and $B_2$ need to have opposite signs; we chose $B_1>0$ here.
---
abstract: 'Many proposals have been put forth for controlling quantum phenomena, including open-loop, adaptive feedback, and real-time feedback control. Each of these approaches has been viewed as operationally, and even physically, distinct from the others. This work shows that all such scenarios inherently share the same fundamental control features residing in the topology of the landscape relating the target physical observable to the applied controls. This unified foundation may provide a basis for development of hybrid control schemes that would combine the advantages of the existing approaches to achieve the best overall performance.'
author:
- Alexander Pechen
- Constantin Brif
- Rebing Wu
- Raj Chakrabarti
- Herschel Rabitz
title: General unifying features of controlled quantum phenomena
---
Steering the dynamics of quantum systems by means of external controls is a central goal in many areas of modern science, ranging from the creation of selective molecular transformations to quantum information processing [@Brif2010NJP]. The control may be either coherent (e.g., a shaped laser pulse [@Ra88-90; @OLC; @JuRa92; @Ra00; @AFC-exp]) or incoherent (e.g., a tailored environment [@ICE] or a sequence of quantum measurements [@QMC]). Quantum control is proving to be successful in the laboratory for manipulating a broad variety of physical, chemical, and biologically relevant processes [@Brif2010NJP; @Ra00; @AFC-exp].
Many seemingly distinct approaches have been developed for controlling quantum phenomena [@suppl], including open-loop control (OLC) in which model-based designs are directly applied in experiments [@OLC], adaptive feedback control (AFC) which is a measurement-guided closed-loop laboratory procedure including resetting the system to the initial state on each loop iteration [@JuRa92; @Ra00; @AFC-exp], and real-time feedback control (RTFC) which involves quantum measurement back-action on the system upon traversing the loop [@Belavkin; @Wiseman]. Each of these control schemes has been argued to be operationally, and even physically, distinct from the others. This Rapid Communication shows that all such scenarios are unified on a fundamental level by the common character of the control landscape [@RHR04-06; @Hsieh08] which relates the physical objective $J[c]$ to the control variables $c$. This general foundation makes it possible to extend the powerful landscape-based methods of optimality analysis, originally developed for OLC and AFC [@RHR04-06], to RTFC. Furthermore, one can envision combinations of such schemes [@suppl], or even the prospect of new unanticipated ones, possibly arising in the future, all of which will share the same fundamental control landscape features.
The structure of the control landscape (i.e., its topology characterized by the nature of the local and global extrema) determines the efficacy of a search for optimal solutions to the posed control problem [@Ho06; @MHR08]. Searching for the global maximum of the objective function $J[c]$ in the laboratory would be significantly hindered by the existence of local maxima that act as traps during the optimization procedure. Even stochastic optimization algorithms could be impeded by a high density of traps [@DiMa01]. If the objective function has no traps, then, in principle, nothing lies in the way of reaching the global maximum (i.e., the best possible yield), except limited controls. This work establishes that a common landscape topology is shared by all quantum control schemes, leading to two general conclusions about controlling quantum phenomena. First, there are no local optima to act as traps on the landscape for a wide class of control problems, thereby enabling highly efficient searches for globally optimal controls. Second, for each particular objective $J$, there exists a single special control $c_*$ which is universally optimal for all initial states of a system; this result implies that $c_*$ provides inherent robustness to variations of the initial system’s state. For practical applications, these generic fundamental properties can facilitate more flexible laboratory implementations of quantum control as well as guide the design of suitable algorithms for laboratory optimization [@Roslund09].
The control landscapes for OLC and AFC were studied in recent works [@RHR04-06; @Hsieh08; @Ho06; @MHR08; @OpenSystemLandscape]. For closed quantum systems, a control objective $J$ can be cast as a function of unitary evolution operators. Assuming evolution-operator controllability (i.e., that any unitary evolution can be produced by the available controls), the landscapes were shown to have no traps for typical objectives, including the expectation value of an observable [@RHR04-06] and the fidelity of a unitary quantum gate [@Hsieh08]. The absence of landscape traps was also analyzed for closed systems in the space of actual controls [@Ho06]. The landscapes for OLC and AFC of open quantum systems can be studied by casting an objective as a function of completely positive, trace-preserving maps (i.e., Kraus maps [@suppl; @Kraus83]) which describe the system evolution. Assuming Kraus-map controllability (i.e., that any Kraus map can be produced by the available controls), the trap-free landscape topology was established for open-system observable control [@OpenSystemLandscape].
The analysis of the control landscapes for RTFC is performed below to significantly extend the prior analysis [@OpenSystemLandscape] and provide a single complete framework unifying the landscapes for all quantum control schemes, including OLC, AFC, and RTFC. Two distinct approaches to RTFC of quantum systems are considered. The first one is based on feedback employing measurements of some quantum system output channel to guide a classical controller [@Wiseman; @DHJMT00; @RTFC-meas-exp]. The other feedback approach is a quantum analog of Watt’s flyball governor—a self-regulating quantum machine. In this scenario (called coherent RTFC), no measurements with a classical output signal are involved, and instead another quantum subsystem is employed to facilitate the control action, so that the evolution of the composite quantum system, consisting of the “plant” (the subsystem of interest) and “controller” (the auxiliary subsystem), is coherent [@CohRTFC-theo; @CohRTFC-exp]. The performance of practical RTFC controllers (classical, quantum, or both) can be, in principle, optimized using AFC [@suppl], possibly guided by an initial OLC design, further enhancing the significance of a universal landscape topology common to all quantum control approaches.
In RTFC based on selective measurements of a single quantum system, the outcome of each measurement is random. After averaging over the random processes corresponding to all possible measurement outcomes, feedback control produces Kraus-type evolution of the controlled system. In RTFC based on measurements of an ensemble of quantum systems, the ensemble average produces a Kraus map as well. In coherent RTFC, if the plant and controller are initially prepared in a product state, the evolution of the plant is once again represented by a Kraus map. Therefore, all of these incarnations of RTFC generate Kraus-type evolution. Arbitrary Kraus maps can be engineered, for example, by using a simple quantum measurement combined with coherent feedback actions [@LlVi01]. Adopting a previous analysis [@OpenSystemLandscape], we show, under the Kraus-map controllability assumption, that quantum control landscapes have no traps for all considered types of RTFC. We will start with basic definitions, then prove that each considered type of RTFC produces Kraus-type evolution, and, finally, use these results to determine the quantum control landscape topology. The conclusion of the Rapid Communication will then draw the analysis together for a unified control landscape formulation including OLC, AFC, and RTFC.
In practically important situations the evolution of an open quantum system can be represented by a Kraus map [@suppl; @Kraus83]. For an $n$-level system, any such map $\Phi$ can be cast as the operator-sum representation (OSR) [@Kraus83]: $\Phi(\rho) = \sum_{\nu=1}^L
K_{\nu} \rho K^{\dagger}_{\nu}$, where $\rho$ is the density matrix (a positive, unit-trace $n \times n$ matrix) and $\{ K_{\nu}
\}_{\nu=1}^L$ is a set of Kraus operators (complex $n \times n$ matrices satisfying the trace-preserving condition $\sum_{\nu=1}^L
K^{\dagger}_{\nu} K_{\nu} = \mathbb{I}$). The OSR is not unique: Any map $\Phi$ can be represented using infinitely many sets of Kraus operators. We denote the set of all Kraus maps by $\mathcal{K}_n$. The Kraus-map description of open-system dynamics is very general and includes both Markovian and non-Markovian regimes [@LiBiWh01].
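As a minimal concrete illustration of the OSR, the following sketch (assuming Python with NumPy; the amplitude-damping Kraus operators are a standard qubit example, not taken from the text) applies a Kraus map to a density matrix and checks the trace-preserving condition:

```python
import numpy as np

def apply_kraus(kraus_ops, rho):
    """Operator-sum representation: Phi(rho) = sum_nu K_nu rho K_nu^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

# Illustrative example: amplitude damping on a qubit (n = 2) with decay probability p
p = 0.3
K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - p)]])
K1 = np.array([[0.0, np.sqrt(p)], [0.0, 0.0]])
kraus = [K0, K1]

# Trace-preserving condition: sum_nu K_nu^dagger K_nu = identity
assert np.allclose(sum(K.conj().T @ K for K in kraus), np.eye(2))

rho0 = np.array([[0.5, 0.5], [0.5, 0.5]])       # pure state |+><+|
rho1 = apply_kraus(kraus, rho0)
assert np.isclose(np.trace(rho1).real, 1.0)     # the trace is preserved
```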
A generalized quantum measurement with $N_0$ possible outcomes $\{O_{\alpha}\}$ ($\alpha = 1,2,\ldots, N_0$) is characterized by a family of Kraus operators $\{K_{\alpha,\beta}\}$. We denote the set of outcomes and corresponding Kraus operators for a given quantum measurement as $O := \{O_{\alpha} , K_{\alpha,\beta} \}$. In particular, the projective measurement of a Hermitian observable $\hat{O} = \sum_{\alpha} O_{\alpha} \Pi_{\alpha}$ with eigenvalues $O_{\alpha}$ and spectral projectors $\Pi_{\alpha}$ corresponds to the case $K_{\alpha,\beta} = \Pi_{\alpha} \delta_{\alpha,\beta}$. If the measurement starts at time $t$ and the system density matrix before the measurement is $\rho(t)$, then the probability of the measurement outcome $O_{\alpha}$ will be $p_{\alpha} = \text{Tr} \big[
\sum_{\beta} K_{\alpha,\beta} \rho(t) K^{\dagger}_{\alpha,\beta}
\big]$.
When a selective measurement of duration $\tau_{\text{m}}$ is performed and the outcome $O_{\alpha}$ is observed, the density matrix evolves as $\rho(t) \to \rho_{\alpha}(t+\tau_{\text{m}})$, where $$\label{eq:selective}
\rho_{\alpha}(t+\tau_{\text{m}}) = \frac{1}{p_{\alpha}} \sum_{\beta}
K_{\alpha,\beta} \rho(t) K^{\dagger}_{\alpha,\beta} .$$ If the measurement is nonselective (i.e., the measurement outcome is not observed), the corresponding evolution $\rho(t) \to \rho(t+\tau_{\text{m}})$ will be the average over all possible measurement outcomes: $$\label{eq:non-selective}
\rho(t+\tau_{\text{m}}) = \sum_{\alpha} p_{\alpha}
\rho_{\alpha}(t+\tau_{\text{m}}) = \sum_{\alpha,\beta}
K_{\alpha,\beta} \rho(t) K^{\dagger}_{\alpha,\beta} .$$ The evolution in Eq. (\[eq:non-selective\]) defines a Kraus map $\Omega_O : \rho(t) \to \Omega_O [\rho(t)]=\rho(t+\tau_{\text{m}})$, which is completely determined by the measurement $O$.
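For a projective measurement, Eqs. (\[eq:selective\]) and (\[eq:non-selective\]) take a particularly simple form. A minimal sketch (assuming Python with NumPy; the function name and the qubit example are illustrative):

```python
import numpy as np

def projective_measurement(O_hat, rho):
    """Projective measurement of a Hermitian observable O_hat = sum_a O_a Pi_a
    (nondegenerate case). Returns outcome values, probabilities p_a, the selective
    post-measurement states, and the nonselective state Omega_O[rho]."""
    evals, evecs = np.linalg.eigh(O_hat)
    projectors = [np.outer(v, v.conj()) for v in evecs.T]        # Pi_a = |a><a|
    probs = np.array([np.trace(P @ rho).real for P in projectors])
    post = [P @ rho @ P / p if p > 0 else 0.0 * rho for P, p in zip(projectors, probs)]
    nonselective = sum(P @ rho @ P for P in projectors)          # average over outcomes
    return evals, probs, post, nonselective

# Illustrative example: measuring sigma_z on the state |+><+|
sz = np.diag([1.0, -1.0])
rho = 0.5 * np.ones((2, 2))
outcomes, probs, post_states, rho_avg = projective_measurement(sz, rho)
# probs -> [0.5, 0.5]; rho_avg is the fully dephased state diag(0.5, 0.5)
```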
In RTFC, the measurement (or, in another variation, the interaction with an auxiliary quantum “controller”) alters the evolution of the quantum system at each feedback iteration. Thus, the same quantum system is followed in real time in the feedback loop. Implementing RTFC on the atomic or molecular scale appears to be a very challenging technical problem, but its practical realization promises to significantly improve the ability to stabilize and control quantum systems. Consider now RTFC of a single quantum system, where a discrete series of selective measurements is performed and each measurement is followed by a feedback action (continuous feedback can be treated as the limit of the discrete case, resulting in the same control landscape topology). For the discrete case, the $i$th iteration of the feedback process consists of a measurement $O^i$ with possible outcomes $\{O^i_{\alpha}\}$ at time $t_i$, followed by a feedback action dependent on the measurement outcome (the measured observables and feedback actions can be distinct at different iterations). The feedback action may be generally represented by a Kraus map that depends on the measurement outcome (a special case of the feedback action is a unitary transformation corresponding to a coherent control). If at the $i$th iteration the measurement outcome $O^i_{\alpha}$ is observed, then the feedback action (of duration $\tau_{\text{f}}$) described by the Kraus map $\Lambda^i_{\alpha}$ will be applied to the system, so that the density matrix will evolve as $\rho(t_i) \to \rho_{\alpha}(t_i+\tau_{\text{m}}+\tau_{\text{f}})
= \Lambda^i_{\alpha}[\rho_{\alpha}(t_i+\tau_{\text{m}})]$, where $\rho_{\alpha}(t_i + \tau_{\text{m}})$ is of the form (\[eq:selective\]), with the corresponding probability $p_{\alpha}^i$. The map $\Lambda^i_{\alpha}$ also includes the free evolution and influence of the environment. The system evolution after one feedback iteration, averaged over all possible measurement outcomes, $\rho(t_i + \tau_{\text{m}} + \tau_{\text{f}}) =
\sum_{\alpha} p_{\alpha}^i
\rho_{\alpha}(t_i+\tau_{\text{m}}+\tau_{\text{f}})$, is therefore given by the transformation $\rho(t_i) \to \rho(t_i + \tau_{\text{m}}
+ \tau_{\text{f}}) = \Phi^i [\rho(t_i)]$, where $$\label{eq:one-iteration}
\Phi^i [\rho(t_i)]
= \sum_{\alpha} \Lambda^i_{\alpha} \bigg[ \sum_{\beta}
K^i_{\alpha,\beta} \rho(t_i) (K^i_{\alpha,\beta})^{\dagger} \bigg]$$ and $\{ K^i_{\alpha,\beta} \}$ are the Kraus operators characterizing the measurement $O^i$. Let the OSR of the feedback-action Kraus map $\Lambda^i_{\alpha}$ be $\Lambda^i_{\alpha}(\rho) = \sum_{\nu}
L^i_{\nu,\alpha} \rho (L^i_{\nu,\alpha})^{\dagger}$, where $\sum_{\nu}
(L^i_{\nu,\alpha})^{\dagger} L^i_{\nu,\alpha} = \mathbb{I}$. Then Eq. (\[eq:one-iteration\]) can be rewritten as $$\label{eq:Phi-one-iteration}
\Phi^i [\rho(t_i)] = \sum_{\nu,\alpha,\beta} Z^i_{\nu,\alpha,\beta} \rho(t_i)
(Z^i_{\nu,\alpha,\beta})^{\dagger} ,$$ where $Z^i_{\nu,\alpha,\beta} = L^i_{\nu,\alpha}
K^i_{\alpha,\beta}$. Since $\sum_{\nu,\alpha,\beta}
(Z^i_{\nu,\alpha,\beta})^{\dagger} Z^i_{\nu,\alpha,\beta} =
\mathbb{I}$, the transformation $\Phi^i$ of Eqs. (\[eq:one-iteration\]) and (\[eq:Phi-one-iteration\]) is a Kraus map. Thus, Eq. (\[eq:Phi-one-iteration\]) shows that the average evolution for one feedback iteration is of the Kraus type for arbitrary (generalized) quantum measurement and any (coherent or incoherent) feedback action. For a special case of a pure, least-disturbing measurement and a coherent feedback action, this result was obtained in Ref. [@LlVi01].
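A minimal sketch of this composition (assuming Python with NumPy; the function name and the qubit measure-and-flip example are illustrative) that forms the operators $Z^i_{\nu,\alpha,\beta} = L^i_{\nu,\alpha} K^i_{\alpha,\beta}$ and verifies the completeness relation:

```python
import numpy as np

def feedback_iteration_kraus(measurement_ops, feedback_maps):
    """Kraus operators Z_{nu,alpha,beta} = L_{nu,alpha} K_{alpha,beta} of one
    averaged feedback iteration.

    measurement_ops: dict alpha -> list of measurement Kraus operators K_{alpha,beta}
    feedback_maps:   dict alpha -> list of Kraus operators L_{nu,alpha} of the
                     outcome-conditioned feedback action"""
    return [L @ K
            for alpha, Ks in measurement_ops.items()
            for K in Ks
            for L in feedback_maps[alpha]]

# Illustrative qubit example: projective sigma_z measurement followed by a coherent
# feedback that flips the qubit when the outcome corresponds to |1>
P_plus, P_minus = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
meas = {0: [P_plus], 1: [P_minus]}
fb = {0: [np.eye(2)], 1: [X]}            # a unitary feedback is a one-operator Kraus map

Z = feedback_iteration_kraus(meas, fb)
assert np.allclose(sum(z.conj().T @ z for z in Z), np.eye(2))   # completeness relation

rho = np.array([[0.3, 0.2], [0.2, 0.7]])
rho_out = sum(z @ rho @ z.conj().T for z in Z)
# rho_out = diag(1, 0): on average, this feedback steers any input toward |0><0|
```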
The entire feedback process for controlling the system from the initial to final time is characterized by a sequence of measurements and feedback actions: $F=\{O^1,F^1, \ldots, O^N,F^N\}$, where $O^i$ ($i = 1,2,\ldots,N$) is the measurement for the $i$th iteration, $F^i=\{\Lambda^i_{\alpha}\}$ is the set of all feedback actions (Kraus maps) for the $i$th iteration, and $N$ is the number of iterations. Different trials of the feedback process will, in general, produce distinct evolutions, resulting in different system states at the final time $T$. The average output of the feedback process is given by averaging over all possible evolutions, which produces the transformation $\rho(T)=
\Phi_F [\rho(0)]$, where $\Phi_F = \Phi^N \circ \cdots \circ \Phi^2
\circ \Phi^1$ is a Kraus map given by the composition of one-iteration Kraus maps of the form (\[eq:Phi-one-iteration\]).
Consider now RTFC of an ensemble of identical quantum systems, where measurements record the expectation value $\overline{O} =
\sum_{\alpha} p_{\alpha} O_{\alpha} = \text{Tr} [ \hat{O} \rho(t)]$ of an observable $\hat{O}$. The density matrix representing the state of the ensemble undergoes a transformation characteristic for a nonselective measurement (i.e., with averaging over all possible measurement outcomes): $\rho(t) \to \rho(t+\tau_{\text{m}}) =
\Omega_O [\rho(t)]$, where the Kraus map $\Omega_O$ is defined by Eq. (\[eq:non-selective\]). The feedback action conditioned upon the measured value $\overline{O}$ is generally represented by a Kraus map $\Lambda_{\overline{O}}$, so that the ensemble evolution for one feedback iteration is $\rho(t_i) \to \rho(t_i + \tau_{\text{m}} +
\tau_{\text{f}}) = \Phi^i [\rho(t_i)]$, where the Kraus map $\Phi^i
= \Lambda^i_{\overline{O}} \circ \Omega^i_O$ is the composition of the Kraus maps representing the ensemble measurement and feedback action for the $i$th iteration. Similar to the single-system case, the overall transformation for the entire feedback process is $\rho(T)=
\Phi_F [\rho(0)]$, where the Kraus map $\Phi_F = \Phi^N \circ \cdots
\circ \Phi^2 \circ \Phi^1$ is again the composition of one-iteration Kraus maps.
Consider now coherent RTFC, where the quantum subsystem of interest (the plant) interacts with an auxiliary quantum subsystem (the controller), and the evolution of the composite system is coherent: $\rho_{\text{tot}}(T) = U(T) \rho_{\text{tot}}(0) U^{\dagger}(T)$. Here, $\rho_{\text{tot}}$ and $U(T)$ are the density matrix and the unitary evolution operator, respectively, for the composite system. An external coherent control field (which is generally time-dependent) can act on the plant, controller, or both. The state of the plant at any time $t$ is represented by the reduced density matrix $\rho(t) =
\text{Tr}_{\text{c}}[\rho_{\text{tot}}(t)]$, where $\text{Tr}_{\text{c}}$ denotes the trace over the controller’s degrees of freedom. If the initial state of the composite system is in tensor product form, $\rho_{\text{tot}}(0) = \rho(0) \otimes
\rho_{\text{c}}(0)$, then the evolution of the plant is represented by a Kraus map: $\rho(T) = \Phi_U[\rho(0)] = \sum_{\nu} K^U_{\nu} \rho(0)
(K^U_{\nu})^{\dagger}$, where the Kraus operators $K^U_{\nu}$ depend on the evolution operator $U(T)$ of the composite system [@Kraus83].
With the analysis above, we can now assess the topology of quantum control landscapes for all considered types of RTFC. To be specific, a prevalent problem in quantum control is to maximize the expectation value of some target observable of the controlled system. As we established above, in measurement-based RTFC for both a single quantum system and an ensemble, the average density matrix at the final time $T$ is given by $\rho (T) = \Phi_F [\rho(0)]$, where the Kraus map $\Phi_F$ is determined by the set $F$ of measurements and feedback actions at all iterations of the feedback process. The goal is to find an optimal feedback process $F_{\text{opt}}$ that maximizes the expectation value of the target observable $\hat{A}$ at the final time, $\overline{A}(T) = \text{Tr} [\hat{A} \rho(T)]$. The feedback process $F$ plays the role of a set of controls, and the objective function has the form: $J[F] = \text{Tr} \big\{\hat{A} \Phi_F[\rho(0)]
\big\}$. An OSR of the Kraus map $\Phi_F$ is given by $\Phi_F
[\rho(0)] = \sum_{\nu} K^F_{\nu} \rho(0) (K_{\nu}^F)^{\dagger}$, where $\{K^F_{\nu}\}$ is a set of Kraus operators depending on the feedback process $F$. The objective $J[F]$ can be cast as a function of Kraus operators: $$\label{eq:OF-K}
J[\{ K^F_{\nu} \}] = \text{Tr} \left[ \hat{A}
\textstyle{\sum_{\nu}} K^F_{\nu}
\rho(0) (K_{\nu}^F)^{\dagger} \right] .$$ We assume that the set $\mathcal{F}$ of all available feedback processes (i.e., all available measurements and feedback actions) is rich enough to produce all possible Kraus maps, i.e., that the system is Kraus-map controllable. For coherent RTFC, the plant also undergoes a Kraus-type evolution, $\rho(T) = \Phi_U[\rho(0)]$, and the control objective $\overline{A}(T)$ can also be cast as a function of the Kraus operators: $J[\{ K^U_{\nu} \}] = \text{Tr} \big[ \hat{A}
\sum_{\nu} K^U_{\nu} \rho(0) (K_{\nu}^U)^{\dagger} \big]$, which has the same form as Eq. (\[eq:OF-K\]). For a sufficiently large controller prepared initially in a pure state, evolution-operator controllability of the composite system is a sufficient condition for Kraus-map controllability of the plant [@Wu07].
Building on the recent OLC- and AFC-motivated analysis of the control landscape topology [@OpenSystemLandscape], one finds that, for a system that is Kraus-map controllable, the objective function of the form (\[eq:OF-K\]) has no local maxima for any initial state $\rho(0)$ and any Hermitian operator $\hat{A}$. This result implies that *quantum control landscapes for OLC, AFC, measurement-based RTFC, coherent RTFC, and combinations thereof, all have the same trap-free topology*, provided that the available controls are sufficient to produce any Kraus map. Furthermore, all local extrema of the objective function of the form (\[eq:OF-K\]) are saddles [@OpenSystemLandscape] that can be easily evaded by a suitable algorithm guiding an ascent over the landscape. The trap-free control landscape topology is established above in the space of Kraus maps, $\mathcal{K}_n$. The control landscape will also be trap-free in the space of actual controls, $\mathcal{C}$, if the tangent map from $\mathcal{C}$ to $\mathcal{K}_n$ is surjective everywhere in $\mathcal{C}$. Although, in general, there exist so-called singular controls [@Bonnard2003] at which surjectivity does not hold, numerical results show that their impact on the search for optimal controls should be negligible [@Wu2009].
Another important consequence of controllable Kraus-type evolution for various types of quantum OLC, AFC and RTFC is the existence of a special control (e.g., for measurement-based RTFC, a special feedback process $F_{\ast}$) that is optimal for *all* initial states of the system. Let $\rho(T)$ be the state maximizing the target expectation value: $\text{Tr}[\hat{A} \rho(T)] = \max_{\rho}
\text{Tr}(\hat{A} \rho)$, and let the spectral decomposition of this final state be $\rho(T) = \sum_{\alpha=1}^n p_{\alpha}
|u_{\alpha}\rangle \langle u_{\alpha} |$, where $p_{\alpha}$ is the probability to find the system in the state $|u_{\alpha}\rangle$. For an arbitrary orthonormal basis $\{ v_{\beta} \}$ in the system’s Hilbert space, define operators $K_{\alpha,\beta} = p_{\alpha}^{1/2}
|u_{\alpha}\rangle \langle v_{\beta} |$. The Kraus map built from these operators, $\Phi_{\ast}(\rho) = \sum_{\alpha,\beta=1}^n
K_{\alpha,\beta} \rho K_{\alpha,\beta}^{\dagger}$ generates evolution $\Phi_{\ast}[\rho(0)] = \rho(T)$ for all initial states $\rho(0)$ [@Wu07]. Therefore, the control that produces the Kraus map $\Phi_{\ast}$ (e.g., for measurement-based RTFC, the feedback process $F_{\ast} \in \mathcal{F}$ such that $\Phi_{F_{\ast}} = \Phi_{\ast}$) will be optimal for all initial states and this control will be robust to variations of the initial system state.
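A minimal numerical sketch of this construction (assuming Python with NumPy; the function name is illustrative, and $\rho(T)$ is taken here as the rank-one projector onto the top eigenvector of $\hat A$, i.e. $p_1=1$, with $\{v_\beta\}$ the computational basis):

```python
import numpy as np

def universally_optimal_kraus(A_hat, n):
    """Kraus operators K_{alpha,beta} = sqrt(p_alpha) |u_alpha><v_beta| of the map
    Phi_* that sends every initial state to a state maximizing Tr[A rho]; here the
    maximizer is the projector onto the top eigenvector of A_hat."""
    evals, evecs = np.linalg.eigh(A_hat)
    u = evecs[:, -1]                                   # eigenvector of the largest eigenvalue
    basis = np.eye(n)
    return [np.outer(u, basis[b]) for b in range(n)]   # sqrt(p) = 1 for the single u_alpha

# Illustrative check on a random 3-level system
rng = np.random.default_rng(0)
n = 3
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = (M + M.conj().T) / 2                               # Hermitian target observable
kraus = universally_optimal_kraus(A, n)
assert np.allclose(sum(K.conj().T @ K for K in kraus), np.eye(n))   # trace preserving

# Any initial state reaches the global maximum of the landscape, max_rho Tr[A rho]
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
rho0 = np.outer(psi, psi.conj())
rho0 = rho0 / np.trace(rho0)
rhoT = sum(K @ rho0 @ K.conj().T for K in kraus)
assert np.isclose(np.trace(A @ rhoT).real, np.linalg.eigvalsh(A).max())
```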
This work shows that the operationally and technologically distinct quantum control approaches of OLC, AFC, and RTFC share, under the condition of Kraus-map controllability, a unified control landscape structure implying two common fundamental properties. First, all such control schemes are characterized by the absence of landscape traps, with all local extrema being saddles. Second, special controls exist which are universally optimal for all initial states. These findings establish that (1) there are no inherent landscape features hindering attainment of the highest possible control yield and (2) suitable controls can provide broad scale robustness to variations of the initial conditions. Moreover, these properties are valid for any quantum control scheme which produces Kraus-type evolution of the system, including feedback-based and open-loop approaches, as well as their combinations. The unification of these seemingly different approaches at a conceptual level (see figures in supplementary material [@suppl]) should, in turn, provide a basis to ultimately unite the currently distinct laboratory realizations of quantum feedback control and open-loop control to attain the best performance under all possible conditions.
This work was supported by NSF and ARO.
[99]{}
C. Brif, R. Chakrabarti, and H. Rabitz, [New J. Phys. [**12**]{}, 075008 (2010).](http://dx.doi.org/10.1088/1367-2630/12/7/075008) A. P. Peirce, M. A. Dahleh, and H. Rabitz, [Phys. Rev. A [**37**]{}, 4950 (1988);](http://dx.doi.org/10.1103/PhysRevA.37.4950) S. Shi and H. Rabitz, [J. Chem. Phys. [**92**]{}, 364 (1990).](http://dx.doi.org/10.1063/1.458438)
N. Timoney *et al.*, [Phys. Rev. A [**77**]{}, 052334 (2008);](http://dx.doi.org/10.1103/PhysRevA.77.052334) M. J. Biercuk *et al.*, [*ibid.* [**79**]{}, 062324 (2009);](http://dx.doi.org/10.1103/PhysRevA.79.062324) Y. Silberberg, [Annu. Rev. Phys. Chem. [**60**]{}, 277 (2009);](http://dx.doi.org/10.1146/annurev.physchem.040808.090427) F. Krausz and M. Ivanov, [Rev. Mod. Phys. [**81**]{}, 163 (2009).](http://dx.doi.org/10.1103/RevModPhys.81.163) R. S. Judson and H. Rabitz, [Phys. Rev. Lett. [**68**]{}, 1500 (1992).](http://dx.doi.org/10.1103/PhysRevLett.68.1500)
H. Rabitz *et al.*, [Science [**288**]{}, 824 (2000).](http://dx.doi.org/10.1126/science.288.5467.824)
A. Assion *et al.*, [Science [**282**]{}, 919 (1998);](http://dx.doi.org/10.1126/science.282.5390.919) D. Meshulach and Y. Silberberg, [Nature (London) [**396**]{}, 239 (1998);](http://dx.doi.org/10.1038/24329) R. Bartels *et al.*, [*ibid.* [**406**]{}, 164 (2000);](http://dx.doi.org/10.1038/35018029) T. Brixner *et al.*, [*ibid.* [**414**]{}, 57 (2001);](http://dx.doi.org/10.1038/35102037) J. L. Herek *et al.*, [*ibid.* [**417**]{}, 533 (2002);](http://dx.doi.org/10.1038/417533a) R. J. Levis, G. M. Menkir, and H. Rabitz, [Science [**292**]{}, 709 (2001);](http://dx.doi.org/10.1126/science.1059133) V. I. Prokhorenko *et al.*, [*ibid.* [**313**]{}, 1257 (2006);](http://dx.doi.org/10.1126/science.1130747) M. P. A. Branderhorst *et al.*, [*ibid.* [**320**]{}, 638 (2008);](http://dx.doi.org/10.1126/science.1154576) T. Brixner *et al.*, [Phys. Rev. Lett. [**92**]{}, 208301 (2004);](http://dx.doi.org/10.1103/PhysRevLett.92.208301) A. Lindinger *et al.*, [*ibid.* [**93**]{}, 033001 (2004);](http://dx.doi.org/10.1103/PhysRevLett.93.033001) T. Laarmann *et al.*, [*ibid.* [**98**]{}, 058302 (2007);](http://dx.doi.org/10.1103/PhysRevLett.98.058302) M. Roth *et al.*, [*ibid.* [**102**]{}, 253001 (2009);](http://dx.doi.org/10.1103/PhysRevLett.102.253001) P. Nuernberger, G. Vogt, T. Brixner, and G. Gerber, [Phys. Chem. Chem. Phys. [**9**]{}, 2470 (2007).](http://dx.doi.org/10.1039/b618760a) A. Pechen and H. Rabitz, [Phys. Rev. A [**73**]{}, 062102 (2006);](http://dx.doi.org/10.1103/PhysRevA.73.062102) R. Romano and D. D’Alessandro, [*ibid.* [**73**]{}, 022323 (2006).](http://dx.doi.org/10.1103/PhysRevA.73.022323) R. Vilela Mendes and V. I. Man’ko, [Phys. Rev. A [**67**]{}, 053404 (2003);](http://dx.doi.org/10.1103/PhysRevA.67.053404) A. Pechen, N. Il’in, F. Shuang, and H. Rabitz, [*ibid.* [**74**]{}, 052102 (2006);](http://dx.doi.org/10.1103/PhysRevA.74.052102) F. Shuang *et al.*, [*ibid.* [**78**]{}, 063422 (2008).](http://dx.doi.org/10.1103/PhysRevA.78.063422) See supplementary material at \[[http://link.aps.org/supplemen tal/10.1103/PhysRevA.82.030101](http://link.aps.org/supplemental/10.1103/PhysRevA.82.030101)\], for more information on quantum control approaches, a hybrid quantum control scheme, and Kraus maps.
V. P. Belavkin, Autom. Remote Control [**44**]{}, 178 (1983).
H. M. Wiseman, [Phys. Rev. A [**49**]{}, 2133 (1994).](http://dx.doi.org/10.1103/PhysRevA.49.2133)
H. Rabitz, M. Hsieh, and C. Rosenthal, [Science [**303**]{}, 1998 (2004);](http://dx.doi.org/10.1126/science.1093649) H. Rabitz, T.-S. Ho, M. Hsieh, R. Kosut, and M. Demiralp, [Phys. Rev. A [**74**]{}, 012721 (2006);](http://dx.doi.org/10.1103/PhysRevA.74.012721) H. Rabitz, M. Hsieh, and C. Rosenthal, [J. Chem. Phys. [**124**]{}, 204107 (2006);](http://dx.doi.org/10.1063/1.2198837) M. Hsieh, R. B. Wu, and H. Rabitz, [*ibid.* [**130**]{}, 104109 (2009).](http://dx.doi.org/10.1063/1.2981796)
M. Hsieh and H. Rabitz, [Phys. Rev. A [**77**]{}, 042306 (2008);](http://dx.doi.org/10.1103/PhysRevA.77.042306) T.-S. Ho, J. Dominy, and H. Rabitz, [*ibid.* [**79**]{}, 013422 (2009).](http://dx.doi.org/10.1103/PhysRevA.79.013422)
T.-S. Ho and H. Rabitz, [J. Photochem. Photobiol. A [**180**]{}, 226 (2006).](http://dx.doi.org/10.1016/j.jphotochem.2006.03.038)
K. Moore, M. Hsieh, and H. Rabitz, [J. Chem. Phys. [**128**]{}, 154117 (2008);](http://dx.doi.org/10.1063/1.2907740) A. Oza *et al.*, [J. Phys. A [**42**]{}, 205305 (2009).](http://dx.doi.org/10.1088/1751-8113/42/20/205305)
J. G. Digalakis and K. G. Margaritis, [Int. J. Comp. Math. [**77**]{}, 481 (2001).](http://dx.doi.org/10.1080/00207160108805080)
J. Roslund and H. Rabitz, [Phys. Rev. A [**79**]{}, 053417 (2009);](http://dx.doi.org/10.1103/PhysRevA.79.053417) J. Roslund, O. M. Shir, T. Back, and H. Rabitz, [*ibid.* [**80**]{}, 043415 (2009).](http://dx.doi.org/10.1103/PhysRevA.80.043415)
A. Pechen, D. Prokhorenko, R. B. Wu, and H. Rabitz, [J. Phys. A [**41**]{}, 045205 (2008);](http://dx.doi.org/10.1088/1751-8113/41/4/045205) R. B. Wu, A. Pechen, H. Rabitz, M. Hsieh, and B. Tsou, [J. Math. Phys. [**49**]{}, 022108 (2008).](http://dx.doi.org/10.1063/1.2883738)
K. Kraus, *States, Effects and Operations: Fundamental Notions of Quantum Theory* (Springer, Berlin, 1983).
A. C. Doherty, S. Habib, K. Jacobs, H. Mabuchi, and S. M. Tan, [Phys. Rev. A [**62**]{}, 012105 (2000).](http://dx.doi.org/10.1103/PhysRevA.62.012105)
P. Bushev *et al.*, [Phys. Rev. Lett. [**96**]{}, 043003 (2006);](http://dx.doi.org/10.1103/PhysRevLett.96.043003) G. G. Gillett *et al.*, [*ibid.* [**104**]{}, 080503 (2010).](http://dx.doi.org/10.1103/PhysRevLett.104.080503)
S. Lloyd, [Phys. Rev. A [**62**]{}, 022108 (2000).](http://dx.doi.org/10.1103/PhysRevA.62.022108) R. J. Nelson, Y. Weinstein, D. Cory, and S. Lloyd, [Phys. Rev. Lett. [**85**]{}, 3045 (2000);](http://dx.doi.org/10.1103/PhysRevLett.85.3045) H. Mabuchi, [Phys. Rev. A [**78**]{}, 032323 (2008).](http://dx.doi.org/10.1103/PhysRevA.78.032323)
S. Lloyd and L. Viola, [Phys. Rev. A [**65**]{}, 010101 (2001).](http://dx.doi.org/10.1103/PhysRevA.65.010101)
D. A. Lidar, Z. Bihary, and K. B. Whaley, [Chem. Phys. [**268**]{}, 35 (2001).](http://dx.doi.org/10.1016/S0301-0104(01)00330-5)
R. Wu, A. Pechen, C. Brif, and H. Rabitz, [J. Phys. A [**40**]{}, 5681 (2007).](http://dx.doi.org/10.1088/1751-8113/40/21/015)
B. Bonnard and M. Chyba, *Singular Trajectories and Their Role in Control Theory* (Springer, Berlin, 2003).
R. B. Wu, J. Dominy, T.-S. Ho, and H. Rabitz, e-print [arXiv:0907.2354.](http://arxiv.org/abs/arXiv:0907.2354)
[**Supplementary material for the manuscript**]{}\
[Alexander Pechen, Constantin Brif, Rebing Wu, Raj Chakrabarti, and Herschel Rabitz]{}\
[*Department of Chemistry, Princeton University, Princeton, New Jersey 08544, USA*]{}
Quantum control approaches {#quantum-control-approaches .unnumbered}
==========================
\[fig1\]
A prospective hybrid scheme of quantum control {#a-prospective-hybrid-scheme-of-quantum-control .unnumbered}
==============================================
\[fig2\]
Kraus maps {#kraus-maps .unnumbered}
==========
The density matrix $\rho$ representing the state of an $n$-level quantum system is a positive, unit-trace $n \times n$ matrix. We denote by $\mathcal{M}_n =\mathbb{C}^{n\times n}$ the set of all $n
\times n$ complex matrices, and by $\mathcal{D}_n := \{ \rho\in
\mathcal{M}_n \,|\, \rho=\rho^\dagger, \rho \ge 0, \mathrm{Tr}(\rho) =
1\}$ the set of all density matrices. A map $\Phi : \mathcal{M}_n \to
\mathcal{M}_n$ is positive if $\Phi(\rho)\ge 0$ for any $\rho \ge 0$ in $\mathcal{M}_n$. A map $\Phi : \mathcal{M}_n \to \mathcal{M}_n$ is completely positive (CP) if for any $l \in \mathbb{N}$ the map $\Phi
\otimes \mathbb{I}_l : \mathcal{M}_n \otimes \mathcal{M}_l \to
\mathcal{M}_n \otimes \mathcal{M}_l$ is positive ($\mathbb{I}_l$ is the identity map in $\mathcal{M}_l$). A CP map is trace-preserving if $\mathrm{Tr} [\Phi(\rho)] = \mathrm{Tr} (\rho)$ for any $\rho \in
\mathcal{M}_n$. We denote by $\mathcal{K}_n$ the set of all CP, trace-preserving maps acting in $\mathcal{M}_n$, referred to as *Kraus maps* or *quantum operations* [@suppl-Kraus1983; @suppl-Alicki2007; @suppl-Choi1975].
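Complete positivity can be checked constructively via the Choi matrix [@suppl-Choi1975]: $\Phi$ is CP if and only if the block matrix with blocks $\Phi(E_{ij})$, where $E_{ij}=|i\rangle\langle j|$, is positive semidefinite. A minimal sketch (assuming Python with NumPy; the amplitude-damping map is a standard illustration, not taken from the text):

```python
import numpy as np

def choi_matrix(kraus_ops, n):
    """Choi matrix of the map Phi(rho) = sum_nu K_nu rho K_nu^dagger acting on M_n.
    By Choi's theorem, Phi is completely positive iff this matrix is positive
    semidefinite."""
    C = np.zeros((n * n, n * n), dtype=complex)
    for i in range(n):
        for j in range(n):
            E = np.zeros((n, n), dtype=complex)
            E[i, j] = 1.0
            C[i * n:(i + 1) * n, j * n:(j + 1) * n] = sum(K @ E @ K.conj().T for K in kraus_ops)
    return C

# Example: the qubit amplitude-damping map with decay probability p
p = 0.3
kraus = [np.array([[1.0, 0.0], [0.0, np.sqrt(1 - p)]]),
         np.array([[0.0, np.sqrt(p)], [0.0, 0.0]])]
C = choi_matrix(kraus, 2)
assert np.all(np.linalg.eigvalsh(C) > -1e-12)                       # complete positivity
assert np.allclose(sum(K.conj().T @ K for K in kraus), np.eye(2))   # trace preservation
```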
[99]{}
R. Fanciulli, A. M. Weiner, M. M. Dignam, D. Meinhold, and K. Leo, Phys. Rev. B [**71**]{}, 153304 (2005); N. Dudovich, T. Polack, A. Pe’er, and Y. Silberberg, Phys. Rev. Lett. [**94**]{}, 083002 (2005); Z. Amitay, A. Gandman, L. Chuntonov, and L. Rybak, *ibid.* [**100**]{}, 193002 (2008); N. Timoney, V. Elman, S. Glaser, C. Weiss, M. Johanning, W. Neuhauser, and C. Wunderlich, Phys. Rev. A [**77**]{}, 052334 (2008); M. J. Biercuk, H. Uys, A. P. VanDevender, N. Shiga, W. M. Itano, and J. J. Bollinger, *ibid.* [**79**]{}, 062324 (2009).
---
author:
- |
by J. Klusoň\
Department of Theoretical Physics and Astrophysics\
Faculty of Science, Masaryk University\
Kotlářská 2, 611 37, Brno\
Czech Republic\
E-mail:
title: 'Remark about Non-BPS Dp-Brane at the Tachyon Vacuum Moving in Curved Background'
---
Introduction {#first}
============
One of the most interesting problems in string theory is the study of time dependent processes. Even if this problem is far from being solved in full generality, one can find many examples where interesting results can be obtained. The most celebrated problem is the time dependent tachyon condensation in open string theory [^1]. Another example of a time dependent process is the study of the motion of a probe D-brane in a given supergravity background. It turns out that the dynamics of such a probe has a lot in common with the time dependent tachyon condensation [@Kutasov:2004dj] [^2]. In our previous works [@Kluson:2005jr; @Kluson:2005qx; @Kluson:2004yk; @Kluson:2004xc] we have studied the dynamics of a non-BPS Dp-brane in the Dk-brane and NS5-brane backgrounds in the effective field theory description. We have shown that generally, when we take the time dependent tachyon into account, it is very difficult to obtain the exact time dependence of the tachyon and radion modes. On the other hand we argued in [@Kluson:2005jr], where we studied the properties of the worldvolume theory of BPS D-branes and non-BPS Dp-branes in the near horizon limit of $N$ Dk-branes or NS5-branes, that the problem simplifies considerably in the case when the tachyon reaches its homogeneous vacuum value $T_{min}$, defined by $V(T_{min})=0
\ , \partial_iT_{min}=0$, where $V(T)$ is the tachyon potential. Since the analysis in [@Kluson:2005jr] was performed in the near horizon region of a given background configuration of D-branes, one can ask how this description changes when we do not restrict ourselves to this particular situation. This paper is therefore devoted to the study of the situation when a non-BPS Dp-brane at the tachyon vacuum moves in a general spatially dependent background.
An analysis of the properties of the DBI non-BPS tachyon effective action at the tachyon vacuum was previously performed in [@Gibbons:2000hf; @Sen:2000kd; @Yee:2004ec; @Kwon:2003qn; @Gibbons:2002tv; @Sen:2002qa]. However, this analysis was mainly focused on the problem of the space-time filling non-BPS Dp-brane. Our goal, on the other hand, is to study the dynamics of a non-BPS Dp-brane whose worldvolume tachyon has reached its minimum and which is embedded in a general background.
It is believed that the final state of the D-brane decay comprises a dust of massive closed strings known as tachyon matter [@Sen:2002in; @Sen:2002an; @Lambert:2003zr]. Another interesting aspect of the low energy theory is found in the sector with net electric flux that carries fundamental string charges. Generally, when the D-brane decays, the classical solution of the system is characterised as a two component fluid: one component consists of pressureless electric flux lines, known as the string fluid, while the other is the tachyon matter [@Mukhopadhyay:2002en; @Nagami:2003mr; @Sen:2003iv; @Rey:2003zj; @Rey:2003xs]. As we stated above, the string fluid and the tachyon matter should have a natural interpretation in terms of closed string states. In fact, it was shown that the string fluid reproduces the classical behaviour of fundamental strings remarkably well; the dynamics of such a configuration has been shown to be exactly that of the Nambu-Goto string [@Gibbons:2000hf; @Sen:2000kd]. A natural construction along these lines is, however, hampered by the degeneracy of the string fluid.
More recently, a macroscopic interpretation of the combined system of string fluid and tachyon matter was proposed in [@Yee:2004ec; @Sen:2003bc]. The basic idea was to consider a macroscopic number of long fundamental strings lined up along one particular direction and to turn on oscillators along each of these strings. The proposed map identifies the energy of the electric flux lines as coming from the winding mode part of the fundamental strings, while attributing the tachyon matter energy to the oscillator part.
While the analysis performed in [@Yee:2004ec; @Sen:2003bc] is very interesting and certainly deserves a generalisation to a Dp-brane moving in a general background (we hope to return to this problem in the future), the goal of this paper is more modest. As is clear from the analysis given in [@Yee:2004ec; @Sen:2003bc], the crucial point in mapping the string fluid and the tachyon matter to fundamental string degrees of freedom is the existence of a nonzero electric flux. On the other hand, we know that the tachyon condensation also occurs when the electric flux is zero and that the resulting configuration should correspond to a gas of massive closed strings [@Lambert:2003zr]. Due to the remarkable success of the tachyon effective action in the description of open string tachyon condensation, one could hope that a classical effective field theory analysis should be able to capture some aspects of the closed strings a non-BPS Dp-brane decays into. We will see that this is indeed the case. More precisely, in section (\[second\]) we will solve the equations of motion for a non-BPS Dp-brane at the tachyon vacuum moving in the Dk-brane background and we will argue that the solution is the same as the collective motion of a gas of massless particles. Then in section (\[third\]) we will demonstrate the equivalence between the homogeneous tachyon condensation and the gas of massless particles for spacetimes where the metric components are functions of the coordinates transverse to the Dp-brane, following [@Sen:2000kd]. As we will argue in the conclusion, this result is in perfect agreement with the open-closed string conjecture presented in [@Sen:2003iv; @Sen:2003xs]. In order to find the solution corresponding to the macroscopic fundamental string we will consider the solution with nonzero electric flux aligned along one spatial direction on the worldvolume of the Dp-brane. We will show in section (\[fourth\]) that this solution can be interpreted as a gas of macroscopic strings stretched along this direction that move in the given supergravity background. The dynamics of a non-BPS Dp-brane with nonzero electric flux that moves in the Dk-brane background will then be studied in section (\[fifth\]). In the conclusion (\[sixth\]) we summarise our results and suggest possible extensions of this work.
Hamiltonian formulation of the Non-BPS Dp-brane {#second}
===============================================
As we claimed in the introduction, the main goal of this paper is to study the tachyon effective action at the tachyon vacuum. Even though the Lagrangian for a non-BPS Dp-brane in its tachyon vacuum vanishes [@Sen:1999md; @Kluson:2000iy; @Bergshoeff:2000dq; @Garousi:2000tr; @Kutasov:2003er], the dynamics of this configuration is still nontrivial [@Gibbons:2000hf; @Sen:2000kd; @Yee:2004ec; @Kwon:2003qn; @Gibbons:2002tv; @Sen:2002qa], as follows from the fact that the Hamiltonian for a non-BPS Dp-brane at the tachyon vacuum is nonzero.
More precisely, let us introduce the Hamiltonian for a non-BPS Dp-brane that is moving in $9+1$ dimensional background with the metric $$\label{genm}
ds^2=-N^2dt^2+g_{ab}(dx^a+L^adt)
(dx^b+L^bdt) \ , a,b=1,\dots,9$$ and with a spatially dependent dilaton [^3]. Let us now consider the non-BPS action in the form $$\label{aclag}
S=-\int d^{p+1}\xi
e^{-\Phi}V(T)\sqrt{-\det \bA} \ ,$$ where $$\bA_{\mu\nu}=G_{MN}\partial_{\mu} X^M
\partial_{\nu} X^N+F_{\mu\nu}+W(T)
\partial_{\mu}T\partial_{\nu}T \ ,$$ where $M,N=0,1,\dots,9$ and where $V(T),W(T)$ are functions of $T$ that vanish for $T_{\min}= \pm \infty$. Let us fix the gauge by $\xi^\mu=x^{\mu} \ , \mu=0,1,\dots,p$. In what follows we will also use the notation $\bx=(x^1,\dots,x^p)$. With the metric (\[genm\]) the components of the matrix $\bA$ take the form $$\begin{aligned}
\bA_{00}=-N^2+g_{ij}L^iL^j+g_{IJ}
\partial_0X^I\partial_0X^J+W(\partial_0T)^2
\nonumber \\
\bA_{0i}\equiv E^+_i=
g_{ij}L^j+g_{IJ}\partial_0X^I
\partial_iX^J+
F_{0i}+W\partial_0 T\partial_i T
\nonumber \\
\bA_{i0}\equiv -E^-_i=
g_{i0}+g_{ij}L^j+g_{IJ}\partial_iX^I
\partial_0X^J
-F_{0i}+W\partial_i T\partial_0 T \nonumber \\
\bA_{ij}=g_{ij}+g_{IJ}\partial_iX^I
\partial_jX^J+F_{ij}+W\partial_iT\partial_jT \ ,
\nonumber \\\end{aligned}$$ where $i,j=1,\dots,p$ and $I,J=p+1,\dots,9$. Then we can write $$\det \bA=
\bA_{00}\det \bA_{ij}+E^+_iD_{ij}E^-_j
\ ,
D_{ij}=(-1)^{i+j}\triangle_{ji} \ ,$$ where $\triangle_{ji}$ is the determinant of the matrix with j-th row and i-th column omitted. From (\[aclag\]) we obtain the canonical momenta as $$\begin{aligned}
\pi^i=\frac{\delta \mathcal{L}}
{\delta \partial_0 A_i}=
\frac{Ve^{-\Phi}}{\sqrt{-\det\bA}}
\frac{E^+_jD_{ji}+D_{ij}E^-_j}{2} \ ,
\nonumber \\
\pi_T=
\frac{\delta\mathcal{L}}
{\delta \partial_0T}=\frac{e^{-\Phi}
VW}{\sqrt{-\det\bA}}
\left(\dot{T}\det \bA_{ij}
-\frac{E^+_jD_{ji}\partial_iT-
\partial_iTD_{ij}E_j^-}{2}\right) \ ,
\nonumber \\
p_I=\frac{\delta\mathcal{L}}{\delta
\partial_0X^I}=\frac{e^{-\Phi}V}
{\sqrt{-\det\bA}}
\left(g_{IJ}\partial_0X^J\det\bA_{ij}
-\frac{E^+_jD_{ji}g_{IJ}\partial_iX^J
+g_{JI}\partial_iX^JD_{ij}E^-_j}{2}
\right)
\nonumber \\\end{aligned}$$ Note also that $\pi^i$ satisfies the Gauss law constraint $\partial_i\pi^i=0$. The Hamiltonian density is then obtained by the Legendre transformation $$\mathcal{H}(\bx)=
\pi^iE_i+\pi_T\dot{T}+p_I\dot{X}^I
-\mathcal{L}
\ .$$ After some lengthy and tedious algebra we obtain the Hamiltonian density as a function of the canonical variables in the form $$\begin{aligned}
\label{hdenge}
\mathcal{H}=N
\sqrt{\mK}
-\pi^i F_{ij}L^j
-p_KL^K
+(\pi_T\partial_i T+p_K\partial_i X^K)L^i \ ,
\nonumber \\
\mK=\pi^i g_{ij}
\pi^j+W^{-1}
\pi_T^2+p_Ig^{IJ}p_J+
b_ig^{ij}b_j+
\nonumber \\
+(\pi^i \partial_i T)^2
+(\pi^i \partial_i X^K)g_{KL}
(\pi^j \partial_j X^L)+
e^{-2\Phi}V^2\det \bA_{ij} \ ,
\nonumber \\
b_i=F_{ik}\pi^k+\pi_T\partial_i T+
\partial_i X^Kp_K \ .
\nonumber \\\end{aligned}$$ The form of the Hamiltonian density (\[hdenge\]) simplifies considerably in the situation when the tachyon reaches its global minimum ($V(T_{min})=W(T_{min})=0$) and when its spatial derivatives vanish: $\partial_iT=0$. This state is interpreted as the final state of the unstable Dp-brane decay, which does not contain any propagating open string degrees of freedom. On the other hand, we see that even in this case there is still nontrivial dynamics, as follows from the form of the Hamiltonian density (\[hdenge\]).
To see this more clearly we begin with an explicit example of an unstable Dp-brane in its tachyon vacuum that moves in the background of $N$ coincident Dk-branes. The metric, the dilaton $(\Phi)$, and the R-R field (C) for a system of $N$ coincident Dk-branes are given by $$\begin{aligned}
\label{Dkbac}
g_{\alpha \beta}=H_k^{-\frac{1}{2}}\eta_{\alpha\beta}
\ , g_{mn}=H_k^{\frac{1}{2}}\delta_{mn} \ ,
(\alpha, \beta=0,1,\dots,k \ ,
m, n=k+1,\dots,9) \ , \nonumber \\
e^{2\Phi}=H_k^{\frac{3-k}{2}} \ ,
C_{0\dots k}=H_k^{-1} \ , H_k=1+\frac{\lambda}
{r^{7-k}}
\ ,
\lambda=Ng_sl_s^{7-k} \ ,
\nonumber \\\end{aligned}$$ where $H_k$ is the harmonic function of $N$ Dk-branes satisfying the Green function equation in the transverse space. We will consider a non-BPS Dp-brane with $p<k$ that is inserted in the background (\[Dkbac\]) with its spatial section stretched along the directions $(x^1,\dots,x^p)$. For zero electric flux and for the tachyon equal to its vacuum value $T_{min}$, the Hamiltonian density (\[hdenge\]) takes the form $$\begin{aligned}
\label{hamdeni}
\mathcal{H}=N\sqrt{p_Ig^{IJ}p_J+
\partial_i X^Kp_K
g^{ij}\partial_j X^Lp_L
}
=N\sqrt{\mathcal{K}(\bx)} \ .
\nonumber \\\end{aligned}$$ Using (\[hamdeni\]) the canonical equations of motion take the form $$\begin{aligned}
\label{eqx}
\partial_0 X^K(\bx)=
\frac{\delta H}{\delta p_K(\bx)}=
N\frac{g^{KL}p_L+\partial_iX^K
g^{ij}\partial_jX^Lp_L}{
\sqrt{\mathcal{K}(\bx)}}
\nonumber \\\end{aligned}$$ and $$\begin{aligned}
\label{eqp}
\partial_0p_K(\bx)=-\frac{\delta H}
{\delta X^K(\bx)}=
-\frac{\delta N}{\delta X^K(\bx)}\sqrt{\mathcal{K}}
-\nonumber \\
-\frac{1}{2\sqrt{\mathcal{K}}}
\left(\frac{\delta g^{IJ}}{\delta X^K}
p_Ip_J+\partial_iX^IP_I\frac{\delta
g^{ij}}{\delta X^K}\partial_j X^JP_J
\right)
+\partial_i\left[\frac{NP_Kg^{ij}\partial_j
X^Lp_L}{\sqrt{\mathcal{K}}}\right] \ ,
\nonumber \\\end{aligned}$$ where $N=\sqrt{-g_{00}}, \ g_{ij} \ ,
g_{IJ}$ and $\Phi$ are given in (\[Dkbac\]).
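For the reader's convenience, the factors that enter these equations follow directly from (\[Dkbac\]): since $g_{00}=-H_k^{-1/2}$, $g_{uv}=H_k^{-1/2}\delta_{uv}$ and $g_{mn}=H_k^{1/2}\delta_{mn}$, we have $$N=\sqrt{-g_{00}}=H_k^{-1/4} \ , \quad
g^{uv}=H_k^{1/2}\delta^{uv} \ , \quad
g^{mn}=H_k^{-1/2}\delta^{mn} \ ,$$ which is the origin of the powers $H_k^{-3/4}$ and $H_k^{1/4}$ that appear in the homogeneous equations of motion (\[eqxh\]) below.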
To simplify the problem further we restrict ourselves to the case of homogeneous modes on the worldvolume of the non-BPS Dp-brane. Then the equations of motion (\[eqx\]) take the form $$\begin{aligned}
\label{eqxh}
\partial_0 X^m=
\frac{p_m}{H_k^{3/4}
\sqrt{\mathcal{K}}}
\ , \nonumber \\
\partial_0Y^u=
\frac{H_k^{1/4}p_u}{\sqrt{\mK}} \ ,
\nonumber \\\end{aligned}$$ where $Y^u \ ,u,v=p+1,\dots,k$ are worldvolume modes that characterise the transverse position of Dp-brane that is parallel with the worldvolume of Dk-branes and $X^m \ , m=k+1,\dots,9$ are worldvolume modes that parametrise transverse positions both to the Dk-branes and to Dp-brane. Thanks to the manifest rotation invariance in transverse $R^{9-k}$ space we will restrict ourselves to the motion in the $(x^8,x^9)$ plane where we introduce the cylindrical coordinates $$X^8=R\cos\theta \ ,
X^9=R\sin\theta \ .$$ Note also since the Hamiltonian does not explicitly depend on $Y^u$ and $\theta$ the corresponding conjugate momenta $p_u \ , p_\theta$ are conserved. As a next step we use the fact that the energy density $$\mE=\sqrt{-g_{00}}\sqrt{\mK}$$ is conserved and replace $\mK$ with $\mE$ and also express $p_R$ as a function of $R$ and conserved quantities $\mE,p_u,p_\theta$ $$p_R=\pm\sqrt{H_k}
\sqrt{\mE^2-p_u^2-\frac{p^2_\theta}{R^2H_k}
} \ .$$ Then the equation of motion (\[eqxh\]) can be written as $$\begin{aligned}
\label{eqt}
\partial_0Y^u=\frac{p_u}{\mE} \ ,
\nonumber \\
\partial_0 \theta=\frac{
p_\theta}{R^2\sqrt{H_k}\mE} \ , \nonumber \\
\partial_0R=
\pm\frac{\sqrt{\mE^2-p_u^2-\frac{p^2_\theta}
{R^2H_k}}}{\sqrt{H_k}\mE} \ . \nonumber \\\end{aligned}$$ In order to study the general properties of the radial motion of the probe non-BPS Dp-brane we will present the similar analysis as was performed in [@Burgess:2003mm]. First of all, note that the Hamiltonian density for the background (\[Dkbac\]) takes the form $$\mH=\sqrt{-g_{00}}
\sqrt{p_ug^{uv}p_v+p_rg^{rr}p_r
+p_\theta g^{\theta\theta}p_\theta}=
\sqrt{p_u^2+\frac{p_R^2}{H_k}+
\frac{p_\theta^2}{R^2H_k}} $$ that implies that $\mH$ is an increasing function of $p_R$ so that the allowed range of $R$ for the classical motion can be found by plotting the effective potential $V_{eff}(R)$ that is defined as $$\label{ved}
V_{eff}(R)=\mH(p_R=0)=
\sqrt{p_u^2+\frac{p_\theta^2}{R^2H_k}}$$ against $R$ and finding those $R$ for which $\mE\geq V_{eff}(R)$. The properties of $V_{eff}$ depend on $H_k$, which is a monotonically decreasing function of $R$ with the limits $H_k\rightarrow
\frac{\lambda}{R^{7-k}}$ for $R\rightarrow 0$ and $H_k\rightarrow 1$ for $R\rightarrow
\infty$. For $p_\theta\neq 0$ we obtain the following asymptotic behaviour of the potential (\[ved\]) for $R\rightarrow 0$:
- [**k=6**]{}
In this case we obtain $$V_{eff}\rightarrow
\frac{|p_\theta|}{\sqrt{\lambda}
\sqrt{R}}$$ and hence for nonzero $p_\theta$ the potential diverges at the origin.
- [**k=5**]{}
Now in the limit $R\rightarrow 0$ the potential approaches $$\label{Vef}
V_{eff}=\sqrt{p_u^2+
\frac{p^2_\theta}{\lambda}} \ .$$
- $\mathbf{k<5}$
In this case the effective potential takes the form $$V_{eff}\approx
\sqrt{p_u^2+\frac{p^2_\theta
R^{5-k}}{\lambda}}$$ which again implies that the potential approaches the constant $\sqrt{p_u^2}$ in the limit $R\rightarrow 0$ (a unified derivation of these three limits is sketched just below).
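All three small-$R$ limits listed above follow from a single substitution: using $H_k\approx \frac{\lambda}{R^{7-k}}$ in (\[ved\]) gives $$V_{eff}\approx
\sqrt{p_u^2+\frac{p^2_\theta R^{5-k}}{\lambda}} \ ,$$ which diverges as $R^{-1/2}$ for $k=6$, tends to the constant value (\[Vef\]) for $k=5$, and tends to $\sqrt{p_u^2}$ for $k<5$.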
On the other hand for $R\rightarrow
\infty$ we get $$V_{eff}\rightarrow \sqrt{p_u^2} \ .$$ More precisely, looking at the form of the potential for $k=6,5$ it is easy to see that these potentials are decreasing functions of $R$. On the other hand, for $k<5$ it can be shown that $V_{eff}$ has an extremum at $$\label{locmax}
R_{max}=\left(\frac{\lambda(5-k)}
{2}\right)^{\frac{1}{7-k}} \ .$$ Collecting these results we obtain the following picture for the dynamics of the non-BPS Dp-brane in its tachyon vacuum moving in the Dk-brane background. In the first case we consider a non-BPS Dp-brane that moves towards the stack of $N$ Dk-branes from asymptotic infinity $R=\infty$ at $t=-\infty$. It reaches its turning point at $$1-\frac{p_u^2}{\mE^2}-\frac{p_\theta^2}
{\mE^2 R^2_TH_k}=0
\Rightarrow R_T^2+\frac{\lambda}{R_T^{5-k}}
=\frac{p^2_\theta}{\mE^2
\left(1-\frac{p^2_u}{\mE^2}\right)} \ ,$$ and then it moves outwards. On the other hand, from the existence of the local maximum (\[locmax\]) for $k<5$ it is clear that the Dp-brane can remain in a bounded region near the stack of $N$ Dk-branes. To see this more precisely, let us solve the third equation in (\[eqt\]) in the limit $\frac{\lambda}{R^{7-k}}\gg 1$. In this case we obtain the following equation $$\label{dRs}
\frac{dR}{
\sqrt{\left(1-\frac{p^2_u}{\mE^2}\right)
R^{7-k}-\frac{p^2_\theta}{\mE^2\lambda}
R^{2(6-k)}}}=\pm \frac{dt}{\sqrt{\lambda}}$$ that has the solution $$R^{5-k}=\frac{\lambda(\mE^2-p_u^2)}
{p^2_\theta}\frac{1}{
1+\left(\mp\frac{\mE^2-p_u^2}{2\mE p_\theta}t+
\sqrt{R^{k-5}_0-1}\right)^2} \ .$$ We see that now the Dp-brane leaves the worldvolume of the Dk-branes at $t=-\infty$ and moves outwards until it reaches its turning point where $\dot{R}=0$, and then it moves back towards the stack of Dk-branes, which it reaches again at $t=\infty$. The precise analysis of the dynamics of the Dp-brane in the region $\frac{\lambda}
{R^{7-k}}\gg 1$ was performed in [@Kluson:2005jr] where more details can be found.
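For completeness, let us note how the location of the local maximum (\[locmax\]) follows from (\[ved\]). Since $V_{eff}$ depends on $R$ only through the combination $R^2H_k=R^2+\lambda R^{k-5}$, the extremum condition is $$\frac{d}{dR}\left(R^2H_k\right)=2R+(k-5)\lambda R^{k-6}=0
\quad \Rightarrow \quad R^{7-k}=\frac{\lambda(5-k)}{2} \ ,$$ which reproduces (\[locmax\]); for $k<5$ this point is a minimum of $R^2H_k$ and hence a maximum of $V_{eff}$.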
As is clear from (\[Vef\]), the effective potential takes a very simple form when $p_\theta=0$. In this case the differential equation for $R$ reads $$\label{dfpu}
\dot{R}=\pm \frac{\sqrt{\mE^2-p_u^2}}
{\sqrt{H_k}\mE}$$ which can be solved explicitly in terms of hypergeometric functions. However, in order to gain a better physical understanding of this situation it is useful to consider the case $p_u=0$. Then the equation (\[dfpu\]) can be rewritten in the more suggestive form $$-H_k^{-1/2}dt^2+H_k^{1/2}dR^2=0$$ which is the equation for radial geodesics in the Dk-brane background.
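It may be useful to spell this out. For $p_u=p_\theta=0$ the conserved energy density gives $\dot{R}=\pm H_k^{-1/2}$, so that along the trajectory $$-H_k^{-1/2}dt^2+H_k^{1/2}dR^2=
H_k^{-1/2}\left(-1+H_k\dot{R}^2\right)dt^2=0 \ ,$$ i.e. the motion proceeds along radial null geodesics of the background (\[Dkbac\]), in accord with the interpretation in terms of massless particles.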
In summary, we have found that a non-BPS Dp-brane whose tachyon has reached its vacuum value moves in the background of $N$ Dk-branes as a gas of massless particles confined to the worldvolume of the original Dp-brane. In the next section we will present more detailed arguments that support the validity of this correspondence.
Non-BPS Dp-brane at the Tachyon Vacuum as a Gas of Massless Particles {#third}
=====================================================================
Let us consider the curved background with the metric $$\label{met}
ds^2=-N^2dt^2+g_{ab}(dx^a+L^adt)
(dx^b+L^bdt) \ , a,b=1,\dots,9 \ ,$$ where we presume that $N,L^a,g_{ab}$ and the dilaton $\Phi$ are functions of the coordinates transverse to Dp-brane. As we know from the previous section the dynamics of the non-BPS Dp-brane at the tachyon vacuum is governed by the Hamiltonian $$\begin{aligned}
\label{hamdeng}
H=\int d\bx \mathcal{H} \ ,
\mathcal{H}=
N\sqrt{\mathcal{K}(\bx)}+p_K
\partial_i X^KL^i-p_KL^K \ , \nonumber \\
\nonumber \\
\mK=p_Ig^{IJ}p_J+
\partial_i X^Kp_K
g^{ij}\partial_j X^Lp_L \ . \nonumber \\ \end{aligned}$$ It is now straightforward to determine the canonical equations of motions $$\begin{aligned}
\label{eqpfo}
\partial_0 X^K(\bx)=
N\frac{g^{KL}p_L+\partial_iX^K
g^{ij}\partial_jX^Lp_L}{
\sqrt{\mathcal{K}(\bx)}}
+\partial_iX^KL^i -L^K
\nonumber \\\end{aligned}$$ and $$\begin{aligned}
\label{eqxfo}
\partial_0p_K(\bx)=
-\frac{\delta N}{\delta X^K(\bx)}
\sqrt{\mathcal{K}}
-\frac{1}{2\sqrt{\mathcal{K}}}
\left(\frac{\delta g^{IJ}}{\delta X^K}
p_Ip_J+\partial_iX^Kp_K
\frac{\delta g^{ij}}{\delta X^K}
\partial_j X^Lp_L\right)+
\nonumber \\
+\partial_i\left[\frac{Np_K
g^{ij}\partial_j
X^Lp_L}{\sqrt{\mathcal{K}}}\right]+
\partial_i[p_KL^i]-p_L\partial_iX^L
\frac{\delta
L^i}{\delta X^K}+p_L\frac{\delta L^L}
{\delta X^K} \ .
\nonumber \\\end{aligned}$$ As we argued in the previous section, the Dp-brane at the tachyon vacuum with zero electric flux has properties similar to those of a homogeneous gas of massless particles embedded in the background of $N$ Dk-branes. Now we would like to show that this correspondence holds in more general situations. To see this we will closely follow the very nice analysis performed in [@Sen:2000kd].
We begin with the action for a massive particle in a general spacetime $$\label{actge}
S=-m\int d\tau \sqrt{-g_{MN}
\dot{Z^M}\dot{Z^N}}=-
m\int d\tau
\sqrt{\bA} \ ,$$ where $\dot{Z}\equiv \frac{dZ}{d\tau}$ and where $Z^M$ are embedding coordinates for massive particle. As a next step we fix the gauge in the form $\tau=Z^0$ so that the action (\[actge\]) takes the form $$\begin{aligned}
\label{actgef}
S=-m\int d\tau
\sqrt{N^2-g_{st}L^sL^t-2g_{st}L^t\dot{Z}^s
-g_{st}\dot{Z}^s\dot{Z}^t}=
\nonumber \\
=-m\int d\tau
\sqrt{\bA} \ , s,t=1,\dots, 9 \ .
\nonumber \\ \end{aligned}$$ Then the conjugate momenta are $$P_s=\frac{\delta S}{\delta
\dot{Z}^s}=
\frac{m\left(
g_{st}\dot{Z}^t+g_{st}L^t\right)}
{\sqrt{
\bA}}$$ and consequently the Hamiltonian takes the form $$\begin{aligned}
H=P_s\dot{Z}^s-L
=N\sqrt{P_sg^{st}P_t+m^2}-P_sL^s \ .
\nonumber \\\end{aligned}$$ Using the Hamiltonian formalism we can take the limit $m\rightarrow 0$, and we obtain the Hamiltonian for a massless particle moving in a general background $$H=N
\sqrt{P_rg^{rs}P_s}-
P_sL^s \ .$$ Then the canonical equations of motion for the massless particle take the form $$\begin{aligned}
\label{mpeqm}
\dot{Z}^s=\frac{\delta H}{\delta P_s}
=N\frac{g^{st}P_t}
{\sqrt{P_rg^{rs}P_s}}-L^s \ , \nonumber \\
\dot{P}_s=-\frac{\delta H}{\delta
Z^s}=
-\frac{\delta N}{\delta Z^s}
\sqrt{P_rg^{rt}P_t}-\frac{N}{2
\sqrt{P_rg^{rt}P_t}}
\frac{\delta g^{rt}}
{\delta Z^s}P_rP_t
+P_r\frac{\delta L^r}{\delta Z^s} \ .
\nonumber \\ \end{aligned}$$ Following [@Sen:2000kd] we will now presume that there exists a solution of the equations of motion (\[mpeqm\]), given as $Z^s(\tau),P_s(\tau)$. Consider then the following field configuration on the Dp-brane: $$\label{ansmp}
p_I(x^0,\dots,x^p)=
\left.P_I(\tau)
f(x^0,\dots,x^p)\right|_{\tau=x^0} \ ,$$ where $f$ is an arbitrary function of the variables $(x^i-Z^i(\tau))$ for $i=1,\dots,p$. Then it is clear that $$\left.
\left(\partial_0 f+\partial_i f \partial_\tau
Z^i\right)\right|_{\tau=x^0}=0 \ .$$ We also demand that $X^I$ obey $$\label{xder}
\left.\left(\partial_iX^IP_I+P_i\right)
\right|_{\tau=x^0}=0 $$ but are otherwise unspecified. Inserting the ansatz (\[ansmp\]) into (\[hamdeng\]) we obtain that the Hamiltonian density takes the form $$\begin{aligned}
\label{hhmp}
\mathcal{H}(x^0,\dots,x^p)
=\left(N(X)\sqrt{P_sg^{st}(X)P_t}-
P_sL^s\right)
f(x^0,\dots,x^p) \ .
\nonumber \\\end{aligned}$$ We see that the expression in the bracket has the form of the Hamiltonian for a massless particle, where, however, the metric components still depend on $X^I$, which are arbitrary functions of $t,\bx$. It turns out, however, that in order to satisfy the equations of motion in a general spacetime we should perform the identification $$\label{idxz}
X^K(x^0,\dots,x^p)=Z^K(\tau) \ .$$ Then the equation of motion (\[eqxfo\]) can be written as $$\begin{aligned}
\left(\partial_\tau P_K+
\frac{\delta N}
{\delta Z^K}\sqrt{
P_rg^{rt}P_t}
+\frac{N}{2\sqrt{P_rg^{rt}P_t}}
\left(\frac{\delta g^{IJ}}
{\delta Z^K}
P_IP_J+p_i
\frac{\delta g^{ij}}
{\delta Z^K}
p_j\right)
-P_L\frac{\delta L^L}{\delta Z^K}
\right)f
-\nonumber \\
-P_K\partial_if
\left(\partial_\tau Z^i
-\frac{Ng^{ij}P_j}
{\sqrt{P_rg^{rt}P_t}}
+L^i\right)\partial_i f=0 \ .
\nonumber \\\end{aligned}$$ We see that this equation is obeyed, since the expressions in the brackets are equal to zero thanks to the fact that $Z^s,P_s$ obey the equations of motion (\[mpeqm\]). On the other hand, from (\[xder\]) and (\[idxz\]) we get that $P_i=0$, and hence the configuration on a non-BPS Dp-brane in the tachyon vacuum corresponds to the motion of massless particles that have nonzero transverse momenta only. Then the equation (\[eqpfo\]) takes the form $$\begin{aligned}
\partial_0 X^K(\bx)=
\partial_\tau Z^K(\tau)=
N\frac{g^{KL}P_K}
{\sqrt{P_sg^{st}P_t}}-L^K
\nonumber \\\end{aligned}$$ that is clearly obeyed since $Z^K$ obeys (\[mpeqm\]).
The final question, and the most difficult one, concerns the form of the function $f(x^0,\dots,x^p)$. We have seen that its form is not determined by the Dp-brane equations of motion. The most natural choice is $$f(x^0,\dots,x^p)=
\prod_{i=1}^p
\delta(x^i-Z^i(x^0)) \ .$$ As follows from (\[hhmp\]), the energy is localised along the line $x^i=Z^i(x^0)$ for $i=1,\dots,p$. Using also the identification (\[idxz\]) we see that in the full $9+1$ dimensional spacetime this solution describes the worldline $x^s=Z^s(\tau)$ for $s=1,\dots,9$. In other words, the Dp-brane worldvolume theory contains a solution whose dynamics is exactly that of a massless particle in $(9+1)$ dimensions.
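A quick check of this statement: integrating the density (\[hhmp\]) over the worldvolume with this choice of $f$ gives $$H=\int d\bx \ \mathcal{H}=
\left.\left(N\sqrt{P_sg^{st}P_t}-P_sL^s\right)\right|_{x^i=Z^i(x^0)} \ ,$$ which is precisely the Hamiltonian of a single massless particle located at the point $x^i=Z^i(x^0)$.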
As in the case of the NG string solution given in [@Sen:2000kd], the freedom of replacing the $\delta$ function by an arbitrary function of $x^i-Z^i(\tau)$ is slightly unusual. A very nice and detailed discussion of this issue was given in [@Sen:2003bc]. According to this paper, the solution with an arbitrary function $f$ should be regarded as a system with a high density of massless particles, or more precisely as a system with a high density of point-like solutions of the closed string equations of motion.
Motion of Non-BPS Dp-brane with nonzero electric flux {#fourth}
=====================================================
As we have seen in the previous section, the case when the non-BPS Dp-brane at the tachyon vacuum moves in a general background with zero electric flux can be interpreted as the motion of a gas of massless particles. In order to find a solution of the D-brane equations of motion that can be interpreted as a macroscopic fundamental string we should instead consider the case when we switch on the electric flux as well. In fact, let us again consider the Hamiltonian for a non-BPS Dp-brane at the tachyon vacuum that moves in a curved background $$\begin{aligned}
\label{hdenf}
\mathcal{H}=
N\sqrt{\pi_i g^{ij}
\pi_j+p_Ig^{IJ}p_J+
b_ig^{ij}b_j
+(\pi^i \partial_i X^K)g_{KL}
(\pi^j \partial_j X^L)}+
\nonumber \\
+p_K\partial_iX^KL^i-
p_KL^K \ , \end{aligned}$$ where $$b_i=F_{ij}\pi^j+
\partial_iX^Kp_K \ .$$ Note that $\pi^i$ also obey the Gauss law constraint $$\label{Glcg}
\partial_i\pi^i=0 \ .$$ Now the canonical equations of motion take the form $$\label{aeq}
\partial_0A_i(\bx)=E_i(\bx)=
\frac{\delta H}
{\delta
\pi^i(\bx)}
=\frac{N}{\sqrt{\mK}}
(g_{ij}\pi^j-F_{ik}g^{kj}b_j
+\partial_iX^Kg_{KL}(\pi^j
\partial_jX^L)) \ ,$$ $$\label{pieqg}
\partial_0\pi^i(\bx)=
-\frac{\delta H}{\delta A_i(\bx)}=
-\partial_j\left[\frac{N}
{\sqrt{\mK}}\left(\pi^jg^{ik}b_k
-\pi^ig^{jk}b_k\right)\right] \ ,$$ $$\label{xeq}
\partial_0X^I(\bx)=
\frac{\delta H}{\delta p_I(\bx)}=
\frac{N}
{\sqrt{\mK}}
\left(g^{IK}p_K+\partial_iXg^{ij}b_j\right)
+\partial_iX^KL^i
-L^K
\ ,$$ $$\begin{aligned}
\label{Pieq}
\partial_0p_I(\bx)=
-\frac{\delta H}{\delta X^I(\bx)}=
\partial_i\left[\frac{N}
{\sqrt{\mK}}
\left(\pi^ig_{IK}\partial_jX^K\pi^j+
p_Ig^{ij}b_j\right)\right]+
\nonumber
\\
+\frac{\delta N}{\delta X^I}
\sqrt{\mK}
-\frac{\sqrt{N}}{2\sqrt{\mK}}
\left(\pi^i\frac{\delta g_{ij}}{\delta
X^I}\pi^j-p_K\frac{\delta g^{KL}}
{\delta X^I}p_L-b_i\frac{\delta g^{ij}}
{\delta X^I}b_j-(\pi^i\partial_iX^K)
\frac{\delta g_{KL}}{\delta X^I}
(\pi^j\partial_jX^L)\right)+ \nonumber \\
+\partial_i[p_KL^i]
-p_L\partial_iX^L\frac{\delta L^i}
{\delta X^K}
+p_L\frac{\delta L^L}{\delta X^K} \ .
\nonumber \\ \end{aligned}$$ Following [@Sen:2000kd] we will now try to find a solution of the equations of motion given above that can be interpreted as a fundamental string solution. To begin with, let us consider the Nambu-Goto action for the fundamental string $$S=-\int d\tau d\sigma
\sqrt{-\det G_{\alpha\beta}} \ ,
G_{\alpha\beta}=G_{MN}
\partial_\alpha Z^M\partial_\beta Z^N \ ,$$ where $\alpha,\beta=\sigma,\tau$. We fix the gauge so that $Z^0=\tau,Z^1=\sigma$ so that $$G_{\alpha\beta}=
g_{\alpha\beta}+
g_{st}\partial_\alpha Z^s
\partial_\beta Z^t \ ,$$ where $s,t=2,\dots,9$. Then the Hamiltonian takes the form $$H_{NG}=\int d\sigma \mathcal{H}_{NG}(\sigma)\ ,$$ where the Hamiltonian density $\mathcal{H}_{NG}$ is equal to $$\begin{aligned}
\mathcal{H}_{NG}=N
\sqrt{\mK_{NG}}
-P_sL^s+P_s\partial_\alpha X^sL^\alpha \ ,
\nonumber \\
\mK_{NG}=
g_{\sigma\sigma}+
P_s g^{st}P_t+\partial_{\sigma}Z^s
P_sg^{\sigma\sigma}
\partial_\sigma Z^tP_t+
\partial_\sigma Z^s
\partial_\sigma Z^t g_{st}
\ . \nonumber \\\end{aligned}$$ Now the equations of motion of the fundamental string take the form $$\begin{aligned}
\label{eqzs}
\partial_\tau Z^s=\frac{\delta H_{NG}}{
\delta P_s}=
\frac{N\left(
g^{st}P_t+\partial_\sigma Z^s
g^{\sigma\sigma}\partial_\sigma Z^tP_t\right)}
{\sqrt{\mathcal{K}_{NG}}}-L_s
+\partial_\sigma X^sL^\sigma \ ,
\nonumber \\
\partial_\tau P_s=
-\frac{\delta H}{\delta Z^s}=
-\frac{\delta N}{\delta Z^s}
\sqrt{\mathcal{K}_{NG}}-\nonumber \\
-\frac{N}{2\sqrt{\mathcal{K}_{NG}}}
\left(\frac{\delta g_{\sigma\sigma}}
{\delta Z^s}+
P_r\frac{g^{rt}}{\delta Z^s}
P_t+\partial_\sigma Z^rP_r\frac{g^{\sigma\sigma}}
{\delta Z^s}\partial_\sigma Z^tP_t
+\frac{\delta g^{rt}}{\delta Z^s}
\partial_\sigma Z^r\partial_\sigma Z^t
\right)
+\nonumber \\
+\partial_\sigma\left[
\frac{N\left(
P_s g^{\sigma\sigma}
\partial_\sigma Z^r P_r
+g_{st}\partial_\sigma Z^t\right)}
{\sqrt{\mathcal{K}_{NG}}}\right]
+P_t\frac{\delta L^t}{\delta X^s}
-P_t\partial_\sigma
X^t\frac{\delta L^\sigma}
{\delta X^s}+
\partial_\sigma\left[
P_sL^{\sigma}\right] \ .
\nonumber \\\end{aligned}$$ For future use we also define $$P=-\sum_{s=2}^9P_s
\partial_\sigma Z^s \ ,
Z^1(\tau,\sigma)=\sigma \ .$$ Let $Z^s(\tau,\sigma),
P_s(\tau,\sigma) \ , s=2,\dots,9 $ be solutions of the equations of motion (\[eqzs\]). As was shown in [@Sen:2000kd], it is natural to consider the following field configuration on the Dp-brane $$\begin{aligned}
\label{ansg}
\pi_i(x^0,\dots,x^p)=
\partial_\sigma
Z^i(\tau,\sigma)
f(x^0,\dots,x^p)|_{
(\tau,\sigma)=(x^0,x^1)} \ , \nonumber
\\
p_I(x^0,\dots,x^p)=P_I(\tau,\sigma)
f(x^0,\dots,x^p)|_{(\tau,\sigma)
=(x^0,x^1)} \ , \nonumber \\\end{aligned}$$ where $i=1,\dots,p$. Following [@Sen:2000kd] we presume that $f(x^0,\dots,x^p)$ is an arbitrary function of variables $(x^m-Z^m(x^0,x^1))$ for $m=2,\dots,p$ and hence satisfies: $$\begin{aligned}
\left.
\partial_\sigma Z^i
\partial_if\right|_{(\tau,\sigma)=
(x^0,x^1)}=0 \ ,
\left.\left(
\partial_0f+
\partial_i f \partial_\tau Z^i
\right)\right|
_{(\tau,\sigma)=
(x^0,x^1)}=0 \ .
\nonumber \\\end{aligned}$$ We also presume that the fields $X^I(x^0,\dots,x^p)$ and $F_{ij}(x^0,\dots,x^p)$ are subject to the following set of conditions: $$\begin{aligned}
(\partial_\sigma Z^j\partial_jX^I
-\partial_\sigma Z^I)|_{(\tau,\sigma)=
(x^0,x^1)}=0 \ ,
\nonumber \\
(F_{ij}\partial_\sigma Z^j
+\partial_iX^IP_I+P_i)|_{(\tau,\sigma)=
(x^0,x^1)}=0 \ .
\nonumber \\\end{aligned}$$ With this notation we can easily find that $$\begin{aligned}
\pi^i\partial_iX^I(x^0,\dots,x^p)=
\partial_\sigma Z^I(\tau=x^0,
\sigma=x^1)f(x^0,\dots,x^p)
\nonumber \\
b_i(x^0,\dots,x^p)=
-P_i(\tau=x^0,\sigma=x^1)
f(x^0,\dots,x^p) \ , \nonumber \\
\sqrt{\mK}(x^0,\dots,x^p)=
\sqrt{\mK_{NG}}(\tau=x^0,\sigma=x^1,X^I)
f(x^0,\dots,x^p) \ .
\nonumber \\\end{aligned}$$ We see that, due to the nontrivial dependence of the metric on the transverse coordinates $X^I$, the expression $\sqrt{\mK_{NG}}$ still depends on $X^I$. As in the case of the particle-like solution studied in the previous section, it is clear that in a curved spacetime we should demand that the coordinates $X^I$ are related to $Z^I$ as: $$\label{ecs}
X^I(x^0,x^1,x^m=Z^m(x^0,x^1))=
Z^I(x^0,x^1) \ .$$ This condition implies that $$\mathcal{H}(x^0,\dots,x^p)
=\mH_{NG}(\tau=x^0,\sigma=x^1)
f(x^0,\dots,x^p) \ .$$ Then we can show, exactly as in [@Sen:2000kd], that the ansatz (\[ansg\]) together with (\[ecs\]) obeys the equations of motion (\[aeq\]), (\[pieqg\]), (\[xeq\]), (\[Pieq\]) as well as the Gauss law constraint (\[Glcg\]). The interpretation of this solution is the same as in flat space [@Sen:2003bc]. Firstly, the special choice of the function $f$ $$f(x^0,\dots,x^p)=
\prod_{m=2}^p
\delta(x^m-Z^m(x^0,x^1))$$ gives a solution that corresponds to a string stretched in the $x^1$ direction that moves in the background (\[met\]). On the other hand, solutions with a general form of the function $f$ should be interpreted as configurations with a high density of fundamental strings moving in (\[met\]) that are confined to the worldvolume of the original Dp-brane.
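As a simple consistency check, with this choice of $f$ the integral of the worldvolume Hamiltonian density collapses onto the string worldsheet, $$H=\int d\bx \ \mathcal{H}=
\int dx^1 \ \mathcal{H}_{NG}(\tau=x^0,\sigma=x^1)=H_{NG} \ ,$$ so the total energy of the configuration coincides with that of a single fundamental string stretched along $x^1$ and moving in the background (\[met\]).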
Non-BPS Dp-brane at the Tachyonic Vacuum with nonzero flux in Dk-brane background {#fifth}
=================================================================================
Let us again return to the special case of the motion of the non-BPS Dp-brane in its tachyon vacuum in the Dk-brane background (\[Dkbac\]). As in section (\[second\]) we demand that all worldvolume fields are homogeneous, $\partial_i X^I=0$, and that the electric flux has a nonzero component in the $x^1$ direction only: $$\label{gauge}
A_1=f(t)
\ , F_{ij}=0 \ .$$ For homogeneous fields and for the gauge fields given in (\[gauge\]) we get that $b_i=0$ and consequently $\mK$ in (\[hdenf\]) is equal to $$\mK=(\pi_1)^2H_k^{1/2}+
H_k^{-1/2}p_mp_m+H_k^{1/2}
p_up_u \ ,$$ where $p_m, m=k+1,\dots, 9$ are the momenta conjugate to the coordinates $X^m$ transverse both to the Dk-branes and to the Dp-brane, while $p_u, u=p+1,\dots, k$ are the momenta conjugate to the coordinates $Y^u$ transverse to the Dp-brane but parallel to the Dk-branes. With this ansatz the equations of motion (\[aeq\]),(\[pieqg\]), (\[xeq\]), (\[Pieq\]) take the form $$\label{aeg}
\partial_0A_1(\bx)=E_1(\bx)=
\frac{\pi^1}{H_k^{3/4}\sqrt{\mK}}
\ ,$$ $$\label{pieq}
\partial_0\pi^i(\bx)=0 \ ,$$ $$\label{xmeq}
\partial_0X^m(\bx)=
\frac{p_m}{H_k^{3/4} \sqrt{\mK}} \ ,$$ $$\label{xieq}
\partial_0Y^u(\bx)=
\frac{p_uH_k^{1/4}}{ \sqrt{\mK}} \ .$$ It is clear that the solution of (\[pieq\]) consistent with the presumption that all fields are homogeneous is a constant electric flux $\pi_1=\Pi$. Note however that $E_1$ is time dependent, as follows from (\[aeg\]), since the metric components generally depend on the coordinates $X^m(t)$. To find the trajectory of the non-BPS Dp-brane we express $\sqrt{\mK}$ using the conserved energy density as $$\mathcal{E}=\sqrt{-g_{00}}
\sqrt{\mK}
\Rightarrow
\sqrt{\mK}=\frac{\mathcal{E}}{\sqrt{-g_{00}}}$$ so that $$\partial_0 X^m=
\frac{p_m}{H_k\mathcal{E}} \ ,
\partial_0 Y^u=\frac{p_u}{\mathcal{E}} \ ,
E_1=\frac{\Pi}{H_k\mathcal{E}} \ .$$ Since the Hamiltonian density does not depend on $Y^u$ we get that $p_u=\mathrm{const}$. Using manifest rotation invariance of the transverse $R^{9-k}$ space we restrict ourselves to the motion in $(x^8,x^9)$ plane where we also introduce the $R$ and $\theta$ coordinates defined as $$X^8=R\cos\theta \ ,
X^9=R\sin\theta \ .$$ Using the fact that $p_\theta$ is conserved we express $p_R$ from $\mE$ as $$\begin{aligned}
p_R=\pm\sqrt{H_k}\sqrt{
\mE^2-\Pi^2-p_u^2-\frac{p^2_\theta}{R^2H_k}}
\nonumber \\\end{aligned}$$ so that we get $$\label{dotRs}
\dot{R}=\frac{p_R}{\sqrt{H_k}
\mE}=\pm\frac{\sqrt{
\mE^2-\Pi^2-p_u^2-
\frac{p^2_\theta}{R^2H_k}}}
{\sqrt{H_k}\mE} \ .$$ Since the equation (\[dotRs\]) has the same form as the equation (\[eqt\]) (if we identify $\mE^2-\Pi^2$ in (\[dotRs\]) with $\mE^2$ in (\[eqt\])), the analysis of the equation (\[eqt\]) performed in section (\[second\]) holds for (\[dotRs\]) as well. We can therefore interpret the solution with nonzero electric flux $\Pi$ as describing the motion of a homogeneous gas of macroscopic strings stretched in the $x^1$ direction that are confined to the worldvolume of the non-BPS Dp-brane and move in the Dk-brane background.
Conclusion {#sixth}
==========
We have studied the dynamics of a non-BPS Dp-brane at the tachyon vacuum when this Dp-brane moves in a background where the metric and dilaton are functions of the coordinates transverse to the Dp-brane. We have shown that, in the case when there is no electric flux present on the worldvolume of this Dp-brane, its dynamics is equivalent to that of a homogeneous gas of massless particles confined to the worldvolume of the unstable Dp-brane. At this point we should ask how this result is related to the analysis performed in [@Lambert:2003zr], where it was shown that the end product of the tachyon condensation should be a gas of massive closed strings. A relevant problem has been discussed in [@Sen:2003xs]. According to this paper, there exists a spread of the energy density away from the plane of the brane due to the internal oscillations of the closed strings in the final state. In the classical limit we have a delta-function localised D-brane and hence, according to the previous remark, a state of closed strings without oscillator excitations. This, however, also implies that the classical description of such closed strings is given by a massless-like solution of the equations of motion in which the modes on the worldvolume of the fundamental string do not depend on $\sigma$ [^4]. In other words, the classical result given in this paper can be considered as a manifestation of the *Open Closed Duality Conjecture* proposed in [@Sen:2003iv].
In order to find macroscopic fundamental string solutions we had to, as in flat spacetime, consider a nonzero electric flux on the worldvolume of the non-BPS Dp-brane. We then showed that the dynamics of the unstable D-brane at the tachyon vacuum with nonzero electric flux corresponds to the dynamics of a gas of stretched fundamental strings.
In conclusion, we would like to stress that the results presented in this paper give a modest contribution to the study of tachyon condensation. On the other hand, we hope that they could be helpful for a better understanding of the general properties of tachyon condensation in curved spacetime.
[**Acknowledgement**]{}
This work was supported by the Czech Ministry of Education under Contract No. MSM 0021622409.
[20]{}
A. Sen, *“Tachyon dynamics in open string theory,”* arXiv:hep-th/0410103.
D. Kutasov, *“D-brane dynamics near NS5-branes,”* arXiv:hep-th/0405058.
S. Thomas and J. Ward, *“Geometrical tachyon kinks and NS5 branes,”* arXiv:hep-th/0502228. J. Kluson, *“Note about non-BPS and BPS Dp-branes in near horizon region of N Dk-branes,”* arXiv:hep-th/0502079. W. H. Huang, *“Tubular solutions in NS5-brane, Dp-brane and macroscopic strings background,”* JHEP [**0502**]{}, 061 (2005) \[arXiv:hep-th/0502023\].
S. Thomas and J. Ward, *“D-brane dynamics near compactified NS5-branes,”* arXiv:hep-th/0501192. B. Chen and B. Sun, *“Note on DBI dynamics of Dbrane near NS5-branes,”* arXiv:hep-th/0501176. J. Kluson, *“Non-BPS Dp-brane in Dk-brane background,”* arXiv:hep-th/0501010. Y. Nakayama, K. L. Panigrahi, S. J. Rey and H. Takayanagi, *“Rolling down the throat in NS5-brane background: The case of electrified D-brane,”* JHEP [**0501**]{}, 052 (2005) \[arXiv:hep-th/0412038\].
B. Chen, M. Li and B. Sun, *“Dbrane near NS5-branes: With electromagnetic field,”* JHEP [**0412**]{}, 057 (2004) \[arXiv:hep-th/0412022\]. S. Thomas and J. Ward, *“D-brane dynamics and NS5 rings,”* JHEP [**0502**]{}, 015 (2005) \[arXiv:hep-th/0411130\]. D. Bak, S. J. Rey and H. U. Yee, *“Exactly soluble dynamics of (p,q) string near macroscopic fundamental strings,”* JHEP [**0412**]{}, 008 (2004) \[arXiv:hep-th/0411099\]. J. Kluson, *“Non-BPS Dp-brane in the background of NS5-branes on transverse R\*\*3 x S\*\*1,”* arXiv:hep-th/0411014. J. Kluson, *“Non-BPS D-brane near NS5-branes,”* JHEP [**0411**]{}, 013 (2004) \[arXiv:hep-th/0409298\].
O. Saremi, L. Kofman and A. W. Peet, *“Folding branes,”* arXiv:hep-th/0409092. D. Kutasov, *“A geometric interpretation of the open string tachyon,”* arXiv:hep-th/0408073. D. A. Sahakyan, *“Comments on D-brane dynamics near NS5-branes,”* JHEP [**0410**]{}, 008 (2004) \[arXiv:hep-th/0408070\]. A. Ghodsi and A. E. Mosaffa, *“D-brane dynamics in RR deformation of NS5-branes background and tachyon cosmology,”* arXiv:hep-th/0408015. K. L. Panigrahi, *“D-brane dynamics in Dp-brane background,”* Phys. Lett. B [**601**]{}, 64 (2004) \[arXiv:hep-th/0407134\]. H. Yavartanoo, *“Cosmological solution from D-brane motion in NS5-branes background,”* arXiv:hep-th/0407079. A. Sen, *“Supersymmetric world-volume action for non-BPS D-branes,”* JHEP [**9910**]{} (1999) 008 \[arXiv:hep-th/9909062\]. J. Kluson, *“Proposal for non-BPS D-brane action,”* Phys. Rev. D [**62**]{} (2000) 126003 \[arXiv:hep-th/0004106\]. E. A. Bergshoeff, M. de Roo, T. C. de Wit, E. Eyras and S. Panda, *“T-duality and actions for non-BPS D-branes,”* JHEP [**0005**]{} (2000) 009 \[arXiv:hep-th/0003221\]. M. R. Garousi, *“Tachyon couplings on non-BPS D-branes and Dirac-Born-Infeld action,”* Nucl. Phys. B [**584**]{} (2000) 284 \[arXiv:hep-th/0003122\]. D. Kutasov and V. Niarchos, *“Tachyon effective actions in open string theory,”* Nucl. Phys. B [**666**]{} (2003) 56 \[arXiv:hep-th/0304045\].
C. P. Burgess, N. E. Grandi, F. Quevedo and R. Rabadan, *“D-brane chemistry,”* JHEP [**0401**]{} (2004) 067 \[arXiv:hep-th/0310010\]. G. W. Gibbons, K. Hori and P. Yi, *“String fluid from unstable D-branes,”* Nucl. Phys. B [**596**]{} (2001) 136 \[arXiv:hep-th/0009061\]. A. Sen, *“Fundamental strings in open string theory at the tachyonic vacuum,”* J. Math. Phys. [**42**]{} (2001) 2844 \[arXiv:hep-th/0010240\]. G. Gibbons, K. Hashimoto and P. Yi, *“Tachyon condensates, Carrollian contraction of Lorentz group, and fundamental strings,”* JHEP [**0209**]{} (2002) 061 \[arXiv:hep-th/0209034\]. A. Sen, *“Time and tachyon,”* Int. J. Mod. Phys. A [**18**]{} (2003) 4869 \[arXiv:hep-th/0209122\].
H. U. Yee and P. Yi, *“Open / closed duality, unstable D-branes, and coarse-grained closed strings,”* Nucl. Phys. B [**686**]{} (2004) 31 \[arXiv:hep-th/0402027\]. O. K. Kwon and P. Yi, *“String fluid, tachyon matter, and domain walls,”* JHEP [**0309**]{} (2003) 003 \[arXiv:hep-th/0305229\].
A. Sen, *“Tachyon matter,”* JHEP [**0207**]{} (2002) 065 \[arXiv:hep-th/0203265\]. A. Sen, *“Field theory of tachyon matter,”* Mod. Phys. Lett. A [**17**]{} (2002) 1797 \[arXiv:hep-th/0204143\].
N. Lambert, H. Liu and J. Maldacena, *“Closed strings from decaying D-branes,”* arXiv:hep-th/0303139.
P. Mukhopadhyay and A. Sen, *“Decay of unstable D-branes with electric field,”* JHEP [**0211**]{} (2002) 047 \[arXiv:hep-th/0208142\]. K. Nagami, *“Rolling tachyon with electromagnetic field in linear dilaton background,”* Phys. Lett. B [**591**]{} (2004) 187 \[arXiv:hep-th/0312149\].
A. Sen, *“Open-closed duality: Lessons from matrix model,”* Mod. Phys. Lett. A [**19**]{} (2004) 841 \[arXiv:hep-th/0308068\]. A. Sen, *“Open-closed duality at tree level,”* Phys. Rev. Lett. [**91**]{} (2003) 181601 \[arXiv:hep-th/0306137\].
S. J. Rey and S. Sugimoto, *“Rolling of modulated tachyon with gauge flux and emergent fundamental string,”* Phys. Rev. D [**68**]{} (2003) 026003 \[arXiv:hep-th/0303133\]. S. J. Rey and S. Sugimoto, *“Rolling tachyon with electric and magnetic fields: T-duality approach,”* Phys. Rev. D [**67**]{} (2003) 086008 \[arXiv:hep-th/0301049\].
A. Sen, *“Open and closed strings from unstable D-branes,”* Phys. Rev. D [**68**]{} (2003) 106003 \[arXiv:hep-th/0305011\]. A. A. Tseytlin, *“Spinning strings and AdS/CFT duality,”* arXiv:hep-th/0311139.
[^1]: For recent review and extensive list of references, see [@Sen:2004nf].
[^2]: Similar problems have been discussed in [@Thomas:2005fw; @Kluson:2005jr; @Huang:2005rd; @Thomas:2005am; @Chen:2005wm; @Kluson:2005qx; @Nakayama:2004ge; @Chen:2004vw; @Thomas:2004cd; @Bak:2004tp; @Kluson:2004yk; @Kluson:2004xc; @Saremi:2004yd; @Kutasov:2004ct; @Sahakyan:2004cq; @Ghodsi:2004wn; @Panigrahi:2004qr; @Yavartanoo:2004wb].
[^3]: In this paper we will consider the case when the metric and dilaton are functions of the coordinates transverse to Dp-brane worldvolume. This restriction is relevant for the study of the probe non-BPS Dp-brane in the Dk-brane background.
[^4]: For recent review of some aspects of classical string solutions, see [@Tseytlin:2003ii].
---
abstract: 'The delayed choice experiments of the type introduced by Wheeler and extended by Englert, Scully, Süssmann and Walther \[ESSW\], and others, have formed a rich area for investigating the puzzling behaviour of particles undergoing quantum interference. The surprise provided by the original delayed choice experiment, led Wheeler to the conclusion that “no phenomenon is a phenomenon until it is an observed phenomenon", a radical explanation which implied that “the past has no existence except as it is recorded in the present". However Bohm, Dewdney and Hiley have shown that the Bohm interpretation gives a straightforward account of the behaviour of the particle without resorting to such a radical explanation. The subsequent modifications of this experiment led both Aharonov and Vaidman and \[ESSW\] to conclude that the resulting Bohm-type trajectories in these new situations produce unacceptable properties. For example, if a cavity is placed in one arm of the interferometer, it will be excited by a particle travelling down the [*other*]{} arm. In other words it is the particle that does [*not*]{} go through the cavity that gives up its energy! If this analysis is correct, this behaviour would be truly bizarre and could only be explained by an extreme non-local transfer of energy that is even stronger than that required in an EPR-type processes. In this paper we show that this conclusion is not correct and that if the Bohm interpretation is used correctly, it gives a [*local*]{} explanation, which actually corresponds exactly to the standard quantum mechanics explanation offered by Englert, Scully, Süssmann and Walther \[ESSW\].'
author:
- 'B. J. Hiley and R. E. Callaghan.'
date: |
Theoretical Physics Research Unit\
Birkbeck College, University of London\
Malet Street, London WC1E 7HX, England[^1]
title: 'Delayed Choice Experiments and the Bohm Approach.'
---
Introduction.
==============
The idea of a delayed choice experiment was first introduced by Wheeler [@w78] and discussed in further detail by Miller and Wheeler [@mw83]. They wanted to highlight the puzzling behaviour of a single particle[^2] in an interferometer when an adjustment is made to the interferometer by inserting (or removing) a beam splitter at the last minute. Wheeler argued that this presents a conceptual problem even when discussed in terms of standard quantum mechanics (SQM) because the results seemed to imply that there was a change in behaviour from wave-like phenomenon to particle-like phenomenon or vice-versa well after the particle entered the interferometer.
The example Miller and Wheeler [@mw83] chose to illustrate this effect was the Mach-Zehnder interferometer shown in Figure 1. In this setup a movable beam splitter $ BS_{2}$ can either be inserted or removed just before the electron is due to reach the region $ I_{2}$. When $BS_{2}$ is not in position, the electron behaves like a particle following one of the paths, 50% of the time triggering $D_{1}$ and the other 50% of the time triggering $D_{2}$. However when the beam splitter is in place, the electron behaves like a wave following “both" paths, and the resulting interference directs all the particles into $D_{1}$. Wheeler’s claim is that delaying the choice of fixing the position of the final beam splitter forces the electron to somehow ‘decide’ whether to behave like a particle or a wave long after it has passed the first beam splitter $BS_{1}$, but before it has reached $I_{2}$. Experiments of this type, which have been reviewed in Greenstein and Zajonc [@gz97], confirm the predictions of quantum theory and raise the question “How is this possible?”
![ Sketch of the Wheeler delayed choice experiment](Figure1.pdf){width="4in"}
Wheeler [@w78] resolves the problem in the following way.
> Does this result mean that present choice influences past dynamics, in contravention of every formulation of causality? Or does it mean, calculate pedantically and don’t ask questions? Neither; the lesson presents itself rather like this, that the past has no existence except as it is recorded in the present.
Although Wheeler claims to be supporting Bohr’s position, Bohr [@b61] actually comes to a different conclusion and writes
> In any attempt of a pictorial representation of the behaviour of the photon we would, thus, meet with the difficulty: to be obliged to say, on the one hand, the photon always chooses [*one*]{} of the two ways and, on the other hand, that it behaves as if it passed [*both*]{} ways.
Bohr’s conclusion is not that the past has no existence until a measurement is made, but rather that it was no longer possible to give ‘pictures’ of quantum phenomena as we do in classical physics. For Bohr the reason lay in the ‘indivisibility of the quantum of action’ as he put it, which implies it is not possible to make a sharp separation between the properties of the observed system and the observing apparatus. Thus it is meaningless to talk about the path taken by the particle and in consequence we should simply give up attempts to visualise the process. Thus Bohr’s position was, to put it crudely, ‘calculate because you cannot get answers to such questions’, a position that Wheeler rejects.
But it should be quite clear from the literature that many physicists even today do not accept either Bohr’s or Wheeler’s position and continue to search for other possibilities of finding some form of image to provide a physical understanding of what could be actually going on in these situations, hence the continuing debate about the delayed choice experiment.
By now it is surely well known that the Bohm interpretation (BI)[^3] (Bohm and Hiley [@bh87], [@bh93] Holland [@h93]) does allow us to form a picture of such a process and reproduce all the known experimental results. Indeed Bohm, Hiley and Dewdney [@bhd85] have already shown how the above Miller-Wheeler experiment can be understood in a consistent manner while maintaining the particle picture. There is no need to invoke non-locality here and the approach clearly shows there is no need to invoke the past only coming into being by action in the present.
There exists a large volume of literature showing how the BI can also be used in many other typical quantum situations, allowing us to consistently account for these processes without the need for the type of explanation suggested by Wheeler. Indeed application of the BI avoids some of the more spectacular paradoxes of SQM. Particles do not go through both slits at the same time, cats do not end up in contradictory states such as being simultaneously alive and dead, and there is no measurement problem.
In spite of all these results there is still a great reluctance to accept this explanation for reasons that we have never understood[^4]. This reluctance is shown by the many attempts to show the explanation is in some way wrong or predicts unacceptable features. For example Englert, Scully, Süssmann and Walther \[ESSW\] [@essw92], Aharonov and Vaidman [@av96] and Aharonov, Englert and Scully [@aes99] have analysed the Bohm approach in detail and concluded that the explanation offered by the trajectories is too bizarre to be believable. Unfortunately these analyses have not been carried out correctly and the conclusions they reach are wrong because they have not used the Bohm approach correctly. The purpose of this paper is to clarify the Bohm approach as defined in Bohm and Hiley [@bh93] and to show what is wrong with the above arguments. We will go on to show if treated correctly the Bohm approach does not produce the bizarre behaviour predicted by the above authors. In fact the trajectories are essentially those that we would expect and there is no need for non-locality as previously suggested by Dürr, Fusseder and Goldstein [@dfg93] , Dewdney, Hardy and Squires [@dhs93] and indeed by ourselves in Hiley, Callaghan and Maroney [@hcm00].
In this paper we will discuss these issues and correct the conclusion drawn by the various authors listed in the previous paragraph. In particular the contents of this paper are as follows. In section 2 we briefly re-examine the BI account presented in Bohm, Hiley and Dewdney [@bhd85] for the original delayed choice experiment outlined by Wheeler [@w78]. In section 3 we move on to consider the delayed choice experiment introduced by ESSW [@essw92] where a microwave cavity is introduced into one of the arms of the interferometer. This case is examined in detail both from the point of view of SQM and the BI. We explain the principles involved by first replacing the cavity by a spin flip system. This simplifies the mathematics so that we can bring out the principles involved more clearly. Then in section 4 we outline how the argument goes through with the cavity in place. This requires us to use an extension of the BI applied to quantum field theory.
We conclude that if the structure of the quantum potential that appears in the BI is correctly analysed we find there is no need for any non-local energy transfer in the above experiments. In fact it is only the atom that goes through the cavity that gives up its excitation energy to the cavity even in the BI. This is the opposite conclusion reached by Aharonov and Vaidman [@av96] and by ESSW. Our results confirm that there is no difference between the account given by BI and that given by ESSW claiming to use SQM for these experiments. Thus the conclusion that the Bohm trajectories are ‘metaphysical’ or ‘surreal’ does not follow from the arguments used by ESSW. We finally discuss how our results provide the possibility of new insights into the role of measurement in quantum physics.
Do quantum particles follow trajectories?
=========================================
There is a deeply held conviction, typified by Zeh [@z98], that a quantum particle cannot and does not have well-defined simultaneous values of position and momentum. This surely is what the uncertainty principle is telling us. Actually, it is not telling us this. What the uncertainty principle does say is that we cannot [*measure*]{} simultaneously the exact position and momentum of a particle. This fact is not in dispute. But not being able to measure these values simultaneously [*does not mean that they cannot exist simultaneously for the particle*]{}. Equally we cannot be sure that a quantum particle actually [*does not have simultaneous*]{} values of these variables because there is no experimental way to rule out this possibility either. The uncertainty principle only rules out simultaneous measurements. It says nothing about particles [*having or not having*]{} simultaneous $x$ and $p$. Thus both views are [*logically*]{} possible.
As we have seen Wheeler adopts an extreme position that not only do the trajectories not exist, but that the past does not exist independently of the present either. On the other hand the BI assumes particles could have exact values of position and momentum and then simply explores the consequences of this [*assumption*]{}. Notice we are not [*insisting*]{} that the particles do actually have a simultaneous position and momentum. How could we in view of the discussion in the previous paragraph?
If we adopt the assumption that quantum particles do have simultaneous $x$ and $p$, which are, of course, unknown to us without measurement, then we must give up the insistence that the actual values of dynamical variables possessed by the particle are always given by the eigenvalues of the corresponding dynamical operators. Such an insistence would clearly violate what is well established through theorems such as those of Gleason [@g57] and of Kochen and Specker [@ks67]. All we insist on is that a measurement produces, in the process of interaction with a measuring instrument, an eigenvalue corresponding to the operator associated with that particular instrument. The particles have values for the complementary dynamical variable but these are not the eigenvalues of the corresponding dynamical operator [*in the particular representation defined by the measuring instrument*]{}.
This implies that the measurement can and does change the complementary variables. In other words, measurement is not a passive process; it is an active process changing the system under investigation in a fundamental and irreducible way. This leads to the idea that measurement is participatory in nature, remarkably a conclusion also proposed by Wheeler himself (see Patton and Wheeler [@pw75]). Bohm and Hiley [@bh93] explain in more detail how this participatory nature manifests itself in the BI. It arises from the quantum potential induced by the measuring apparatus. We will bring this point out more clearly as we go along.
By assuming that a particle has simultaneously a precise position and momentum we can clearly still maintain the notion of a particle trajectory in a quantum process. Bohm and Hiley [@bh93] and Holland [@h93] collect together a series of results that show how it is possible to define trajectories that are consistent with the Schrödinger equation. The mathematics is unambiguous and exactly that used in the standard formalism. It is simply interpreted differently. The equation for the trajectories contains, not only the classical potential in which the particle finds itself, but an additional term called the quantum potential, which suggests there is a novel quality of energy appearing only in the quantum domain. (See Feynman et al [@fls65] for an alternative suggestion of this kind.)
Both the trajectory and the quantum potential are determined by the real part of the Schrödinger equation that results from the polar decomposition of the wave function. We find that the amplitude of the wave function completely determines the quantum potential. In its simplest form this suggests that some additional physical field may be present, the properties of which are somehow encoded in the wave function. One of the features of our early investigations of the BI was to find out precisely what properties this field must have in order to provide a consistent interpretation of quantum processes. It is the reasonableness of the physical nature of this potential and of the trajectories that forms the substance of the criticisms of Aharonov and Vaidman [@av96], of ESSW [@essw92] and of Scully [@s98].
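For the reader's convenience, here is a brief sketch of that decomposition (in units with $\hbar = 1$, the convention used in the equations below). Writing $\psi = R\exp[iS]$ and substituting into the Schrödinger equation, the real and imaginary parts give $$\frac{\partial S}{\partial t} + \frac{(\nabla S)^{2}}{2m} + V + Q = 0 \hspace{0.5cm}\mbox{and}\hspace{0.5cm} \frac{\partial R^{2}}{\partial t} + \nabla\cdot\left(R^{2}\,\frac{\nabla S}{m}\right) = 0$$ where $V$ is the classical potential and $Q = -\frac{1}{2m}\frac{\nabla^{2}R}{R}$ is the quantum potential. The first equation is the classical Hamilton-Jacobi equation supplemented by $Q$, while the second expresses the conservation of probability; these are the two relations behind the trajectory and quantum potential expressions used below.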
To bring out the nature of these criticisms let us first recall how the original Wheeler interferometer can be discussed in the BI (see Bohm, Hiley and Dewdney [@bhd85]). As we have already indicated, the advantage of the BI in this case is that there exists a very straightforward explanation without the need of any suggestion of the type invoking “present action determining the past".
We first consider the case where the incident particles are either electrons, or neutrons or even atoms and we restrict ourselves to the non-relativistic domain. Here it is assumed that the particle [*always*]{} follows one and only one of the possible paths. On the other hand the quantum field, described by $\psi $, satisfies the Schrödinger equation which is defined globally. The physical origins of this field will not concern us in this paper. However our investigations suggest that this field is regarded as a field of [*potentialities*]{}, rather than a field of [*actualities*]{}. This removes the situation that arises in SQM when we have two separated wave packets and then try to treat the packets as real. It is this assumption of wave packets as being actual that leads to the debate about live and dead cats.
In the BI, although there is talk of ‘empty wave packets’ only one wave packet contains energy. To conclude otherwise would violate energy conservation. The ‘active’ energy is the energy of the packet that contains the particle. The empty packet remains a potentiality. In an attempt to highlight this difference we are led to introduce the notions of ‘active information’ and ‘passive information’. The passive information cannot be discarded because it can become active again when the wave packets overlap later (see Bohm and Hiley [@bh93] for a detailed discussion of this point). It should be noted that the arguments we use in the rest of this paper do not depend on this or any other specific interpretation of the quantum potential energy. We have added these comments to enable the reader to relate to the discussion in Bohm and Hiley [@bh93].
Details of Wheeler’s delayed choice experiment.
===============================================
Let us now turn to consider specific examples and begin by recalling the Wheeler delayed choice experiment using a two-beam interference device based on a Mach-Zehnder interferometer as shown in figure 1. We will assume the particles enter one at a time and each can be described by a Gaussian wave packet of width very much smaller than the dimensions of the apparatus, so that the wave packets only overlap in regions $I_{1}$ and $I_{2}$. Otherwise the wave packets have zero overlap.
The specific region of interest is $I_{2}$, which contains the movable beam splitter $BS_{2}$. In BI it is the quantum potential in this region that determines the ultimate behaviour of the particle. This in turn depends upon whether the $BS_{2}$ is in place or not at the time the particle approaches the region $I_{2}$. The position of $BS_{2}$ only affects the particle behaviour as it approaches the immediate neighbourhood of $I_{2}$. Thus there is no possibility of the “present action determining the past" in the way Wheeler suggests. The past is past and cannot be affected by any activity in the present. This is because the quantum potential in $I_{2}$ depends on the actual position of $BS_{2}$ at the time the particle reaches $I_{2}$. We will now show how the results predicted by the BI agree exactly with the experimental predictions of SQM.
Interferometer with $BS_{2}$ removed.
-------------------------------------
Let us begin by first quickly recalling the SQM treatment of the delayed-choice experiment. When $BS_{2}$ is removed (see figure 2) the wave function arriving at $D_{1}$ is $ - \psi_{1}$. (The $ \frac{\pi}{2}$ phase changes arise from reflections at the mirror surfaces). This clearly gives a probability $|\psi_{1}|^{2}$ that $D_{1}$ fires. The corresponding wave function arriving at $D_{2}$ is $i\psi_{2}$, giving a probability $|\psi_{2}|^{2}$ that $D_{2}$ fires.
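As a minimal sketch of the bookkeeping behind these factors (assuming that each reflection, whether at a mirror or at the beam splitter, contributes a factor $e^{i\pi/2} = i$, that transmission contributes no phase, and that the $\frac{1}{\sqrt{2}}$ amplitude factors of the 50/50 splitter are absorbed into $\psi_{1}$ and $\psi_{2}$), one assignment consistent with these amplitudes is $$\psi_{D_{1}} = (i)_{BS_{1}}(i)_{M_{1}}\,\psi_{1} = -\psi_{1} \hspace{0.5cm}\mbox{and}\hspace{0.5cm} \psi_{D_{2}} = (1)_{BS_{1}}(i)_{M_{2}}\,\psi_{2} = i\psi_{2}$$ that is, the beam reaching $D_{1}$ has been reflected twice (at $BS_{1}$ and at $M_{1}$), while the beam reaching $D_{2}$ has been reflected only once (at $M_{2}$).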
If $BS_{1}$ is a 50/50 beam splitter, then each particle entering the interferometer will have a 50% chance of firing one of the detectors. This means that the device acts as a particle detector, because the particle will either take path 1, $BS_{1}M_{1}D_{1}$, triggering the detector $D_{1}$, or it will travel down path 2, $BS_{1}M_{2}D_{2}$, triggering detector $D_{2}$.
![ Interferometer acting as a particle detector.](Figure2.pdf){width="4in"}
Now let us turn to consider how the BI analyses this experiment. Here we must construct an ensemble of trajectories, each individual trajectory corresponding to the possible initial values of position of the particle within the incident wave packet. One set of trajectories will follow the upper arm of the apparatus, while the others follow the lower arm. We will call a distinct group of trajectories a ‘channel’. Thus the wave function in channel 1 will be $\psi_{1}(\br,t) = R_{1}(\br,t)\exp[iS_{1}(\br,t)] $ away from the regions $I_{1}$ and $I_{2}$ so that the Bohm momentum of the particle will be given by $$\bp_{1}(\br,t) = \nabla S_{1}(\br ,t) %1$$ and the quantum potential acting on these particles will be given by $$Q_{1}(\br ,t) = - \frac{1}{2m}\frac{\nabla^{2}R_{1}(\br ,t)}{R_{1}(\br ,t)} %2$$ There will be a corresponding expression for particles travelling in channel 2.
All of this is straightforward except in the region $I_{2}$, which is of particular interest to the analysis. Here the wave packets from each channel overlap and there will be a region of interference because the two wave packets are coherent. To find out how the trajectories behave in this region, we must write[^5] $$\Psi = -\psi_{1} +i \psi_{2} = R\exp[iS] %3$$ and then use $$\bp = \nabla S \hspace{0.5cm}\mbox{and} \hspace{0.5cm}Q = - \frac{1}{2m}\frac{\nabla^{2}R}{R} %4$$ Thus to analyse the behaviour in the region $I_{2}$, we must write $$Re^{iS}=R_{1}e^{iS_{1}} + R_{2}e^{iS_{2}}$$ so that $$R^{2} = R_{1}^{2} + R_{2}^{2} + 2R_{1}R_{2}\cos\Delta S' %5$$ where $\Delta S' = S_{2} - S_{1}$. Equation (5) clearly shows the presence of an interference term in the region $I_{2}$ since there is a contribution from each beam $\psi_{1}$ and $\psi_{2}$, which depends on the phase difference $\Delta S'$. We show the behaviour of the quantum potential in this region in figure 4.
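The interference structure of equation (5), and hence the quantum potential of figure 4, is easy to reproduce numerically. The following is a minimal one-dimensional sketch in Python (with purely illustrative parameters and $\hbar = m = 1$; it is not the calculation used to produce the figures) that evaluates $R^{2}$ and $Q$ for two equal, counter-propagating Gaussian packets in the overlap region.

```python
import numpy as np

# Toy 1-D model of the overlap region I_2: two equal Gaussian envelopes with
# opposite mean momenta, combined with the relative phases -1 and i of eq. (3).
# All parameters are illustrative; hbar = m = 1.
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
k, sigma = 5.0, 2.0

psi1 = np.exp(-x**2 / (4 * sigma**2) + 1j * k * x)   # channel-1 packet
psi2 = np.exp(-x**2 / (4 * sigma**2) - 1j * k * x)   # channel-2 packet
psi = -psi1 + 1j * psi2                              # equation (3)

R2 = np.abs(psi)**2                                  # equation (5): fringes in I_2
R = np.sqrt(R2)
Q = -0.5 * np.gradient(np.gradient(R, dx), dx) / R   # quantum potential, eq. (4)

# The fringe spacing pi/k shows up in R^2 (and hence in the structure of Q).
peaks = x[1:-1][(R2[1:-1] > R2[:-2]) & (R2[1:-1] > R2[2:])]
print("mean fringe spacing:", np.mean(np.diff(peaks)), "expected:", np.pi / k)
```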
The particles following the trajectories then ‘bounce off’ this potential as shown in figure 3 so that the particles in channel 1 end up triggering $D_{2}$, while the trajectories in channel 2 end up triggering $D_{1}$. We sketch the overall behaviour of the channels in figure 5. Notice that in all of this analysis the quantum potential is local.
![Trajectories in the region $I_{2}$ without $BS_{2}$ in place.[]{data-label="default"}](Figure9.pdf){width="3.2in"}
![Calculation of quantum potential in region $I_{2}$ without $BS_{2}$ in place.](Figure10.pdf){width="3.5in"}
Interference experiment with beam splitter $BS_{2}$ in place.
--------------------------------------------------------------
Let us now consider the case when $BS_{2}$ is in place (see figure 6). We will assume that beam splitter $BS_{2}$ is also a 50/50 splitter. Using SQM the wave function at $D_{1}$ is $$\Psi_{D_{1}} = -(\psi_{1} +\psi_{2}) %6$$ while the wave function at $D_{2}$ is $$\Psi_{D_{2}} = i(\psi_{2}-\psi_{1}) %7$$ Since $R_{1} = R_{2}$, and the wave functions are still in phase, the probability of triggering $D_{1}$ is unity, while the probability of triggering $D_{2}$ is zero. This means that all the particles end up triggering $D_{1}$. Thus we have 100% response at $D_{1}$ and a zero response at $D_{2}$ and conclude that the apparatus acts as a wave detector, so that we follow Wheeler [@w78] and say (loosely) that in SQM the particle “travels down both arms", finally ending up in detector $D_{1}$. In this case the other detector $D_{2}$ always remains silent.
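A minimal sketch of the corresponding bookkeeping (same conventions as before, with the $\frac{1}{\sqrt{2}}$ factors of the 50/50 splitter $BS_{2}$ absorbed into $\psi_{1}$ and $\psi_{2}$): the amplitudes arriving at $BS_{2}$ along channels 1 and 2 are $-\psi_{1}$ and $i\psi_{2}$, channel 1 being transmitted towards $D_{1}$ and channel 2 reflected towards $D_{1}$, so that $$\Psi_{D_{1}} = (1)(-\psi_{1}) + (i)(i\psi_{2}) = -(\psi_{1}+\psi_{2}) \hspace{0.5cm}\mbox{and}\hspace{0.5cm} \Psi_{D_{2}} = (i)(-\psi_{1}) + (1)(i\psi_{2}) = i(\psi_{2}-\psi_{1})$$ With $R_{1}=R_{2}$ and $\Delta S' = 0$ the second amplitude vanishes identically, which is the destructive interference that keeps $D_{2}$ silent.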
![Sketch of the Bohm trajectories without $BS_{2}$ in place.](Figure4.pdf){width="4.5in"}
How do we explain these results in the BI? First we must notice that at beam splitter $BS_{1}$ the top half of the initial positions in the Gaussian packet are reflected while the bottom half are transmitted. This result is discussed in detail in Dewdney and Hiley [@dh82]. As the two channels converge on beam splitter $BS_{2}$, the trajectories in channel 1 are now transmitted through it, while those in channel 2 are reflected. Thus all the trajectories end up triggering $D_{1}$. It is straightforward to see the reason for this. The probability of finding a particle reaching $D_{2}$ is zero and therefore all the particles in channel 1 must be transmitted. The resulting trajectories are sketched in figure 7.
![Interferometer acting as a wave detector.](Figure5.pdf){width="4.5in"}
The delayed choice version of the interferometer.
--------------------------------------------------
Now let us turn to consider what happens when beam splitter $BS_{2}$ can be inserted or removed once the particle has entered the interferometer, having passed $BS_{1}$ but not yet reached $BS_{2}$. We saw above that this caused a problem if we followed the line of argument used by Wheeler. Applying the BI presents no such problem. The particle travels in one of the channels regardless of whether $BS_{2}$ is in position or not. The way it travels once it reaches the region $I_{2}$ depends on whether $BS_{2}$ is in position or not. This in turn determines the quantum potential in that region, which in turn determines the ultimate behaviour of the particle.
If the beam splitter is absent when a particle reaches $I_{2}$, it is reflected into a perpendicular direction no matter which channel it is actually in as shown in figure 5. If $BS_{2}$ is in place then the quantum potential will be such as to allow the particle in channel 1 to travel through $BS_{2}$. Whereas if the particle is in channel 2 it will be reflected at $BS_{2}$ so that all the particles enter detector $D_{1}$ as shown in figure 7.
![Sketch of trajectories with $BS_{2}$ in place.](Figure6.pdf){width="4.5in"}
The explanation of the delayed choice results is thus very straightforward and depends only on the local properties of the quantum potential in the region of $I_{2}$ at the time the particle enters that region. The value of the quantum potential in $I_{2}$ is determined only by the actual position of $BS_{2}$. Hence there is no delayed choice problem here. There is no need to claim that “no phenomenon is a phenomenon unless it is an observed phenomenon". The result simply depends on whether $BS_{2}$ is in position or not at the time the particle reaches $I_{2}$, and this is independent of any observer being aware of the outcome of the experiment. Remember the BI is an ontological interpretation and the final outcome is independent of the observer’s knowledge.
Note further that in these experiments the Bohm trajectories do not cross as correctly concluded by ESSW [@essw92]. Let us now go on to see if this feature still holds in the modified experiments considered by ESSW.
Variations of the Delayed-Choice Experiment.
============================================
ESSW have modified the interferometer shown in figure 6 by removing $BS_{2}$ altogether and placing a micromaser cavity in one of the paths as shown in figure 8.
![ Interferometer with cavity in place and $BS_{2}$ removed.[]{data-label="default"}](Figure7.pdf){width="4.5in"}
This cavity has the key property that when a suitably excited atom passes through the cavity, it gives up all its internal energy of excitation to the cavity without introducing any random phase factors into the centre of mass wave function, which continues unmodified. This means that when the wave packet $\psi_{2}$ reaches $I_{2}$ it is still coherent with the wave packet $\psi_{1}$ that travels in channel 1. Thus any loss of interference cannot be explained by the traditional assumption that it is loss of phase coherence that destroys interference. This point has been clearly discussed and explained in Scully, Englert and Walther [@sew91] and Scully and Walther [@sw98] and has been experimentally confirmed by Dürr, Nonn and Rempe [@dnr98]. We do not question the validity of this assumption.
What ESSW argued was that since there is still coherence in the region $I_{2}$, the behaviour of the Bohm trajectories should be as shown in figure 5. Thus the particles travelling down channel 1 should trigger $D_{2}$, while those travelling down channel 2 should trigger $D_{1}$. However, SQM and experiment show that the particles that trigger $D_{2}$ can lose their internal energy, whereas the particles triggering $D_{1}$ [*never*]{} lose any internal energy. If the trajectories are as in figure 3 when the cavity is in place in channel 2 as ESSW maintain, then the particles that travel through the cavity never lose their energy while those not passing through the cavity can give up their internal energy to the cavity. If this conclusion is correct then the Bohm trajectories are truly bizarre and surely ESSW would be right to conclude that these are not reliable and should be regarded as ‘surreal’. But do the trajectories still behave as they are shown in figure 3?
This conclusion was supported by a different experiment reported by Aharonov and Vaidman [@av96]. They considered a bubble chamber rather than a microwave cavity and replaced the excited atoms with particles that can ionise the liquid molecules in the bubble chamber. A significant part of their paper was concerned with an investigation into the relation between weak measurements and the BI. This discussion is of no relevance to the discussions in this paper. However, they also considered what they called ‘robust’ measurements and it is for these processes that they reach the same conclusion as ESSW. They also conclude that the particle that does not pass through the bubble chamber causes the bubbles to appear.
Aharonov and Vaidman note that in their experiment the bubbles develop very slowly compared with the transition time of the particle. The significance of this remark is that the wave function of the apparatus ‘pointer’ (i.e. the bubbles), although orthogonal in momentum space, does not change significantly in position space because the bubbles take time to form. So by the time they are formed to a significant radius, the particle has already reached the detector. They claim that because of the slow speed of bubble formation, the trajectories are unaffected by the measurement and behave in exactly the same way as they would have behaved had no measurement been made, i.e., as shown in figure 3.
Hence they conclude that the slow development of the bubbles implies that ‘trajectories still don’t cross’, and therefore the particles that do [*not*]{} go through the chamber somehow ionise the liquid. But Hiley, Callaghan and Maroney [@hcm00] have already shown that this conclusion is incorrect because Aharonov and Vaidman [@av96] had not used the BI correctly. In fact Aharonov and Vaidman clearly state that they are not using the actual Bohm approach “because the Bohm picture becomes very complex". Indeed they make it very clear by writing
> The fact that we see these difficulties follows from our particular approach to the Bohm theory in which the wave is not considered to be a reality.
But the whole point of the BI is to assume the wave does have a ‘reality’. This has been emphasised in all the key publications on the BI such as Bohm [@b52], Bohm and Hiley [@bh93], Holland [@h93] and Dürr, Goldstein and Zanghi [@dgz92]. To emphasise the point further we will quote from Bohm and Hiley [@bh93]
> As we have also suggested, however, this particle is never separated from a new type of quantum wave field that belongs to it and that it affects it fundamentally.
In spite of the admission that they are using a different interpretation from BI, Aharonov and Vaidman [@av96] go on to conclude that this unreasonable behaviour must also be attributed to the BI discussed in Bohm [@b52] and in Bohm and Hiley [@bh93] without giving any reasons for this conclusion.
This criticism was repeated again without further justification in Aharonov, Englert and Scully [@aes99]. We do not find their conclusion surprising. Their model will always have this problem but, we emphasise again, this model is not the one proposed in Bohm and Hiley [@bh93]. As we showed in Hiley, Callaghan and Maroney [@hcm00] (and will repeat the argument later in this section) in this case the trajectories [*actually do cross*]{}. Each particle that goes through the chamber ionises the liquid leaving a track upon which bubbles eventually form. We will also show that one does not have to wait for the bubbles to form, it is sufficient to consider only the initial ionisation process from which a bubble will eventually form. So there is a natural and reasonable local explanation of the whole process because it is the particles that do go through the bubble chamber that cause the ionisation and then move on to fire the detector $D_{2}$ even though the actual bubbles form later.
The ESSW [@essw92] experiment in which the bubble chamber is replaced by a micromaser cavity requires a more subtle analysis, which was missed by Dewdney, Hardy and Squires [@dhs93]. In this case there are no atoms to be ionised in the cavity, so it looks as if the final wave function describing the cavity will remain entangled with the wave function of the atom, giving what looks like a typical state that arises in the EPR situation. Thus Dewdney, Hardy and Squires [@dhs93] argued that there must be a non-local exchange of energy between atom and cavity. This seemed like a plausible explanation of the behaviour for those that are familiar with the BI. Indeed it must be admitted that in our previous paper Hiley, Callaghan and Maroney [@hcm00] we came to a similar conclusion. However here we show that this conclusion is wrong and that we do not need non-locality to account for the ESSW delayed choice experiment shown in figure 8. All of this will be discussed in section 4.4.
The Aharonov-Vaidman version of the experiment.
-----------------------------------------------
Let us then start with the Aharonov-Vaidman [@av96] version of the criticism of the BI because it is easier to bring out the error in their analysis. Firstly let us recall what happens according to SQM. In the region $I_{1}$ the wave function is $\Psi = \psi_{1} + \psi_{2}$. If the particle triggers the bubble formation process we can regard the bubble chamber as acting like a measuring device and the particle gives up some of its energy causing the wave function to collapse to $\psi_{2}$. As a consequence there is no interference in the region $I_{2}$ so that the particle goes straight through and fires detector $D_{2}$. Thus the particles that trigger $D_{2}$ have lost some energy as can be checked experimentally. On the other hand if the bubble formation process is not triggered, then the wave function collapses to $\psi_{1}$ so again the particle goes straight through the region $I_{2}$ with all its energy intact, eventually triggering $D_{1}$.
What happens according to the BI? There is no collapse so the wave function is still a linear combination with both $\psi_{1}$ and $\psi_{2}$ present in region $I_{2}$. At first sight it seems that if we use this wave function $\Psi$ to calculate the quantum potential in the region $I_{2}$ it should be the same as shown in figure 4. This surely would mean that the trajectories should be as shown in figure 3, implying that the particles that travel in channel 1 eventually trigger $D_{2}$. Since the particles that trigger $D_{2}$ can be shown to have lost some energy, it would mean that the BI predicts that bubble formation is triggered by the particle that does not go through the bubble chamber.
The key question then is whether the coherence of the wave function $\Psi = \psi_{1} + \psi_{2}$ is somehow ‘destroyed’. One way out might be to appeal to the irreversibility involved in the bubble forming process. However Aharonov and Vaidman [@av96] point out that the bubbles form relatively slowly so that they will not have formed until long after the particle has passed the region $I_{2}$. This means that the effective wave function just after the particle has passed through $I_{2}$ is $$\Psi(\br ,y ,t)=[\psi_{1}(\br ,t)+\psi_{2}(\br ,t)]\Phi(y,t)$$ because, as they put it, “…the density of the wave function is not changed significantly during the time of motion of the particle.” (Aharonov and Vaidman [@av96a]) The implication here is that the apparatus wave function $\Phi(y,t)$ has not changed sufficiently before the particle arrives at the detector for us to write $$\Psi (\br , y ,t)=\psi_{1}(\br ,t)\Phi_{NB}(y,t) + \psi_{2}(\br ,t)\Phi_{B}(y ,t) %8$$ where $\Phi_{NB}(y,t)[\Phi_{B}(y,t)]$ is the wave function of the bubble chamber when bubbles have not formed \[have formed\]. Had it been possible to write the wave function in the form (8), it could have been used in the standard Bohmian way to show that the quantum potential is no longer as shown in figure 4 (see Bohm and Hiley [@bh84] for details).
But all of this is irrelevant because measurement does not play a special role in the BI as it does in SQM. We must concentrate on the processes occurring at the particle level in the bubble chamber. Thus, when the particle enters the bubble chamber, the process that is central to the BI analysis is the ionisation process that takes place in the molecules of the liquid. It is this ionisation that leads to a loss of coherence not because of irreversibility, but because the wave functions involved in the process no longer overlap and are spatially distinct.
To show how this works we must first write down in detail the final wave function of all the particles involved in the ionisation process after ionisation has actually taken place. To make the argument as simple as possible and bring out clearly the principles involved, we will assume that the ionising particle that enters the bubble chamber will ionise one and only one liquid molecule and that, furthermore, there is 100% chance of this happening. We will sketch how to deal with a more realistic situation in section 4.3.1.
Let the wave function of the unionised liquid molecule be $\Psi_{UIL}(\br_{L},\br_{e})$ where $\br _{L}$ is the centre of mass co-ordinate of the liquid molecule and $\br_{e}$ is the position of the electron that will be ejected from the molecule on ionisation. Immediately after the ionisation has taken place the wave function of the ionised molecule will be $\Psi_{IL}(\br_{L})$ and the wave function of the ejected electron will be $\phi(\br_{e})$. The final wave function will then be $$\Psi(\br ,\br_{L} ,\br_{e})=\psi_{1}(\br)\Psi_{UIL}(\br_{L} ,\br_{e})+\psi_{2}(\br)\Psi_{IL}(\br_{L})\phi(\br_{e}) %9$$ Here $\psi_{i}(\br)$, with $i = (1, 2)$, are the respective wave functions of the ionising particle at position $\br$.
To work out what happens in the BI we must write the final wave function in the form $$\begin{aligned}
\Psi(\br ,\br_{L},\br_{e}) & = & R(\br ,\br_{L},\br_{e})e^{iS(\br ,\br_{L}, \br_{e})}\nonumber \\
& = & (R_{1}(\br)e^{iS_{1}(\br)}) (R_{UIL}(\br_{L} ,\br_{e})e^{iS_{UIL}(\br_{L} ,\br_{e})})\nonumber \\
& + & (R_{2}(\br)e^{iS_{2}(\br)})(R_{IL}(\br_{L})e^{iS_{IL}(\br_{L})})(R_{e}(\br_{e})e^{iS_{e}(\br_{e})}) \nonumber\end{aligned}$$ And then use equation (5), which in this case becomes $$R^{2} = (R_{1}R_{UIL})^{2} + (R_{2}R_{IL}R_{e})^{2} +2R_{1}R_{2}R_{UIL}R_{IL}R_{e}\cos\Delta S'$$ We can calculate the quantum potential from this expression and see what effect this has in the region $I_{2}$.
Recall that the quantum potential must be evaluated for the actual positions of [*all*]{} the particles concerned. Remember yet again that this is an ontological approach and the results do not depend on us [*knowing*]{} these positions. The positions of all the particles are [*actual*]{} even though we do not know what these positions are.
The key to the disappearance of the interference term is the position of the ionised electron. If the ionising particle passes along channel 1, there will be no ionisation so that the ionised electron will still be in the liquid molecule. Thus the probability of finding the electron outside the molecule is zero. Hence $R_{e} = 0$ so the interference term in equation (10) will be zero, and therefore there will be no interference in region $I_{2}$. This means that the atom will go straight to detector $D_{1}$.
If however the ionising particle passes down channel 2 it will, by the assumption we are making about the efficiency of the ionisation process, ionise a liquid molecule. In this case the probability of finding the electron still bound in the unionised molecule will be zero, so that $R_{UIL} = 0$ and again the interference term in equation (10) vanishes. As there is no interference in region $I_{2}$ the ionising particle goes straight through to trigger $D_{2}$.
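The same conclusion can be expressed through the effective (conditional) wave function guiding the ionising particle, obtained by evaluating (9) at the [*actual*]{} positions of the liquid molecule and of the electron: $$\psi_{eff}(\br) = \psi_{1}(\br)\Psi_{UIL}(\br_{L},\br_{e}) + \psi_{2}(\br)\Psi_{IL}(\br_{L})\phi(\br_{e})$$ Once the ejected electron has actually left the molecule (or, in channel 1, has actually remained inside it), the actual configuration $(\br_{L},\br_{e})$ lies in the support of only one of the two terms, so that $\psi_{eff}$ reduces to a multiple of $\psi_{1}$ or of $\psi_{2}$ alone and no interference can appear in $I_{2}$.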
If we look at this in terms of trajectories we find that because the interference term in equation (10) is always zero, the trajectories now always cross. The interference does not vanish because the ionising atom undergoes a randomisation of its phase, but because the final positions of the particles involved in the interaction process are such that their wave packets do not overlap, and it is this fact that destroys the interference.
Note the change in the position of the ionised electron is immediate and we do not have to wait until any bubbles form. [*The rate of bubble formation is irrelevant*]{}. The key point is that the ionised electron must be sufficiently far removed from the neighbourhood of the ionised liquid molecule so that the bubble can eventually form on the ionised molecule. It is not necessary to invoke any principle of irreversibility at this stage. All that is necessary is that the ionised electron does not return to the neighbourhood of the ionised liquid molecule before the bubble formation starts. Of course irreversibility would help to ensure that this return would not take place at all but this is not essential here. The essential point is to ensure [*the probability of finding the electron back in its original molecule is zero so that the probability of the molecule remaining ionised is unity*]{}.
In the above argument we have emphasised the role played by the quantum potential, but exactly the same result would be obtained if we use the guidance condition $\bp = \nabla S$ . We will show how all this works in more detail in a related example below (see equations (22)-(24)).
To summarise then we see that in this case the trajectories do cross. Thus the BI trajectories do not behave in a ‘bizarre’ fashion as claimed. The ionising particles going through the bubble chamber ionise the liquid and then go on to fire $D_{2}$, while those that do not go through the bubble chamber do not ionise the liquid and go straight on to fire $D_{1}$. There is no need to introduce any non-local exchange of energy. This is exactly what we would expect from quantum mechanics. Therefore the conclusion drawn by Aharonov and Vaidman [@av96] and by Aharonov, Englert and Scully [@aes99] concerning the BI in this situation is simply wrong.
The ESSW experiment with the cavity in place.
---------------------------------------------
Now let us move on to consider the subtler conditions introduced by ESSW [@essw92] and replace the bubble chamber with a micromaser cavity. Here there is no ionised electron whose position changes by a significant distance, so we cannot use the same argument. Something different must be involved in order to suppress the interference in region $I_{2}$ if we are not to get the bizarre results claimed.
We should also notice that we must change the argument even in the case of SQM because the cavity is not a measuring device in the same sense as a cloud chamber is a measuring device. So let us first remind ourselves how SQM deals with the interferometer with the cavity in place. Let us write the state of the unexcited cavity as $|0\rangle_{c}$ while the excited cavity is written as $|1\rangle_{c}$. The final wave function can then be written in mixed notation in the form $$\Psi = \psi_{1}|0\rangle_{c} +\psi_{2}|1\rangle_{c} %11$$ Here $\psi_{1}$ and $\psi_{2}$ describe the centre of mass wave function of the atom in channels 1 and 2 respectively. Given the wave function (11), the probability of the final outcome is given by $$|\Psi|^{2} = |\psi_{1}|^{2} + |\psi_{2}|^{2} %12$$ as the two cavity states are orthogonal. Thus there is a 50/50 chance of a particle triggering one of the detectors. In fact because of the linearity of the Schrödinger equation, the wave packet in channel 1, $\psi_{1}$, will trigger $D_{1}$, while the wave packet in channel 2, $\psi_{2}$, will trigger $D_{2}$. Indeed the probability of finding the cavity excited is given by $$|_{c}\langle1|\Psi\rangle|^{2} = |\psi_{2}|^{2} %13$$ From this result it is reasonable to argue that the atom that travels in channel 2 gives up its internal energy to the cavity and then travels on to trigger $D_{2}$. The atom that travels in channel 1 triggers $D_{1}$ and does not lose any internal energy because it does not go anywhere near the cavity. From the standard point of view all of this is very satisfactory and very unremarkable.
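The step from (11) to (12) is simply the orthogonality of the two cavity states: $$|\Psi|^{2} = |\psi_{1}|^{2}\,{}_{c}\langle 0|0\rangle_{c} + |\psi_{2}|^{2}\,{}_{c}\langle 1|1\rangle_{c} + 2\,{\rm Re}\left[\psi_{1}^{*}\psi_{2}\,{}_{c}\langle 0|1\rangle_{c}\right] = |\psi_{1}|^{2} + |\psi_{2}|^{2}$$ since ${}_{c}\langle 0|1\rangle_{c} = 0$; it is the cross term that disappears, not the coherence of $\psi_{1}$ and $\psi_{2}$ themselves.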
How does the Bohm Interpretation deal with this type of experiment?
-------------------------------------------------------------------
In order to bring out the principles involved in the BI we want to first replace the cavity, which would involve the mathematical complications of having to use quantum field theory, with a device that is simpler to deal with in the BI, but which presents the same conceptual problems as the cavity. It is not that the BI cannot be applied to field theory (See for example Bohm, Hiley and Kaloyerou [@bhk87] and Bohm and Hiley [@bh93]) but that field theory brings in unnecessary complications that have little to do with the principles governing the form of the quantum potential in the region $I_{2}$. Once the principles are clear we can then return to discuss how to treat the behaviour of the quantum field in the cavity.
To this end recall the neutron scattering example discussed in Feynman [@f61]. Consider a coherent beam of polarised neutrons being scattered off a polarised crystal target. Here two processes are involved. (1) The neutron scatters without spin flip or (2) the neutron produces a spin flip in the crystal. What Feynman argues is that there is no interference between the wave functions describing these two different processes even though the coherence between the neutron wave functions is maintained because, as he puts it, a spin flip is a potential measurement.
Clearly the spin-flip example is different from the case of bubble chamber ionisation because the spin-flipped atom is heavy and it is assumed to have exactly the same position coordinate whether it has been flipped or not. Thus we cannot use the spatial separation of wave packets to account for the lack of interference between the two channels when they meet again in the region $I_{2}$. Something else must be involved.
To see what this is let us consider the experiment where we replace the incident atoms by a polarised neutron beam and the cavity in figure 8 is replaced by a polarised crystal. Furthermore let us assume again for simplicity that the efficiency of the spin-flip process is 100% so that whenever a neutron enters channel 2, a spin is flipped. We can make things conceptually even simpler by replacing the crystal by a single polarised atom. We will also consider the idealised case that when one neutron passes the polarised atom there will be a spin-flip process every time, i.e. 100% efficiency. This is not very realistic but it brings out the basic principles involved. The wave function for this process will be $$\Psi = \psi_{1}|\uparrow\rangle +\psi_{2}|\downarrow\rangle %14$$ where again $\psi$ is the centre of mass wave function of the neutron and the ket describes the spin state of the crystal atom. Clearly since the spin states of the atom are orthogonal, the probability distribution of the neutrons after they have passed through the region $I_{2}$ will have the same form as equation (12). Since the probability of the detector $D_{1}$ firing will be given by $$|\langle\uparrow|\Psi\rangle|^{2} = |\psi_{1}|^{2} %15$$ we can infer safely that the neutrons in channel 1 will pass straight through the region $I_{2}$ and trigger $D_{1}$. Similarly those travelling in channel 2 trigger $D_{2}$. Thus the SQM analysis for this system is exactly the same as it is for the cavity.
Now let us turn to consider how the BI deals with this situation. Recall that we must use wave functions throughout so that we must write equation (14) in the form $$\Psi(\br_{1},\br_{2}) = \psi_{1}(\br_{1})\Phi_{\uparrow}(\br_{2}) + \psi_{2}(\br_{1})\Phi_{\downarrow}(\br_{2}) %16$$ where $\Phi(\br_{2})$ is the wave function of the polarised scattering centre. We must then write the final wave function in the form $$\Psi =Re^{iS} = (R_{1}e^{iS_{1}})(R_{\uparrow}e^{iS_{\uparrow}}) +(R_{2}e^{iS_{2}})(R_{\downarrow}e^{iS_{\downarrow}}) %17$$ From this we can determine the quantum potential which in this case is given by $$%18
Q = - \frac{1}{2m_{n}}\frac{\nabla_{\br_{1}}^{2}R}{R} - \frac{1}{2m_{c}}\frac{\nabla_{\br_{2}}^{2}R}{R}$$ If we assume that the crystal atom is heavy and has negligible recoil, only the first term in the quantum potential need concern us. To evaluate this term we need to write $R$ in the form $$R^{2} = (R_{1}R_{\uparrow})^{2} + (R_{2}R_{\downarrow})^{2} +2R_{1}R_{2}R_{\uparrow}R_{\downarrow}\cos\Delta S' %19$$ Now we come to the crucial point of our discussion. We need to evaluate this term for each [*actual*]{} trajectory. Consider a neutron following a trajectory in channel 1. Recall that the quantum potential is evaluated at the position of all the particles involved in the actual process. This means we must take into account the contribution of the spin-state of the atom in channel 2. But since we are assuming 100% efficiency, the probability of finding the atom’s spin flipped when the neutron is in channel 1 is [*zero*]{}. This means that $R_{\downarrow} = 0$ so that $$R^{2} = (R_{1}R_{\uparrow})^{2} %20$$ Thus there is no interference in the quantum potential in the region $I_{2}$ so the neutron goes straight through and triggers $D_{1}$. Notice once again this is opposite to the conclusion reached by ESSW [@essw92] for the case of the cavity.
Now we will consider a neutron following a trajectory in channel 2. Since in this case there is 100% probability of finding the atom in a spin-flipped state and zero probability of finding the atom with spin up, we must now put $R_{\uparrow} = 0$. This means that $$R^{2} = (R_{2}R_{\downarrow})^{2} %21$$ So again there is no interference in the region $I_{2}$ and the neutrons go straight through to trigger $D_{2}$.
We can confirm this behaviour by looking directly at the phase and calculating the momentum of the neutron using $$\bp_{n} = \nabla_{\br_{1}}S %22$$ In the general case $S$ is given by $$%23
\tan S = \frac{(R_{1}R_{\uparrow})\sin (S_{1} + S_{\uparrow}) + (R_{2}R_{\downarrow})\sin (S_{2} +S_{\downarrow})}{(R_{1}R_{\uparrow})\cos (S_{1} + S_{\uparrow}) + (R_{2}R_{\downarrow})\cos (S_{2} + S_{\downarrow})}$$ Thus for a neutron in channel 1 this reduces to $$S = S_{1}(\br_{1}) + S_{\uparrow}(\br_{2}) %24$$ which confirms that the neutrons in channel 1 go straight through $I_{2}$ and trigger $D_{1}$. Clearly for those neutrons in channel 2, $D_{2}$ is triggered.
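In particular, since $S_{\uparrow}$ depends only on $\br_{2}$, equation (22) then gives $$\bp_{n} = \nabla_{\br_{1}}S = \nabla_{\br_{1}}S_{1}(\br_{1})$$ so the channel-1 neutron is guided by $\psi_{1}$ alone, exactly as if the empty channel were absent; the corresponding statement with $S_{2}$ holds for a neutron in channel 2.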
Thus if the BI is analysed correctly we see that the behaviour of the trajectories is exactly the same as predicted by SQM using the arguments of ESSW [@essw92]. So what has gone wrong with their argument in applying the BI? The mistake they and others make stems from the behaviour of the trajectories shown in figure 3, in figure 7 and, incidentally, also from the trajectories calculated by Philippidis, Dewdney and Hiley [@pdh79] for the two-slit interference experiment. The characteristic feature of those trajectories is that they do not cross. By now it should be clear that we cannot assume in general that ‘trajectories do not cross’. For example although we know the rule holds for systems described by wave functions of the form of equation (3), we know that it does not hold for [*mixed states*]{}. Here trajectories actually cross because there is no coherence between the separate components of the mixed state. But it is not only in mixed states that they cross. As we have shown above they also cross for [*pure states*]{} of the type (8), even though the phases of the centre of mass wave functions are not randomised. Thus there is no universal rule for trajectories not crossing. Each case must be considered separately for [*all*]{} types of pure states.
### What happens if the efficiency is less than 100%?
The example in the last sub-section assumed that the efficiency of the interaction was 100%. This is actually very unrealistic, so what happens in the case when the spin-flip is not 100% efficient? Suppose only a fraction $a^{2}$ of the neutrons induce a spin-flip. Here the final wave function is $$\Psi = (\psi_{1} + b\psi_{2})|\uparrow\rangle +a\psi_{2}|\downarrow\rangle$$ with $a^{2} + b^{2} = 1$. We now argue in the following way. Divide the neutrons that travel in channel 2 into two groups, those that cause a spin-flip and those that do not. If one of the neutrons causes a spin-flip, the term $(\psi_{1} + b\psi_{2})|\uparrow\rangle$ gives a zero contribution to the quantum potential by the argument given above, so these neutrons travel straight through the region $I_{2}$ and trigger $D_{2}$. Thus their behaviour is the same as the behaviour predicted by ESSW [@essw92] using SQM.
The rest see a quantum potential in the region $I_{2}$. For this sub-set the quantum potential has an interference structure weakened by the factor $b$ appearing in front of $\psi_{2}$. These neutrons end up in detector $D_{1}$. Thus while all the neutrons travelling in channel 2 that have not been involved in a spin-flip have their trajectories deflected to the detector $D_{1}$, the neutrons travelling in channel 1 can end up firing either $D_{1}$ or $D_{2}$. The fraction that will travel straight through $I_{2}$ will depend on the factor $b$. Once again there is of course no problem with any non-local transfer of energy because these neutrons are not involved in any spin flip process.
Treatment of the cavity in the Bohm Interpretation.
---------------------------------------------------
To complete our description of the delayed choice experiments discussed above we must now show how the BI can be applied to the micromaser cavity. Here we can no longer delay the argument and we must use field theory. Fortunately the generalisation of the BI to enable it to be applied to field theory has already been discussed in Bohm [@b52], Bohm, Hiley and Kaloyerou [@bhk87], Bohm and Hiley [@bh93], Kaloyerou [@k93] and Holland [@h93], [@h93a]. We will not be concerned with all the details here but will be content to outline the principles, leaving the details for a later publication.
Let us return to consider figure 8 with the micromaser cavity in place. We are assuming that when an excited atom enters the cavity, there is a local coupling between this excited atom and the field in the cavity described by a local interaction Hamiltonian given by $$H_{I} = g\hat{\psi}_{a}(\br_{1})\hat{A}(\br_{1})\psi_{1}(\br_{1}) %26$$
Here $\hat{\psi}_{a}(\br_{1})$ is the excited internal state of the atom, $\hat{A}(\br_{1})$ is the field variable in the cavity and $\psi_{1}(\br_{1})$ is the centre of mass wave function, which is not affected during the process.
Standard quantum mechanics would write the wave function after the particles have passed through the Mach-Zehnder interferometer as $$\Psi(\br) = \psi_{1}(\br)|0\rangle_{c} + \psi_{2}(\br)|1\rangle_{c} %27$$ where, as before, $|0\rangle_{c}$ describes the state of the cavity when the atom does not pass through it and $|1\rangle_{c} $ is the excited state of the cavity after the atom has passed through it. Since $|0\rangle_{c}$ is orthogonal to $|1\rangle_{c} $, the intensity on the screen is proportional to $$|\Psi|^{2} =|\psi_{1}|^{2} + |\psi_{2}|^{2} %28$$ which clearly shows no interference between the two beams.
How does the Bohm approach deal with this system? First we need to write the wave function (27) in a form that is more appropriate for this approach, namely, $$\Psi(\br ,t) = \psi_{1}(\br)\Phi(\phi_{0}(\br_{1})) + \psi_{2}(\br)\Phi(\phi_{1}(\br_{1}))$$ Here $\Phi(\phi(\br_{1}))$ is the wave functional of the cavity field $\phi$, where $\phi_{1}$ ($\phi_{0}$) is the excited (unexcited) field respectively. Now we must write the final wave functional as $$\Psi(\br,\Phi) = R(\br,\Phi)\exp[iS(\br,\Phi)] %30$$ Then the trajectories can be evaluated from $$\bp(\br,t) =\int d^{3}\br_{1}\nabla_{\br}S(\br,\Phi(\br_{1},t))$$ while the quantum potential is now given by $$\begin{aligned}
Q = - \frac{1}{2}\int d^{3}\br_{1}\left [ \frac{1}{m}\frac{\nabla_{r}^{2}R}{R} + \frac { \frac{\delta^{2}R}{(\delta\Phi)^{2}}} {R} \right ]\end{aligned}$$ As has now become apparent, the use of fields has made the whole calculation more complicated. However we again assume the effect of the interaction of the cavity on the centre of mass wave function of the atom is negligible so that we need only consider the contribution of the first term in equation (32) as we did in equation (18).
To find the final effect of the cavity on the subsequent behaviour of the atom in region $I_{2}$ we must calculate the quantum potential affecting those atoms in channel 1. For these atoms the cavity is in the unexcited state, so the amplitude associated with the excited field configuration vanishes, $R(\ldots\phi_{1}(\br_{1})\ldots) = 0$. Thus the atoms described by wave function $\psi_{1}$ are unaffected by $\psi_{2}$ so that they go straight through. Clearly those atoms in channel 2 also go straight through the region $I_{2}$. Thus we draw exactly the same conclusion that we arrived at using the spin-flip argument. The atom that goes through the cavity gives up its internal energy to the cavity and then goes straight through the region $I_{2}$, ending up triggering detector $D_{2}$. This is exactly as predicted by standard quantum mechanics, and there is no non-local exchange of energy between the cavity and the atom arriving at $D_{2}$, since that atom actually passes through the cavity, exchanging its energy locally.
Thus in all the cases we have considered in section 4 we get no bizarre features and we cannot conclude that the trajectories are ‘surreal’. The behaviour is exactly as predicted by SQM using the arguments of ESSW [@essw92]. Furthermore all energy exchanges are local.
Measurement in quantum mechanics.
=================================
In the above analysis we have seen that the BI gives a perfectly acceptable account of how the energy is exchanged with the cavity, and the claims that the trajectories are ‘surreal’ have not been substantiated. One of the confusions that seems to have led to this incorrect conclusion lies in the role measurement plays in the BI.
One of the claims of the BI is that it does not have a measurement problem. A measurement process is simply a special case of a quantum process. One important feature that was considered by Bohm and Hiley [@bh84] was to emphasise the role played by the macroscopic nature of the measuring instrument. Their argument ran as follows. During the interaction of this instrument with the physical system under investigation, the wave functions of all the components overlap and become ‘fused’. This fusion process can produce a very complex quantum potential, which means that during the interaction the relevant variables of the observed system and the apparatus can undergo rapid fluctuation before settling into distinct sets of correlated movement. For example, if the system is a particle, it will enter into one of a set of distinct channels, each channel corresponding to a unique value of the variable being measured. It should be noted that in this measurement process the complementary variables get changed in an unpredictable way so that the uncertainty principle is always satisfied, thus supporting the claim concerning participation made in section 2.
All of this becomes very clear if we consider the measurement of the spin components of a spin one-half particle using a Stern-Gerlach magnet. As the particle enters the field of the Stern-Gerlach magnet, the interaction with the field produces two distinct channels. One will be deflected ‘upwards’ generating a channel that corresponds to spin ‘up’. The other channel will be deflected ‘downwards’ to give the channel corresponding to spin ‘down’. In this case there are no rapid fluctuations as the calculations of Dewdney, Holland and Kyprianidis [@dhk86] show. Nevertheless the interaction with the magnetic field of the Stern-Gerlach magnet produces two distinct channels, one corresponding to the spin state, $\psi (+)$ and the other to $\psi (-)$. There is no quantum potential linking the two beams as long as the channels are kept spatially separate.
Thus it appears from this argument that a necessary feature of a measurement process is that we must produce spatially separate and macroscopically distinct channels. To put this in the familiar language of SQM, we must find a quantum process that produces separate, non-overlapping wave packets in space, each wave packet corresponding to a unique value of the variable being measured. In technical terms this means that we must ensure that there is no intersection between the supports of each distinct wave packet, e.g. $\mathrm{supp}(\psi_{i}) \cap \mathrm{supp}(\psi_{j}) = \emptyset$ for $i \neq j$.
Clearly this argument cannot work for the case of the cavity shown in figure 8. Indeed $\mathrm{supp}(\phi_{0}) \cap \mathrm{supp}(\phi_{1}) \neq \emptyset$ since both the excited and un-excited fields are supported in the same cavity. This was one of the main reasons why Dewdney, Hardy and Squires [@dhs93] and Hiley, Callaghan and Maroney [@hcm00] were content to introduce non-local exchanges of energy as a solution to the ESSW challenge.
What these authors all assumed was that the non-overlapping of ‘wave packets’ was a [*necessary and sufficient*]{} condition. What we have argued above is that it is not a necessary condition but it is merely a [*sufficient*]{} condition. What is necessary is for [*there to be a unit probability of the cavity being in a particular energy eigenstate, all others, of course, being zero*]{}. What this ensures is that as long as the energy is fixed in the cavity, there will be no quantum potential coupling between the occupied particle channel and the unoccupied channel. This also ensures that the particle will always behave in a way that is independent of all the other possibilities. This means that in the example shown in figure 8 considered above, the atom passing through the cavity travels straight through the region $I_{2}$ and fires detector $D_{2}$. This also means that in the language of SQM, the atom behaves as if the wave function had collapsed. Again in this language it looks as if the cavity is behaving as a measuring instrument even though there has been no amplification to the macroscopic level and no irreversible process has occurred. This is why Bohm and Hiley [@bh84] emphasised that in the Bohm approach there is no fundamental difference between ordinary processes and what SQM chooses to call ‘measurement processes’.
Notice that [*we do not need to know*]{} whether the cavity has had its energy increased or not for the interference terms to be absent. This is because we have an ontological theory, which means that there is a well-defined process actually occurring regardless of our knowledge of the details of this process. This process shows no interference effects in the region $I_{2}$ whether we choose to look at the cavity or not. We can go back at a later time, after the particle has had time to pass through the cavity and through the region $I_{2}$ but before $D_{1}$ or $D_{2}$ have fired, and find out the state of the cavity as ESSW [@essw92] have proposed. What we will find is that this measurement in no way affects the subsequent behaviour of the atom and $D_{2}$ will always fire if the cavity is found in an excited state. Thus there is no call for a notion such as “present action determining the past" as Wheeler suggests.
All we are doing in measuring the energy in the cavity at a later time is finding out which process [*actually*]{} took place. The fact that this process may require irreversible amplification is of no relevance to the vanishing of the interference effects. In other words there is no need to demand that the measurement is not complete until some irreversible macroscopic process has been recorded. These results confirm the conclusions already established by Bohm and Hiley [@bh84] and there is certainly no need to argue “no phenomenon is a phenomenon until it is an observed phenomenon."
Conclusion.
===========
What we have shown in this paper is that some of the specific criticisms of the Bohm interpretation involving delayed choice experiments are not correct. The properties of the trajectories that led Scully [@s98] to term them ‘surreal’ were based on the incorrect use of the BI. Furthermore if the approach is used correctly then there is no need to invoke non-locality to explain the behaviour of the particles in relation to the added cavity. The results then agree exactly with what Scully predicts using what he calls ‘standard quantum mechanics’.
This of course does not mean that non-locality is removed from the BI. In the situation discussed by Einstein, Podolsky and Rosen [@epr35], it is the entangled wave function that produces the non-local quantum potential, which in turn is responsible for the corresponding non-local correlation of the particle trajectories. The mistake that has been made by those attempting to answer the criticism is to assume wave functions of the type shown in equations (9), (16) and (29) are similar in this respect to EPR entangled states. They are not. They are not because of the specific properties of systems like the micromaser cavity and polarised magnetic target. The essential property of these particular systems is that we can attach a unit probability to one of the states even though we do not know which state this is. The fact that a definite result has actually occurred is all that we need to know. When this situation arises then all of the other potential states give no contribution to the quantum potential or the guidance condition so that there is no interference.
This is not the same situation as in the case for the EPR entangled wave function. In this case neither particle is in a well-defined individual state. This is reflected in the fact that there is only a 50% chance of finding one of the two possible states of one of the particles. Therefore the interference between the two states in the entanglement is not destroyed and it is this interference that leads to quantum non-locality. However the interference is destroyed once we have a process that puts one of the particles into a definite state. In conventional terms this can be used as a record of the result and then the process is called a ‘measurement’. But in the BI there is no need to record the result. The fact that one result will occur with probability one is sufficient to destroy interference. Thus delaying an examination of the ‘reading’ is irrelevant. The process has occurred and that is enough to destroy interference.
It should be noted that in all of these discussions we offer no physical explanation of why there is a quantum potential or why the guidance condition takes the form it does. The properties we have used follow directly from the Schrödinger equation itself and the assumption that we have made about the particle possessing a simultaneous actual position and momentum. As has been pointed out by Polkinghorne [@p02], the key question is why we have the Schrödinger equation in the first place. Recently de Gosson [@g01] has shown that the Schrödinger equation can be shown rigorously to exist in the covering groups of the symplectic group of classical physics and the quantum potential arises by projecting down onto the underlying group. One of us, Hiley [@h03], [@h04] has recently argued that a similar structure can arise by regarding quantum mechanics as arising from a non-commutative geometry where it is only possible to generate manifolds by projection into the so-called ‘shadow’ manifolds. Here the mathematical structure is certainly becoming clearer but the implications for the resulting structure of physical processes need further investigation.
[99]{}
Wheeler, J. A., \[1978\], The “Past” and the “Delayed-Choice” Double-slit Experiment, in [*Mathematical Foundations of Quantum Theory*]{}, ed. Marlow, pp. 9-47, Academic Press, New York. Miller, W. A. and Wheeler, J. A., \[1983\], Delayed-Choice Experiments and Bohr’s Elementary Quantum Phenomenon, pp. 140-51, [*Proc. Int. Symp. Found. of Quantum Mechs*]{}., Tokyo (Physical Society of Japan). Greenstein, G. and Zajonc, A. G., \[1997\], [*The Quantum Challenge*]{}, Jones and Bartlett, Sudbury MA. Bohr, N., \[1961\], [*Atomic Physics and Human Knowledge*]{}, p. 51, Science Editions, New York. Bohm, D. and Hiley, B. J., \[1987\], An Ontological Basis for Quantum Theory: I - Non-relativistic Particle Systems, [*Phys. Reports*]{} [**144**]{}, 323-348. Bohm, D. and Hiley, B. J., \[1993\], [*The Undivided Universe: an Ontological Interpretation of Quantum Theory*]{}, Routledge, London. Holland, P. R., \[1993\], [*The Quantum Theory of Motion*]{}, Cambridge University Press, Cambridge. Bohm, D., Hiley, B. J. and Dewdney, C., \[1985\], A Quantum Potential Approach to the Wheeler Delayed-Choice Experiment, [*Nature*]{}, [**315**]{}, 294-297. Englert, B.-G., Scully, M. O., Süssmann, G. and Walther, H., \[1992\], Surrealistic Bohm Trajectories, [*Z. Naturforsch*]{}. [**47a**]{}, 1175-1186. Aharonov, Y. and Vaidman, L., \[1996\], About Position Measurements which do not show the Bohmian Particle Position, in [*Bohmian Mechanics and Quantum Theory: an Appraisal*]{}, ed. J. T. Cushing, A. Fine and S. Goldstein, Boston Studies in the Philosophy of Science, 184, 141-154, Kluwer, Dordrecht. Aharonov, Y. and Vaidman, L., \[1996\], [*ibid*]{}, p. 151. Aharonov, Y., Englert, B.-G. and Scully, M. O., \[1999\], Protective measurements and Bohm trajectories, [*Phys. Lett.*]{}, [**A263**]{}, 137-146. Dürr, D., Fusseder, W. and Goldstein, S., \[1993\], Comment on “Surrealistic Bohm Trajectories”, [*Z. Naturforsch*]{}. [**48a**]{}, 1261-2. Dewdney, C., Hardy, L. and Squires, E. J., \[1993\], How late Measurements of Quantum Trajectories can fool a Detector, [*Phys. Lett.*]{}, [**184**]{}, 6-11. Hiley, B. J., Callaghan, R. E. and Maroney, O., \[2000\], Quantum Trajectories, Real, Surreal, or an Approximation to a Deeper Process?, [*quant-ph/0010020*]{}. Polkinghorne, J., \[2002\], [*Quantum Theory: a very short introduction*]{}, Oxford University Press, Oxford. Zeh, D., \[1998\], Why Bohm’s Quantum Theory?, [*quant-ph/9812059*]{}. Gleason, A. M., \[1957\], Measures on the Closed Sub-spaces of Hilbert Space, [*J. Maths. Mechs.*]{}, [**6**]{}, 885-93. Kochen, S. and Specker, E. P., \[1967\], The Problem of Hidden Variables in Quantum Mechanics, [*J. Math. Mech*]{}. [**17**]{}, 59-87. Patton, C. M. and Wheeler, J. A., \[1975\], Is physics legislated by cosmogony?, in [*Quantum Gravity*]{}, eds. Isham, C., Penrose, R. and Sciama, D., pp. 538-605, Clarendon Press, Oxford. Feynman, R. P., Leighton, R. B. and Sands, M., \[1965\], [*The Feynman Lectures on Physics*]{} III, p. 21-12, Addison-Wesley, Reading, Mass. Scully, M. O., \[1998\], Do Bohm trajectories always provide a trustworthy physical picture of particle motion?, [*Phys. Scripta*]{}, [**T76**]{}, 41-46. Dewdney, C. and Hiley, B. J., \[1982\], A Quantum Potential Description of One-dimensional Time-dependent Scattering from Square Barriers, [*Found. Phys*]{}. [**12**]{}, 27-48. Scully, M. O., Englert, B.-G. and Walther, H., \[1991\], Quantum Optical Tests of Complementarity, [*Nature*]{}, [**351**]{}, 111-116.
M., and Walther, H., \[1998\], An Operational Analysis of Quantum Eraser and Delayed Choice, [*Found. Phys*]{}., [**28**]{}, 399-413. Dürr, S., Nonn, T. and Rempe, G., \[1998\], Origin of quantum mechanical complementarity probed by a ‘which-way’ experiment in an atom interferometer, [*Nature*]{}, [**395**]{}, 33-37. Bohm, D., \[1952\], A Suggested Interpretation of the Quantum Theory in Terms of Hidden Variables, I and II, [*Phys. Rev.*]{}, [**85**]{}, 166-179; 180-193. Dürr, D., Goldstein, S. and Zanghi, N., \[1992\], Quantum Equilibrium and the Origin of Absolute Uncertainty, [*J. Stat. Phys.*]{}, [**67**]{}, 843-907. Bohm, D. and Hiley, B. J., \[1984\], Measurement Understood Through the Quantum Potential Approach, [*Found. Phys*]{}., [**14**]{}, 255-74. Bohm, D. and Hiley, B. J. and Kaloyerou, P.N., \[1987\], An Ontological Basis for the Quantum Theory: II -A Causal Interpretation of Quantum Fields, [*Phys. Reports*]{}, [**144**]{}, 349-375. Feynman, R. P., \[1961\], [*Theory of Fundamental Processes*]{}, p. 3, Benjamin, New York. Philippidis, C., Dewdney, C. and Hiley, B. J., \[1979\], Quantum Interference and the Quantum Potential, [*Nuovo Cimento*]{}, [**52B**]{}, 15-28. Bohm, D., Hiley, B. J. and Kaloyerou, P.N., \[1987\] An Ontological Basis for the Quantum Theory: II -A Causal Interpretation of Quantum Fields, [*Physics Reports*]{}, [**144**]{}, 349-375. Kaloyerou, P. N., \[1993\], The Causal Interpretation of the Electromagnetic Field, [*Physics Reports*]{}. [**244**]{}, 287-385. Holland, P. R., \[1993a\], The de Broglie-Bohm Theory of motion and Quantum Field Theory, [*Physics Reports*]{}, [**224**]{}, 95-150. Dewdney, C., Holland, P. R. and Kyprianidis, A., \[1986\], [*Phys. Letts.*]{}, [**119A**]{}, 259. Einstein, A., Podolsky, B., and Rosen, N., \[1935\], Can Quantum-Mechanical Description of Physical Reality be Considered Complete, [*Phys. Rev.*]{}, [**47**]{}, 777-80. de Gosson, M., \[2001\], [*The Principles of Newtonian and Quantum Mechanics: the need for Planck’s Constant*]{}, Imperial College Press, London, 2001. Hiley, B. J., \[2003\], Phase Space Descriptions of Quantum Phenomena, [*Proc. Int. Conf. Quantum Theory: Reconsideration of Foundations 2*]{}, Växjö, Sweden. Hiley, B. J., \[2005\], Non-commutative Quantum Geometry: a re-appraisal of the Bohm approach to quantum theory, in [*Quo Vadis Quantum Mechanics*]{}, ed. A. Elitzur, S. Dolev, and N. Kolenda, pp. 299-324, Springer, Berlin.
[^1]: b.hiley@bbk.ac.uk
[^2]: We will begin the discussion using electron interference as it can be described by the Schrödinger equation. A discussion of photons requires field theory. We will discuss the role of photons later in the paper.
[^3]: In this paper we will use the term ‘Bohm interpretation’ to stand for the interpretation discussed in Bohm and Hiley [@bh93] and should be distinguished from what is called ‘Bohmian mechanics’. Although both use exactly the same form of mathematics, the interpretations differ in some significant ways.
[^4]: John Polkinghorne [@p02] has recently written “Certainly I have always felt the work you and Bohm did was far too readily dismissed by most physicists, who never gave it serious consideration and whose prejudice that it was ‘obviously wrong’ was itself obviously wrong” (Polkinghorne, private communication to BJH).
[^5]: For simplicity we will not write down normalised wave functions
|
---
abstract: 'The lepton polarization asymmetry in the $B\rightarrow\ell^{+}\ell^{-}$ decay, when one of the leptons is polarized, is investigated using the most general form of the effective Hamiltonian. The sensitivity of the asymmetry to the new Wilson coefficients is studied. Moreover, the correlation between the lepton polarization asymmetry and the branching ratio is studied. It is observed that there exist no regions of the new Wilson coefficients for which the value of the branching ratio coincides with the SM result while the lepton polarization does not, i.e. new physics effects cannot be established by studying the lepton polarization only.'
author:
- |
\
[**V. Bashiry**]{} [^1]\
Physics Department, Middle East Technical University\
Ankara, Turkey\
title: ' [**Lepton Polarization Asymmetry in $B\rightarrow\ell^{+}\ell^{-}$ decay Beyond the Standard Model**]{}'
---
Introduction
============
The study of rare B-decays is one of the most important research areas in particle physics. These decays are induced by flavor changing neutral currents (FCNC) and provide a promising ground for testing the structure of weak interactions. They are forbidden in the standard model (SM) at tree level and for this reason represent a “very good laboratory” for checking predictions of the SM at loop level. Moreover, these decays are very sensitive to new physics beyond the SM, since loops with new particles can give considerable contributions to the SM result. The new physics effects in rare decays can appear in two different ways, namely through modification of the Wilson coefficients existing in the SM, or through new operators with new Wilson coefficients which are absent in the SM.\
The rare pure leptonic $B_{q}\rightarrow \ell^{+} \ell^{-} (q = d, s$ and $\ell
= e,\mu, \tau$) decays are very good probes to test new physics beyond the standard model, mainly to reveal the Higgs sector \[1-3\].\
The aim of the present work is the investigation of the lepton polarization as a tool for establishing new physics beyond the SM, using the most general form of the effective Hamiltonian. More precisely, our goal is the following: can we find regions of the new Wilson coefficients for which the lepton polarization differs from the SM prediction, while the branching ratio coincides with the SM result?\
Note that lepton polarization for $B_{q}\rightarrow \ell^{+}
\ell^{-}$ decay is studied in [@4].\
The paper is organized as follows: In Section 2 , we present the theoretical expression for the decay widths and lepton polarizations. Section 3 is devoted to numerical analysis and conclusion.
Double-Lepton polarization Asymmetry
====================================
In this section we obtain the expressions for the decay width and the lepton polarization asymmetry, using the most general model-independent form of the effective Hamiltonian. The effective Hamiltonian for the $b\rightarrow s \ell^{+}\ell^{-}$ transition in terms of twelve model-independent four-Fermi interactions can be written in the following form [@5; @6] $$\begin{aligned}
{\cal{H}}_{eff}&=&\frac{G_{F}\alpha}{\sqrt{2}\pi}V_{ts}V^{*}_{tb}
\Bigg\{C_{SL}\overline{s}i\sigma_{\mu\nu}\frac{q^{\nu}}{q^{2}}\,L\,
b\overline{\ell}\gamma^{\mu}\ell\,+
C_{BR}\overline{s}i\sigma_{\mu\nu}\frac{q^{\nu}}{q^{2}}\,R\,
b\overline{\ell}\gamma^{\mu}\ell \nonumber \\ &+&
C^{tot}_{LL}\overline{s}_{L}\gamma_{\mu}b_{L}\overline{\ell}_{L}\gamma^{\mu}\ell_{L}+
C^{tot}_{LR}\overline{s}_{L}\gamma_{\mu}b_{L}\overline{\ell}_{R}\gamma^{\mu}\ell_{R}+
C_{RL}\overline{s}_{R}\gamma_{\mu}b_{R}\overline{\ell}_{L}\gamma^{\mu}\ell_{L}
\nonumber \\ &+&
C_{RR}\overline{s}_{R}\gamma_{\mu}b_{R}\overline{\ell}_{R}\gamma^{\mu}\ell_{R}+
C_{LRLR}\overline{s}_{L}b_{R}\overline{\ell}_{L}\ell_{R}+
C_{RLLR}\overline{s}_{R}b_{L}\overline{\ell}_{L}\ell_{R}
\nonumber\\ &+&
C_{LRRL}\overline{s}_{L}b_{R}\overline{\ell}_{R}\ell_{L}+
C_{RLRL}\overline{s}_{R}b_{L}\overline{\ell}_{R}\ell_{L}+
C_{T}\overline{s}\sigma_{\mu\nu}b\overline{\ell}\ell
\nonumber\\&+&
iC_{TE}\epsilon^{\mu\nu\alpha\beta}\overline{s}\sigma_{\alpha\beta}b\overline{\ell}\ell\Bigg\}
\,\,\, \label{Hamiltonian}\end{aligned}$$ where $L$ and $R$ in Eq. (1) are $$\begin{aligned}
R=\frac{1+\gamma_{5}}{2}, \,\,\,\,\,\,\,\,\, L
=\frac{1-\gamma_{5}}{2} \nonumber
\, , \label{LR}\end{aligned}$$ and $C_{x}$ are the coefficients of the four–Fermi interactions and $q = p_{2}+p_{1}$ is the momentum transfer. Among the twelve Wilson coefficients, some already exist in the SM. For example, the coefficients $C_{SL}$ and $C_{BR}$ in penguin operators correspond to $-2m_{s}C^{eff}_{7}$ and $-2m_{b}C^{eff} _{7}$ in the SM, respectively. The next four terms in Eq. (1) are the vector type interactions with coefficients $C^{tot} _{LL}, C^{tot} _{LR},
C_{RL}$ and $C_{RR}$. Two of these vector interactions containing $C^{tot} _{LL} $and $C^{tot} _{LR}$ do exist in the SM as well in the form $(C^{eff} _{9} - C_{10})$ and $(C^{eff} _{9} + C_{10})$. Therefore we can say that $C^{tot} _{LL} $and $C^{tot} _{LR}$ describe the sum of the contributions from SM and the new physics and they can be written as $$\begin{aligned}
C^{tot} _{LL} = C^{eff} _{9} - C_{10} + C_{LL}\nonumber\, ,\\
C^{tot} _{LR} = C^{eff} _{9} + C_{10} + C_{LR}\nonumber\, ,\end{aligned}$$\[ctot\] The terms with coefficients $C_{LRLR}, C_{RLLR}, C_{LRRL}$ and $C_{RLRL}$ describe the scalar type interactions. The last two terms with the coefficients $C_{T}$ and $C_{TE}$, obviously, describe the tensor type interactions. The amplitude of the exclusive $B\rightarrow \ell^{+} \ell^{-}$ decay is obtained by sandwiching the effective Hamiltonian between the meson and vacuum states. It follows from Eq. (\[Hamiltonian\]) that in order to calculate the amplitude of the $B\rightarrow \ell^{+} \ell^{-}$ decay, the following matrix elements are needed: $$\begin{aligned}
\langle0|\overline{s}\,\gamma_{\mu}\gamma_{5}\,b|B\rangle&=&-i
f_{Bs}p_{\mu} \nonumber
\, , \\
\langle0|\overline{s}\,\gamma_{5}\,b|B\rangle&=& i
f_{Bs}\frac{m_{Bs}^{2}}{m_{b}+m_{s}}\, ,\label{matrix}\end{aligned}$$ All remaining matrix elements $\langle0|\overline{s}
\,\Gamma_{i}\, b|B\rangle$, where $\Gamma_{i}$ is one of the Dirac matrices $I,\ \gamma_{\mu},\ \sigma_{\alpha\beta}$, are equal to zero.\
For the matrix element of $B\rightarrow \ell^{+} \ell^{-}$ decay we get $$\begin{aligned}
M = i
f_{B}\frac{G_{F}\alpha}{2\sqrt{2}\pi}V_{ts}V^{*}_{tb}\Bigg[C_{PV}\,
\overline{\ell}\,\gamma^{5}\ell\,+\,
C_{PS}\,\overline{\ell}\,\ell\Bigg]\label{transamp}\end{aligned}$$ where the pseudovector coefficient $C_{PV}$ and the pseudoscalar coefficient $C_{PS}$ are as follows: $$\begin{aligned}
C_{PV}&=& m_{\ell}(C_{LL}^{tot}-C_{LR}^{tot}-C_{RL}+C_{RR})+
\frac{m^{2}_{B}}{2(m_{b}+m_{s})}(C_{LRLR}-C_{RLLR}-C_{LRRL}+C_{RLRL})
\nonumber\, ,\\
C_{PS}&=&\frac{m^{2}_{B}}{2(m_{b}+m_{s})}(C_{LRLR}-C_{RLLR}+C_{LRRL}-C_{RLRL})\,,
\label{coef}\end{aligned}$$ After some calculation we get the following expression for the unpolarized $B\rightarrow \ell^{+} \ell^{-}$ decay width $$\begin{aligned}
\Gamma_{0}&=&f^{2}_{B}\frac{1}{16\ \pi\ m_{B}}\Bigg|
\frac{G_{F}\alpha} {2\sqrt{2}\pi}V_{tb}\
V^{*}_{ts}\Bigg|^{2}\Bigg\{2\ C^{2}_{PV}\ m^{2}_{B} + 2\
C^{2}_{PS}\ m^{2}_{B}\ \upsilon^{2}\Bigg\}\upsilon
\label{Gamazero}\end{aligned}$$ where $\upsilon=\sqrt{1\ -\ 4m^{2}_{\ell}/m^{2}_{B}}$ is the final lepton velocity.\
Now let us derive the expression for the lepton polarization. In the rest frame of the final leptons one can define only one direction. Therefore the unit vector of each lepton polarization can be defined as $$\begin{aligned}
s^{\mu}=(0,\ \overrightarrow{e}_{L}^{\mp})=(0,\
\mp\frac{\overrightarrow{p}_{-}}{|\overrightarrow{p}_{-}|})
\label{smu}\end{aligned}$$ where $\overrightarrow{p}_{-}$ is the three-momentum of $\ell^{-}$ and the subscript L means longitudinal polarization. Boosting these unit vectors to the dilepton center-of-mass frame by a Lorentz transformation we get $$\begin{aligned}
s_{\ell^{\mp}}^{\mu}=(\frac{|\overrightarrow{p}_{-}|}{m_{\ell}},\
\mp\frac{E_{\ell}\overrightarrow{p}_{-}}{m_{\ell}|\overrightarrow{p}_{-}|})
\label{smuboost}\\\end{aligned}$$ where $E_{\ell}$ is the lepton energy.\
The decay width of the $B\rightarrow \ell^{+} \ell^{-}$ decay can be written in the following form $$\begin{aligned}
\Gamma\ =\ \frac{1}{2}\ \Gamma_{0}\ \{1\ +\ P^{\mp}_{L}\
\overrightarrow{e}_{L}^{\mp}\ . \overrightarrow{n}^{\mp}\}
\label{gamapol}\\\end{aligned}$$ where $P_{L}$ is the longitudinal lepton polarization asymmetry. It is defined as follows: $$\begin{aligned}
P^{\mp}_{L}\ =\ \frac{\Gamma(\overrightarrow{n}^{\mp}\ =\
\overrightarrow{e}_{L}^{\mp})\ -\ \Gamma(\overrightarrow{n}^{\mp}\
=\
-\overrightarrow{e}_{L}^{\mp})}{\Gamma(\overrightarrow{n}^{\mp}\
=\ \overrightarrow{e}_{L}^{\mp})\ +\
\Gamma(\overrightarrow{n}^{\mp}\ =\
-\overrightarrow{e}_{L}^{\mp})}
\label{pl}\end{aligned}$$ The explicit expression of longitudinal polarization asymmetry is: $$\begin{aligned}
P^{\mp}_{L}\ =\ \frac{2\ Re(C_{PV}\ C^{*}_{PS})\
\upsilon}{C_{PS}^{2}\ \upsilon^{2}\ +\ C^{2}_{PV}}
\label{plexplicite}\end{aligned}$$ From this expression it is obvious that in the SM the lepton polarization asymmetry $P^{\mp}_{L}=0$, since in the SM $C_{PS}=0$ (see Eq. (\[coef\])).
Numerical analysis
==================
In this section, we study the dependence of $P_{L}$ on the new Wilson coefficients. In the present work all new Wilson coefficients are taken to be real. Here we would like to make the following remark. Recent experimental results on $B$ meson decays into two pseudoscalar mesons indicate that the Wilson coefficient $C_{10}$ can have a large phase \[7\]. Therefore, in principle, a new source of $CP$-violating effects appears. We will discuss this possibility elsewhere. In performing the numerical analysis we vary the new Wilson coefficients describing the scalar interactions in the range $-4\leq |C_{ii}|\leq 4$. The experimental results on the branching ratio of $B\rightarrow K (K^{*})\ell^{+} \ell^{-}$ [@8; @9] and the bound on the branching ratio of the $B\rightarrow \ell^{+} \ell^{-}$ decay [@10] suggest that this is the right order of magnitude for the scalar interactions.\
Now we are ready to perform numerical calculations. The values of input parameters which we have used in our numerical analysis are:\
$ f_{Bs}=0.245$ GeV [@11], $ m_{B}=5279.2 \pm 1.8$ MeV, $ m_{\mu}=105.7$ MeV, $ m_{\tau}=1777 $ MeV, $ \alpha=\frac{1}{129}$\
The values of these parameters are taken from [@12].\
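To make the numerical procedure concrete, a minimal Python sketch is given below. It evaluates $C_{PV}$, $C_{PS}$, the unpolarized width and the longitudinal polarization asymmetry from the expressions derived above for a chosen set of scalar Wilson coefficients. The values used for $G_{F}$, $|V_{tb}V^{*}_{ts}|$, the quark masses and the SM-like coefficients $C^{eff}_{9}$ and $C_{10}$ are not quoted in the text and are only indicative placeholders; this is an illustration, not the code used to produce the figures.

```python
import numpy as np

# Input parameters quoted in the text (converted to GeV)
f_Bs, m_B = 0.245, 5.2792
m_mu, m_tau = 0.1057, 1.777
alpha = 1.0 / 129.0

# Quantities NOT quoted in the text -- indicative placeholder values only
G_F = 1.16637e-5            # GeV^-2
Vtb_Vts = 0.041             # assumed |V_tb V_ts*|
m_b, m_s = 4.8, 0.14        # assumed quark masses (GeV)
C9_eff, C10 = 4.2, -4.6     # assumed SM-like Wilson coefficients

def width_and_polarization(m_l, C_LL=0.0, C_LR=0.0, C_RL=0.0, C_RR=0.0,
                           C_LRLR=0.0, C_RLLR=0.0, C_LRRL=0.0, C_RLRL=0.0):
    """Evaluate Gamma_0 and P_L from the C_PV, C_PS, Gamma_0 and P_L formulas
    given in the text, for real Wilson coefficients."""
    C_LL_tot = C9_eff - C10 + C_LL
    C_LR_tot = C9_eff + C10 + C_LR
    scal = m_B**2 / (2.0 * (m_b + m_s))
    C_PV = m_l * (C_LL_tot - C_LR_tot - C_RL + C_RR) \
        + scal * (C_LRLR - C_RLLR - C_LRRL + C_RLRL)
    C_PS = scal * (C_LRLR - C_RLLR + C_LRRL - C_RLRL)
    v = np.sqrt(1.0 - 4.0 * m_l**2 / m_B**2)        # final lepton velocity
    pref = f_Bs**2 / (16.0 * np.pi * m_B) \
        * (G_F * alpha / (2.0 * np.sqrt(2.0) * np.pi) * Vtb_Vts)**2
    Gamma0 = pref * (2.0 * C_PV**2 * m_B**2 + 2.0 * C_PS**2 * m_B**2 * v**2) * v
    P_L = 2.0 * C_PV * C_PS * v / (C_PS**2 * v**2 + C_PV**2)
    return Gamma0, P_L

# Example: scan the scalar coefficient C_LRLR for the muon channel;
# with all new coefficients set to zero, C_PS = 0 and hence P_L = 0 (SM case).
for c in (-4.0, 0.0, 4.0):
    g, pl = width_and_polarization(m_mu, C_LRLR=c)
    print(f"C_LRLR = {c:+.1f}   Gamma_0 = {g:.3e} GeV   P_L = {pl:+.3f}")
```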
In Fig. 1 we present the dependence of longitudinal polarization of the lepton on Wilson coefficients of scalar interactions $C_{LRLR},\,
C_{RLLR},\, C_{LRRL}\,$ and $C_{RLRL}$ for $B\rightarrow \mu^{+}
\mu^{-}$ decay. It should be noted that zero values of the Wilson coefficients for the scalar interactions correspond to the standard model case.\
From this figure we see that the contributions coming from $C_{RLRL}$ and $C_{LRLR}$, as well as those from $C_{LRRL}$ and $C_{RLLR}$, are equal in magnitude but differ in sign. A similar circumstance takes place for the $B\rightarrow \tau^{+} \tau^{-}$ decay (see Fig. 2). Therefore, measuring the magnitude and sign of the lepton polarization can give unambiguous information about the nature of the scalar interaction.\
Obviously, if new physics beyond the SM exists, its effects can appear in the branching ratio, besides the lepton polarization. It is well known that the measurement of the branching ratio is easier than that of the lepton polarization. For this reason, it is more convenient to study the branching ratio than the polarization for establishing new physics beyond the standard model.\
In this connection we would like to discuss the following problem: can new physics be established only by measuring the lepton polarization? In other words, do there exist regions of the new Wilson coefficients for which the branching ratio coincides with the SM prediction, while the lepton polarization does not? In order to answer this question, we study the correlation between the single lepton polarization and the branching ratio (see Fig. 3 and Fig. 4).\
From Figs. 3 and 4 we see that there exist no such regions of the Wilson coefficients for which the branching ratio coincides with the SM result while the lepton polarization does not.\
In summary, we have presented an analysis of the longitudinal lepton polarization using the most general form of the effective Hamiltonian. We found that the measurement of the lepton polarization can provide essential information about the nature of the scalar interaction. Moreover, we found that there exist no regions of the new Wilson coefficients for which the measurement of the lepton polarization alone provides additional information in the search for new physics beyond the SM.
Acknowledgement
===============
The author thanks Prof. Dr. TM Aliev and E.O. Iltan for helpful discussions.
[1]{} K. S. Babu and C. Kolda, [*Phys Rev.Let*]{} [**84**]{} (2000) 228. H. E. Logan and U. Nierste, [*Nucl. Phys.* ]{} [**B 586**]{} (2000)39. C. Bobeth, T. Ewerth, F. Krüger and J. Urban, [*Phys. Rev*]{} [**D 64**]{} (2001)074014. L. T. Handoko, C. S. Kim and T. Yoshikawa, [*Phys. Rev*]{} [**D 65**]{} (2002) 077506
S. Fukae, C. S. Kim and T. Yoshikawa, [*Phys. Rev*]{} [**D 61**]{} (2000) 074015. T. M. Aliev, D. A. Demir and M. Savci, [*Phys. Rev*]{} [**D 62**]{} (2000) 074016. A. J. Buras, R. Fleischer, S. Recksiegel, F. Schwab, hep-ph/0312259 (2003). A. Ishikawa et al., BELLE Collaboration, [*Phys Rev. Lett.*]{} [**91**]{} (2003) 261601. A. Aubert et al., BABAR Collaboration, [*Phys Rev. Lett.*]{} [**91**]{} (2003) 221802. V. Halyo, hep-ph/0207010 (2002). S. Hashimoto, [*Nucl. Phys. Proc. Suppl.*]{} [**B 83**]{} (2000) 3. Particle Data Group, K. Hagiwara et al., [*Phys. Rev.*]{} [**D 66**]{}, 010001 (2002).
[**Figure Captions**]{}\
[**Fig. (1)**]{} The dependence of the longitudinal polarization asymmetry $P_{L}$ on the new Wilson coefficients responsible for scalar interactions for the $B\rightarrow \mu^{+} \mu^{-}$ decay. Here the solid, dashed, dotted and small dashed lines correspond to $C_{LRLR}$, $C_{RLLR}$, $C_{RLRL}$ and $C_{LRRL}$, respectively.\
[**Fig. (2)**]{} The same as [**Fig. (1)**]{}, but for the $B\rightarrow \tau^{+} \tau^{-}$ decay.\
[**Fig. (3)**]{} Parametric plot of the correlation between the longitudinal lepton polarization asymmetry and the branching ratio for the $B\rightarrow \mu^{+} \mu^{-}$ decay. The vertical line corresponds to the SM result for the branching ratio. In this figure, the solid line corresponds to the Wilson coefficients $C_{LRLR}$ and $C_{RLRL}$, and the dashed line to $C_{LRRL}$ and $C_{RLLR}$, respectively.\
[**Fig. (4)**]{} The same as [**Fig. (3)**]{}, but for the $B\rightarrow \tau^{+} \tau^{-}$ decay.\
[^1]: E-mail address: bashiry@newton.physics.metu.edu.tr
|
---
author:
- Mathieu Baillif
title: Curves of constant diameter and inscribed polygons
---
Let $\Gamma$ be a simple closed curve in the Euclidean plane. Say that a polygon $S$ is [*inscribed in $\Gamma$ at $x$*]{} if all the vertices of $S$ lie on $\Gamma$ and one is $x$. A line segment is here considered as a $2$-gon. We say that $\Gamma$ has property ($C_n(D)$) (for some $n\ge 2$, $D>0$) if
$$\begin{array}{l}
\forall x\in\Gamma \text{ there is a unique regular $n$-gon with edges length $D$}\\
\text{inscribed in $\Gamma$ at $x$.}
\end{array}$$
Notice that ($C_2(D)$) is equivalent to the following since $\Gamma$ is simple and closed ($||\cdot ||$ is the Euclidean norm):
$$\begin{array}{l}
\forall x\in\Gamma \quad\exists ! y(x)\in\Gamma\text{ with }||x-y(x)||=D,\text{ and} \\
\text{if } z\not= y(x),\, ||x-z||<D.
\end{array}$$
If one drops the unicity assumption, ($C(D)$) is the property of having [*constant diameter*]{}, which is in fact equivalent (for closed curves in the plane) to having [*constant width*]{} or [*constant breadth*]{} (for the definitions and the proof of the equivalence, see [@RademacherToeplitz chap. 25]). It is a surprise to many (including me !) that curves of constant diameter other than the circle do exist. The simplest examples (due to Reuleaux [@Reuleaux]) are pictured below. They are built with circle arcs whose centers are marked with a black dot. Notice that these curves are not ${\cal C}^1$.
The theory of curves of constant breadth has generated a considerable literature, starting with Euler [@EulerTriangularibus] and including works by Hurwitz [@Hurwitz], Minkowski [@Minkowski:konstanterBreite], Blaschke (five articles in [@Blaschke:GesammelteWerke3]) and many others. These curves have many interesting properties, the most startling being perhaps that their perimeter is $\pi$ times their diameter. See [@RademacherToeplitz chap. 25] for an elementary account, or [@JordanFiedler] for a more thorough, though old, presentation. For $n\ge 3$, to our knowledge the property ($C_n(D)$) has not been investigated (but see [@JordanFiedler p. 61], and [@EgglestonTaylor] for a related problem). In this note, we shall prove the theorem below, using only basic differential geometry (tangent, curvature, and so on). We recall that a curve is [*regular*]{} if it is ${\cal C}^1$ with non-vanishing derivative.
- For all $n\ge 2$ and $D>0$, there are regular simple closed non circular curves $\Gamma$ with property [($C_n(D)$)]{}.
- The circle of radius $D/2$ is the only regular simple closed curve which satisfies both [($C_4(\frac{D}{\sqrt{2}})$)]{} and [($C(D)$)]{}.
For i) $n=2$, such curves abound in the literature, see for instance [@Robertson] or [@Rabinowitz] for one given by a polynomial equation. We however give a short self contained proof that yields simple explicit examples. For ii), notice that a square with edges length $D/\sqrt{2}$ has diagonal $D$. This result gives a definition of the circle involving only Euclidean distance between points [*on*]{} the circle, while most usual definitions refer to points [*off*]{} the circle.
We shall prove the following:
\[prop1\] Let $G:{\Bbb R}\to{\Bbb R}^2$ be such that\
a) $G(\theta+\pi)=G(\theta)$,\
b) $G'(\theta)=r(\theta)\cdot (-\sin\theta,\cos\theta)$, where $|r(\theta)|<\frac{D}{2}$.\
Then, the curve $\gamma(\theta)=G(\theta)+\frac{D}{2}(\cos\theta,\sin\theta)$ has property [($C(D)$)]{}.
To obtain explicit examples, we can take $r(\theta)=a\cdot\sin((2k+1)\theta)$, with $k\ge 1$, $a<D/2$. Integrating, we get $$G(\theta)=\frac{a}{4} \left(-\frac{\sin(2k\theta)}{k} + \frac{\sin(2(k+1)\theta)}{k+1} \, ,\,
-\frac{\cos(2k\theta)}{k} - \frac{\cos(2(k+1)\theta)}{k+1} \right)$$ which satisfies $G(\theta)=G(\theta+\pi)$. By Proposition \[prop1\], the curve $\gamma(\theta)=G(\theta)+\frac{D}{2}(\cos\theta,\sin\theta)$ has property ($C(D)$). One could also take any linear combination of $\cos((2k+1)\theta)$ and $\sin((2k+1)\theta)$ ($k\ge 1$), with small enough coefficients, for $r(\theta)$. Below are pictured the curves of diameter $1$ (and $G$) for $$r(\theta)=\sin(3\theta)/3 + \cos(3\theta)/5,\,
\sin(5\theta)/2.01,\,
\sin(3\theta)/10 + \cos(7\theta)/2.501.$$
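As a purely illustrative complement (not part of the original argument), the short Python sketch below constructs $\gamma$ for $r(\theta)=a\sin(3\theta)$ using the primitive $G$ written above, and checks numerically that every sampled point has its farthest point on the curve at parameter $\theta+\pi$, at distance exactly $D$. The grid size and tolerances are arbitrary choices.

```python
import numpy as np

D, a, k = 1.0, 1.0 / 3.0, 1   # diameter, amplitude (|a| < D/2) and harmonic index

def G(t):
    """Primitive of G'(t) = a*sin((2k+1)t) * (-sin t, cos t), as given in the text."""
    return (a / 4.0) * np.array([
        -np.sin(2 * k * t) / k + np.sin(2 * (k + 1) * t) / (k + 1),
        -np.cos(2 * k * t) / k - np.cos(2 * (k + 1) * t) / (k + 1)])

def gamma(t):
    return G(t) + (D / 2.0) * np.array([np.cos(t), np.sin(t)])

thetas = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
pts = np.array([gamma(t) for t in thetas])
for i in range(0, 720, 72):                     # spot-check a few base points
    dists = np.linalg.norm(pts - pts[i], axis=1)
    j = int(np.argmax(dists))
    assert abs(dists[j] - D) < 1e-6             # farthest point lies at distance D
    assert abs((thetas[j] - thetas[i]) % (2 * np.pi) - np.pi) < 1e-6   # ... at theta + pi
print("constant diameter verified on the sampled grid")
```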
By construction, $\gamma(\theta+\pi)-\gamma(\theta)$ is collinear with $(\cos\theta,\sin\theta)$.\
Thus, $\gamma'(\theta)=(r(\theta)+\frac{D}{2})(-\sin\theta,\cos\theta)$ is normal to $\gamma(\theta+\pi)-\gamma(\theta)$. Moreover, $r(\theta)>-\frac{D}{2}$, $\gamma'(\theta)\not=0$, thus $\gamma$ is regular. We could end the proof here by invoking the classical result that a curve such that every normal is a binormal is of constant diameter (see for instance [@JordanFiedler]), but let us give a self contained argument. Since $\gamma''(\theta)=r'(\theta)\cdot(-\sin\theta,\cos\theta)-(r(\theta)+\frac{D}{2})(\cos\theta,\sin\theta)$, the curvature of $\gamma$ is $$\rho(\theta)=\frac{||\gamma'(\theta)\wedge\gamma''(\theta)||}{||\gamma'(\theta)||^3}
=\frac{1}{|r(\theta)+D/2|}>\frac{1}{D},$$ since $r(\theta)<D/2$. To summarize, $\gamma$ satisfies:\
1) $||\gamma(\theta)-\gamma(\theta+\pi)||=D$,\
2) $\gamma'(\theta)$ is normal to $\gamma(\theta)-\gamma(\theta+\pi)$,\
3) the curvature is $>1/D$.\
We now show that $\gamma$ satisfies ($C(D)$). If $\epsilon>0$ is small enough, since the curvature of $\gamma$ at $\theta+\pi$ is strictly bigger than the one of the circle of radius $D$ and center $\gamma(\theta)$ while their derivatives are collinear, $\gamma(\theta+\pi\pm\epsilon)$ is in the interior of this circle, and thus $||\gamma(\theta+\pi\pm\epsilon)-\gamma(\theta)||<||\gamma(\theta+\pi)-\gamma(\theta)||$ (see the figure below, on the left). If one “goes away” from $\theta+\pi$, the distance will decrease more and more. Indeed, by construction the segments between $\gamma(\phi)$ and $\gamma(\phi+\pi)$ (for different $\phi$) all cross each other. We are thus in the case of the figure below, on the right.
The circle of radius $D$ and center $\gamma(\phi+\pi)$ is dashed. For the same reason as above, the curve will stay (locally) inside this circle. Thus, $$||\gamma(\phi)-\gamma(\theta)||>||\gamma(\phi+\epsilon)-\gamma(\theta)||$$ (see figure). The distance decreases thus strictly when $\phi$ gets away from $\theta+\pi$.
\
For this, write $f_\theta(\phi)=||\gamma(\theta)-\gamma(\theta+\phi)||$ and suppose first that $\gamma$ is a circle (i.e. that $G$ is constant). We fix $\epsilon>0$ such that if $\phi\in[-\epsilon,\epsilon]$ then $f_\theta(\phi)<D$ and if $\phi\in[\pi-\epsilon,\pi+\epsilon]$, $f_\theta(\phi)>D$. This property will remain true (for all $\theta$ and for the same $\epsilon$) if $G$ is small enough. If $\gamma$ is a circle, $f_\theta(\phi)$ is strictly increasing on $]\,0\, ,\,\pi\,[$ and strictly decreasing on $]\,\pi\, ,\,2\pi\,[$. Thus, if $G$ and its derivative are small, $f_\theta$ will be strictly increasing on $]\,\epsilon\, ,\,\pi-\epsilon\,[$ and strictly decreasing on $]\,\pi+\epsilon\, ,\,2\pi-\epsilon\,[$ for all $\theta$. There are therefore only two points at distance $D$ from $\gamma(\theta)$, so there is a unique $n$-gon with edges length $D$ inscribed in the curve at $\gamma(\theta)$ (if $G$ and its derivative are small enough). Let $\Gamma$ be a regular simple closed curve satisfying ($C_4(D/\sqrt{2})$) and ($C(D)$). We take $D=1$ for simplicity. Let $\gamma$ parametrise $\Gamma$ by arc length counterclockwise.\
\
Since the unit normal vector at $\gamma(t)$ is $n(t)=(-\gamma_2'(t),\gamma_1'(t))$, $y(\gamma(t))=\gamma(t)+n(t)$ is differentiable in $t$. (Notice that we cannot use the implicit function theorem here, because the derivative of $||x-z||$ vanishes precisely at $z=y(x)$.) Moreover, the angle between the oriented line segment $[\gamma(t),y(\gamma(t))]$ and the horizontal axis strictly increases. Given $\theta\in[0,2\pi]$, there is thus a unique (oriented) line segment $[x,y(x)]$ making an angle $\theta$ with the horizontal axis, we define $G(\theta)$ as its middle point. Since $y(x)$ is differentiable with respect to $x$, $G(\theta)$ is differentiable with respect to $\theta$. Then, $\wt{\gamma}(\theta)=G(\theta)+\frac{1}{2}(\cos\theta,\sin\theta)$ is a parametrisation of $\Gamma$ (see the figure below, on the left). By definition, $y(\wt{\gamma}(\theta))=\wt{\gamma}(\theta+\pi)$, and since the tangent of $\Gamma$ at $x$ is normal to $x-y(x)$, we have $$\bigl<\wt{\gamma} '(\theta)\,|\,(\cos\theta,\sin\theta)\bigr>
=\bigl< G'(\theta)\,|\,(\cos\theta,\sin\theta)\bigr>=0,$$ which implies $G'(\theta)=r(\theta)(-\sin\theta,\cos\theta)$ for some continuous $r$.\
Now, since $\Gamma$ has property ($C_4(1/\sqrt{2})$), there is a unique square $S(\theta)$ with edges length $\frac{1}{\sqrt{2}}$ inscribed in $\Gamma$ at $\wt{\gamma}(\theta)$. Since $\wt{\gamma}(\theta+\pi)$ is the unique point of $\Gamma$ at distance $1$ from $\wt{\gamma}(\theta)$, $\wt{\gamma}(\theta+\pi)$ is the vertex of $S(\theta)$ diagonal to $\wt{\gamma}(\theta)$ ($S(\theta)$ has diagonal $1$). Thus, $G(\theta)$ is also the center of $S(\theta)$, which implies $G(\theta+\pi/2)=G(\theta)$ (see figure below, on the right).
Therefore, $G'(\theta+\pi/2)=G'(\theta)$, i.e. $r(\theta+\pi /2)(\cos\theta,\sin\theta)=r(\theta)(-\sin\theta,\cos\theta)$, so $r(\theta)=0$, $G(\theta)$ is constant, and hence $\Gamma$ is a circle. [.3cm ]{}This proof can be easily generalized to show that if a regular closed simple curve has both properties ($C_{2n}(\wt{D})$) and ($C(D)$), with $D$ two times the radius of the regular $2n$-gon with edges length $\wt{D}$, then it is the circle. This however leaves open the following:
Are there $D,\wt{D}>0$ and curves other than the circle which have property $(C(D))$ and ($C_n(\wt{D})$) for some $n\ge 3$ ? Are there (non regular or non ${\cal C}^1$) curves with properties $(C(D))$ and ($C_4(\frac{D}{\sqrt{2}})$) ?
We finish with the following proposition, which gives a motivation for the “$!$” in the definition of ($C(D)$):
\[prop2\] Let $\Gamma$ be a continuous closed piecewise regular curve satisfying [($C(D)$)]{}. Then, $\Gamma$ is regular.
Piecewise regular means that the curve is regular everywhere, except possibly at a finite number of points. The proof is left as an exercise. ([*Hint:* ]{} Show that $y(x)$ is continuous in $x$, then that $\Gamma$ cannot have corners.) Notice that taking $b=0$ in the first figure of this paper yields a curve of constant diameter with corners (the well known Reuleaux triangle).
I wish to thank D. Cimasoni, G. Wanner, and especially B. Dudez, librarian at the math department in Geneva, for his bibliographical help.
[10]{}
W. Blaschke. , volume 3. Thales-Verl., Essen, 1982–1986.
H. G. Eggleston and S. J. Taylor. On the size of equilateral triangles which may be inscribed in curves of constant width. , 27:438–448, 1952.
L. Euler. De curvis triangularibus. , II:3–30, 1780. Reprinted in Opera Omnia, vol. 28.
A. Hurwitz. Sur quelques applications g[é]{}om[é]{}triques des s[é]{}ries de [F]{}ourier. , pages 357–408, 1902. tome 19.
Ch. Jordan and R. Fiedler. . Hermann, Paris, 1912.
H. Minkowski. [Ü]{}ber die [Kö]{}rper konstanter [B]{}reite. In [*Gesammelte Abhandlungen*]{}, volume II, pages 277–279. Chelsea Publ., New York, 1967. Reprint of the Leipzig Edition, 1911.
S. Rabinowitz. A polynomial curve of constant width. , 9(1):23–27, 1997.
H. Rademacher and O. Toeplitz. . Princeton University Press, Princeton, N.J., 1957.
F. Reuleaux. . Braunschweig, 1875. Available at [ http://historical.library.cornell.edu/kmoddl/toc[\_]{}reuleaux1.html]{}.
S. A. Robertson. Smooth curves of constant width and transnormality. , 16(3):264–274, 1984.
|
---
abstract: 'When a chromophore interacts with titrable molecular sites, the modeling of its photophysical properties requires taking into account all of their possible protonation states. We have developed a multi-scale protocol, based on constant-pH molecular dynamics simulations coupled to QM/MM excitation energy calculations, aimed at sampling both the phase space and protonation state space of a short polypeptide featuring a tyrosine–tryptophan dyad interacting with two aspartic acid residues. We show that such a protocol is accurate enough to reproduce the tyrosine UV absorption spectrum at both acidic and basic pH. Moreover, it is confirmed that UV-induced radical tryptophan is reduced thanks to an electron transfer from tyrosine, ultimately explaining the complex pH-dependent behavior of the peptide spectrum.'
author:
- Elisa Pieri
- Vincent Ledentu
- Nicolas Ferré
bibliography:
- 'peptideM.bib'
title: 'Sampling the protonation states: pH-dependent UV absorption spectrum of a polypeptide dyad'
---
Introduction
============
The classical atomistic modeling of a biological molecule like a polypeptide, a protein or a DNA double helix usually involves a converged sampling of its configuration space, i.e. atom positions and velocities. Molecular dynamics (MD) simulations, in which trajectories are generated by solving classical Newton equations, are clearly among the most popular available methods and take benefit of continuous improvements on both the software (eg replica-exchange, accelerated MD) and hardware sides (GPUs, Anton)[@Durrant_11; @Grand_13; @Perilla_15; @Mori_16]. A typical MD simulation starts with some required input parameters: the force-field defining the atom-atom interaction energy and a set of atom coordinates and velocities used as initial conditions. The latter geometrical parameters are commonly obtained from available experimentally derived structures, often by means of X-ray diffraction or NMR spectroscopies[@Schneider_09]. However, no information is usually found regarding the distribution of the protonation states of titrable moieties like aspartic acid, lysine side-chains and similar in a protein. Hence the model needs to be complemented by an educated guess of these protonation states.
Most of the times, the protonation state of a titratable moiety is determined by comparing its [$\mathrm{p}K_\mathrm{a}$]{}with the pH of the system. Of course, experimental [$\mathrm{p}K_\mathrm{a}$]{}are macroscopic values which can barely be attributed to a given titrable site in a molecular system featuring several, and possibly interacting, sites. Hence empirical methods have been developed to give a quick and rough estimation of effective microscopic [$\mathrm{p}K_\mathrm{a}$]{}values for all the titrable sites in a system. For instance, the PropKa approach[@Olsson_11] uses an available 3-dimensional structure to estimate amino-acidic [$\mathrm{p}K_\mathrm{a}$]{}values in a protein. On the other hand, the recently developed constant-pH molecular dynamics (CpHMD) method[@Goh_12; @Swails2012; @Swails2014; @Huang_16a] has been especially designed to sample the protonation states of titrable amino-acids as a function of pH. Roughly speaking, this method introduces a Metropolis-based probability eventually allowing to change protonation states during the course of a normal MD simulation. This method has been shown to efficiently sample both the phase space and the protonation state space at the same time, given that sufficiently long trajectories are produced. Ultimately, the CpHMD simulations result in accurate [$\mathrm{p}K_\mathrm{a}$]{}predictions[@Lee_14a].
However, instead of using this information to decide on the most probable protonation states of the titrable sites, the same CpHMD trajectories can be exploited to give access to an ensemble of structures featuring a probability distribution of the protonation states at a given pH value, in agreement with the computed [$\mathrm{p}K_\mathrm{a}$]{}values. Furthermore, this ensemble can be used to calculate in a second step any molecular property whose value depends on the pH. This is precisely the target of the present study, in which the pH-dependent UV absorption spectrum of a small polypeptide is simulated for the first time. In such a case, the properties of interest (vertical excitation energies and oscillator strengths) have to be evaluated by a quantum mechanical method coupled to an approximate description of the interactions between the chromophore and its environment (QM/MM)[@Senn_09]. To the best of our knowledge, all the QM/MM models reported in the literature assume a single and constant (i.e. most probable) distribution of the protonation states (which will be called microstate in the following). In other words, the calculated molecular property is somehow biased towards this particular microstate.
The here-proposed CpHMD-then-QM/MM work-flow can be seen as the generalization of the routine MD-then-QM/MM approach[@Houriez_08; @Houriez_10; @Olsen_10a], used when (classical) nuclear motion contributions to a given molecular property are needed. The successful application of such a work-flow relies on a statistically meaningful selection of snapshots along the MD trajectory. Moreover the number of such snapshots has to be large enough to ensure the convergence of the property averaged value, ie with a reasonable standard deviation. In the case of the CpHMD approach, reaching such a convergence is certainly more involved than in standard MD, since the phase space is complemented with the protonation state space[@Swails2014].
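As a schematic illustration of such a convergence check (the synthetic data and the target value below are assumptions, not results from the present simulations), one can monitor the running average and standard error of the property over the accumulated snapshots:

```python
import numpy as np

# Synthetic stand-in for a property (e.g., a vertical excitation energy in eV)
# computed on uncorrelated snapshots; mean, spread and target are assumptions.
rng = np.random.default_rng(0)
energies = rng.normal(loc=4.7, scale=0.15, size=5000)

target_sem = 0.005                                      # eV, assumed target
counts = np.arange(1, len(energies) + 1)
running_mean = np.cumsum(energies) / counts
running_sem = np.array([energies[:n].std(ddof=1) / np.sqrt(n)
                        for n in range(2, len(energies) + 1)])
n_needed = 2 + int(np.argmax(running_sem < target_sem))
print(f"converged mean = {running_mean[-1]:.3f} eV after {n_needed} snapshots "
      f"(standard error < {target_sem} eV)")
```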
After introducing the peptide, we will briefly give some details regarding the CpHMD and QM/MM models used in the present study, and finally we will describe the procedure that we followed to obtain the spectrum.
Computational details
=====================
#### The Peptide M.
The subject of our study is a $\beta$-hairpin 18-mer, named Peptide M, designed by Barry *et al.*[@Pagba2015] and containing two UV-absorbing chromophores: tyrosine (Y5) and tryptophan (W14). The ultraviolet absorption spectrum features a dependency upon the pH value: the trace obtained by subtracting the tryptophan spectrum (recorded in water) from that of Peptide M (i.e. the Y5 spectrum) is always dominated in the 250-to-350-nm region by the $\pi-\pi^*$ transitions of the phenol ring, but the [$\lambda_\mathrm{max}$]{} undergoes a red-shift from $\sim$ 283 nm at pH 5 to $\sim$ 292 nm at pH 11. There is also an additional red-shift of a few nm at pH=5 with respect to tyrosine in water. Quoting [@Pagba2015], “*the red shift of the tyrosine ultraviolet spectrum in Peptide M is attributable to the close proximity of the cross-strand Y5 and W14 to form a Y5-W14 dyad.*”
![Peptide M structure and experimental UV absorption spectra (in nm)[@Pagba2015] of Y5 in Peptide M at pH=5 (red) and pH=11 (blue).[]{data-label="fig:peptideM_spectrum"}](peptideM_spectrum){width="75.00000%"}
Since the experimental [$\mathrm{p}K_\mathrm{a}$]{} of the tyrosine side chain in water is $\sim$ 10.9[@Cantor1980], it is reasonable to attribute this behavior to the deprotonation occurring at basic pH; however, the presence of two other titratable residues, the aspartic acids D3 and D11, contributes to the complexity of the protonation microstate landscape. The small size of the peptide and the limited number of titratable amino-acids make this system the ideal case study for the development and testing of our method.
#### CpHMD Method.
We carried out the CpHMD method simulations in explicit solvent using a discrete protonation state model as presented by Roitberg *et al.*[@Swails2014] and implemented in the AMBER16 software suite[@Case2016]. In this method, the standard molecular dynamics steps are performed in explicit solvent, and periodically interspersed with attempts to change the protonation state in GB implicit solvent, which avoids the problem of the solvent molecule orientation; these attempts are regulated by a Metropolis Monte Carlo approach. After a successful protonation state change, which is handled by changing the charges on each atom of the residue according to the designed force field (AMBER FF10), the solvent molecules and non-structural ions are restored and relaxed, keeping the solute frozen; then, the velocities of the solute atoms are recovered, allowing the standard dynamics to continue.\
For this type of calculation, we made use of the replica exchange technique applied along the pH-dimension (pH-REMD), in order to enhance the sampling capabilities and get an acceptable convergence[@Itoh2011; @Swails2012] in the given time. Our simulations were carried out using periodic boundary conditions and with a total length of 40 ns, which we considered a good compromise between accuracy in the convergence and computational time; we used 8 pH-replicas, spanning from pH 3 to pH 6 and from pH 9 to pH 12 with one pH unit as interval.\
For the single-microstate trajectories, we used the temperature replica exchange technique (T-REMD), aiming at overcoming small energy barriers and therefore exploring the potential energy surface exhaustively; we chose 6 temperature values from 260 to 360 K, with a 20 K interval.
#### QM/MM Method.
We extracted 40000 equally spaced snapshots from the trajectories at pH 5 and 11, and coupled each frame to the corresponding protonation microstate. This data allowed us to get a spatial distribution of point charges.\
We chose the tyrosine side-chain as QM subsystem, inserting a hydrogen link-atom between $C\alpha$ and $C\beta$, and calculated the electrostatic potential acting on each QM atom using a direct sum approach over regularly spaced images of the primitive cell used in the MD simulations. $7^3$ image boxes ensure the convergence of the electrostatic potential. In the case of an electrically charged system with total charge $q_t$, we neutralized each image by placing a $-q_t$ charge at its center. In other words, the electrostatic potential experienced by the QM subsystem originates from the (charged) primitive cell and from neutralized images. Electrostatic embedding of the QM subsystem is realized thanks to the ESPF method[@Ferre_02], as implemented into our local version of Gaussian09[@Gaussian09].
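The direct-sum construction of the external potential can be sketched as follows. This is an illustrative Python fragment written for this text: the box geometry, the charge units and the way the neutralizing charge $-q_t$ is obtained are simplifying assumptions, and the fragment does not reproduce the actual ESPF implementation.

```python
import numpy as np

def mm_potential_on_qm(qm_xyz, mm_xyz, mm_q, box, q_total=None, n_images=3):
    """Electrostatic potential felt by each QM atom, summed over the primitive
    cell and over (2*n_images+1)**3 - 1 periodic images of the MM charges.
    Each image (but not the primitive cell) is neutralized by a point charge
    -q_total placed at its center; the cell is assumed to span [0, box)."""
    q_total = mm_q.sum() if q_total is None else q_total
    pot = np.zeros(len(qm_xyz))
    shifts = range(-n_images, n_images + 1)
    for ix in shifts:
        for iy in shifts:
            for iz in shifts:
                shift = np.array([ix, iy, iz], dtype=float) * box
                for i, r_qm in enumerate(qm_xyz):
                    d = np.linalg.norm(mm_xyz + shift - r_qm, axis=1)
                    pot[i] += np.sum(mm_q / d)
                    if (ix, iy, iz) != (0, 0, 0):   # neutralize image cells only
                        pot[i] -= q_total / np.linalg.norm(shift + box / 2.0 - r_qm)
    return pot

# Toy usage: one QM site at the center of a cubic 30x30x30 box with 10 MM charges
rng = np.random.default_rng(1)
qm = np.array([[15.0, 15.0, 15.0]])
mm = rng.uniform(0.0, 30.0, size=(10, 3))
print(mm_potential_on_qm(qm, mm, np.full(10, -0.1), np.array([30.0, 30.0, 30.0])))
```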
The [$\lambda_\mathrm{max}$]{}and oscillator strengths for the first four excited states were calculated using the Gaussian09 package[@Gaussian09] at the TD-DFT B3LYP/6-311G\* level of theory; this choice is justified by the aim of seeking qualitative and not necessarily quantitative accordance with the experimental data. Nevertheless we have assessed the quality of the TDDFT $S_0\rightarrow S_1$ vertical excitation energy with respect to state-of-the-art CASPT2 ones obtained using the Molcas package[@Aquilante_15] on a subset of 10 representative structures for both the protonated and deprotonated forms of Y5 side-chain. A full $\pi$ + oxygen lone pairs active space has been selected in the CASSCF calculations, using the triple-$\zeta$ ANO-L-VTZP basis set together with the resolution of identity based on the Cholesky decomposition of two-electron integrals[@Bostrom_10]. All details are available in Supporting Informations. Inspection of Table \[tab:caspt2\_benchmark\] clearly shows the qualitative agreement between CASPT2 and TDDFT vertical excitation energies. In particular, the 100 nm red-shift caused to the $S_0\rightarrow S_1$ transition by tyrosine deprotonation is correctly reproduced (90 nm).
----------------------- --------------- ------- ----------------- -------
                         Protonated Y5           Deprotonated Y5
                         CASPT2          TDDFT   CASPT2            TDDFT
$S_0 \rightarrow S_1$    271             258     371 $\pm$ 12      348
----------------------- --------------- ------- ----------------- -------

: Benchmark vertical excitation energies of tyrosine (in nm), averaged over 10 different configurations for each protonation micro-state (protonated and deprotonated Y5).[]{data-label="tab:caspt2_benchmark"}
#### Spectrum Elaboration.
The absorption spectra were generated at room temperature with normalized Lorentzian functions from the excitation energies for the first four excited states and the corresponding oscillator strengths using Newton-X 2.0[@Barbatti2014; @Barbatti2015], which adopts the nuclear ensemble approach[@Barbatti2007]. Data of the experimental spectra published in [@Pagba2015] have been kindly provided by Prof. Barry[@PersonalCommunication].
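A minimal sketch of this broadening step is given below; the Lorentzian half-width, the example transitions and the omission of the physical cross-section prefactors used by Newton-X are simplifications made for illustration only.

```python
import numpy as np

def lorentzian_spectrum(exc_energies_eV, osc_strengths, grid_eV, gamma=0.2):
    """Average, over snapshots, of oscillator-strength-weighted and
    area-normalized Lorentzians of half-width `gamma` (eV) centred on the
    vertical excitation energies; inputs have shape (n_snapshots, n_states)."""
    exc = np.atleast_2d(exc_energies_eV)
    osc = np.atleast_2d(osc_strengths)
    spectrum = np.zeros_like(grid_eV)
    for e_snap, f_snap in zip(exc, osc):
        for e, f in zip(e_snap, f_snap):
            spectrum += f * (gamma / np.pi) / ((grid_eV - e) ** 2 + gamma ** 2)
    return spectrum / len(exc)

# Example with made-up transitions for two snapshots and four excited states
grid = np.linspace(3.5, 6.5, 500)                               # eV
energies = [[4.4, 4.9, 5.6, 6.1], [4.5, 5.0, 5.5, 6.2]]
strengths = [[0.03, 0.01, 0.20, 0.10], [0.02, 0.02, 0.18, 0.12]]
sigma = lorentzian_spectrum(energies, strengths, grid)
print("lambda_max (nm):", 1239.84 / grid[np.argmax(sigma)])
```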
Results
=======
#### Microstate populations.
Titration curves for all titrable residues in the system are the first useful information coming out from CpHMD simulations. Their shapes not only give qualitative information regarding the convergence of the simulations, they may also indicate non-Henderson-Hasselbalch [@Onufriev_01; @Po_01] behaviors possibly arising when titrable sites are strongly interacting. In the case of Peptide M (Figure \[fig:titration\_curves\]), we first produced 12 ns long trajectories for pH ranges 3–6 and 9–12. The smooth sigmoidal shape of the 3 titration curves corresponding to D3, Y5 and D11 is a good indication of a converged exploration of both the phase and protonation state spaces.
![Titration curves (deprotonated fraction as a function of pH) from CpHMD trajectories for the three titrable residues D3, Y5 and D11 in Peptide M.[]{data-label="fig:titration_curves"}](titration){width="75.00000%"}
Hill fitting [@Onufriev_01] of the titration curves results in Hill factors $n_h$ equal to 0.96, 1.07 and 0.93 for D3, Y5 and D11 respectively. In other words, these titrable residues are interacting negligibly in the protonation state space. The subsequent analysis of the fitted curves allows us to determine the microscopic [$\mathrm{p}K_\mathrm{a}$]{} value of each titrable residue. As expected, the [$\mathrm{p}K_\mathrm{a}$]{} value of Y5, close to 10 and not far from the reference [$\mathrm{p}K_\mathrm{a}$]{} of tyrosine (in water), is higher than the aspartic acid ones (about 4). This result implies that at pH=5, the pH value at which the Peptide M absorption spectrum has been experimentally determined, the deprotonated form of both D3 and D11 dominates. Of course, at this pH value, Y5 is always protonated. On the other hand, at pH=11, corresponding to the second experimental absorption spectrum value, both D3 and D11 are always deprotonated while Y5 is predominantly in the deprotonated form.
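The Hill analysis mentioned above amounts to fitting a generalized Henderson-Hasselbalch curve to the deprotonated fractions; a minimal sketch is given below, with illustrative fractions rather than the actual simulation data (SciPy is assumed to be available).

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(pH, pKa, n_h):
    """Generalized Henderson-Hasselbalch (Hill) deprotonated fraction."""
    return 1.0 / (1.0 + 10.0 ** (n_h * (pKa - pH)))

# Illustrative deprotonated fractions at the acidic replica pH values
pH_values = np.array([3.0, 4.0, 5.0, 6.0])
frac_deprot = np.array([0.08, 0.44, 0.88, 0.99])
(pKa_fit, n_h_fit), _ = curve_fit(hill, pH_values, frac_deprot, p0=(4.0, 1.0))
print(f"pKa = {pKa_fit:.2f}, Hill coefficient n_h = {n_h_fit:.2f}")
```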
After having established qualitatively the relative populations of the various microstates, we have expanded to 40 ns the trajectories corresponding to the same pH ranges 3–6 and 9–12. The D3 [$\mathrm{p}K_\mathrm{a}$]{}value converges to 4.11, while D11 [$\mathrm{p}K_\mathrm{a}$]{}is evaluated to 4.27. The detailed analysis of the microstates is reported in Figure \[fig:microstate\_populations\].
![D3, Y5 and D11 microstate populations at pH=5 and pH=11. “d” stands for deprotonated while a number 1–4 stands for protonated in the corresponding position in Peptide M).[]{data-label="fig:microstate_populations"}](populations){width="75.00000%"}
First, it should be noted that, in principle, the positions of the acidic proton on one or the other oxygen atom of D3 (or D11) are equivalent. However, the small difference in the corresponding populations (Figure \[fig:microstate\_populations\]), about 0.1%, indicates that the trajectories are sufficiently converged to obtain reliable population estimates. Y5 is always protonated at pH=5 while D3 and D11 are predominantly (73%) deprotonated. On the other hand, there exists a noticeable population (14%) in which D11 is protonated on one of the two oxygen atoms. The same occurs with D3, but to a lower extent (8%). The populations of all 20 other possible micro-states amount to 4% in total.
At pH=11, both D3 and D11 are fully deprotonated, while Y5 is 93% deprotonated. Accordingly, only 2 micro-states are populated. Compared to pH=5 populations, this situation looks easier to handle. We will see in the following that it is not necessarily true.
With respect to the [$\mathrm{p}K_\mathrm{a}$]{} value of isolated aspartic acid in water (3.9), the D3 value is the closer one, and the slightly larger stabilization of the protonated form of D11 may be attributed to enhanced interactions between D11 and other components of Peptide M. Figure \[fig:distances\] (complemented with Table \[tab:distances\]) reports a selected set of average distances at pH=5 and pH=11.
![Selected average distances (in Å) at pH=5 (in red) and pH=11 (in blue) between D3 (blue, bottom left), Y5 (purple), D11 (blue, top right), W14 (orange) and R16 (green).[]{data-label="fig:distances"}](Distances){width="75.00000%"}
Y5 $\cdots$ W14 Y5 $\cdots$ D3 Y5 $\cdots$ D11 W14 $\cdots$ D3 W14 $\cdots$ D11 Y5 $\cdots$ R16
------------ ----------------- ---------------- ----------------- ----------------- ------------------ -----------------
pH=5 $9.1 \pm 2.2$ $10.1 \pm 2.1$ $10.8 \pm 1.3$ $6.4 \pm 2.9$ $12.6 \pm 1.6$ $17.1 \pm 2.3$
D3:d,D11:d $8.8 \pm 0.9$ $9.7 \pm 1.9$ $10.7 \pm 0.8$ $6.8 \pm 1.9$ $10.5 \pm 0.7$ $19.2 \pm 2.1$
D3:d,D11:3 $11.1 \pm 1.0$ $10.4 \pm 1.0$ $11.5 \pm 0.9$ $4.5 \pm 1.7$ $13.6 \pm 1.5$ $18.3 \pm 0.9$
D3:d,D11:1 $8.5 \pm 0.7$ $10.3 \pm 0.8$ $11.5 \pm 0.8$ $5.9 \pm 1.2$ $11.1 \pm 0.8$ $15.0 \pm 1.5$
D3:1,D11:d $8.5 \pm 0.5$ $10.7 \pm 0.7$ $10.7 \pm 0.7$ $6.6 \pm 1.3$ $10.6 \pm 0.6$ $20.4 \pm 1.6$
D3:3,D11:d $8.0 \pm 0.7$ $10.3 \pm 1.4$ $11.7 \pm 0.8$ $5.5 \pm 1.4$ $10.4 \pm 0.7$ $17.5 \pm 1.9$
pH=11 $9.1 \pm 1.6$ $ 9.2 \pm 1.7$ $12.1 \pm 1.0$ $7.1 \pm 3.1$ $10.6 \pm 1.4$ $17.0 \pm 2.5$
: Selected average distances and standard deviations (in Å) at pH=5 (also decomposed according to the most important microstates, see Figure \[fig:microstate\_populations\] for notation) and pH=11 between D3, Y5, D11, W14 and R16.
\[tab:distances\]
First, it should be noted that the pH does not seem to modify the average distance (9.1 Å) between Y5 and W14. However, the corresponding fluctuations are larger at acidic pH than at basic pH. Regarding the distances between the two aspartic acids (D3 and D11) and the members of the dyad (Y5 and W14), they show different behaviors with respect to the pH. While Y5 is always closer to D3 than D11, the distance between Y5 and D3 decreases with increasing pH while the distance between Y5 and D11 increases at the same time. On the other hand, W14 is always much closer to D3 than D11. When going from acidic to basic pH, the distance between W14 and D3 slightly increases while the distance between W14 and D11 decreases by 2 Å. Finally, at variance with results indicated by Pagba et al. [@Pagba2015], our simulations do not show evidence of strong (hydrogen-bond) interactions between Y5 and R16, the corresponding distance being always larger than 17 Å.
#### Convergence and correlations.
The accuracy of the proposed simulation protocol ultimately depends on the quality of the underlying statistics, i.e. the production of a sufficiently large number of uncorrelated snapshots extracted from the CpHMD trajectories. We first investigated the dependence of the [$\mathrm{p}K_\mathrm{a}$]{}predicted values with the length of the trajectories, using four 10 ns and two 20 ns windows extracted from the available 40 ns trajectories at each pH value and compared them with the [$\mathrm{p}K_\mathrm{a}$]{}values obtained from the 40 ns trajectories. As reported in Supplementary Informations (Figure \[fig:D3pKaEVO\]), the D3 [$\mathrm{p}K_\mathrm{a}$]{}value does not change much, converging to 4.11. However, D11 [$\mathrm{p}K_\mathrm{a}$]{}value is less stable, ranging from 4.66 to 3.80 if only 10 ns of trajectory are used (Supplementary Informations, Figure \[fig:D11pKaEVO\]). Nevertheless, the [$\mathrm{p}K_\mathrm{a}$]{}value obtained from the 40 ns trajectories is 4.27.
We then determined the minimum time step between two consecutive uncorrelated snapshots extracted from the CpHMD trajectories. This was achieved by analyzing the autocorrelation function of the QM/MM vertical excitation energies (ground to the first 4 excited states) computed from 10000 snapshots separated by 1 fs. As apparent in Supplementary Informations (Figure \[fig:ACF\]), two consecutive snapshots are uncorrelated if they are separated by about 700 fs. Because the QM/MM calculations are somehow expensive, we decided to sample the CpHMD trajectories each ps in the following.
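The decorrelation analysis can be sketched as follows; the AR(1) series below is a synthetic stand-in for the actual QM/MM excitation-energy time series, and the 1 fs spacing and decay constant are assumptions.

```python
import numpy as np

def autocorrelation(x):
    """Normalized autocorrelation function of a one-dimensional time series."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    return acf / acf[0]

rng = np.random.default_rng(2)
n_steps, tau_fs = 10000, 300.0        # series length (1 fs spacing) and decay time
series = np.empty(n_steps)
series[0] = rng.normal()
for t in range(1, n_steps):
    series[t] = np.exp(-1.0 / tau_fs) * series[t - 1] + rng.normal()
acf = autocorrelation(series)
decorrelation_fs = int(np.argmax(acf < np.exp(-1.0)))   # first lag below 1/e
print(f"estimated decorrelation time ~ {decorrelation_fs} fs")
```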
#### Analyzing Y5 spectrum at pH=5.
The computed UV absorption spectrum of Y5 in Peptide M at pH=5 is reported in Figure \[fig:acid\_spectra\].
![Computed UV absorption spectrum for Y5 in Peptide M and contributions from each important microstate at pH=5 weighted by their respective population.[]{data-label="fig:acid_spectra"}](acid_spectra.pdf){width="100.00000%"}
It includes 2 bands between 200 and 300 nm, with [$\lambda_\mathrm{max}$]{} values equal to 263.6 and 223.9 nm. Given the TDDFT 20 nm blue-shift already documented (Table \[tab:caspt2\_benchmark\]), the first [$\lambda_\mathrm{max}$]{} value may reproduce the 276 or 283 nm experimental value. The second maximum is in even better agreement with the experimental [$\lambda_\mathrm{max}$]{} at 227 nm.\
In order to disentangle the contribution of each microstate, we have performed single microstates MD simulations with T-REMD, i.e. with a fixed distribution of protonation states, and calculated the corresponding UV spectra (Figure \[fig:acid\_spectra\]). Obviously, the main contributions originate from the most abundant microstate in which D3 and D11 are deprotonated, while Y5 is protonated. Accordingly, the pH=5 spectrum of Y5 could be satisfactorily modeled using this single microstate. However, it is interesting to have a look to the other contributions, i.e. the microstates in which either D3 or D11 are protonated. When considering the first absorption band, D3 protonation is responsible for 0.4 nm red-shift. On the other hand, D11 protonation comes together with a more limited 0.1 or 0.2 nm red-shift, depending on which D11 oxygen atom the proton is located on. Again, for the second absorption band, D3 causes a larger [$\lambda_\mathrm{max}$]{}red-shift (0.5 nm) than D11 (0.0 to 0.3 nm).
Such a D3 vs D11 difference, while remaining small, demonstrates that the present CpHMD-then-QM/MM protocol is able to capture subtle [$\lambda_\mathrm{max}$]{}changes caused by modifications of the protonation states. As already seen from the average distances (Figure \[fig:distances\]), the origin of [$\lambda_\mathrm{max}$]{}slight changes cannot be simply related to the distances between Y5 and each aspartic acid. Indeed, the distance between Y5 and D3 is reduced when D3 is protonated, while it is strongly enlarged (by more than 2Å) when D11 is protonated in position 3 (see Table \[tab:distances\]).
However, we expected the final CpHMD spectrum at pH 5 (Figure \[fig:acid\_spectra\] in black) to feature [$\lambda_\mathrm{max}$]{}values very close to the ones of the most abundant microstate (Figure \[fig:acid\_spectra\] in red), while our results show a small difference (0.2 nm blue-shift) in the first absorption band and a large one (1.3 nm red-shift) in the second band. Actually, these differences cannot be justified only by the contributions of the other microstates. Indeed, their corresponding [$\lambda_\mathrm{max}$]{}are always red-shifted with respect to the main microstate [$\lambda_\mathrm{max}$]{}. While these shifts are going in the right direction in the case of the second band (222.6 nm $\rightarrow$ 223.9 nm), this is no longer true in the first band (263.8 nm $\rightarrow$ 263.6 nm). Since the distances analysis of the single microstates T-REMD simulations (Table \[tab:distances\]) looks consistent, we conclude that the phase space sampled with the pH-REMD algorithm could possibly be less converged than the T-REMD one for this biological system. In other words, the absorption spectrum of peptide M may require more sampling than its pKa values.
#### Comparing Y5 spectra at pH=5 and pH=11.
The computed UV absorption spectra of Y5 in Peptide M at pH=5 and pH=11 are reported in Figure \[fig:wrong\_spectra\], together with the experimental spectra reproduced from [@Pagba2015]. Going from pH=5 to pH=11, the experimentally reported red-shift is reproduced by our calculations. However, instead of a 10 nm displacement of the first absorption band, applying the CpHMD-then-QM/MM procedure results in a much larger 70 nm red-shift.
![Y5 spectra in Peptide M at pH=5 and pH=11 (anionic Y5), both calculated and experimental[@Pagba2015].[]{data-label="fig:wrong_spectra"}](wrong_spectra){width="90.00000%"}
Several possible reasons may cause such a discrepancy. The use of the B3LYP/6-311G\* level of theory results in the usual blue-shift of the $S_0\rightarrow S_1$ excitation energy. However, this shift does not change going from pH=5 (protonated Y5) to pH=11 (deprotonated Y5).
The quality of the QM/MM electrostatic interactions between Y5 and the surrounding water molecules may be responsible. When deprotonated, the Y5 electron density is strongly localized in the phenolate moiety, hence in close contact with the MM point charges which may in turn induce an overpolarization of Y5 electron density. As a matter of fact, we have compared QM/MM and QM-only vertical excitation energies obtained from 10 selected snapshots, treating the closest water molecules (within a 3 Å distance from Y5 oxygen atom) either with point charges or quantum-mechanically. Inspection of Table \[tab:water\_interactions\] shows that i) the QM/MM approach works well when Y5 is protonated, ii) an average 16 nm blue-shift has to be applied when Y5 is deprotonated.
----------------- ----- ------ ----------
QM QM+q $\Delta$
Protonated Y5 254 259 5
Deprotonated Y5 300 316 16
----------------- ----- ------ ----------
: Influence of the level of theory (fully QM or QM polarized by water point charges (QM+q)) describing the Y5 – water interactions on the first vertical transition (averaged over 10 snapshots, in nm).
\[tab:water\_interactions\]
The corresponding shifted spectrum has been added to Figure \[fig:wrong\_spectra\], in green. Still, a too large pH=5 to pH=11 spectral shift (about 50 nm, instead of 10 nm) is obtained.
In their report, Barry et al. [@Pagba2015] indicate that Y5 and W14 are actually strongly interacting: “*the red shift of the tyrosine ultraviolet spectrum in Peptide M is attributable to the close proximity of the cross-strand Y5 and W14 to form a Y5-W14 dyad.*” As a matter of fact, the UV absorption spectrum of Y5 is perturbed at pH=5, but not at pH=11, with respect to the reference spectrum in water. Conversely, the UV absorption of W14 is perturbed at pH=11, but not at pH=5. Together with other spectroscopic arguments, these perturbations are interpreted as the signature of the formation of a dyad. Actually, their close proximity may promote a photoinduced electron transfer between tyrosine and transient radical tryptophan [@Morozova_03; @Reece_05]. Assuming that W14 can be oxidized by the UV laser used in the experiment[@Pagba2015] to form the radical cation, denoted $\mathrm{W^{.+}}$ in the following, it can then react with protonated Y5 (denoted $\mathrm{Y{-}OH}$ in chemical reaction \[ce1\]) at pH=5 or deprotonated Y5 (denoted $\mathrm{Y{-}O^{-}}$ in chemical reaction \[ce2\]) at pH=11. $$\begin{aligned}
\ce{W^{.+} + Y-OH &<=> W + Y-O^{.} + H+ \label{ce1} \\
W^{.+} + Y-O- &<=> W + Y-O^{.}} \label{ce2}\end{aligned}$$ Whatever the pH value, these chemical reactions result in the generation of a tyrosine radical, whose spectroscopic signature may differ significantly not only from that of neutral (protonated) Y5, but also from that of anionic (deprotonated) Y5. In order to test this hypothesis, we have re-analyzed the pH=11 trajectory, assuming that each deprotonated extracted snapshot (93% of the population) actually features a radical Y5. The corresponding absorption spectrum is reported in Figure \[fig:correct\_spectra\].
![Y5 spectra in Peptide M at pH=5 and pH=11 (radical Y5), both calculated and experimental[@Pagba2015].[]{data-label="fig:correct_spectra"}](correct_spectra){width="90.00000%"}
The improvement of the pH=11 spectrum is spectacular: the two experimental peaks at 283 and 292 nm are perfectly reproduced. Moreover, their respective intensities are also in agreement with experiment. From these results, we can conclude that Y5 and W14 may indeed form a dyad featuring a deprotonated radical tyrosine, possibly in equilibrium with its neutral protonated form at pH=5, even if the average distance between the Y5 oxygen atom and the W14 nitrogen atom in our simulations is rather large at both pH values (9.1 Å). This distance may not be representative of the actual distance in the presence of the Y5-W14 dyad, when W14 is a radical species.
Conclusions
===========
In this article, we have reported a new multi-scale protocol developed for simulating the pH-dependent photophysical properties of a peptide featuring a tyrosine-tryptophan dyad in interaction with two titrable aspartic acid residues. The modeling work-flow features two main steps: (i) the sampling of both the phase space and the protonation state space of the peptide by CpHMD simulations, (ii) the calculation of the tyrosine UV absorption spectrum by means of QM/MM calculations.
Using the replica-exchange approach, CpHMD-based [$\mathrm{p}K_\mathrm{a}$]{}values of the three titrable residues are converged within tens of ns, with uncorrelated snapshots separated by 1 ps. Using the ESPF method, the QM/MM calculations can be carried out on thousands of protonated or deprotonated tyrosine side-chains polarized electrostatically by their environment (peptide and water molecules).
At pH=5, tyrosine in Peptide M is fully protonated (neutral). However, its interaction with aspartic acid or aspartate residues in various minor microstates induces small deviations from the principal microstate.
At pH=11, tyrosine in Peptide M is mostly deprotonated (ionized) while interacting with deprotonated aspartate residues. However, its experimental UV absorption spectrum cannot be explained without assuming that (i) tryptophan can be ionized by the UV-light source and (ii) the tryptophan radical is reduced by an electron transferred from tyrosine, whose UV spectral signature then reflects its radical nature, ultimately confirming the existence of the tryptophan–tyrosine dyad.
In principle, the reported modeling protocol can be applied to the calculation of any pH-dependent molecular property, especially when it depends on a larger protonation state space, as is the case in proteins, which may feature a very large number of titrable residues.
The authors thank the French Agence Nationale de la Recherche for funding (grant ANR-14-CE35-0015-02, project FEMTO-ASR). Mésocentre of Aix-Marseille Université and GENCI (CINES Grant 2017-A0010710063) are acknowledged for allocated HPC resources.
Convergence of the [$\mathrm{p}K_\mathrm{a}$]{}values with the CpHMD simulation time. Autocorrelation function of vertical excitation energies.
---
abstract: 'Cepheids and RR Lyrae stars are important pulsating variable stars in distance scale work because they serve as standard candles. Cepheids follow well-defined period-luminosity (PL) relations defined for bands extending from optical to mid-infrared (MIR). On the other hand, RR Lyrae stars also exhibit PL relations in the near-infrared and MIR wavelengths. In this article, we review some of the recent developments and calibrations of PL relations for Cepheids and RR Lyrae stars. For Cepheids, we discuss the calibration of PL relations via the Galactic and the Large Magellanic Cloud routes. For RR Lyrae stars, we summarize some recent work in developing the MIR PL relations.'
---
Introduction
============
Classical Cepheids and RR Lyrae stars are pulsating stars that play a vital role in the definition of the distance scale ladder[^1]. This is because they are standard candles in the local Universe that permit the calibration of secondary distance indicators (e.g., the peak brightness of type Ia supernovae). The ultimate goal of the distance scale ladder is to determine the Hubble constant ($H_0$) with 1% precision and accuracy. The existence of period-luminosity (PL) relations for Cepheids (from optical to infrared wavelengths) makes distance determination using this type of variable star possible. In this article, we review some prospects of the calibration of Cepheid PL relations and their role in the recent distance scale work (Section \[sec2\]). RR Lyrae stars also obey a PL relation in the infrared, and we review some of the recent developments of such relations in Section \[sec3\].
The Cepheid period-luminosity relation {#sec2}
======================================
The Cepheid PL relation is a 2-D projection of the period-luminosity-color (PLC) relation on the logarithmic period and magnitude plane, where the PLC relation can be derived by combining the Stefan-Boltzmann law, the period-mean density relation for pulsators, and the mass-luminosity relation based on stellar evolution models. Discussion of the physics behind the Cepheid PL relation can be found in [@madore1991 Madore & Freedman (1991)], and will not be repeated here. The PL relation usually takes the linear form of $M_\lambda = a_\lambda \log(P) + b_\lambda$, where $a$ and $b$ are the slope and intercept of the relation in bandpass $\lambda$, respectively. Once the slopes and intercepts of the multi-band PL relations are determined or calibrated, the distance to a nearby galaxy can be obtained by fitting the calibrated PL relations to the Cepheid data in that galaxy (see Fig. \[fig0\]).
![Illustration of using the calibrated Cepheid PL relation to determine the distance modulus to a galaxy. After a calibrated PL relation is adopted, this calibrated PL relation is shifted vertically to fit the observed Cepheids data in a given galaxy, and the vertical offset provides the distance modulus ($\mu$) of the galaxy.[]{data-label="fig0"}](ngeow_fig1-new.eps){width="3.0in"}
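A minimal sketch of this vertical-offset fit (all numbers and variable names below are illustrative placeholders; real analyses weight the fit and correct for extinction) could look as follows:

```python
import numpy as np

# Calibrated PL relation M = a*log10(P) + b in some band (slope a, absolute intercept b).
a, b = -2.96, -1.86                   # placeholder values for illustration

# Observed Cepheids in the target galaxy: log10 of the period (days) and apparent magnitudes.
logP  = np.array([0.6, 0.9, 1.1, 1.3, 1.5])
m_obs = np.array([23.9, 23.0, 22.4, 21.8, 21.2])

# The mean vertical offset between apparent and absolute magnitudes is the distance modulus mu.
mu = np.mean(m_obs - (a * logP + b))
d_mpc = 10 ** ((mu - 25.0) / 5.0)     # mu = 5*log10(d/10 pc)  =>  d in Mpc
print(f"mu = {mu:.2f} mag, d = {d_mpc:.1f} Mpc")
```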
Calibration of Cepheid PL relations
-----------------------------------
Determining the slope of the PL relation is relatively straightforward. The large number of Cepheids discovered in the Magellanic Clouds permits the determination of the PL slope with $\sim$10$^{-2}$ accuracy ([@soszynski2008; @soszynski2010 Soszyński et al. 2008, 2010]). The derivation of PL intercepts, on the other hand, is trickier, because distances to a number of Cepheids need to be known or inferred [*a priori*]{}. There are two routes to calibrate the Cepheid PL intercepts that are commonly found in literature: the Galactic route and the Large Magellanic Cloud route.
The Galactic route relies on Galactic Cepheids that are located in the solar neighborhood, i.e., those within a few kpc. These Cepheids are bright enough that extensive data, both multi-band light curves and radial velocity curves, are available from the literature. However, they suffer from varying extinction and their distances need to be determined independently. A number of Galactic Cepheids are close enough to permit an accurate parallax measurement using [*Hipparcos*]{} ([@vaL2007 van Leeuwen et al. 2007]) or the [*Hubble Space Telescope*]{} ([*HST*]{}, [@benedict2007 Benedict et al. 2007]). In the near future, [*Gaia*]{} will provide reliable parallaxes to almost all nearby Galactic Cepheids. Besides parallaxes, distances to Galactic Cepheids can also be determined from the Baade-Wesselink (BW) technique and its variants. The BW technique combines the measurements of radial velocities and angular diameters to derive the distance and mean radius of a given Cepheid. The angular diameter variations can be determined from the infrared surface brightness method (see, for example, [@storm2011 Storm et al. 2011], and references therein) or the interferometric technique (e.g., as in [@gallene2012 Gallenne et al. 2012]). A critical parameter in the BW technique is the projection factor, or p-factor (which converts the observed radial velocity to the pulsational velocity), because a 1% error in the p-factor translates to a 1% error in the derived distance. For a Cepheid located in an open cluster, the distance to the Cepheid can be inferred from the distance of its host cluster measured via isochrone fitting ([@turner2010 Turner 2010]). Finally, the distance to a large number of Cepheids can be obtained from the calibrated Wesenheit function using [*HST*]{} parallaxes ([@ngeow2012 Ngeow 2012]). Examples of PL relations based on Galactic Cepheids can be found in [@tammann2003 Tammann et al. (2003)], [@ngeow2004 Ngeow & Kanbur (2004)] and [@fouque2007 Fouqu[é]{} et al. (2007)]. It has been argued that the PL relations calibrated with Galactic Cepheids are preferred in distance scale work (see [@tammann2003 Tammann et al. 2003]; [@kanbur2003 Kanbur et al. 2003] and references therein), because the spiral galaxies that are used to calibrate the secondary distance indicators have metallicities close to the solar value, and hence a metallicity correction to the Cepheid PL relation is not needed to derive distances in this way.
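Schematically, and in a commonly used notation (the systemic velocity $v_\gamma$, the sign convention, and any limb-darkening correction are assumptions of this sketch, not details taken from the works cited above), the BW technique compares the angular and linear radius variations of the same star: $$\begin{aligned}
\theta(t) = \frac{2\left[R_0 + \Delta R(t)\right]}{d}\,, \qquad
\Delta R(t) = -p \int_{t_0}^{t} \left[v_{\mathrm{rad}}(t') - v_\gamma\right] dt'\,,\end{aligned}$$ so that fitting the measured $\theta(t)$ (from surface brightness or interferometry) against $\Delta R(t)$ (from the radial-velocity curve) yields both the mean radius $R_0$ and the distance $d$. Since $\Delta R \propto p$, the derived distance scales linearly with the adopted p-factor, which is why a 1% error in $p$ propagates directly into a 1% error in $d$.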
The Large Magellanic Cloud (LMC), located $\sim$50 kpc away, is an irregular galaxy that is far enough to assume that Cepheids in this galaxy lie at the same distance. Yet the LMC is also close enough that stars observed there can be resolved. Therefore, the LMC Cepheids have been commonly used in the previous studies on calibrating the Cepheid PL relations. However, measurements of the LMC distance modulus ($\mu_{\rm LMC}$) show a wide spread, ranging from $\sim$18.0 to $\sim$19.0 mag, with a center around $18.5\pm0.1$ mag (for example, see [@freedman2001; @benedict2002; @shaefer2008 Freedman et al. 2001, Benedict et al. 2002, Schaefer 2008])[^2]. This causes the calibration of the PL intercepts to suffer a systematic error of the order of $\sim$5% ([@freedman2001 Freedman et al. 2001]). For this reason, some of the PL relations derived from the LMC Cepheids leave the PL intercepts un-calibrated (i.e., the values are taken from fitting only), as shown in [@soszynski2008 Soszyński et al. (2008)] and [@ngeow2009 Ngeow et al. (2009)]. Nevertheless, this problem is solved with the latest result published by [@pietrzynski2013 Pietrzy[ń]{}ski et al. (2013)]. By using late-type eclipsing binary systems, they determined the distance to the LMC with 2% accuracy, i.e., $\mu_{\rm LMC}=18.493\pm0.048$ (total error). Then, the PL relations for fundamental mode LMC Cepheids given in [@soszynski2008 Soszyński et al. (2008)] become: $V=-2.762(\pm0.022)\log P - 0.963 (\pm0.015)$, $I=-2.959(\pm0.016)\log P - 1.614(\pm0.010)$ (both uncorrected for extinction), and $W=-3.314(\pm0.009)\log P -2.600 (\pm0.006)$. Similarly, the multi-band PL relations from [@ngeow2009 Ngeow et al. (2009)] can be calibrated, which is summarized in Table \[tab1\].
[**Band**]{} [**Slope**]{} [**Fitted Intercept**]{} [**Calibrated Intercept**]{}
--------------------- ------------------ -------------------------- ------------------------------
$V$ $-2.769\pm0.023$ $17.115\pm0.015$ $-1.378$
$I$ $-2.961\pm0.015$ $16.629\pm0.010$ $-1.864$
$J$ $-3.115\pm0.014$ $16.293\pm0.009$ $-2.200$
$H$ $-3.206\pm0.013$ $16.063\pm0.008$ $-2.430$
$K$ $-3.194\pm0.015$ $15.996\pm0.010$ $-2.497$
3.6 $\mu\mathrm{m}$ $-3.253\pm0.010$ $15.967\pm0.006$ $-2.526$
4.5 $\mu\mathrm{m}$ $-3.214\pm0.010$ $15.930\pm0.006$ $-2.563$
5.8 $\mu\mathrm{m}$ $-3.182\pm0.020$ $15.873\pm0.015$ $-2.620$
8.0 $\mu\mathrm{m}$ $-3.197\pm0.036$ $15.879\pm0.034$ $-2.614$
$W$ $-3.313\pm0.008$ $15.892\pm0.005$ $-2.601$
: Examples of the calibrated multi-band LMC PL relations.[]{data-label="tab1"}
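The calibrated intercepts in Table \[tab1\] follow from a single subtraction: the fitted (apparent-magnitude) intercept is converted into an absolute-magnitude intercept by removing the adopted LMC distance modulus, e.g., for the $V$ band, $$\begin{aligned}
b_V = 17.115 - \mu_{\rm LMC} = 17.115 - 18.493 = -1.378\,,\end{aligned}$$ and likewise for the other bands; the $\pm0.048$ mag uncertainty on $\mu_{\rm LMC}$ then enters all calibrated intercepts as a common systematic term.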
Two additional issues need to be taken into account when calibrating the LMC PL relations: extinction correction and non-linearity of the LMC PL relation. The LMC is known to suffer from differential extinction, hence extinction corrections need to be applied to individual LMC Cepheids by means of extinction maps (e.g., [@zaritsky2004; @haschke2011 Zaritsky et al. 2004, Haschke et al. 2011]). The LMC PL relation is also known to be non-linear in optical bands: the PL relation can be split into two relations separated at 10 days (for examples, see [@sandage2004; @kanbur2004; @ngeow2005; @garcia2013 Sandage et al. 2004, Kanbur & Ngeow 2004, Ngeow et al. 2005, Garc[í]{}a-Varela et al. 2013]). Both these issues, nevertheless, can be remedied by using the Wesenheit function ([@madore1991; @ngeow2005a; @madore2009; @ngeow2009; @bono2010; @inno2013 Madore & Freedman 1991, Ngeow & Kanbur 2005, Madore & Freedman 2009, Ngeow et al. 2009, Bono et al. 2010, Inno et al. 2013]) or moving to the mid-infrared (MIR, from $\sim$3 $\mu\mathrm{m}$ to $\sim$10 $\mu\mathrm{m}$, [@freedman2008; @ngeow2008; @madore2009a; @ngeow2010; @scowcroft2011 Freedman et al. 2008, Ngeow & Kanbur 2008, Madore et al. 2009, Ngeow et al. 2010, Scowcroft et al. 2011]) at which extinction is negligible.
Examples of distance scale application
--------------------------------------
Both the [*HST*]{} $H_0$ Key Project ([@freedman2001 Freedman et al. 2001]) and SN Ia [*HST*]{} Calibration Program ([@sandage2006 Sandage et al. 2006]), two benchmark programs that utilized the Cepheid PL relation in distance scale work, derived a Hubble constant with a $10$% uncertainty. Since then, two additional programs, the SH0ES (Supernovae and $H_0$ for the Equation of State, [@riess2011 Riess et al. 2011]) and the CHP (Carnegie Hubble Program, [@freedman2012 Freedman et al. 2012]), aimed to determine the Hubble constant with a $3$% uncertainty by reducing or eliminating various systematic errors. Again, the Cepheid PL relation plays an important role in these programs. One of the main differences between the SH0ES program and previous programs is that in the SH0ES program the LMC was replaced with NGC 4258 as an anchoring galaxy in the determination of the distance scale ladder. In NGC 4258, the motions of water masers surrounding its central black hole permit an accurate geometrical distance to be determined ([@humphreys2008 Humphreys et al. 2008]). To further reduce the systematic errors along the distance scale ladder, the SH0ES program adopted only “ideal” type Ia supernovae in nearby galaxies. Their peak brightnesses are calibrated using a homogeneous sample of Cepheids, all observed with a single instrument on-board the [*HST*]{}. The CHP, on the other hand, recalibrated the [*HST*]{} $H_0$ Key Project distance scale ladder by adopting the MIR PL relation, where the PL slopes are defined by the LMC Cepheids and the PL intercepts are calibrated with Galactic Cepheids that have [*HST*]{} parallaxes. Similar to SH0ES, CHP also utilized only a single instrument on-board the [*Spitzer Space Telescope*]{} to derive and calibrate the MIR Cepheid PL relations. Both programs derived the Hubble constant with an uncertainty of $\sim$3%.
Period-luminosity relations for RR Lyrae stars {#sec3}
==============================================
RR Lyrae stars follow PL relations in optical to infrared bands. However, the $V$-band bolometric correction for RR Lyrae stars is almost independent of temperature, suggesting the slope of their $V$-band PL relation is zero or very close to it (instead, RR Lyrae stars follow an $M_V$-\[Fe/H\] relation in the $V$-band). In contrast, there is a temperature dependence of the bolometric correction in infrared bands, which translates to an observed $K$-band PL relation ([@bono2001; @bono2003 Bono et al. 2001, Bono 2003]). The observed $K$-band PL relation for RR Lyrae stars can be dated back to [@longmore1986 Longmore et al. (1986)], who derived the relation based on single-epoch observations of RR Lyrae stars in three globular clusters. Recent calibration of the $K$-band PL relation, or the PL$_K$-\[Fe/H\] relation, can be found in, for example, [@sollima2006 Sollima et al. (2006)], [@borissova2009 Borissova et al. (2009)], [@benedict2011 Benedict et al. (2011)] and [@dambis2013 Dambis et al. (2013)]. When calibrating the $K$-band PL relation with RR Lyrae stars in globular clusters, one has to be cautious because RR Lyrae stars near the cluster’s core may suffer from blending ([@majaess2012 Majaess et al. 2012]).
![Preliminary RR Lyrae stars PL relations in [*WISE’s*]{} bands based on $143$ field RR Lyrae stars. Filled and open circles represent the RR Lyrae stars of both Bailey $ab$ and $c$ type, respectively.[]{data-label="fig1"}](ngeow_fig2-new.eps){width="3.0in"}
The derivation of the PL relation for RR Lyrae stars can be extended to MIR wavelengths. This is convincingly demonstrated by [@klein2011 Klein et al. (2011)], who derived the MIR PL relations in [*Wide-field Infrared Survey Explorer (WISE)*]{} $W1$ (3.4 $\mu\mathrm{m})$, $W2$ (4.6 $\mu\mathrm{m})$ and $W3$ (12 $\mu\mathrm{m})$ bands for 76 field RR Lyrae stars. When deriving these PL relations, [@klein2011 Klein et al. (2011)] employed a Bayesian framework where the posterior distances were based on the data from [*Hipparcos*]{}. An updated version of the MIR PL relations with nearly double the sample size is shown in Fig. \[fig1\]. Independently, [@madore2013 Madore et al. (2013)] derived similar MIR PL relations based on four Galactic RR Lyrae stars having parallaxes measured by the [*HST*]{}.
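A stripped-down illustration of such a calibration (a sketch only: it ignores the bias corrections and the full Bayesian treatment of the parallax uncertainties employed in the works cited above, and all names and numbers below are placeholders) reduces to converting each parallax into an absolute magnitude and fitting a straight line in $\log P$:

```python
import numpy as np

# Placeholder data: periods (days), apparent W1 magnitudes, parallaxes (mas).
period = np.array([0.45, 0.51, 0.56, 0.61, 0.66])
m_w1   = np.array([10.8, 10.4, 10.1,  9.9,  9.7])
plx    = np.array([ 2.1,  1.9,  1.8,  1.7,  1.6])

# Absolute magnitude from parallax: M = m + 5*log10(parallax in mas) - 10.
M_w1 = m_w1 + 5.0 * np.log10(plx) - 10.0

# Least-squares PL fit, M = slope*log10(P) + zero_point.
slope, zero_point = np.polyfit(np.log10(period), M_w1, 1)
print(f"M_W1 = {slope:.2f} log P + {zero_point:.2f}")
```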
Conclusion
==========
Independent measurements of the Hubble constant via the distance scale ladder are expected to achieve $\sim$1% uncertainty in the future. This will be possible thanks to the large number of Cepheids and RR Lyrae stars with high-quality data that will become available from various future or on-going projects, such as [*Gaia*]{}, the fourth phase of the Optical Gravitational Lensing Experiment (OGLE-IV), and the VISTA survey of the Magellanic Clouds (VMC). The [*James Webb Space Telescope*]{} (*JWST*), which will operate mainly in the MIR, is expected to routinely observe Cepheids beyond 30 Mpc, and data from this satellite are also expected to allow a Hubble constant to be derived with a 1% uncertainty. Therefore, accurate and independent calibrations of the PL relations for Cepheids and RR Lyrae stars in the MIR are important in preparation for the [*JWST*]{} era.
We would like to thank the invitation of the SOC and the LOC for presenting this talk at the conference. CCN acknowledges the support from NSC grant NSC101-2119-M-008-007-MY3.
2002, *AJ*, 123, 473
2007, *AJ*, 133, 1810
2011, *AJ*, 142, 187
2003, in: D. Alloin and W. Gieren (eds.), *Stellar Candles for the Extragalactic Distance Scale, Lecture Notes in Physics*, 635, 85
2001, *MNRAS*, 326, 1183
2010, *ApJ*, 715, 277
2009, *A&A*, 502, 505
2013, *MNRAS*, 435, 3206
2001, *ApJ*, 553, 47
2008, *ApJ*, 679, 71
2012, *ApJ*, 758, 24
2007, *A&A*, 476, 73
2012, *A&A*, 541, A87
2013, *MNRAS*, 431, 2278
2011, *AJ*, 141, 158
2008, *ApJ*, 672, 800
2013, *ApJ*, 764, 84
2003, *A&A*, 411, 361
2004, *MNRAS*, 350, 962
2011, *ApJ*, 738, 185
1986, *MNRAS*, 220, 279
1991, *PASP*, 103, 933
2009, *ApJ*, 695, 988
2009, *ApJ*, 696, 1498
2013, *ApJ*, 776, 135
2012, *PASP*, 124, 1035
2012, *ApJ*, 747, 50
2004, *MNRAS*, 349, 1130
2005, *MNRAS*, 360, 1033
2008, *ApJ*, 679, 76
2005, *MNRAS*, 363, 831
2009, *ApJ*, 693, 691
2010, *MNRAS*, 408, 983
2013, *Nature*, 495, 76
2011, *ApJ*, 730, 119
2004, *A&A*, 424, 43
2006, *ApJ*, 653, 843
2008, *AJ*, 135, 112
2011, *ApJ*, 743, 76
2006, *MNRAS*, 372, 1675
2008, *AcA*, 58, 163
2010, *AcA*, 60, 17
2011, *A&A*, 534, A94
2003, *A&A*, 404, 423
2010, *Ap&SS*, 326, 219
2007, *MNRAS*, 379, 723
2004, *AJ*, 128, 1606
[^1]: For latest version of the distance scale ladder, see [http://kiaa.pku.edu.cn/$\sim$grijs/\
distanceladder.pdf]{}
[^2]: Also, see the LMC distance moduli compiled in [http://clyde.as.utexas.edu/SpAstNEW/\
head602.ps]{}
---
abstract: 'We study critical Fermi surfaces in generic dimensions arising from coupling finite-density fermions with transverse gauge fields, by applying the dimensional regularization scheme developed in *Phys. Rev. B 92, 035141 (2015)*. We consider the cases of $U(1)$ and $U(1)\times U(1)$ transverse gauge couplings, and extract the nature of the renormalization group (RG) flow fixed points as well as the critical scalings. Our analysis allows us to treat a critical Fermi surface of a generic dimension $m$ perturbatively in an expansion parameter $\epsilon =(2-m) / (m+1)\,.$ One of our key results is that neither taking $m>1$ nor including higher-loop corrections alters the existence of an RG flow fixed line for the $U(1)\times U(1)$ theories, which was identified earlier for $m=1$ at one-loop order.'
author:
- Ipsita Mandal
bibliography:
- 'biblio.bib'
title: Critical Fermi surfaces in generic dimensions arising from transverse gauge field interactions
---
Introduction
============
Metallic states that lie beyond the framework of Landau Fermi liquid theory are often dubbed non-Fermi liquids. It is a theoretically challenging task to study such systems, and consequently there have been intensive efforts dedicated to building a framework to understand them [@holstein; @reizer; @leenag; @HALPERIN; @polchinski; @ALTSHULER; @Chakravarty; @eaKim; @nayak; @nayak1; @lawler1; @SSLee; @metlsach1; @metlsach; @chubukov1; @Chubukov; @mross; @Jiang; @ips2; @ips3; @Shouvik1; @Lee-Dalid; @shouvik2; @ips-uv-ir1; @ips-uv-ir2; @ips-subir; @ips-sc; @ips-c2; @andres1; @andres2; @Lee_2018; @ips-fflo]. They are also referred to as critical Fermi surface states, as the breakdown of Fermi liquid theory is brought about by the interplay between the soft fluctuations of the Fermi surface and some gapless bosonic fluctuations. These bosonic degrees of freedom can be massless scalar bosons, or the transverse components of gauge fields. A similar situation also arises in semimetals, where, instead of a Fermi surface, there is a Fermi node interacting with a long-ranged (unscreened) Coulomb potential, which gives rise to non-Fermi liquid behaviour [@abrikosov; @moon-xu; @rahul-sid; @ips-rahul]. Since the quasiparticles are destroyed, there is no obvious perturbative parameter in which one can carry out a controlled expansion, which would ultimately enable us to extract the universal properties. In this paper, we consider the case in which Fermi surfaces are coupled with emergent gauge fields [@Chakravarty; @MOTRUNICH; @LEE_U1; @PALEE; @MotrunichFisher; @nayak1; @mross; @ips2; @ips3]. This belongs to the category in which the critical boson carries zero momentum, and the quasiparticles lose coherence across the entire Fermi surface. An example in which the critical boson with zero momentum is a scalar is the Ising-nematic critical point [@metlsach1; @ogankivfr; @metzner; @delanna; @kee; @lawler1; @rech; @wolfle; @maslov; @quintanilla; @yamase1; @yamase2; @halboth; @jakub; @zacharias; @eaKim; @huh; @Lee-Dalid; @ips-uv-ir1; @ips-uv-ir2; @ips-subir; @ips-sc]. There are complementary cases in which the critical boson carries a finite momentum. Examples include the critical points involving spin density wave (SDW) and charge density wave (CDW) orders [@metlsach; @chubukov1; @Chubukov; @shouvik2; @ips-c2; @andres1; @andres2], and the FFLO order parameter [@ips-fflo].
An analytic approach [@senshank; @Lee-Dalid; @ips-uv-ir1; @ips-fflo] to deal with non-Fermi liquid quantum critical points is through dimensional regularization, in which the co-dimension of the Fermi surface is increased in order to identify an upper critical dimension $d=d_c$, and subsequently, to calculate the critical exponents in a systematic expansion involving the parameter $\epsilon = d_c - d_{\text{phys}}$ (where $d_{\text{phys}}$ is the actual/physical dimension of the system). This approach is especially useful, as it allows one to deal with critical Fermi surfaces of a generic dimension $m$ [@ips-uv-ir1; @ips-uv-ir2], representing a system with physical dimensions $d=d_{\text{phys}} = m+1$.
Another approach implements a controlled approximation through dynamical tuning, involving an expansion in the inverse of the number ($N$) of fermion flavours combined with a further expansion $\varepsilon = z_b-2$, where $z_b$ is the dynamical critical exponent of the boson field [@nayak1; @mross]. This amounts to modifying the kinetic term of a collective mode ($\phi(k)$) from $k^{2}\,|\phi(k)|^2 $ to $k^{1+\varepsilon}\,|\phi(k)|^2 $. A drawback of this approach is that this modification of the kinetic term leads to nonanalyticities in momentum space, which are equivalent to nonlocal hopping terms in real space. Hence, in this paper, we will employ the former approach of dimensional regularization, which maintains locality in real space.
The earlier works considering generic values of $d$ and $m$ involved the Ising-nematic order parameter [@ips-uv-ir1; @ips-uv-ir2], which represents quantum critical metals near a Pomeranchuk transition, where the critical boson couples to antipodal patches with the same sign of coupling strength [@metlsach1]. In contrast, a transverse gauge field couples to the two antipodal patches with opposite signs [@SSLee]. Here, we will implement the dimensional regularization procedure to determine the low-energy scalings of an $m$-dimensional (with $m\geq 1$) Fermi surface coupled with one or more transverse gauge fields. First, we will develop the formalism for a single $U(1)$ gauge field. Then we will extend it to the $U(1) \times U(1)$ case, which can describe a quantum phase transition between a Fermi liquid metal and an electrical insulator without any Fermi surface (deconfined Mott transition), or that between two metals having Fermi surfaces of finite but different sizes on either side of the transition (deconfined metal-metal transition) [@debanjan].
The paper is organized as follows. In Sec. \[modelu1\], we review the framework for applying dimensional regularization scheme to access the non-Fermi liquid fixed points perturbatively, and apply it to the case of a single transverse gauge field. In Sec. \[modelu2\], we carry out the computations for the scenario of quantum critical transitions involving two different kinds of fermions charged differently under the action of two transverse gauge fields. We conclude with a summary and some outlook in Sec. \[conclude\]. The details of the one-loop calculations have been provided in Appendix \[app:oneloop\].
Model involving a $U(1)$ transverse gauge field {#modelu1}
===============================================
We first consider an $m$-dimensional Fermi surface, which is coupled to a $U(1)$ transverse gauge field $a $ in $d=(m+1)$ space dimensions. The set-up is identical to Ref. [@ips-uv-ir1]. We review it here for the sake of completeness. As in earlier works [@Lee-Dalid; @ips-uv-ir1; @ips-uv-ir2], we want to characterize the resulting non-Fermi liquids through the scaling properties of the fermionic and bosonic Green’s functions. To do so, we focus on one point (say $K^*$) of the Fermi surface at which the fermion Green’s function is defined. The low energy effective theory involves fermions which are primarily scattered along the tangential directions of the Fermi surface, mediated by the critical boson. We assume the presence of the inversion symmetry, which implies that the fermions near $K^*$ are most strongly coupled with fermions near the antipodal point $-K^*$, since their tangent spaces coincide. Hence we write down a model including a closed Fermi surface divided into two halves centered at momenta $K^*$ and $-K^*$ respectively. The fermionic fields $\psi_{+,j}$ and $\psi_{-,j}$ represent the corresponding halves, as shown in Fig. \[fig:FS\]. In this coordinate system, the minimal Euclidean action that captures the essential description of the low energy physics is given by [@SSLee]: $$\begin{aligned}
S = & \sum \limits_{p=\pm} \sum_{j=1}^N \int dk\,
\psi_{p,j}^\dagger (k) \,\mathrm{i}
\left[ k_0 + p \,k_1 + {{{\mathbf{L}}}}_{(k)}^2 \right ] \psi_{p,j}(k)
\nn &
+ \frac{1}{2} \int dk
\left[ k_0^2 + k_1^2 + {{{\mathbf{L}}}}_{(k)}^2 \right]
a^\dagger(k) \, a(k) \nonumber \\
& + \frac{e}{\sqrt{N}} \sum_{ p=\pm} p \sum_{j=1}^N
\int dk\,dq \, a(q) \, \psi^\dagger_{p,j}(k+q)
\, \psi_{ p,j}(k) \, ,
\label{actu1}\end{aligned}$$ where $k =(k_0,k_1, {{{\mathbf{L}}}}_{(k)})$ is the $(d+1)$-dimensional energy-momentum vector with $dk \equiv \frac{d^{d+1} k}{(2\pi)^{d+1} }\,,$ and $e$ is the transverse gauge coupling. The fermion field with flavor $j=1,2,\ldots,N$, frequency $k_0$ and momentum $K_i^*+k_i$ ($-K_i^*+k_i$), with $1 \leq i \leq d$, is represented by $\psi_{+,j}(k_0,k_i)$ $\left ( \psi_{-,j}(k_0,k_i) \right)$. The components $k_{1}$ and ${{{\mathbf{L}}}}_{(k)} ~\equiv~ (k_{2}, k_{3},\ldots, k_{d})$ represent the momentum components perpendicular and parallel to the Fermi surface at $\pm K^*$, respectively. We have rescaled the momentum such that the absolute value of the Fermi velocity and the quadratic curvature of the Fermi surface at $\pm K^*$ can be set to one. Since the Fermi surface is locally parabolic, the scaling dimensions of $k_1$ and ${{{\mathbf{L}}}}_{(k)}$ are equal to $1$ and $1/2$, respectively. For a generic convex Fermi surface, there can be cubic and higher order terms in ${{{\mathbf{L}}}}_{(k)}$, but we can ignore them as they are irrelevant in the renormalization group (RG) sense. Since we have a compact Fermi surface, the range of ${{{\mathbf{L}}}}_{(k)}$ in $\int dk$ is finite and is set by the size of the Fermi surface. This range is of the order of $\sqrt{k_F}$ in this coordinate system. To ensure this finite integration range, we will include an exponential cut-off $\exp \left \lbrace- \frac {{{{\mathbf{L}}}}_{(k)}^2} { \mu \, {\tilde{k}}_F } \right \rbrace$ while using the fermion Green’s function in loop integrations, which will capture the compactness of the Fermi surface in a minimal way without including the details of the shape. This can be made explicit by including the inverse of this factor in the kinetic part of the fermion action.
In order to control the gauge coupling $e$ for a given $m$, we tune the co-dimension of the Fermi surface [@senshank; @Lee-Dalid; @shouvik2] to determine the upper critical dimension $d=d_c$. To preserve the analyticity of the theory in momentum space (locality in real space) with general co-dimensions, we introduce the spinors [@Lee-Dalid; @shouvik2] $$\begin{aligned}
\Psi_j^T(k) = \left(
\psi_{+,j}(k),
\psi_{-,j}^\dagger(-k)
\right) \text{ and } \bar \Psi_j \equiv \Psi_j^\dagger \,\gamma_0\,,\end{aligned}$$ and write an action that describes the $m$-dimensional Fermi surface embedded in a $d$-dimensional momentum space: $$\begin{aligned}
\label{actu12}
S =& \sum_{j} \int dk\, \bar \Psi_j(k) \,\mathrm{i}
\left[ {{\mathbf{\Gamma}}} \cdot {{\mathbf{K}}} + \gamma_{d-m} \, \delta_k \right ]
\Psi_{j}(k) \, \exp \Big \lbrace \frac {{{{\mathbf{L}}}}_{(k)}^2} { \mu \, {\tilde{k}}_F } \Big \rbrace \nonumber\\
&+
\frac{1}{2} \int dk \,
{{{\mathbf{L}}}}_{(k)}^2\, a^\dagger(k) \, a(k) \nonumber \\
&+ \frac{ e \, \mu^{x/2} } {\sqrt{N}} \sum_{j}
\int dk \,dq \,
a(q) \, \bar \Psi_{j}(k+q)\, \gamma_{0}\, \Psi_{j}(k) \,,
\nn x = & \frac{4+m-2d} {2} \,.\end{aligned}$$ Here, ${{\mathbf{K}}} ~\equiv ~(k_0, k_1,\ldots, k_{d-m-1})$ includes the frequency and the first $(d-m-1)$ components of the $d$-dimensional momentum vector, ${{{\mathbf{L}}}}_{(k)} ~\equiv~ (k_{d-m+1}, \ldots, k_{d})$ and $\delta_k = k_{d-m}+ {{{\mathbf{L}}}}_{(k)}^2$. In the $d$-dimensional momentum space, $k_1,..,k_{d-m}$ (${{{\mathbf{L}}}}_{(k)}$) represent(s) the $(d-m)$ ($m$) directions perpendicular (tangential) to the Fermi surface. ${{\mathbf{\Gamma}}} \equiv (\gamma_0, \gamma_1,\ldots, \gamma_{d-m-1})$ represents the gamma matrices associated with ${{\mathbf{K}}}$. Since we are interested in a value of co-dimension $1 \leq d-m \leq 2$, we consider only $2 \times 2$ gamma matrices with $\gamma_0= \sigma_y , \, \gamma_{d-m} = \sigma_x$. In the quadratic action of the boson, only ${{{\mathbf{L}}}}_{(k)}^2 \, a^\dagger (k)\, a(k)$ is kept, because $|{{\mathbf{K}}}|^2 + k_{d-m}^2$ is irrelevant under the scaling where $k_0,k_1,..,k_{d-m}$ have dimension $1$ and $k_{d-m+1},..,k_d$ have dimension $1/2$. In the presence of the $(m+1)$-dimensional rotational symmetry, all components of $k_{d-m}, ..., k_d$ should be equivalent. The rotational symmetry of the bare fermion kinetic part in the $(d-m)$-dimensional space spanned by ${{\mathbf{K}}}$ components is destroyed by the coupling with the gauge boson, as the latter involves the $\gamma_0$ matrix. With this in mind, we will denote the extra (unphysical) co-dimensions by the vector $\tilde{{{\mathbf{K}}}}$, and the corresponding gamma matrices by $\tilde{{{\mathbf{\Gamma}}}}$.
Since the scaling dimension of the gauge coupling is equal to $x$, we have made $e$ dimensionless by using a mass scale $\mu$. We have also defined a dimensionless parameter for the Fermi momentum, $ {\tilde {k}}_F = k_F/\mu$ using this mass scale. The spinor $\Psi_j$ exhibits an energy dispersion with two bands $E_k =
\pm \sqrt{ \sum \limits_{i=1}^{d-m-1} k_i^2 + \delta_k^2 } \,,$ and this gives an $m$-dimensional Fermi surface embedded in the $d$-dimensional momentum space, defined by the $d-m$ equations: $k_i = 0$ for $i=\lbrace 1,\ldots,d-m-1 \rbrace$ and ${ k}_{d-m} = - {{{\mathbf{ L}}}}_{(k)}^2$. Basically, the extra ($d-m-1$) directions are gapped out so that the Fermi surface locally reduces to a sphere $S^m$ (a sphere in an $(m+1)$-dimensional Euclidean space).
When we perform dimensional regularization, the theory implicitly has an ultraviolet (UV) cut-off for ${{\mathbf{K}}}$ and $k_{d-m}$, which we denote by $\Lambda$. It is natural to choose $\Lambda = \mu$, and the theory has two important dimensionless parameters: $e$ and $\tilde k_F = k_F / \Lambda$. If $k$ is the typical energy at which we probe the system, the limit of interest is $k \ll \Lambda \ll k_F$. This is because $\Lambda$ sets the largest energy (equivalently, momentum perpendicular to the Fermi surface) fermions can have, whereas $k_F$ sets the size of the Fermi surface. We will consider the RG flow generated by changing $\Lambda$ and requiring that low-energy observables are independent of it. This is equivalent to a coarse-graining procedure of integrating out high-energy modes away from the Fermi surface. Because the zero-energy modes are not integrated out, $k_F/\Lambda$ keeps increasing in the coarse-graining procedure. We treat $k_F$ as a dimensionful coupling constant that flows to infinity in the low-energy limit. Physically, this describes the fact that the size of the Fermi surface, measured in units of the thickness of the thin shell around the Fermi surface, diverges in the low-energy limit. This is illustrated in Fig. \[fig:FS\].
Dimensional Regularization {#dr}
--------------------------
To gain a controlled approximation of the physics of the critical Fermi surface, we fix $m$ and tune $d$ towards a critical dimension $d_c\,,$ at which quantum corrections depend logarithmically on $\Lambda$ within the range $\Lambda \ll k_F$. In order to identify the value of $d_c$ as a function of $m$, we consider the one-loop quantum corrections.
The bare propagator for fermions is given by: $$\begin{aligned}
\label{propf}
G_0 (k) = -\mathrm{i}\, \frac{{{\mathbf{\Gamma}}} \cdot {{\mathbf{K}}} +
\gamma_{d-m} \,\delta_k}
{{{\mathbf{K}}}^2 + \delta_k^2} \, \times \, \exp \Big \lbrace - \frac {{{{\mathbf{L}}}}_{(k)}^2} { \mu \, {\tilde{k}}_F } \Big \rbrace \,.\end{aligned}$$ Since the bare boson propagator is independent of $k_{0},..,k_{d-m}$, the loop integrations involving it are ill-defined, unless one resums a series of diagrams that provides a non-trivial dispersion along those directions. This amounts to rearranging the perturbative expansion such that the one-loop boson self-energy is included at the ‘zero’-th order. The dressed boson propagator, which includes the one-loop self-energy (see Fig. \[fig:bos\]), is given by: $$\begin{aligned}
\label{babos}
& \Pi_1 (k) = - e^2 \mu^x
\int dq\, \text{Tr}
\left[ \gamma_{0}\, G_0 (k+q)\,\gamma_{0}\, G_0 (q) \right ]
\nn & =
- \frac{ \beta(d,m)\, e^2 \, \mu^x \left( \mu \, {\tilde{k}}_F \right )^{\frac{m-1}{2}} }
{ |{{\mathbf{L}}}_{(k)}|}
\nn & \qquad \times \left[ k_0^2 + ( m+1-d)\,{\tilde {{{\mathbf{K}}}}}^2 \right]
|{{\mathbf{K}}}|^{d-m-2} \,,\end{aligned}$$ where $$\begin{aligned}
\label{eqbetad}
\beta(d,m)
=
\frac{ \pi ^{\frac{4-d}{2}}
\,\Gamma (d-m) \,\Gamma (m+1-d) }
{2^{\frac{4d-m-1}{2} }\,\Gamma ^2 \left(\frac{ d-m+2} {2}\right) \Gamma \left(\frac{m+1-d}{2} \right)} \,.\end{aligned}$$ This expression is valid to the leading order in $k/k_F$, and for $|{{\mathbf{K}}}|^2/|{{\mathbf{L}}}_{(k)}|^2, ~\delta_k^2/|{{\mathbf{L}}}_{(k)}|^2 \ll k_F$ [^1]. We provide the details of computation for the expression of $\Pi_1(k) $ in Appendix \[app:oneloopbos\]. For $m>1$, the boson self-energy diverges in the $k_F \rightarrow \infty$ limit. This is due to the fact that the Landau damping gets stronger for a system with a larger Fermi surface, as the boson can decay into particle-hole excitations that encompass the entire Fermi surface for $m>1$. This is in contrast with the case for $m=1$, where a low-energy boson with a given momentum can decay into particle-hole excitations only near the isolated patches whose tangent vectors are parallel to that momentum. Eq. (\[babos\]) is valid when there exists at least one direction that is tangential to the Fermi surface ($m \geq 1$). Henceforth, we will use the dressed propagator: $$\begin{aligned}
\label{babosprop}
D_1 (q) = \frac{1}{{{{\mathbf{L}}}}_{(q)}^2 - \Pi_1 (q)} \,. \end{aligned}$$ for any loop calculation.
The next step is to compute the one-loop fermion self-energy $\Sigma_1 (q)$, as shown in Fig. \[fig:ferm\]. Again, the details of the calculation are provided in Appendix \[app:oneloopfer\]. This diverges logarithmically in $\Lambda$ at the critical dimension $$\begin{aligned}
d_c(m) = m + \frac{3}{m+1} \,.\end{aligned}$$ The physical dimension is given by $d=d_c(m) - \epsilon$. In the dimensional regularization scheme, the logarithmic divergence in $\Lambda$ turns into a pole in $\frac{1} {\epsilon}$: $$\begin{aligned}
\label{sigmau1}
\Sigma_1(q) = & -\frac{ \mathrm{i} \,
e^{\frac{2\,(m+1)} {3} }
\left[ u_0 \,
\gamma_0\,q_0
+u_1
\left( {\tilde{{{\mathbf{\Gamma}}}}} \cdot {\tilde{{{\mathbf{Q}}}}} \right)
\right] }
{N \, {\tilde{k}}_F ^{ \frac{(m-1)(2-m) } {6}}
\, \epsilon}
\nn & + \text{ finite terms }\end{aligned}$$ to the leading order in $q/k_F\,,$ where $u_0\,,u_1\geq 0\,$. For the cases of interest, we have computed these coefficients numerically to obtain: $$\begin{aligned}
\label{valu}
\begin{cases}
u_0 = 0.0201044 \,, \quad u_1 =1.85988 & \text{ for } m=1 \\
u_0 = u_1 = 0.0229392 & \text{ for } m=2
\end{cases} \,.\end{aligned}$$
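For orientation, evaluating $\epsilon = d_c(m) - d$ at the physical dimension $d_{\text{phys}} = m+1$ reproduces the expansion parameter quoted in the abstract: $$\begin{aligned}
\epsilon \big|_{d=m+1} = m + \frac{3}{m+1} - (m+1) = \frac{2-m}{m+1}\,,\end{aligned}$$ which equals $1/2$ for $m=1$ and vanishes for $m=2$, so that the $m=2$ Fermi surface sits exactly at its upper critical dimension $d_c(2)=3$.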
The one-loop vertex correction in Fig. \[fig:vert\] is non-divergent, and hence does not contribute to the RG flows. This is to be contrasted with the Ising-nematic case where it is guaranteed to vanish due to a Ward identity [@Lee-Dalid].
We can vary the dimension of the Fermi surface from $ m = 1$ to $ m=2 $ while keeping $\epsilon$ small, thus providing a controlled description for any $m$ between $1$ and $2$. For a given $m$, we tune $d$ such that $\epsilon = d_c(m) - d $ is small. To remove the UV divergences in the $\epsilon \rightarrow 0$ limit, we add counterterms using the minimal subtraction scheme. The counterterms take the same form as the original local action:
$$\begin{aligned}
\label{actcount}
S_{CT} = & \sum_{j} \int dk\, \bar \Psi_{j}(k)
\, \mathrm{i} \,\Bigl[
A_{0} \, \gamma_0 \,k_0 + A_{1} \,\tilde{{{\mathbf{\Gamma}}}} \cdot \tilde{ {{\mathbf{K}}}}
+ A_2 \, \gamma_{d-m} \, \delta_k
\Bigr] \Psi_{j}(k) \, \exp \Big \lbrace \frac {{{{\mathbf{L}}}}_{(k)}^2}
{ \mu \, {\tilde{k}}_F } \Big \rbrace
\nn & + \frac{A_{3}}{2} \int dk\,
{{{\mathbf{L}}}}_{(k)}^2\, a^\dagger(k) \, a(k)
+ A_{4} \frac{ \mathrm{i} \,e \, \mu^{x/2} }{\sqrt{N}}
\sum_{j} \int dk \, dq \,
a(q) \, \bar \Psi_{ j}(k+q) \,\gamma_{0} \, \Psi_{j}(k) \, ,\end{aligned}$$
where $$\begin{aligned}
A_{\zeta} =
\sum_{\lambda=1}^\infty \frac{Z^{(\lambda)}_{ \zeta}
(e_,\tilde{k}_F)}{\epsilon^\lambda} \text{ with } \zeta=0,1,2,3 , 4\,.\end{aligned}$$
In the mass-independent minimal subtraction scheme, these coefficients depend only on the scaled coupling $e$ and the scaled Fermi momentum $\tilde{k}_F$. As discussed earlier, we expect $\tilde{k}_F$ to act as another coupling for $m>1$, and hence it must be included in the RG flow equations. The coefficients can be further expanded in the number of loops, modulo the one-loop boson self-energy, which is already included in Eq. (\[babosprop\]). Note that the $(d-m-1)$-dimensional rotational invariance in the space perpendicular to the Fermi surface guarantees that each term in $ \tilde{{{\mathbf{\Gamma}}}} \cdot \tilde{{{\mathbf{K}}}}$ is renormalized in the same way. Similarly, the sliding symmetry along the Fermi surface guarantees that the form of $\delta_k$ is preserved. However, $A_0$, $A_1$ and $A_2$ are in general different due to a lack of the full rotational symmetry in the $(d+1)$-dimensional spacetime. Note the difference from the Ising-nematic case, where we had $A_0 = A_1$, as the rotational symmetry there involved the full $(d-m)$-dimensional subspace.
Adding the counterterms to the original action, we obtain the renormalized action which gives the finite quantum effective action:
$$\begin{aligned}
\label{act7}
S_{ren} = & \sum_{j} \int d k^B
\, \bar \Psi_{j}^B(k^B) \,\mathrm{i}
\left[ {{\mathbf{\Gamma}}} \cdot {{\mathbf{K}}}^B + \gamma_{d-m} \delta_{k^B} \right ] \Psi_{j}^B(k^B)
\, \exp \left \lbrace \frac {{{{\mathbf{L}}}}_{(k^B)}^2} { k_{F^B} } \right \rbrace
+
\frac{1}{2} \int d k^B \,
{{{\mathbf{L}}}}_{(k^B)}^2\, { a^B } ^\dagger (k^B) \,\,\,a^B(k^B) \nonumber \\
&+ \frac{\mathrm{i} \, e^B }{\sqrt{N}} \sum_{j}
\int d k^B \, d q^B \,
a^B(q^B) \, \bar \Psi_{j}^B(k^B+q^B)
\, \gamma_{0} \Psi_{j}^B(k^B) \, ,\end{aligned}$$
where $$\begin{aligned}
& k_{0}^B = \frac{Z_0} {Z_2}\,k_0\,,\quad
\tilde{{{\mathbf{K}}}}^B = \frac{Z_1} {Z_2} \, \tilde{{{\mathbf{K}}}} \, , \quad
k_{d-m}^B = k_{d-m} \, ,
\nn & {{{\mathbf{L}}}}_{(k^B)} = {{{\mathbf{L}}}}_{(k)} \,, \quad
\Psi_j^B(k^B) = Z_{\Psi}^{\frac{1}{2}}\, \Psi_j(k)\,,
\nn & a^B(k^B) = Z_{a}^{\frac{1}{2}}\, a(k)\,, \quad
k_{F}^B = k_F =\mu \, {\tilde{k}}_F \,,
\nn &
Z_{\Psi} = \frac{Z_2^{d-m+1} } { Z_0\, {Z_1 }^{d-m-1}}\,,\quad
Z_{a} = \frac{Z_{3}\, Z_2^{d-m}} {Z_0\, {Z_1 }^{d-m-1}}\,,\nn
& e^B= Z_{e}\,e\,\mu^{\frac{x}{2}}\,, \quad
Z_{e}= \frac{ Z_{4} \, Z_2^{\frac{d-m} {2} -1}}
{\sqrt{ Z_0\, Z_{3}} \, {Z_1 }^{\frac{d-m-1} {2}} }\,.\end{aligned}$$ Here, $$\begin{aligned}
Z_{\zeta} = 1 + A_{\zeta}\,.\end{aligned}$$ The superscript “B" denotes the bare fields, couplings, and momenta. In Eq. (\[act7\]), there is a freedom to change the renormalizations of the fields and the renormalization of momentum without affecting the action. Here we fix the freedom by requiring that $\delta_{k^B} = \delta_k$. This amounts to measuring scaling dimensions of all other quantities relative to that of $\delta_k$.
Let $z$ be the dynamical critical exponent, $\tilde z$ be the critical exponent along the extra spatial dimensions, $\beta_e $ be the beta function for the coupling $e$, $ \beta_{k_F}$ be the beta function for ${\tilde k}_F$, and $\eta_\Psi$ ($\eta_a$) be the anomalous dimension for the fermions (gauge boson). These are explicitly given by: $$\begin{aligned}
& z = 1 + \frac{ \partial \ln (Z_0) }{\partial \ln \mu}\, , \quad
\tilde{z} = 1 + \frac{ \partial \ln (Z_1) }{\partial \ln \mu}\, , \quad
\eta_\Psi = \frac{1}{2} \frac{ \partial \ln Z_\Psi}{\partial \ln \mu} \, ,
\nn &\eta_{a} = \frac{1}{2} \frac{ \partial \ln Z_{a}}{\partial \ln \mu} \,,\quad
\beta_{k_F}({\tilde k}_F) = \frac{\partial {\tilde k}_F}{\partial \ln \mu} \, , \quad
\beta_{e} = \frac{\partial e}{\partial \ln \mu}\, .
\label{beta1}\end{aligned}$$ In the $\epsilon \rightarrow 0$ limit, we require solutions of the form: $$\begin{aligned}
\label{solexp}
& z=z^{(0)}\,,\quad \tilde z={\tilde z}^{(0)}\,,\nn &
\eta_\Psi = \eta_\Psi^{(0)} + \eta_\Psi^{(1)} \epsilon \,,
\quad \eta_a = \eta_a^{(0)} + \eta_a^{(1)} \epsilon \,.\end{aligned}$$
RG Flows At One-Loop Order
--------------------------
To one-loop order, the counterterms are given by $Z_\zeta = 1 + \frac{Z_{\zeta}^{(1)}} {\epsilon}\,.$ Collecting all the results, we find that only $$\begin{aligned}
Z_{0}^{(1)} = -\frac{ u_0\, \tilde{e} } {N } \text{ and }
Z_{1}^{(1)} = - \frac{ u_1 \, \tilde{e} } {N }\end{aligned}$$ are nonzero, where $$\begin{aligned}
\tilde{e} =\frac{e^{ \frac{2 \,(m+1) } {3} }}
{ {\tilde{k}}_F ^{ \frac{(m-1) (2-m)}{6} }}\,.\end{aligned}$$
Then the one-loop beta functions, which dictate the flow of $\tilde k_F$ and $e$ with increasing energy scale $\mu$, are given by: $$\begin{aligned}
& \beta_{k_F} = - {\tilde k}_F \,,
\quad (1-z)\, Z_0= -\beta_{e} \, \frac{\partial Z_0} {\partial e}
+ {\tilde k}_F \, \frac{\partial Z_0} {\partial \tilde{k}_F} \,, \nn
& (1-\tilde z)\, Z_1= - \beta_{e} \, \frac{\partial Z_1} {\partial e}
+ {\tilde k}_F \, \frac{\partial Z_1} {\partial \tilde{k}_F} \,, \nn
& \beta_{e} = - \frac{ \epsilon
+ \frac{2-m}{m+1} \,(1-\tilde z)+1-z } {2}\,e \,,\nn
& \eta_\Psi
=\eta_{a} = \frac{ 1-z + (1-\tilde z)\, (d-m-1) } {2}\,.
\label{beta1}\end{aligned}$$
Solving these equations using the required form outlined in Eq. (\[solexp\]), we get: $$\begin{aligned}
z=&\, 1 - \frac{ (m+1) \,u_0 \,\tilde e } {3 \,N + (m+1)\, u_1\,\tilde e}\,,
\nn
\tilde z= &\, 1 - \frac{ (m+1) \,u_1 \,\tilde e } {3 \,N + (m+1)\, u_1\,\tilde e}\,,
\nn -\frac{\beta_e}{e} = & \,
\frac{\epsilon }{2} +\frac{ (m-1) (2-m)}{4 (m+1)}
-\frac{ (m+1) \,u_0 +(2-m) \,u_1} {6 \, N} \,\tilde{e} \,.\end{aligned}$$ The second term indicates that $e$ remains strictly relevant in the infrared (IR) at $d=d_c(m)$ for $1< m < 2\,.$ However, the last term implies that the higher order corrections are controlled not by $e$, but by an effective coupling $\tilde e\,.$ Indeed, the scaling dimension of $\tilde e$ vanishes at $d_c $ for $1\leq m \leq 2\,.$ The beta function of this effective coupling is given by: $$\begin{aligned}
\frac{\beta_{\tilde e} } {\tilde e}
= -\frac{(m+1) \,\epsilon}{3}
+ \frac{ \left (m+1 \right )
\left[\, \left (m+1 \right ) u_0+ \left ( 2-m\right ) u_ 1 \, \right ]}
{9 \, N} \,{\tilde e}\,.\end{aligned}$$ The interacting fixed point is obtained from $\beta_{\tilde e} = 0$, and takes the form: $$\begin{aligned}
{\tilde{e}}^* = \frac{3 \,N\, \epsilon }
{(m+1)\, u_0+(2-m)\, u_1 } +\mathcal{O} \left( \epsilon^2 \right) .\end{aligned}$$ It can be checked that this is an IR stable fixed point by computing the first derivative of $\beta_{\tilde e}\,.$ The critical exponents at this stable fixed point are given by: $$\begin{aligned}
\label{critex}
& z^*= 1+\frac{(m+1) \,u_1 \,\epsilon }
{(m+1)\,u_0+(2-m) \,u_1}\,,\nn
& {\tilde z}^*= 1+\frac{(m+1) \,u_0 \,\epsilon }
{(m+1)\,u_0+(2-m) \,u_1}
\,,\nn
& \eta_\Psi^* = \eta_a^* = -\frac{\epsilon}{2} \,.\end{aligned}$$
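The fixed-point location and its IR stability can be checked directly from the displayed form of $\beta_{\tilde e}$. The following sympy sketch (ours, not part of the original computation) solves $\beta_{\tilde e}=0$ and evaluates $\partial\beta_{\tilde e}/\partial\tilde e$ at the nontrivial root; the positive slope, combined with the flow being defined with respect to increasing $\mu$, is what makes the fixed point IR stable:

```python
import sympy as sp

m, eps, N, u0, u1 = sp.symbols('m epsilon N u_0 u_1', positive=True)
et = sp.Symbol('etilde')

# One-loop beta function of the effective coupling, as displayed above.
beta = et * (-(m + 1) * eps / 3
             + (m + 1) * ((m + 1) * u0 + (2 - m) * u1) * et / (9 * N))

# Nontrivial zero of the beta function.
roots = sp.solve(sp.Eq(beta, 0), et)
et_star = [s for s in roots if s != 0][0]
print(sp.simplify(et_star))              # 3*N*epsilon/((m+1)*u_0 + (2-m)*u_1)

# Slope of the beta function at the fixed point: positive => IR stable,
# since the flow towards the infrared corresponds to decreasing mu.
slope = sp.simplify(sp.diff(beta, et).subs(et, et_star))
print(slope)                             # (m+1)*epsilon/3 > 0
```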
Higher-Loop Corrections
-----------------------
We will now discuss the implications of the higher-loop corrections, without actually computing the Feynman diagrams. For $m>1$, we expect a nontrivial UV/IR mixing to be present, as was found in Ref. , which makes the results one-loop exact. In other words, all higher-loop corrections would vanish for $m>1$ in the limit $k_F \rightarrow \infty\,,$ being suppressed by positive powers of $1/k_F\,.$ For $m=1$, we will use the arguments and results of Ref. to assume a generic form of the corrections coming from two-loop diagrams. Henceforth, we will just focus on $m=1$ in this subsection.
The two-loop bosonic self-energy should turn out to be UV finite, and hence will renormalize the factor $\beta(\frac{5}{2},1)$ (see Eq. \[eqbetad\]) by a finite amount $\beta_2 = \frac{\kappa\, {\tilde e}} {N}\,,$ where $\kappa$ is a finite number. Then the bosonic propagator at this order will take the form: $$\begin{aligned}
\label{babos2}
D_2 (q)
= \frac{1}{{{{\mathbf{L}}}}_{(q)}^2
+ \frac{\left[ \beta \big(\frac{5} {2},1 \big) + \frac{\kappa\, \tilde e }
{N}\right] e^2 \,\mu^\epsilon }
{ |{{\mathbf{L}}}_{(q)}|}
\times \frac{ q_0^2 + \left ( \epsilon- \frac{1}{2} \right)\,{\tilde {{{\mathbf{Q}}}}}^2 }
{|{{\mathbf{Q}}}|^{ \frac{1 } {2}+\epsilon }}
} \,. \end{aligned}$$ From this, the fermion self-energy now receives a correction $$\begin{aligned}
\Sigma_{2}^{(1)}(k) =& \left [
\left \lbrace \frac{ \beta \big(\frac{5} {2},1 \big)}
{\beta \big(\frac{5} {2},1 \big) +\frac{ \kappa \, \tilde e}
{N} }\right \rbrace ^{\frac{1}{3}}
-1 \right ] \Sigma_1(k) \nn
= & -\frac { \kappa \, \tilde e }
{3 \,N\, \beta \big(\frac{5} {2},1 \big) }\,\Sigma_1(k)
+ \text{ finite terms} \,.\end{aligned}$$ Now the two-loop fermion self-energy diagrams, after taking into account the counterterms obtained from one-loop corrections, take the form: $$\begin{aligned}
\Sigma_{2}^{(2)}(k) =&
-\frac{ \mathrm{i} \,{\tilde e}^2
\Big[\,
\tilde v_{0} \,\gamma_0\,q_0 +
\tilde v_1 \, \left( \tilde{{{\mathbf{\Gamma}}} } \cdot \tilde{{{\mathbf{Q}}} } \right)
+ w\, \gamma_{d-1}\,\delta_k \,\Big ] }
{ N^2\,\epsilon}
\nn & + \text{ finite terms}\,.\end{aligned}$$ Adding the two contributions, the total two-loop fermion self-energy can generically be written as: $$\begin{aligned}
\Sigma_{2}(k) =&
-\frac{ \mathrm{i} \, \tilde e^{2}
\Big[\,v_{0} \,\gamma_0\,q_0 + v_1 \, \left( \tilde{{{\mathbf{\Gamma}}} } \cdot \tilde{{{\mathbf{Q}}} } \right)
+ w\, \gamma_{d-1}\,\delta_k \,\Big ] }
{ N^2\,\epsilon}
\nn & + \text{ finite terms}\,.\end{aligned}$$ There will also be a divergent vertex correction which will lead to a nonzero $Z_4^{(1)}$ of the form $-\frac{\tilde e^{2} \, \mu^{ \frac{4 \epsilon} {3}} \,y}{N^2}\,.$ All these now lead to the nonzero coefficients: $$\begin{aligned}
&Z_{0}^{(1)} = -\frac{ u_0\,\tilde{e} } {N }
-\frac{ v_0\, \tilde{e}^2 } {N^2 } \,,\quad
Z_{1}^{(1)} = - \frac{ u_1\,\tilde{e} } {N }
-\frac{ v_1\, \tilde{e}^2 } {N^2 } \,,\nn
& Z_{2}^{(1)} =
-\frac{ w\, \tilde{e}^2 } {N^2 } \,,\quad
Z_{4}^{(1)} =
-\frac{ y\,\tilde{e}^2 } {N^2 }\,,\end{aligned}$$ resulting in
$$\begin{aligned}
\frac{ \beta_{\tilde e}} {\tilde e}
= -\frac{2
\left \lbrace 2 \,u_1 \, {\tilde e}
+3 \,N \right \rbrace \epsilon} {9 \, N}
+\frac{2
\left \lbrace 2\,{\tilde e}
\left(2 \,u_0\,u_1+ u_1^2+6 \,v_0 +3 \,v_1-9 \,w\right)
+3 \,N\left (2 \,u_0+ u_1 \right )\right \rbrace \, {\tilde e}
}
{27 \, N^2} \,.\end{aligned}$$
At the fixed point, we now have: $$\begin{aligned}
\frac{\tilde e^*} {N}
= \frac{3 \,\epsilon }
{2 \,u_0+ u_1}
-\frac{18 \left (2 \,v_0+v_1-3 \,w \right )}
{ \left (2 \,u_0+ u_1 \right )^3}\, \epsilon ^2
+\mathcal{O} \left( \epsilon^3 \right) .\end{aligned}$$ This shows that the nature of the stable non-Fermi liquid fixed point remains unchanged, although its location (as well as any critical scaling) gets corrected by one higher power of $\epsilon$.
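As a consistency check of the quoted $\mathcal{O}(\epsilon^2)$ shift of the fixed point, one can solve the two-loop beta function perturbatively in $\epsilon$. The short sympy sketch below (ours, not part of the original computation) inserts the ansatz $\tilde e = a\,\epsilon + b\,\epsilon^2$ into the displayed expression for $\beta_{\tilde e}/\tilde e$ and matches powers of $\epsilon$:

```python
import sympy as sp

eps, N, u0, u1, v0, v1, w = sp.symbols('epsilon N u_0 u_1 v_0 v_1 w', positive=True)
a, b = sp.symbols('a b')
et = a * eps + b * eps**2              # ansatz for the fixed-point coupling

# Two-loop beta function divided by etilde, as displayed above (m = 1).
expr = (-2 * (2 * u1 * et + 3 * N) * eps / (9 * N)
        + 2 * (2 * et * (2 * u0 * u1 + u1**2 + 6 * v0 + 3 * v1 - 9 * w)
               + 3 * N * (2 * u0 + u1)) * et / (27 * N**2))

# Set the O(eps) and O(eps^2) coefficients to zero and solve for a and b.
expanded = sp.expand(expr)
sol = sp.solve([expanded.coeff(eps, 1), expanded.coeff(eps, 2)], [a, b], dict=True)[0]
print(sp.simplify(sol[a]))   # 3*N/(2*u_0 + u_1)
print(sp.simplify(sol[b]))   # -18*N*(2*v_0 + v_1 - 3*w)/(2*u_0 + u_1)**3
# Dividing by N reproduces the expansion of etilde*/N quoted in the text.
```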
Model involving two $U(1)$ transverse gauge fields {#modelu2}
==================================================
In this section, we consider the $m$-dimensional Fermi surfaces of two different kinds of fermions (denoted by subscripts $1$ and $2$) coupled to two U(1) gauge fields, $a_c$ and $a_s$, in the context of deconfined Mott transition and deconfined metal-metal transition studied in Ref. (for $m=1$). The fermion fields $\psi_{1,\pm,j}$ and $\psi_{2,\pm,j}$ carry negative charges under the even ($a_c + a_s$) and odd ($a_c - a_s$) combinations of the gauge fields, and hence the action takes the form:
$$\begin{aligned}
S & = \sum \limits_{\alpha=1,2} \sum \limits_{p=\pm} \sum_{j=1}^N \int dk\,
\psi_{\alpha,p,j}^\dagger (k)
\Bigl[ \mathrm{i}\, k_0 + p \,k_{d-m} + {{{\mathbf{L}}}}_{(k)}^2 \Bigr] \psi_{\alpha,p,j}(k)
+ \frac{1}{2} \int dk \, {{{\mathbf{L}}}}_{(k)}^2 \left[
a_c^\dagger(k) \, a_c(k) + a_s^\dagger(k) \, a_s(k) \right ] \nonumber \\
& \quad
+\sum_{\alpha=1,2} \sum_{p=\pm} p\sum_{j=1}^N \int dk \, dq
\left[
\frac{ (-1)^\alpha \, e_s}{\sqrt{N}} \, a_s(q) \, \psi^\dagger_{\alpha,p,j}(k+q)
\, \psi_{\alpha, p,j}(k)
- \frac{e_c}{\sqrt{N}} \, a_c(q) \, \psi^\dagger_{\alpha,p,j}(k+q)
\, \psi_{\alpha, p,j}(k)\right ] ,
\label{actu2}\end{aligned}$$
where $e_c $ and $e_s $ denote the gauge couplings for the gauge fields $a_c $ and $a_s $ respectively. We will perform dimensional regularization on this action and determine the RG fixed points. Our formalism allows us to extend the discussion beyond $m=1$, and also to easily compute higher-loop corrections.
Dimensional Regularization {#dimensional-regularization}
--------------------------
Proceeding as in the single transverse gauge field case, we add artificial co-dimensions for dimensional regularization after introducing the two-component spinors: $$\begin{aligned}
& \Psi_{\alpha,j}^T(k) = \left(
\psi_{\alpha,+,j}(k),
\psi_{\alpha,-,j}^\dagger(-k)
\right) \text{ and } \bar \Psi_{\alpha,j} \equiv \Psi_{\alpha, j}^\dagger \,\gamma_0\,,
\nn
& \text{with } \alpha =1,2 \,.\end{aligned}$$ The dressed gauge boson propagators, including the one-loop self-energies, are given by: $$\begin{aligned}
\label{babos2}
& \Pi^c_1 (k) =
-\frac{ \beta(d,m)\, e_c^2 \, \mu^x \left( \mu \, {\tilde{k}}_F \right )^{\frac{m-1}{2}}
}
{ |{{\mathbf{L}}}_{(k)}|}
\nn & \hspace{ 1.5 cm} \times
\left[ k_0^2 + ( m+1-d)\,{\tilde {{{\mathbf{K}}}}}^2 \right]
|{{\mathbf{K}}}|^{d-m-2} \,, \end{aligned}$$ and $$\begin{aligned}
& \Pi^s_1 (k) =
-\frac{ \beta(d,m)\, e_s^2 \, \mu^x \left( \mu \, {\tilde{k}}_F \right )^{\frac{m-1}{2}}
}
{ |{{\mathbf{L}}}_{(k)}|}
\nn & \hspace{ 1.5 cm} \times
\left[ k_0^2 + ( m+1-d)\,{\tilde {{{\mathbf{K}}}}}^2 \right]
|{{\mathbf{K}}}|^{d-m-2} \,,\end{aligned}$$ for the $a_c$ and $a_s$ gauge fields, respectively. This implies that the one-loop fermion self-energy for both $\Psi_{1,j}$ and $\Psi_{2,j}$ now takes the form: $$\begin{aligned}
\label{sigmau1}
\Sigma_1(q) = &
-\frac{ \mathrm{i} \left( e_c^{\frac{2\,(m+1)} {3} } + e_s^{\frac{2\,(m+1)} {3} } \,
\right) }
{ N \, {\tilde{k}}_F ^{ \frac{(m-1)(2-m) } {6}}}
\frac{u_0\,\gamma_0\,q_0 + u_1 \left( \tilde{{{\mathbf{\Gamma}}} } \cdot \tilde{{{\mathbf{Q}}} } \right) }
{\epsilon}
\nn & + \text{ finite terms} \,,
\end{aligned}$$ with the critical dimension $d_c = \left( m+\frac{3}{m+1}\right)$, and with $u_0$ and $u_1$ (see Eq. \[valu\]) taking the same values as for the $U(1)$ case.
The counterterms take the same form as the original local action:
$$\begin{aligned}
S_{CT} = & \sum_{\alpha,j} \int dk \, \bar \Psi_{\alpha,j}(k)
\, \mathrm{i} \,\Bigl[
A_{0} \, \gamma_0 \,k_0 + A_{1} \,\tilde{{{\mathbf{\Gamma}}}} \cdot \tilde{ {{\mathbf{K}}}}
+ A_2 \, \gamma_{d-m} \, \delta_k
\Bigr] \Psi_{\alpha,j}(k) \, \exp \Big \lbrace \frac {{{{\mathbf{L}}}}_{(k)}^2} { \mu \, {\tilde{k}}_F } \Big \rbrace
+ \frac{A_{3_s}}{2} \int dk\,
{{{\mathbf{L}}}}_{(k)}^2\, a_s^\dagger(k) \, a_s(k)
\nn &+ \frac{A_{3_c}}{2} \int dk \,
{{{\mathbf{L}}}}_{(k)}^2\, a_c^\dagger(k) \, a_c(k)
- A_{4_c} \frac{ e_c \, \mu^{x/2} }{\sqrt{N}} \sum_{\alpha,j}
\int dk \, dq \,
a_c(q) \, \bar \Psi_{\alpha, j}(k+q) \,\gamma_{0} \, \Psi_{\alpha,j}(k)
\nn & + A_{4_s} \frac{ e_s \, \mu^{x/2} }{\sqrt{N}} \sum_{\alpha,j} (-1)^\alpha
\int \frac{d^{d+1}k \, d^{d+1}q}{(2\pi)^{2d+2}} \,
a_s (q) \, \bar \Psi_{\alpha, j}(k+q) \,\gamma_{0} \, \Psi_{\alpha,j}(k) \, ,\end{aligned}$$
where $$\begin{aligned}
A_{\zeta} =
\sum_{\lambda=1}^\infty \frac{Z^{(\lambda)}_{ \zeta}
(e_,\tilde{k}_F)}{\epsilon^\lambda} \text{ with } \zeta=0,1,2,3_c,3_s , 4_c,4_s\,.\end{aligned}$$
We have taken into account the exchange symmetry: $ \Psi_{1,j} \leftrightarrow \Psi_{2,j}\,,\,\,
a_s \rightarrow - a_s \,,$ which was assumed in Ref. , and here it means that both $\Psi_{1,j}$ and $\Psi_{2,j}$ have the same wavefunction renormalization $ Z_{\Psi}^{1/2}$.
Adding the counterterms to the original action, we obtain the renormalized action:
$$\begin{aligned}
\label{act8}
S_{ren} = & \sum_{\alpha,j} \int dk^B \bar \Psi^B_{\alpha,j}(k^B)
\, \mathrm{i} \left[ \gamma_0 \,k_0^B + \tilde{{{\mathbf{\Gamma}}}} \cdot \tilde{ {{\mathbf{K}}}}^B
+ \gamma_{d-m} \, \delta_k \right ] \Psi^B_{\alpha,j}(k^B)
\, \exp \Big \lbrace \frac {{{{\mathbf{L}}}}_{(k^B)}^2} { \mu \, {\tilde{k}}_F^B } \Big \rbrace
+ \frac{1}{2} \int dk^B\,
{{{\mathbf{L}}}}_{(k^B)}^2\, {a^B_c }^\dagger(k^B) \, \,\,a^B_c(k^B)
\nn &+ \frac{1}{2} \int dk^B \,
{{{\mathbf{L}}}}_{(k^B)}^2\, { a^B_s }^\dagger(k^B) \,\,\, a_s^B(k^B)
- \frac{ e_c^B \, \mu^{x/2} }{\sqrt{N}} \sum_{\alpha,j}
\int dk^B \, dq^B \,
a^B_c (q^B) \, \bar \Psi^B_{\alpha, j}(k^B+q^B) \,\gamma_{0} \, \Psi^B_{\alpha,j}(k^B)
\nn & + \frac{ e_s^B \, \mu^{x/2} }
{\sqrt{N}} \sum_{\alpha,j} (-1)^\alpha
\int dk^B \, dq^B \,
a_s^B (q^B) \, \bar \Psi^B_{\alpha, j}(k^B+q^B) \,\gamma_{0} \, \Psi^B_{\alpha,j}(k^B) \, ,\end{aligned}$$
remembering that $\delta_{k^B} =\delta_k\,.$ Here $$\begin{aligned}
& k_{0}^B = \frac{Z_0} {Z_2}\,k_0\,,\quad
\tilde{{{\mathbf{K}}}}^B = \frac{Z_1} {Z_2} \, \tilde{{{\mathbf{K}}}} \, , \quad
k_{d-m}^B = k_{d-m} \, ,
\quad {{{\mathbf{L}}}}_{(k^B)} = {{{\mathbf{L}}}}_{(k)} \,, \quad
k_{F}^B = k_F =\mu \, {\tilde{k}}_F \,, \quad
\Psi_j^B(k^B) = Z_{\Psi}^{\frac{1}{2}}\, \Psi_j(k)\,,
\nn &
a_c^B(k^B) = Z_{a_c}^{\frac{1}{2}}\, a_c(k)\,, \quad
a_s^B(k^B) = Z_{a_s}^{\frac{1}{2}}\, a_s(k)\,, \quad
Z_{\Psi} = \frac{Z_2^{d-m+1} } { Z_0\, {Z_1 }^{d-m-1}}\,,\quad
Z_{a_c} = \frac{Z_{3_s}\, Z_2^{d-m}} {Z_0\, {Z_1 }^{d-m-1}}\,,\quad
Z_{a_s} = \frac{Z_{3_c}\, Z_2^{d-m}} {Z_0\, {Z_1 }^{d-m-1}}\,,\nn &
e_c^B= Z_{e_c}\,e_c\,\mu^{\frac{x}{2}}\,, \quad
Z_{e_c}= \frac{ Z_{4} \, Z_2^{\frac{d-m} {2} -1}}
{\sqrt{ Z_0\, Z_{3_c}} \, {Z_1 }^{\frac{d-m-1} {2}} }\,,\quad
e_s^B= Z_{e_s}\,e_s\,\mu^{\frac{x}{2}}\,, \quad
Z_{e_s}= \frac{ Z_{4} \, Z_2^{\frac{d-m} {2} -1}}
{\sqrt{ Z_0\, Z_{3_s}} \, {Z_1 }^{\frac{d-m-1} {2}} }\,, \end{aligned}$$
and $$\begin{aligned}
Z_{\zeta} = 1 + A_{\zeta}\,.\end{aligned}$$ As before, the superscript “B" denotes the bare fields, couplings, and momenta.
As before, we will use the same notations, namely, $z$ for the dynamical critical exponent, $\tilde z$ for the critical exponent along the extra spatial dimensions, $ \beta_{k_F}$ for the beta function of ${\tilde k}_F$, and $\eta_\psi$ for the fermion anomalous dimension. Since we now have two gauge fields, we will use the symbols $\beta_{e_c} $ and $\beta_{e_s}$ to denote the beta functions for the couplings $e_c$ and $e_s$, respectively, which are explicitly given by: $$\begin{aligned}
\beta_{e_c} = \frac{\partial e_c}{\partial \ln \mu}\,,\quad
\beta_{e_s} = \frac{\partial e_s}{\partial \ln \mu}\, .
\label{beta2}\end{aligned}$$ The anomalous dimensions of these two bosons are indicated by: $$\begin{aligned}
&\eta_{a_c} = \frac{1}{2} \frac{ \partial \ln Z_{a_c}}{\partial \ln \mu} \,,\quad
\eta_{a_s} = \frac{1}{2} \frac{ \partial \ln Z_{a_s}}{\partial \ln \mu} \,.\end{aligned}$$
RG Flows At One-Loop Order
--------------------------
To one-loop order, the counterterms are given by $Z_\zeta =
1 + \frac{Z_{\zeta}^{(1)}} {\epsilon}\,.$ Here, only $$\begin{aligned}
Z_{0}^{(1)} = & - \frac{ u_0 \left( \tilde{e}_c +{\tilde e}_s \right) }
{N } \text{ and }
Z_{1}^{(1)} = -\frac{ u_1 \left( \tilde{e}_c +{\tilde e}_s \right) }
{N }\end{aligned}$$ are nonzero, where $$\begin{aligned}
\tilde{e}_c =\frac{e_c^{ \frac{2 \,(m+1) } {3} }}
{ {\tilde{k}}_F ^{ \frac{(m-1) (2-m)}{6} }}
\text{ and }
\tilde{e}_s =\frac{e_s^{ \frac{2 \,(m+1) } {3} }}
{ {\tilde{k}}_F ^{ \frac{(m-1) (2-m)}{6} }}\,.\end{aligned}$$
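Note that for the two integer values of $m$ these effective couplings reduce to pure powers of the gauge couplings, $$\begin{aligned}
\tilde{e}_{c,s}\big|_{m=1} = e_{c,s}^{4/3} \,,\qquad \tilde{e}_{c,s}\big|_{m=2} = e_{c,s}^{2}\,,\end{aligned}$$ since the exponent $(m-1)(2-m)/6$ of ${\tilde{k}}_F$ vanishes at both $m=1$ and $m=2$; the ${\tilde{k}}_F$-dependence of the effective couplings matters only for intermediate, non-integer values of $m$.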
The one-loop beta functions are now given by: $$\begin{aligned}
& \beta_{k_F} = - {\tilde k}_F \,,
\nn & (1-z)\, Z_0= -\beta_{e_c} \, \frac{\partial Z_0} {\partial e_c}
-\beta_{e_s} \, \frac{\partial Z_0} {\partial e_s }
+ {\tilde k}_F \, \frac{\partial Z_0} {\partial \tilde{k}_F} \,, \nn
& (1-\tilde z)\, Z_1= - \beta_{e_c} \, \frac{\partial Z_1} {\partial e_c}
-\beta_{e_s } \, \frac{\partial Z_1} {\partial e_s}
+ {\tilde k}_F \, \frac{\partial Z_1} {\partial \tilde{k}_F} \,, \nn
& \beta_{e_c} =- \frac{ \epsilon
+ \frac{2-m}{m+1} \,(1-\tilde z)+1-z } {2}\,e_c \,,\nn
& \beta_{e_s } =- \frac{ \epsilon
+ \frac{2-m}{m+1} \,(1-\tilde z)+1-z } {2}\,e_s \,,\nn
& \eta_\psi =\eta_{a_c} =\eta_{a_s} = \frac{ 1-z + (1-\tilde z)\, (d-m-1) } {2} \, .
\label{beta1}\end{aligned}$$ Solving these equations, we get: $$\begin{aligned}
& -\frac{\beta_{e_c}} {e_c} =-\frac{\beta_{e_s}} {e_s}
\nn &= \frac{\epsilon }{2}
+\frac{ (m-1) (2-m)}{4\, (m+1)}
-\frac{\left[ (m+1) \,u_0 +(2-m) \,u_1 \right ] \left( \tilde{e}_c + \tilde e_s \right )} {6 \, N} .\end{aligned}$$ Again, it is clear that for generic $m$, the order by order loop corrections are controlled not by $e_c$ and $e_s $, but by the effective couplings $\tilde{e}_c $ and $ \tilde{e}_s \,.$ Hence we need to compute the RG flows from the beta functions of these effective couplings, which are given by: $$\begin{aligned}
& -\frac{\beta_{\tilde e_c} } {\tilde e_c} =-\frac{\beta_{\tilde e_s} } {\tilde e_s} \nn &
=\frac{(m+1) \,\epsilon}{3}
- \frac{ \left (m+1 \right ) \left[\left (m+1 \right ) u_0+ \left ( 2-m\right ) u_ 1 \right ]
\left( \tilde e_c+\tilde e_s \right)
}
{9 \, N} \, .\end{aligned}$$ The interacting fixed points are determined from the zeros of the above beta functions, and take the form: $$\begin{aligned}
{\tilde{e}_c}^* + {\tilde e}_s^* = \frac{3 \,N\, \epsilon }
{(m+1)\, u_0+(2-m)\, u_1 } +\mathcal{O} \left( \epsilon^2 \right) ,\end{aligned}$$ which actually give rise to a fixed line, as found in Ref. [@debanjan] for the case of $m=1\,.$ It can be checked that this line is IR stable by computing the first derivative of the beta functions, as made explicit below. Hence, we have shown that the fixed line feature survives for critical Fermi surfaces of dimension greater than one. The critical exponents at this stable fixed line take the same forms as in Eq. (\[critex\]).
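To make the stability statement explicit, write $s \equiv \tilde e_c + \tilde e_s$ and $K \equiv (m+1)\,u_0 + (2-m)\,u_1$. Adding the two flow equations above gives $$\begin{aligned}
\beta_{s} = -\frac{(m+1)\,\epsilon}{3}\, s + \frac{(m+1)\, K}{9\,N}\, s^2\,,\qquad
\frac{\partial \beta_{s}}{\partial s}\bigg|_{s = \frac{3 N \epsilon}{K}} = +\frac{(m+1)\,\epsilon}{3} > 0\,,\end{aligned}$$ so the combination $s$ is attracted to the fixed line in the IR, while the orthogonal combination does not flow at this order, since $\beta_{\tilde e_c}/\tilde e_c - \beta_{\tilde e_s}/\tilde e_s = 0$. This is precisely why the zeros of the beta functions form a line of fixed points rather than an isolated fixed point.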
Higher-Loop Corrections
-----------------------
Using the same arguments as in the single gauge field case, we will have the following nonzero $Z_\zeta^{(1)}$’s: $$\begin{aligned}
Z_{0}^{(1)} = & - \frac{ u_0 \left( \tilde{e}_c +{\tilde e}_s\right) }
{N \,\epsilon}
- \frac{ v_0 \left( \tilde{e}_s +{\tilde e}_c\right)^2 } {N^2 \,\epsilon}\,,\nn
Z_{1}^{(1)} = & -\frac{ u_1 \left( \tilde{e}_s +{\tilde e}_c\right) }
{N \,\epsilon}
- \frac{ v_1 \left( \tilde{e}_c +{\tilde e}_s\right)^2 } {N^2 \,\epsilon}\,,\quad
Z_{2}^{(1)} =
- \frac{ w \left( \tilde{e}_c +{\tilde e}_s \right)^2 } {N^2 \,\epsilon}\,,\nn
Z_{4_s}^{(1)} = &
- \frac{ y_s \left( \tilde{e}_c +{\tilde e}_s \right)^2 } {N^2 \,\epsilon}\,,\quad
Z_{4_c}^{(1)} =
- \frac{ y_c \left( \tilde{e}_c +{\tilde e}_s\right)^2 } {N^2 \,\epsilon}\,,\end{aligned}$$ including the one-loop and two-loop corrections for $m=1\,$. This leads to the beta functions:
$$\begin{aligned}
\frac{\beta_{\tilde{e}_c}} {{\tilde e}_c} =\frac{\beta_{\tilde{e}_s}} {{\tilde e}_s}
= & -\frac{2
\left \lbrace 2 \,u_1 \left ( {\tilde e}_c +
{\tilde e}_s \right )+3 \,N \right \rbrace \epsilon} {9 \, N}
+
\frac{2
\left \lbrace 2
\left ( {\tilde e}_c + {\tilde e}_s \right )
\left(2 \,u_0\,u_1+ u_1^2+6 \,v_0 +3 \,v_1-9 \,w\right)
+3 \,N\left (2 \,u_0+ u_1 \right )\right \rbrace \left ( {\tilde e}_c + {\tilde e}_s \right )
}
{27 \, N^2}\, ,\end{aligned}$$
which again have a continuous line of fixed points defined by: $$\begin{aligned}
\frac{{\tilde{e}_s}^* + {\tilde e}_c^*}{N}
= \frac{3 \,\epsilon }
{2 \,u_0+ u_1}
-\frac{18 \left (2 \,v_0+v_1-3 \,w \right )}
{ \left (2 \,u_0+ u_1 \right )^3}\, \epsilon ^2
+\mathcal{O} \left( \epsilon^3 \right) .\end{aligned}$$ For $m>1\,,$ the UV/IR mixing renders the higher-loop corrections $k_F$-suppressed, and hence they have no effect on the fixed line. Therefore, the fixed line feature is generically not altered by going to higher loops.
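As a minimal numerical sketch (plain Python) of how the $\mathcal{O}(\epsilon^2)$ term shifts the location of the fixed line for $m=1$, the two-loop expression above can be evaluated directly; all input values below are purely illustrative placeholders, since the actual two-loop coefficients $v_0$, $v_1$ and $w$ are not quoted here.
\begin{verbatim}
# Fixed-line location (e_c* + e_s*)/N for m = 1, to O(eps^2), transcribed
# from the expression above.  All numbers are illustrative placeholders;
# in particular v0, v1, w are NOT the actual two-loop coefficients.
def fixed_line(eps, u0, u1, v0, v1, w):
    lo = 3.0 * eps / (2.0 * u0 + u1)                                   # O(eps)
    nlo = -18.0 * (2.0 * v0 + v1 - 3.0 * w) * eps ** 2 / (2.0 * u0 + u1) ** 3
    return lo + nlo

print(fixed_line(eps=0.5, u0=0.02, u1=1.9, v0=0.1, v1=0.1, w=0.1))
\end{verbatim}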
\[conclude\]Conclusion
======================
In this paper, we have applied the dimensional regularization scheme, developed for non-Fermi liquids arising at the Ising-nematic quantum critical point, to the case of non-Fermi liquids arising from transverse gauge fields coupled to finite-density fermions. This has allowed us to access the interacting fixed points perturbatively in an expansion in $\epsilon\,,$ which is the difference between the upper critical dimension ($d_c =m+\frac{3}{m+1}$) and the actual physical dimension ($d_{\text{phys}}=m+1$) of the theory, for a Fermi surface of dimension $m$. We have extracted the scaling behaviour for the cases of one and two $U(1)$ gauge fields.
There is a crucial difference in the matrix structure of the couplings in the cases of the Ising-nematic order parameter and the gauge fields. This arises from the fact that the fermions at the antipodal points of the Fermi surface couple to the Ising-nematic order parameter (transverse gauge field) with the same (opposite) sign(s). Hence, although we get the same values of the critical dimension and critical exponents, the differences will show up in the renormalization of some physical quantities like the $2 k_F$ scattering (backscattering involving an operator that carries a momentum $2k_F$) amplitudes.
The $U(1) \times U(1)$ theory is particularly interesting in the context of recent works which show that this scenario is useful for describing the phenomena of the deconfined Mott transition and the deconfined metal-metal transition [@debanjan]. In Ref. , Zou and Chowdhury found that in $(2+1)$ spacetime dimensions and at one-loop order, these systems exhibit a continuous line of stable fixed points, rather than a single one. Their method involved modifying the bosonic dispersion (such that it becomes nonanalytic in momentum space), and then carrying out a double expansion in two small parameters [@nayak1; @mross]. Our method avoids this issue by employing the dimensional regularization scheme. We also have the advantage that we can analyze a critical Fermi surface of generic dimension, and compute higher-loop diagrams giving order-by-order corrections in $\epsilon$. The discovery of a fixed line for the $U(1) \times U(1)$ theory in Ref. [@debanjan] raised the question whether this feature survives when we consider either higher dimensions or higher loops. Our computations show that higher dimensions definitely do not reduce the fixed line to discrete fixed points. Regarding higher-loop corrections, we have not performed those explicitly, but through arguments based on the previous results for the Ising-nematic critical points [@Lee-Dalid; @ips-uv-ir1; @ips-uv-ir2], we have predicted that these will also not make the fixed line degenerate into isolated fixed point(s). In the future, it will be worthwhile to carry out this entire procedure for the case of $SU(2)$ gauge fields [@debanjan]. Another direction is to compute the RG flows for superconducting instabilities in the presence of these transverse gauge fields, as was done in Ref. for the Ising-nematic order parameter.
We thank Debanjan Chowdhury for stimulating discussions. We are especially grateful to Andres Schlief for valuable comments on the manuscript.
Computation Of The Feynman Diagrams At One-Loop Order {#app:oneloop}
=====================================================
One-Loop Boson Self-Energy {#app:oneloopbos}
--------------------------
In this subsection, we compute the one-loop boson self-energy: $$\begin{aligned}
\label{bosloop}
\Pi_1 (q) = & -e^2 \mu^x
\int dk\,\text{Tr}
\left[ \gamma_{0}\, G_0 (k+q)\,\gamma_{0}\, G_0 (k) \right ]
=
2 \, e^2 \mu^x \int dk\,
\frac{ k_0 \left(k_0+q_0 \right)
-\tilde{{{\mathbf{K}}}} \cdot \left ( \tilde{{{\mathbf{K}}}} + \tilde{{{\mathbf{Q}}}} \right ) - \delta_q \,\delta_{k+q}}
{[{{\mathbf{K}}}^2 + \delta_k^2]\, [({{\mathbf{K}}} +{{\mathbf{Q}}})^2 + \delta_{k+q}^2 ]} \,
e^{-\frac{{{{\mathbf{L}}}}_{(k)}^2 + {{{\mathbf{L}}}}_{(k+q)}^2} { \mu \, {\tilde{k}}_F }}\,.\end{aligned}$$
We first integrate over $k_{d-m}$ to obtain [^2]: $$\begin{aligned}
\Pi_1 (q) &= e^2 \, \mu^x \int \frac{ d{{{\mathbf{L}}}}_{(k)} \, d{{\mathbf{K}}}}{(2\,\pi)^d}
\frac{ 2\,k_0 \left(k_0+q_0 \right) -{{\mathbf{K}}} \cdot ({{\mathbf{K}}} +{{\mathbf{Q}}})}
{ |{{\mathbf{K}}}|\,|{{\mathbf{K}}} +{{\mathbf{Q}}}|\,
\left[ \big (\, \delta_q +2 \, {{{\mathbf{L}}}}_{(q)}^i \, {{{\mathbf{L}}}}_{(k)}^i \, \big )^2 + \big(\, |{{\mathbf{K}}} +{{\mathbf{Q}}}|+|{{\mathbf{K}}}|\, \big)^2
\right] } \big(\,|{{\mathbf{K}}} +{{\mathbf{Q}}}|+|{{\mathbf{K}}}| \, \big)
\, e^{-\frac{{{{\mathbf{L}}}}_{(k)}^2 + {{{\mathbf{L}}}}_{(k+q)}^2} { \mu \, {\tilde{k}}_F }} \nn
& \quad -
e^2 \, \mu^x \int \frac{ d{{{\mathbf{L}}}}_{(k)} \, d{{\mathbf{K}}}}{(2\,\pi)^d}
\frac{ |{{\mathbf{K}}} +{{\mathbf{Q}}}|+|{{\mathbf{K}}}| }
{ \big (\, \delta_q +2 \, {{{\mathbf{L}}}}_{(q)}^i \, {{{\mathbf{L}}}}_{(k)}^i \, \big )^2 + \big(\, |{{\mathbf{K}}} +{{\mathbf{Q}}}|+|{{\mathbf{K}}}|\, \big)^2 }
\times e^{-\frac{{{{\mathbf{L}}}}_{(k)}^2 + {{{\mathbf{L}}}}_{(k+q)}^2} { \mu \, {\tilde{k}}_F }} \,,
\end{aligned}$$ where we have chosen the coordinate system such that ${{{\mathbf{L}}}}_{(q)} = (q_{d-m+1},0, 0,\ldots,0) $. Since the problem is rotationally invariant in these directions and $\Pi_1(q)$ depends only on the magnitude of ${{{\mathbf{L}}}}_{(q)}$, the final result is independent of this choice.
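For reference, the two elementary one-dimensional integrals used in the next steps are $$\begin{aligned}
\int_{-\infty}^{\infty} \frac{du}{2\,\pi}\, \frac{1}{u^2+a^2} = \frac{1}{2\,a}\,,\qquad
\int_{-\infty}^{\infty} \frac{dy}{2\,\pi}\, e^{-a\,y^2} = \frac{1}{2}\sqrt{\frac{1}{\pi\, a}}\,,\end{aligned}$$ which, with $a = |{{\mathbf{K}}} +{{\mathbf{Q}}}|+|{{\mathbf{K}}}|$ and $dk_{d-m+1} = du/(2\,|{{\mathbf{L}}}_{(q)}|)$ in the first case, and $a = 2/(\mu\,{\tilde{k}}_F)$ in the second, reproduce the expressions for $I$ and $J$ quoted below.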
Making a change of variable, $u=\delta_q +2 \, q_{d-m+1} \, k_{d-m+1} \,,$ and integrating over $u$, we get $$\begin{aligned}
I &\equiv \int \frac{d k_{d-m+1}}{2 \,\pi} \frac{1} { \big (\, \delta_q +2 \, q_{d-m+1} \, k_{d-m+1} \, \big )^2 + \big(\, |{{\mathbf{K}}} +{{\mathbf{Q}}}|+|{{\mathbf{K}}}|\, \big)^2 }
= \frac{1}{4 \, |{{\mathbf{L}}}_{(q)}| \, \big(\, |{{\mathbf{K}}} +{{\mathbf{Q}}}|+|{{\mathbf{K}}}|\, \big)} \,.\end{aligned}$$ The rest of ${{\mathbf{L}}}_{(k)}$-integrals evaluate to $J^{m-1}\,,$ where $$\begin{aligned}
\label{eqJ}
J \equiv \int_{-\infty}^{\infty} \frac{dy}{2 \pi} \, \exp{ \Big \lbrace \frac{-2y^2} { \mu \, {\tilde{k}}_F } \Big \rbrace} =\sqrt{\frac{ \mu \, {\tilde{k}}_F }{8 \pi}} \,.\end{aligned}$$ Hence the self-energy expression reduces to: $$\begin{aligned}
\label{pia11}
\Pi_1 (q) =
\frac{ e^2 \, \mu^x } { 2^{m+1} \, |{{\mathbf{L}}}_{(q)}|} \Big ( \frac{ \mu \, {\tilde{k}}_F } {2 \pi}
\Big )^{\frac{m-1}{2}} \, I_1 (d-m, {{\mathbf{Q}}})\,, \end{aligned}$$ where $$\begin{aligned}
I_1 (d-m, {{\mathbf{Q}}})
&= \int \frac{ d{{\mathbf{K}}}}{(2\,\pi)^{d-m} }
\left [\frac{k_0 \left(k_0+q_0 \right) -\tilde{{{\mathbf{K}}} } \cdot \left ( \tilde{{{\mathbf{K}}}} + \tilde{{{\mathbf{Q}}}} \right )} {|{{\mathbf{K}}} +{{\mathbf{Q}}}| \,|{{\mathbf{K}}}|}
- 1 \right ] .
\end{aligned}$$ The ($d-m$)-dimensional integral in $I_1 (d-m,{{\mathbf{Q}}})$ can be done using the Feynman parametrization formula $$\begin{aligned}
\label{feynm}
\frac{1}{A^{\alpha} \,B^{\beta}}=
\frac{\Gamma (\alpha +\beta)}{\Gamma (\alpha) \,\Gamma (\beta)}
\int_0^1 \,dt\, \frac{t^{\alpha-1}\,(1-t)^{\beta-1}}
{\left[ t \,A +(1-t) \,B\right]^{\alpha+\beta}} \,.\end{aligned}$$ Substituting $\alpha=\beta=1/2$, $A=| {{\mathbf{K}}} + {{\mathbf{Q}}}|^2 $ and $B=|{{\mathbf{K}}}|^2 $, we get: $$\begin{aligned}
I_1 (d-m, {{\mathbf{Q}}}) = \frac{1}
{ \pi \, (2 \pi)^{d-m} }
\int_0^1 \frac{dt }{\sqrt{ t \, (1-t)}}\,
\int {d {{\mathbf{K}}}} \left[
\frac{ k_0 \left(k_0+q_0 \right)
- \tilde{{{\mathbf{K}}}} \cdot \left (\tilde{{{\mathbf{K}}}}+\tilde{{{\mathbf{Q}}}} \right )}
{ t \, |{{\mathbf{K}}} + {{\mathbf{Q}}}|^2 + (1-t) {{\mathbf{K}}}^2 }
-1 \right ] .\end{aligned}$$ Introducing the new variable $ {{\mathbf{u}}} = {{\mathbf{K}}} + t \, {{\mathbf{Q}}} \,,$ $I_1$ reduces to: $$\begin{aligned}
I_1(d-m, {{\mathbf{Q}}})
= \frac{1}
{\pi \, (2 \,\pi)^{d-m} }
\int_0^1 \frac{dt } {\sqrt{ t \, (1-t)}}\,
\int {d^{d-m}{{\mathbf{u}}}} \,
\left [
\frac{ 2 \left \lbrace u_0^2 - t \,(1-t) \, q_0^2 \right \rbrace
-2\,{{\mathbf{u}}}^2 }
{ {{\mathbf{u}}} ^2 + t \,(1-t) \,{{\mathbf{Q}}}^2}
\right ].\end{aligned}$$ Again, we use another new variable ${{\mathbf{v}}}$, defined by ${{\mathbf{u}}} = \sqrt{t\,(1-t)} \,{{\mathbf{v}}} \,,$ so that $$\begin{aligned}
\label{i1}
I_1 (d-m, {{\mathbf{Q}}})
&= -\frac{2^{-2 d+2 m+1} \,
\pi ^{-d+m+\frac{1}{2}} \Gamma \left(\frac{d-m+1}{2} \right)}{\Gamma \left(\frac{d-m+2}{2} \right)}
\int {d^{d-m}{{\mathbf{v}}}} \,
\frac{ q_0^2 + \tilde{ {{\mathbf{v}}}}^2 }
{ {{\mathbf{v}}} ^2 + {{\mathbf{Q}}}^2} \,. \end{aligned}$$ Using $$\begin{aligned}
\int_0^{\infty}\, dy \, \frac{ y^{n_1}} {( y^2 + C)^{n_2}} =
\frac{ \Gamma \left (\frac{n_1+1}{2} \right ) \, \Gamma \left (n_2-\frac{n_1+1}{2} \right ) }
{2 \, \Gamma(n_2)}
\, C^{\frac{n_1+1}{2}-n_2} \,,\end{aligned}$$ and the volume of the $(n-1)$-sphere (at the boundary of the $n$-ball of unit radius) $$\begin{aligned}
S^{n-1} \equiv \int d \Omega_n = \frac{2 \,\pi^{n/2}} { \Gamma \left (n/2 \right )} \,,\end{aligned}$$ we finally obtain the one-loop boson self-energy to be: $$\begin{aligned}
\label{api}
\Pi_1 (k)
& =
-\frac{ \beta(d,m)\, e^2 \, \mu^x } { |{{\mathbf{L}}}_{(q)}|} \Big ( \mu \, {\tilde{k}}_F \Big )^{\frac{m-1}{2}}
\left[ k_0^2 + ( m+1-d)\,{\tilde {{{\mathbf{K}}}}}^2 \right] \,|{{\mathbf{K}}}|^{d-m-2}
\,,\end{aligned}$$ with $$\begin{aligned}
\beta(d,m)
&= \frac{ 1 } { 2^{m+1}} \Big ( \frac{1 } {2 \pi}
\Big )^{\frac{m-1}{2}}
\frac{2^{-2 d+2 m+1}\, \pi ^{\frac{-d+m+3}{2} }
\,\Gamma (d-m) \,\Gamma (m+1-d) }
{\Gamma ^2 \left(\frac{ d-m+2} {2}\right) \Gamma \left(\frac{m+1-d}{2} \right)}
=
\frac{2^{\frac{1+m-4d}{2} }\, \pi ^{\frac{4-d}{2}}
\,\Gamma (d-m) \,\Gamma (m+1-d) }
{\Gamma ^2 \left(\frac{ d-m+2} {2}\right) \Gamma \left(\frac{m+1-d}{2} \right)} \,.\end{aligned}$$
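As a quick numerical cross-check of this closed form, a minimal sketch (assuming NumPy and SciPy are available) is given below; for instance, it yields $\beta(5/2,1)\approx 0.11$ and shows that the $d \to 3$ limit at $m=2$ is finite.
\begin{verbatim}
import numpy as np
from scipy.special import gamma

def beta_dm(d, m):
    # Prefactor beta(d, m) of the one-loop boson self-energy, transcribed
    # from the closed form above.
    num = 2.0 ** ((1.0 + m - 4.0 * d) / 2.0) * np.pi ** ((4.0 - d) / 2.0) \
          * gamma(d - m) * gamma(m + 1.0 - d)
    den = gamma((d - m + 2.0) / 2.0) ** 2 * gamma((m + 1.0 - d) / 2.0)
    return num / den

print(beta_dm(2.5, 1))         # m = 1 at its critical dimension d_c = 5/2
print(beta_dm(3.0 - 1e-6, 2))  # m = 2 just below d_c = 3 (the limit is finite)
\end{verbatim}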
One-Loop Fermion Self-Energy {#app:oneloopfer}
----------------------------
Here we compute the one-loop fermion self-energy $\Sigma_1 (q) $ by using the dressed boson propagator, which includes the one-loop self-energy $\Pi_1(k)$: $$\begin{aligned}
\label{sigma1int}
\Sigma_1 (q) &= \frac{ e^2 \, \mu^{x}} {N}
\int dk \,
\gamma_{0} \, G_0 (k+q)\, \gamma_{0} \,D_1 (k)
= \frac{ \mathrm{i}\,e^2 \, \mu^{x}}{N} \int dk\,
D_1 (k) \frac{ \tilde{{{\mathbf{\Gamma}}} } \cdot \left( \tilde{{{\mathbf{K}}} }
+ \tilde{{{\mathbf{Q}}} } \right ) + \gamma_{d-m} \, \delta_{k+q} - \gamma_0\left( k_0+q_0 \right)}
{\left ({{{\mathbf{K}}} } + {{{\mathbf{Q}}}} \right )^2 +\delta_{k+q}^2} \,
e^{-\frac{{{{\mathbf{L}}}}_{(k+q)}^2} { \mu \, {\tilde{k}}_F }} \, .\end{aligned}$$ Integrating over $k_{d-m}\,,$ we get $$\begin{aligned}
\Sigma_1 (q) = \frac{\mathrm{i}\, \,e^2 \, \mu^{x}}{ 2 N} \int \frac{d ^{d} k}{(2\pi)^{d}} \,
D_1 (k) \frac{ \tilde{{{\mathbf{\Gamma}}} } \cdot \left( \tilde{{{\mathbf{K}}} } + \tilde{{{\mathbf{Q}}} } \right )
-\gamma_0\left( k_0+q_0 \right)
}
{|{{\mathbf{K}}} +{{\mathbf{Q}}}|} \, \times e^{-\frac{{{{\mathbf{L}}}}_{(k+q)}^2} { \mu \, {\tilde{k}}_F }} \, .\end{aligned}$$ Since only $D_1 (k) \, e^{-\frac{{{{\mathbf{L}}}}_{(k+q)}^2} {k_F}} $ depends on $ {{{\mathbf{L}}}}_{(k)}$, let us first perform the integral: $$\begin{aligned}
\label{i2}
I_2(k) & \equiv \int \frac{d {{{\mathbf{L}}}}_{(k)} } {(2\pi)^{m}} \, \frac{ e^{-\frac{{{{\mathbf{L}}}}_{(q+k)}^2} {k_F}}}
{ {{{\mathbf{L}}}}_{(k)}^2 + \beta_{dm} \, e^2 \, \mu^{x} \,( \mu \, {\tilde{k}}_F )^{ \frac{m-1}{2}} \, \frac{ |{{\mathbf{K}}}|^{d-m}}{ |{{\mathbf{L}}}_{(k)}| }
\times \left[ d-m-1 +(m-d)\frac{ k_0^2}{|{{\mathbf{K}}}|^2}\right]}
\nn &
= \frac{ \pi^{\frac{2-m}{2}} }
{3 \times 2^{m-1} \, \Gamma{(m/2)} \, | \sin \lbrace (m+1) \pi/3 \rbrace |
\left [ \beta(d,m) \, e^2 \, \mu^{x} \,( \mu \, {\tilde{k}}_F )^{ \frac{m-1}{2}} \, |{{\mathbf{K}}}|^{d-m-2}
\times \left \lbrace k_0^2 + ( m+1-d)\,{\tilde {{{\mathbf{K}}}}}^2 \right \rbrace
\right ]^{\frac{2-m}{3}} } \,.\end{aligned}$$
Now the expression for the self-energy can be written as: $$\begin{aligned}
\Sigma_1 (q) = \frac{ \mathrm{i} \, e^{2(m+1)/3} \, \mu^{ x \, (m+1)/3} \, \pi^{\frac{2-m}{2}}
\times I_3 (d-m, {{\mathbf{Q}}})}
{6 N
\times 2^{m-1} \, \Gamma{(m/2)} \, |\sin \left( \frac{m+1}{3} \,\pi \right )|\,
\left[ \beta(d,m) \right] ^{\frac{2-m} {3}} \, ( \mu \, {\tilde{k}}_F ) ^{(m-1)(2-m)/6} } \,,\end{aligned}$$ where $$\begin{aligned}
& I_3 (d-m, {{\mathbf{Q}}})
\nn & = \int \frac{ d {{\mathbf{K}}}}{(2\pi)^{d-m}}
\frac{ \tilde{{{\mathbf{\Gamma}}} } \cdot \left( \tilde{{{\mathbf{K}}} } + \tilde{{{\mathbf{Q}}} } \right )
- \gamma_0\left( k_0+q_0 \right)
}
{ \left [ \left \lbrace k_0^2 + ( m+1-d)\,{\tilde {{{\mathbf{K}}}}}^2 \right \rbrace \,|{{\mathbf{K}}}|^{d-m-2}
\right ]^{\frac{2-m}{3}}
\,|{{\mathbf{K}}} +{{\mathbf{Q}}}|}
\nn & = \frac{\Gamma \left( \frac{ 2 d+m^2+3 - m\,(d+2) } {6} \right) }
{\Gamma \left(\frac{1}{2}\right) \Gamma \left(\frac{2-m}{3}\right)
\Gamma \left(\frac{(d-m-2) (2-m)}{6} \right)}
\int_0^1 dt_1 \int_0^{1-t_1} dt_2\,
\frac{
(1-t_1-t_2)^{ \frac{(d-m-2)\,(2-m)}{6}-1} }
{t_1^{\frac{m+1}{3}} \,\sqrt{t_2} }
\nn & \quad \times
\int \frac{ d {{\mathbf{K}}}}{(2\pi)^{d-m}}
\frac{ \tilde{{{\mathbf{\Gamma}}} } \cdot \left(\tilde{{{\mathbf{K}}} } + \tilde{{{\mathbf{Q}}} }\right )
-\gamma_0 \left( 1 -t_2\right) q_0
}
{ \left [
t_1\, (m+1-d)\,{\tilde{{{\mathbf{K}}}}}^2
+ t_2 \left(\tilde{{{\mathbf{K}}} }+ \tilde{{{\mathbf{Q}}} }\right)^2
+ (1-t_1-t_2)\, \tilde{{{\mathbf{K}}}}^2 + q_0^2\, t_2\, (1 - t_2)+ k_0^2
\right ]^{\frac{ 2 d+m^2+3 - m\,(d+2) } {6}}}
\nn & =
\frac{\pi ^{\frac{m-d-1}{2} } \, \Gamma \left(\frac{ 3-(d-m)\,(m+1)} {6} \right)}
{2^{d-m} \,\Gamma \left(\frac{2-m}{3}\right) \Gamma \left(\frac{ m^2-d (m-2)-4}{6} \right)}
\int_0^1 dt_1 \int_0^{1-t_1} dt_2\Bigg [
\frac{
(1-t_1-t_2)^{ \frac{(d-m-2)\,(2-m)}{6}-1} }
{t_1^{\frac{m+1}{3}} \,\sqrt{t_2} }
\, \frac{ \gamma_0 \left( 1 -t_2\right) q_0
- \left( \tilde{{{\mathbf{\Gamma}}} } \cdot \tilde{{{\mathbf{Q}}} } \right )
\left [\frac{ \left( 1+m\,t_1- d\,t_1 -t_2\right)}
{1+m\,t_1- d\,t_1}\right ] }
{ \left ( 1+m\,t_1- d\,t_1 \right )^{ \frac{(d-m)\, (2-m)}{6} }
}
\nn & \quad
\left \lbrace \frac{\tilde{{{\mathbf{Q}}}}^2\,t_2 \left( 1+m\,t_1- d\,t_1 -t_2 \right) }
{ \left( 1+m\,t_1- d\,t_1 \right)^2 } +\frac{ q_0^2\, t_2\, (1 - t_2)} {1+m\,t_1- d\,t_1}
\right \rbrace^{\frac { (d-m)\,(m+1)-3} {6} }
\left \lbrace \Theta \left( 1+m\,t_1- d\,t_1 \right)
+ fac \times \Theta \left( d\,t_1-1-m\,t_1 \right) \right \rbrace
\Bigg]
\,.\end{aligned}$$ where $fac =(-1)^{\frac{(m-2) \,(d-m)}{3} }
\mathrm{i}^{d-m+1}
\left[ (-1)^{\frac{1}{6} (m-2) (m-d)} \cos \left(\frac{ (m+1) (d-m)}{6} \pi \right)
-\cos \left(\frac{d-m}{2} \pi \right)\right ] \csc \left(\frac{(m-2) (d-m)}{6} \pi \right) \,.$ From this expression, it is clear that $I_3 (d-m, {{\mathbf{Q}}})$ blows up when the argument of the gamma function in the numerator vanishes, *i.e.* $ \frac{3-(d-m)\,(m+1)} {6} =0\,.$ This implies that $\Sigma_1 (q) $ diverges logarithmically (producing a $1/\epsilon$ pole) at the critical dimension $$\begin{aligned}
d_c(m) =m+\frac{3}{m+1}\,.\end{aligned}$$ The integrals over $t_1$ and $t_2$ are convergent, but their values have to be computed numerically for a given $m$.
Expanding in $\epsilon $ defined as $ d=m+\frac{3}{m+1}-\epsilon$, we obtain: $$\begin{aligned}
\Sigma_1(q) =
-\frac{ \mathrm{i} \,
e^{\frac{2\,(m+1)} {3} }
\left[ u_0 \left( \frac{\mu}{ |q_0| } \right)^{ \frac{m+1}{3} \epsilon}
\gamma_0\,q_0
+u_1 \left( \frac{\mu}{ |\tilde{{{\mathbf{Q}}}}| } \right)^{ \frac{m+1}{3} \epsilon}
\left( {\tilde{{{\mathbf{\Gamma}}}}} \cdot {\tilde{{{\mathbf{Q}}}}} \right)
\right] }
{N \, {\tilde{k}}_F ^{ \frac{(m-1)(2-m) } {6}}
\,\epsilon}
+\text{ finite terms} \,,\end{aligned}$$ where $u_0, \, u_1 \geq 0\,.$ Numerically, we find: $$\begin{aligned}
\begin{cases}
u_0 = 0.0201044 \,, \quad u_1 =1.85988 & \text{ for } m=1 \\
u_0 =u_1 = 0.0229392 & \text{ for } m=2
\end{cases} \,.\end{aligned}$$
One-Loop Vertex Correction {#oneloopvert}
--------------------------
In general, the one-loop fermion-boson vertex function $\Gamma_1 (k,q)$ depends on both $k$ and $q$. In order to extract the leading $1/\epsilon$ divergence, however, it is enough to look at the $q \rightarrow 0$ limit. In this limit, we get $$\begin{aligned}
\Gamma_1 (k,0)&= \frac{ e^2 \,\mu^{x}}{N}
\int \frac{d^{d+1} q}{(2\pi)^{d+1}} \, \gamma_{0} \, G_0 (q)\,
\gamma_{0} \, G_0 (q) \, \gamma_{0} \, D_1 (q-k)\nn
&= \frac{e^2 \,\mu^{x}}{N}
\int \frac{d^{d+1} q} {(2\pi)^{d+1}} D_1 (q-k) \, \gamma_{0}\,
\left[
\frac{ 1 } { {{\mathbf{Q}}}^2 + q_{d-m}^2 }
- \frac{ 2\,q_0^2 }
{\left( q_0^2+\tilde{{{\mathbf{Q}}}}^2 + q_{d-m}^2 \right)^2 }
\right] e^{-\frac{2 \, {{\mathbf{L}}}_{(q)}^2}{\mu \tilde{k}_F}}
\nn &=
\frac{e^2 \,\mu^{x}}{2\,N}
\int \frac{ d{{{\mathbf{Q}}}}
\,d {{{\mathbf{L}}}}_{(q)} } {(2\pi)^{d}} D_1 (q) \, \gamma_{0}\,
\frac{ (\tilde{{{\mathbf{Q}}}}+ \tilde{{{\mathbf{K}}}} )^2}
{\left [ (q_0+k_0 )^2+ (\tilde{{{\mathbf{Q}}}} +\tilde{{{\mathbf{K}}}})^2\right ]^{3/2}}
e^{-\frac{2 \, {{\mathbf{L}}}_{(q+k)}^2}{\mu \tilde{k}_F}} \,.\end{aligned}$$
Using Eq. (\[i2\]), the above expression reduces to $$\begin{aligned}
\Gamma_1 (k,0)&=
\frac{e^2 \,\mu^{x}\, \gamma_{0}} {N}
\frac{ \pi^{\frac{m}{2}+1-d} }
{3 \times 2^{d} \, \Gamma{(m/2)} \, | \sin \left( \frac{m+1}{3} \,\pi \right ) |
\left [ \beta(d,m) \, e^2 \, \mu^{x} \,( \mu \, {\tilde{k}}_F )^{ \frac{m-1}{2}} \right ]^{\frac{2-m}{3}} }
\times I_4(d,m)\,,\end{aligned}$$ where $$\begin{aligned}
& I_4(d,m)
\nn & = \int \frac{d{{{\mathbf{Q}}}}}
{\left [ |{{\mathbf{Q}}}|^{d-m-2} \left \lbrace q_0^2 + ( m+1-d)\,{\tilde {{{\mathbf{Q}}}}}^2 \right \rbrace
\right ]^{\frac{2-m}{3}} }
\frac{ (\tilde{{{\mathbf{Q}}}}+ \tilde{{{\mathbf{K}}}} )^2}
{\left [ (q_0+k_0 )^2+ (\tilde{{{\mathbf{Q}}}} +\tilde{{{\mathbf{K}}}})^2\right ]^{3/2}}
\nn & = \int_0^1 dt_1 \int_0^{1-t_1} dt_2\,
\frac{\Gamma \left( \frac{ 2 d+m^2+9-m\,(d+2)}{6} \right)
t_1^{-\frac{m+1}{3}} \,t_2^{\frac{1}{2}} \,(1-t_1-t_2)^{ \frac{(d-m-2)\,(2-m)}{6}-1} }
{\Gamma \left(\frac{3}{2}\right) \Gamma \left(\frac{2-m}{3}\right)
\Gamma \left(\frac{(d-m-2) (2-m)}{6} \right)}
\nn & \quad \times
\int \frac{ d {{\mathbf{Q}}}}{(2\pi)^{d-m}}
\frac{ (\tilde{{{\mathbf{Q}}}}+ \tilde{{{\mathbf{K}}}} )^2}
{ \left [
t_1\, (m+1-d)\,{\tilde{{{\mathbf{Q}}}}}^2
+ t_2\left(\tilde{{{\mathbf{Q}}} }+ \tilde{{{\mathbf{K}}} }\right)^2
+ (1-t_1-t_2)\, \tilde{{{\mathbf{Q}}} }^2 + k_0^2\, t_2\, (1 - t_2)+ q_0^2
\right ]^{\frac{-(d+2) m+2 d+m^2+9}{6} }}
\nn & = \int_0^1 dt_1 \int_0^{1-t_1} dt_2\,
\frac{\Gamma \left( \frac{ 2 d+m^2+9-m\,(d+2)}{6} \right)
t_1^{-\frac{m+1}{3}} \,t_2^{\frac{1}{2}} \,(1-t_1-t_2)^{ \frac{(d-m-2)\,(2-m)}{6}-1} }
{\Gamma \left(\frac{3}{2}\right) \Gamma \left(\frac{2-m}{3}\right)
\Gamma \left(\frac{(d-m-2) (2-m)}{6} \right)}
\times
\frac{ \Gamma \left(\frac{ (d-m)\, (2-m)} {6} + 1\right)}
{\Gamma \left(\frac{ m^2-(d+2) m+2 d+9} {6} \right)}
\nn & \hspace{2.5 cm}
\times \frac{J_1} {\left (1+ m\,t_1- d\,t_1 \right )^{ \frac{ (d-m) (m-2)-6 } {6} }}\,.\end{aligned}$$ Here, $$\begin{aligned}
& J_1 = \int \frac{ d \tilde{{{\mathbf{Q}}}}}{(2\pi)^{d-m}}
\frac{ \tilde{{{\mathbf{Q}}}}^2+ A^2 }
{ \left [{\tilde{{{\mathbf{Q}}}}} ^2+ B\right ] ^{ \frac{ (d-m) (m-2)-6 } {6} } } \nn
& =
\frac{ B^{\frac{ 5 d+m^2+3-(d+5) m}{6} } \Gamma \left(\frac{(d-m)\, (m-5)-9} {6} \right)
}
{3 \times 2^{d-m+1}\, \pi ^{\frac{d-m+1}{2} } \,\Gamma \left(\frac{ (d-m) (m-2)}{6}-1\right)}
\left[ \left \lbrace (d-m)\, (m-5)-9 \right \rbrace A^2+3 \, (d-m-1) \,B
\right]\,,\end{aligned}$$ where $B \equiv \frac{\tilde{{{\mathbf{K}}}}^2\,t_2 \left( 1+m\,t_1- d\,t_1 -t_2 \right) }
{ \left( 1+m\, t_1- d\,t_1 \right)^2 } +\frac{ k_0^2\, t_2\, (1 - t_2)} {1+m\,t_1- d\,t_1} $ and $ A^2 = \left( \frac{ 1+m\,t_1- d\,t_1 -t_2}
{1+m \,t_1 -d \,t_1} \right )^2 \tilde{{{\mathbf{K}}}} ^2 \,.$
As demonstrated below, we find that although the final value has to be calculated numerically, the expression for $\Gamma_1 (k,0)$ does not have any pole in $\epsilon$ (defined through $d = m+\frac{3}{m+1}-\epsilon$) and is therefore non-divergent. $$\begin{aligned}
\Gamma_1 (k,0)&= \begin{cases}
- \frac{e^{2/3} \,\mu^{2\,\epsilon/3}\, \gamma_{0}}
{N\, \left [ \beta\left(\frac{5}{2},1 \right ) \right ]^{\frac{1}{3}}}
\times \frac{ \Gamma \left(\frac{5}{4}\right) }
{180 \,\sqrt{3} \,\pi ^{9/4}
\, \Gamma \left(-\frac{5}{4}\right)
\Gamma \left(-\frac{1}{12}\right)
\Gamma \left(\frac{1}{3}\right)^2}
\int_0^1 dt_1 \int_0^{1-t_1} dt_2\,
\frac{ t_2^{\frac{1}{2}} \, B^{3/2} \,\left(B-10 \,A^2\right) }
{ t_1^{\frac{2}{3}} \,(1-t_1-t_2)^{ \frac{13}{12}}}
& \text{ for } m=1 \\
0 & \text{ for } m=2
\end{cases}\,.\end{aligned}$$
[^1]: The $k_F$-dependence drops out for $m=1$.
[^2]: While performing the integral for $k_{d-m+1}$, we can neglect the contribution from the exponential term as $k_{d-m+1}$ already appears in the denominator and is appropriately damped. For extracting leading order singular behaviour in $k_F$, this is sufficient.
|
---
abstract: 'A model of photoemission spectra of actinide compounds is presented. The complete multiplet spectrum of a single ion is calculated by exact diagonalization of the two-body Hamiltonian of the $f^n$ shell. A coupling to auxiliary fermion states models the interaction with a conduction sea. The ensuing self-energy function is combined with a band Hamiltonian of the compound, calculated in the local-density approximation, to produce a solid state Green’s function. The theory is applied to PuSe and elemental Am. For PuSe a sharp resonance at the Fermi level arises from mixed valent behavior, while several features at larger binding energies can be identified with quantum numbers of the atomic system. For Am the ground state is dominated by the ${ | {f^6;J=0} \rangle }$ singlet but the strong coupling to the conduction electrons mixes in a significant amount of $f^7$ character.'
author:
- 'A. Svane'
title: 'Dynamical mean-field theory of photoemission spectra of actinide compounds'
---
The electronic structure of actinides has been the subject of extensive experimental[@puse-gouder; @havela; @wachter; @pucoga5; @ute-lander; @durakiewicz] and theoretical[@Soderlind; @kotliar; @leon; @MLM; @borje; @Zwicknagl; @Opahle; @Oppeneer; @kotliar-am] investigations in recent years. The key issue is the character of the $f$-electrons which show varying degrees of band-like and/or localized behavior. Photoemission in particular provides energy resolved information about the $f$-electrons, the understanding of which is far from complete.[@Allen] The actinide antimonides and chalcogenides[@puse-gouder; @durakiewicz] display narrow band-like features around the Fermi level as well as atomic-like features at higher binding energies. Similarly, among the elements a distinct shift of $f$-character away from the Fermi level happens from Pu to Am,[@havela; @naegele] where also the cohesive properties suggest a shift from itinerant to localized behavior of the $f$-electrons. The atomic-like features of photoemission spectra are not well analyzed in terms of multiplets of isolated actinide atoms, in contrast to the situation for rare earth compounds.[@Campagna] On the other hand, band structure calculations generally lead to narrow $f$-bands pinned at the Fermi level but no structures at large binding energy, and additional modelling has been introduced to account for the dual character of the $f$-electrons.[@kotliar; @leon; @MLM; @borje; @Oppeneer; @kotliar-am] The recent development of dynamical mean-field theory (DMFT) has in particular spawned advancements.[@kotliar; @kotliar-am] In the present work a novel model of actinide photoemission is presented, which is capable of describing both high and low energy excitations. In applications to PuSe and Am metal it is shown that interaction of the $f$-electrons with the conduction electrons leads to complex ground state configurations for the solid, which account for the non-trivial features of the photoemission spectra. The theory is based on the Hubbard-I approach suggested in Ref. but augmented with coupling to the conduction sea. To model the photoemission of actinide compounds the $\mathbf{k}$-integrated spectral function $A(\omega)=\pi^{-1}\text{Im} G^{loc}
(\omega)$, with $$\label{spectral}
G^{loc}(\omega) = \frac{1}{N_k} \sum_\mathbf{k} G_\mathbf{k} (\omega)$$ is calculated. A dynamical mean-field, $\Sigma(\omega)$, is added to a band structure Hamiltonian, which is calculated by standard density-functional-theory based methods,[@OKA] and the crystal Green function $G_\mathbf{k}$ obtained by inversion: $$\label{crysgreen}
G_\mathbf{k} (\omega) = \left(\omega - \Sigma (\omega) -
H^{\text{LDA}}_ \mathbf{k}\right)^{-1}.$$ $\Sigma(\omega)$ is calculated from an effective impurity model describing an isolated atom coupled to a bath of uncorrelated electrons: $$\label{Himp}
\hat{H}_{imp}=\hat{H}_{atom}+ \hat{H}_{coup}.$$ The atomic Hamiltonian, $\hat{H}_{atom}$, includes the electron-electron interaction, $\hat{V}_{ee}$, within the $f^n$ shell of an isolated actinide atom: $$\begin{aligned}
\label{eqHatom}
\hat{H}_{atom} & = &
(\epsilon_f-\mu)\sum_m f_{m}^{+} f_{m}
+ \xi \sum_{i} \vec{s}_{i} \cdot \vec{l}_{i}
\nonumber \\
& + &
\frac{1}{2}\sum_{m_j} U_{m_1m_2m_3m_4} f_{m_1}^{+}
f_{m_2}^{+} f_{m_3} f_{m_4}
\end{aligned}$$ Here, indices $m_i=1,..,14$ refer to the individual orbitals of the $f^n$ shell including spin, and $f_{m}^{+}$ and $f_{m}$ are the creation and annihilation operators. The bare $f$-level is $\epsilon_f$, and $\mu$ is the chemical potential. The second term is the spin-orbit energy, $\xi$ being the spin-orbit constant, which is calculated from the self-consistent band structure potential, and $\vec{s}_{i}$ and $\vec{l}_{i}$ are the spin and orbital moment operators for the $i$’th electron.
The two-electron integrals of the Coulomb operator, $U_{m_1m_2m_3m_4}=\langle m_1m_2|\hat{V}_{ee}|m_3m_4\rangle
$, may be expressed in terms of Slater integrals, $F^k$, $k=0,2,4,6$,[@LDA++] which are computed with the self-consistent radial $f$-waves of the band structure calculation. In practice we find it necessary to reduce the leading Coulomb integral $F^0=U_{m_1m_2m_1m_2}$ from its bare value, similar to other studies.[@OG] The physical reason for this is the significant screening of the charge-fluctuations on the $f$-shell by the fast conduction electrons, which happen as part of the photoemission process, but which is not treated in the present model. A complete treatment would involve the computation of the solid dielectric function, as is done in the GW-approximation.[@Hedin; @Silke] The higher Slater integrals, $F^2$, $F^4$ and $F^6$, govern the energetics of orbital fluctuations within the $f^n$ shell, for which the screening is less important, and we therefore use the [*ab-initio*]{} calculated values for those parameters.
The coupling to a sea of conduction electrons is modelled as a simple hopping term for each of the 14 $f$-orbitals into an auxiliary state at the Fermi level: $$\begin{aligned}
\label{Hcoup}
H_{coup} =\mu \sum_m c_{m}^{+}c_{m}+V\sum_m\left(c_{m}^{+}f_{m}+
f_{m}^{+}c_{m} \right).
\end{aligned}$$ The impurity Hamiltonian, Eq. (\[Himp\]), is solved by exact diagonalization, and the dynamical mean-field, $\Sigma_{mm^{\prime}}$, follows from the impurity Green’s function, $(G^{imp})_{mm^{\prime}}$ , which is computed from the eigenvalues, $E_{\mu}$, and eigenstates, ${ | {\mu} \rangle }$, of $\hat{H}_{imp}$ as: $$\begin{aligned}
\label{eqGimp}
(G^{imp})_{mm^{\prime}} (\omega) = \sum_{\mu\nu} w_{\mu\nu}
\frac{ { \langle {\mu} | }f_{m}{ | {\nu} \rangle } { \langle {\nu} | }f^{+}_{m^{\prime}}{ | {\mu} \rangle } }
{\omega+E_{\mu}-E_{\nu}} \\
\Sigma_{mm^{\prime}} (\omega) = (G^{0})^{-1}_{mm^{\prime}} (\omega)
- (G^{imp})^{-1}_{mm^{\prime}} (\omega).
\end{aligned}$$ In the zero-temperature limit, $w_{\mu\nu}=1$ if either state ${ | {\mu} \rangle }$ or ${ | {\nu} \rangle }$ is the ground state, and $w_{\mu\nu}=0$ otherwise. Further, $G^{0}=(\omega-\epsilon_f+\mu)^{-1}$ is the Green’s function of the bare $f$-level. Alternatively, $G^{0}$ may be determined by the DMFT self-consistency through $(G^{0})^{-1}=(G^{loc})^{-1}+\Sigma(\omega)$, with $G^{loc}$ given by Eq. (1). In the present applications there are only insignificant differences between these two expressions for $G^{0}$, [*i.e.*]{} the DMFT self-consistent solid state local Green’s function is fairly close to that of a bare f-electron (at the energy position given by the LDA eigenvalue). The effect of the coupling term in Eq. (\[Hcoup\]) is to mix the multiplet eigenstates corresponding to fluctuations in $f$-occupancy. The eigenstates of $\hat{H}_{imp}$ will not contain an integral number of $f$-electrons, but may deviate more or less from the ideal atomic limit, depending on parameters, in particular the coupling strength $V$.
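To make the construction of Eqs. (\[spectral\]) and (\[crysgreen\]) concrete, the following minimal toy sketch (plain NumPy; the random band matrices and the single-pole self-energy are illustrative placeholders, not the LDA Hamiltonian or impurity self-energy used in this work) assembles $G^{loc}(\omega)$ and the $\mathbf{k}$-integrated spectral function on a small orbital basis.
\begin{verbatim}
import numpy as np

# Toy sketch of Eqs. (1)-(2): k-integrated spectral function from a band
# Hamiltonian plus a local self-energy.  All inputs (matrix size, k-grid,
# the single-pole form of Sigma) are illustrative placeholders.
rng = np.random.default_rng(0)
norb, nk = 4, 50
Hk = rng.normal(size=(nk, norb, norb))
Hk = 0.5 * (Hk + np.transpose(Hk, (0, 2, 1)))   # Hermitian "H^LDA_k" matrices

def sigma(w):
    # placeholder local self-energy: a single pole below the Fermi level
    return 0.5 ** 2 / (w + 2.0 + 0.1j) * np.eye(norb)

def spectral(w, eta=0.05):
    wmat = (w + 1j * eta) * np.eye(norb)
    Gloc = np.mean([np.linalg.inv(wmat - sigma(w) - H) for H in Hk], axis=0)
    # sign chosen so that A(w) >= 0 in the retarded (w + i*eta) convention
    return -np.trace(Gloc).imag / np.pi

omegas = np.linspace(-4.0, 4.0, 201)
A = np.array([spectral(w) for w in omegas])   # k-integrated spectral function
\end{verbatim}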
In a more realistic treatment the conduction states in (\[Hcoup\]) should acquire a width, and the hybridization parameter $V$ should be energy dependent.[@OG] The energy dependent hybridization may be calculated directly from an LDA band structure calculation (or from the self-consistent DMFT bath Green’s function),[@OG] but the straightforward mapping to the above model leads to overestimation of the interaction, since the auxiliary electrons cost no excitation energy. Hence, $V$ is treated as an adjustable parameter of the present theory. The hopping parameter $V$ in (\[Hcoup\]) could in principle depend on orbital index (which would be relevant in highly anisotropic crystal structures[@Zwicknagl]), but here we apply the isotropic model of Eq. (\[Hcoup\]). Similarly, with no particular extra effort the bare $f$-energies in (\[eqHatom\]) could be orbital dependent, e.g. for studies of crystal field effects. In the present study only a single $f$-energy is used, given by its average value in the band structure calculation.
![Illustration of the lowest multiplets for a Pu atom at parameter values pertinent to PuSe \[\]. The energy is reckoned from the lowest ${ | {f^5;J=5/2} \rangle }$ state. Only the lowest few $f^n$ states are shown. Each level is labeled by its (approximate) Russell-Saunders term. The ground state, when the coupling to the conduction states is switched on, is formed primarily as a linear combination of the lowest lying $f^5$ and $f^6$ states. The energy gain due to the coupling is 0.14 eV; for clarity the effect is exaggerated on the figure. ](mpfig.eps "fig:"){width="90mm"}\
\[fig1\]
The theory outlined above may be applied to any $f$-element, [*i.e.*]{} any $f$-occupancy. The chemical potential $\mu$ determines the ground state of the model. Thus, this parameter must be adjusted to match the $f$-element under study. The adjustment of $\mu$ to a certain extent balances any inaccuracies in the values of the primary Slater integral, $F^0$, since a shift in the latter may be compensated by a shift in $\mu$. With $F^0$ and $\mu$ fixed, the energies of the ground and excited state multiplets of $\hat{H}_{atom}$, ${ | {f^n;JM} \rangle }$ are determined. Fig. 1 illustrates this for parameters relevant for a Pu atom in PuSe.[@Puparm; @RS] This selects ${ | {f^5;J=5/2} \rangle }$ as the ground state with ${ | {f^6;J=0} \rangle }$ only 44 meV higher in energy. Introducing the coupling to the auxiliary states, Eq. (\[Hcoup\]), leads to a mixed ground state, composed of 70 % $f^5$ and 30 % $f^6$, with 67 % and 24 % stemming from the lowest ${ | {f^5;J=5/2} \rangle }$ and ${ | {f^6;J=0} \rangle }$ states, respectively. Similar mixing occurs in all excited states, and all the degeneracies of the atomic multiplets are completely lifted by the coupling. In particular, the ground state is non-magnetic, and the local-moment character of the $f^5$ ion will only emerge at elevated temperatures. A number of eigenstates occur at low excitation energies and allow for low energy transitions in the photoemission. These transitions would be absent in a pure ${ | {f^5;J=5/2} \rangle }$ ground state.
![(Color online) Calculated spectral function for PuSe, $f$-part in full line (red) and non-$f$ in dashed line, compared to the experimental photoemission spectrum of Ref. (in dash-dotted and blue) at 40.8 eV photon energy. The energy is measured relative to the Fermi level, \[\]. The dominating final state atomic term is indicated above each major peak. ](PuSe.eps "fig:"){width="90mm"}\
\[fig-puse\]
![(Color online) Calculated spectral function for americium metal, $f$-part in full line (red) and non-$f$ in dashed line, compared to the experimental photoemission spectrum of Ref. (in dash-dotted and blue) recorded at 40.8 eV photon energy. The energy is measured relative to the Fermi level, \[\]. The dominating final state atomic term is indicated. ](Am.eps "fig:"){width="90mm"}\
\[fig-am\]
The calculated spectral function, Eq. (\[spectral\]), of PuSe is depicted in Figure 2 and compared to the experimental photoemission spectrum of Ref. . There is a one-to-one correspondence between the measured and calculated spectral features, and the theory allows identification of each of these. In particular, a sharp resonance at the Fermi level is due to the low-energy $^7$F$_{0}\rightarrow ^6$H$_{5/2}$ excitations, which may be identified with heavy-fermion type behavior.[@wachter] At 0.4 eV binding energy excitations into the $^6$H$_{7/2}$ final state give rise to a small peak, while the more pronounced peak at $\sim 1$ eV binding energy marks the excitations into the $^6$F$_{5/2}$ final state, which also has a spin-orbit satellite, visible around 1.4 eV binding energy. The major emission peak around 2 eV binding energy is due to $^6$H$_{5/2}\rightarrow ^5$I$_{4}$ transitions with small $^7$F$_{0}\rightarrow ^6$P$_{5/2}$ contribution. At even larger binding energy a broad emission coincides with the Se $p$-band (dashed line in Fig. 2). It is unclear whether the broad shoulder seen in the experiment between 4 and 5 eV includes emission related to $f$-features or is solely due to the Se $p$-band. The identification of PuSe as a mixed-valent compound was made by Wachter,[@wachter] and the present work supports this view. Details of the identifications of the spectral features are different, though; in particular, in the present theory it is not necessary to reduce the multiplet splittings within each $f^n$ configuration compared to the atomic values. The matrix elements in the numerator of Eq. (6) ensure that all atomic selection rules are obeyed, and determine the weights of the individual peaks. Left out of the present treatment is the matrix element between the $f$-electron one-particle wave and the photoelectron, which may be assumed to vary only slowly with energy over the range of the valence bands. However, the experimental spectrum contains contributions from all allowed transitions, $f$- as well as non-$f$-related. The justification for the comparison in Fig. 2 (and in Fig. 3 for americium below) of the theoretical $f$-spectral function with the experimental photoemission spectrum recorded with 40.8 eV photons is that the cross section strongly favors emission of an $f$-electron at this energy. The experiments reported in Ref. include photoemission spectra taken with 21.2 eV photons, at which energy the contribution of the non-$f$ electrons dominates, with a spectrum quite close to the non-$f$ spectral function shown in Fig. 2. Finally, life-time effects and secondary electron processes are not considered in the present theory.
![(Color online) Calculated $f$-part of the spectral function for americium metal, for varying strength of the coupling to the conduction electrons, a) $V=0.0$ eV , and b) $V=0.16$ eV (dash-dotted and blue), $V=0.33$ eV (full line and red), and $V=0.49$ eV (dashed and green). The spectrum presented in Figure 3 corresponds to the full line curve of the present plot with a larger broadening \[\]. ](Am-v.eps "fig:"){width="90mm"}\
\[fig-amv\]
The calculated spectral function[@Amparm] of Am is compared to photoemission experiment[@naegele] in Figure 3. The multiplet level diagram of the isolated Am atom differs from that of Pu shown in Fig. 1 by shifting the balance such that ${ | {f^6;J=0} \rangle }$ is now the ground state with the ${ | {f^7; {}^8S_{7/2}} \rangle }$ at 0.7 eV higher energy, and ${ | {f^5; {}^6H_{5/2}} \rangle }$ at 2.3 eV higher energy. Spin-orbit interaction seriously violates the Russell-Saunders coupling scheme. The overlap of the americium ground state and the Russell-Saunders $|f^6;\,{}^7F_0\rangle$ state is 0.69. The influence on the photoemission transition probabilities is substantial. For example, the low-energy transitions from the ${ | {f^6;J=0} \rangle }$ ground state to the ${ | {f^5;{}^6H_{5/2}} \rangle }$ and ${ | {f^5;{}^6H_{7/2}} \rangle }$ occur in the ratio $2.6:1$, while Russell-Saunders coupling predicts the ratio $1:2.5$,[@Campagna] i.e. exactly reversed weight on the two components. For the same reason, the $J=7/2$ satellite of the peak at the Fermi level in PuSe, Fig. 2, is significantly weaker than the main peak.
The coupling to the conduction electrons, Eq. (\[Hcoup\]), leads to further discrepancy with the free atom Russell-Saunders ground state. Good agreement with the experimental spectrum[@naegele] is obtained with a coupling parameter $V=0.33$ eV in Eq. (\[Hcoup\]). The ensuing Am ground state remains a singlet but attains a significant (16 %) admixture of ${ | {f^7; {}^8S_{7/2}} \rangle }$ character. The ${ | {f^6;J=0} \rangle }$ state still has 61 % weight in the coupled ground state, while the remaining weight scatters over the higher levels of the $f^6$ configuration. The effect on the spectral function is drastic, in particular by leading to significant emission of ${ | {f^7; {}^8S_{7/2}} \rangle }\rightarrow { | {f^6;J} \rangle }$ peaking at 1.7 eV binding energy with a broad tail towards lower binding energy, i.e. this transition accounts for the shoulder observed in the experiment around 1.8 eV. The main emission of ${ | {f^6; {}^7F_{0}} \rangle }\rightarrow
{ | {f^5; {}^6H, {}^6F, {}^6P} \rangle }$ occurs from $\sim 2.5$ eV and higher binding energies, and accounts for the plateau seen in both experiment and theory between 2.5 and 3.5 eV.
In Figure 4 the evolution of the calculated Am spectrum with the strength of the coupling to the conduction electrons is shown. In Fig. 4(a) the coupling has been set to zero, and the spectrum shows the dominating emission between 2 and 4 eV binding energy due to ${ | {f^5; {}^6H_{5/2}} \rangle }$ and ${ | {f^5; {}^6F_{5/2}} \rangle }$ final states and their $J=7/2$ satellites. The small peaks at -5.1 and -6.2 eV are remnants of the emission to ${ | {f^5; {}^6P_{5/2}} \rangle }$ final states, highly distorted due to spin-orbit interaction in both initial and final state. In Fig. 4(b) the distortion of the spectral function due to coupling to the conduction sea is illustrated, at $V=\frac{1}{2}$, $1$ and $\frac{3}{2}$ times the value used in Figure 3. At binding energies less than 1.7 eV the triangular structure evolves due to ${ | {f^7; {}^8S_{7/2}} \rangle }$ admixture into the initial state. At the highest value of $V$ the emission broadens considerably. At the Fermi level a peak grows with increasing coupling strength, visible as a small shoulder in Figure 3, but not resolved in the experimental data.
In conclusion, a theory has been developed which allows a detailed description of the photoemission spectra of actinide compounds. The three most important energy scales are the $f$ intra-shell Coulomb interactions, the spin-orbit interaction, and the coupling to the conduction electrons, which are all incorporated in the theory. All atomic selection rules for the photoexcitation are obeyed. The key approximation is that of a dynamical mean-field calculated for a single actinide ion with a simplified interaction with conduction electrons. Compared to recent advancements in the field[@kotliar; @kotliar-am] the present approach treats the impurity atom essentially exactly, including the interaction with the conduction sea. Hence, the theory applies best to systems on the localized side of the Mott transition of the $f$ shell. In particular, for PuSe and Am metal the interaction leads to the formation of complex ground states which strongly influence the photoemission spectra, since the initial states of the photoemission process are not free-atom like. The embedding into the solid via the LDA band Hamiltonian in Eq. (\[crysgreen\]) maintains and broadens the atomic features. The DMFT self-consistency cycle has only minute influence on the calculated spectra.
This work was partially funded by the EU Research Training Network (contract:HPRN-CT-2002-00295) ’f-electron’. Support from the Danish Center for Scientific Computing is acknowledged.
T. Gouder, F. Wastin, J. Rebizant, and L. Havela, Phys. Rev. Lett. [**84**]{}, 3378 (2000).
L. Havela, F. Wastin, J. Rebizant, and T. Gouder, Phys. Rev. B [**68**]{}, 85101 (2003).
P. Wachter, Sol. State Commun. [**127**]{} 599 (2003).
J. J. Joyce, J. M. Wills, T. Durakiewicz, M. T. Butterfield, E. Guziewicz, J. L. Sarrao, L. A. Morales, A. J. Arko, and O. Eriksson, Phys. Rev. Lett. [**91**]{}, 176401 (2003).
T. Durakiewicz, C. D. Batista, J. D. Thompson, C. G. Olson, J. J. Joyce, G. H. Lander, J. E. Gubernatis, M. T. Butterfield, A. J. Arko, J. Bonca, K. Mattenberger, and O. Vogt, Phys. Rev. Lett. [**93**]{}, 267205 (2004).
T. Durakiewicz, J. J. Joyce, G. H. Lander, C. G. Olson, M. T. Butterfield, E. Guziewicz, A. J. Arko, L. Morales, J. Rebizant, K. Mattenberger, and O. Vogt, Phys. Rev. B [**70**]{}, 205103 (2004).
P. Söderlind, Adv. Phys. [**47**]{}, 959 (1998).
S. Y. Savrasov, G. Kotliar, and E. Abrahams, Nature [**410**]{}, 793 (2001).

L. Petit, A. Svane, Z. Szotek, and W. M. Temmerman, Science [**301**]{}, 498 (2003).
J. M. Wills, O. Eriksson, A. Delin, P. H. Andersson, J. J. Joyce, T. Durakiewicz, M. T. Butterfield, A. J. Arko, D. P. Moore, and L. A. Morales, J. Electr. Spectr. and Rel. Phenom., [**135**]{}, 163 (2004).
P. A. Korzhavyi, L. Vitos, D. A. Andersson, and B. Johansson, Nature Mater. [**3**]{}, 225 (2004).
D. V. Efremov, N. Hasselmann, E. Runge, P. Fulde, and G. Zwicknagl, Phys. Rev. B [**69**]{}, 115114 (2004).
I. Opahle and P. M. Oppeneer, Phys. Rev. Lett. [**90**]{}, 157001 (2003); I. Opahle, S. Elgazzar, K. Koepernik, and P. M. Oppeneer, Phys. Rev. B [**70**]{}, 104504 (2004).
A. B. Shick, V. Janiš, and P. M. Oppeneer, Phys. Rev. Lett. [**94**]{}, 16401 (2005).
S. Y. Savrasov, K. Haule, and G. Kotliar, Phys. Rev. Lett. [**96**]{}, 036404 (2006).
J. W. Allen, Y.-X. Zhang, L. H. Tjeng, L. E. Cox, M. B. Mable, and C.-T. Chen, J. Electr. Spectr. and Rel. Phenom., [**78**]{}, 57 (1996).
J. R. Naegele, L. Manes, J. C. Spirlet, and W. Müller, Phys. Rev. Lett. [**52**]{}, 1834 (1984).
M. Campagna, G. K. Wertheim and Y. Baer, in [*Photoemission in Solids II*]{}, Eds. L. Ley and M. Cardona, (Springer, Berlin, 1979), ch. 4.
A. I. Lichtenstein and M. I. Katsnelson, Phys. Rev. B [**57**]{}, 6884 (1998).
We use the tight-binding linear muffin-tin-orbital method, see O. K. Andersen, Phys. Rev. B [**12**]{}, 3060 (1975); O. K. Andersen and O. Jepsen, Phys. Rev. Lett. [**53**]{}, 2571 (1984).
O. Gunnarsson, O. K. Andersen, O. Jepsen, and J. Zaanen, Phys. Rev. B. [**39**]{}, 1708 (1989).
L. Hedin, Phys. Rev. [**139**]{}, A796 (1965).
S. Biermann, F. Aryasetiawan, and A. Georges, Phys. Rev. Lett. [**90**]{}, 86402 (2003); F. Aryasetiawan, M. Imada, A. Georges, G. Kotliar, S. Biermann, and A. I. Lichtenstein, Phys. Rev. B. [**70**]{}, 195104 (2004).
For a Pu atom in PuSe we calculate: $F^2=7.9$ eV, $F^4=5.0$ eV, $F^6=3.6$ eV, $\xi=0.28$ eV. The screened Coulomb parameter is taken as $F^0=2.7$ eV, and the chemical potential fixed at $\epsilon_f-\mu=-10.8$ eV. The coupling strength used in Fig. 2 is $V=0.12$ eV. The crystal structure is NaCl with the experimental lattice constant $a=5.79$ Å.
For the sake of presentation the approximate term values for the eigenstates of $\hat{H}_{atom}$ are used throughout, even though $L$ and $S$ are not good quantum numbers.
Experimental broadening is simulated by evaluating the spectral functions in Figs. 2, 3 and 4 at complex energies, given by $\omega=(E-E_{fermi})\cdot(1+ib)$, where $b=0.05$ in Fig. 2 and $b=0.075$ in Fig. 3, while $\omega=(E-E_{fermi})+i\delta$, with $\delta=65$ meV is used in Fig. 4.
For an Am atom in americium metal we calculate: $F^2=8.7$ eV, $F^4=5.6$ eV, $F^6=4.0$ eV, $\xi=0.34$ eV. The screened Coulomb parameter is taken as $F^0=3.0$ eV, and the chemical potential fixed at $\epsilon_f-\mu=-14.3$ eV. The coupling strength used in Fig. 3 is $V=0.33$ eV. The crystal structure was taken to be fcc with the experimental equilibrium volume.
|
[ **An alternative $SU(4) \otimes SU(2)_L \otimes SU(2)_R$ model**]{}
R. Foot
[*School of Physics*]{}
[*Research Centre for High Energy Physics*]{}
[*The University of Melbourne*]{}
[*Parkville 3052 Australia* ]{}
Abstract
A simple alternative to the usual Pati-Salam model is proposed. The model allows quarks and leptons to be unified with gauge group $SU(4) \otimes SU(2)_L \otimes SU(2)_R$ at a remarkably low scale of about 1 TeV. Neutrino masses in the model arise radiatively and are naturally light.
In the standard $SU(2)_L \otimes U(1)_Y$ model of electroweak interactions[@bp] quarks and leptons have many similarities. For example, there are three generations of quarks and three generations of leptons, with 2 quarks and 2 leptons in each generation. Furthermore both left-handed quarks and left-handed leptons transform as $SU(2)_L$ doublets while both right-handed quarks and right-handed leptons are $SU(2)_L$ singlets.
The similarity of quarks and leptons may be due to a spontaneously broken exact symmetry. This symmetry, if it exists, could be either continuous[@ps] or discrete[@fl]. In the case where it is discrete, unification of quarks and leptons can occur at very low scales of around a TeV[@fl; @flv]. If quarks and leptons are related by a continuous Pati-Salam type gauge symmetry, it is usually assumed that this symmetry is broken at a very high scale $M \ge 10^{11}\ GeV$, which means that the idea cannot be tested directly in any conceivable experiment. However, if there is no left-right symmetry, then it is nevertheless still possible that the standard model is a remnant of a gauge model with Pati-Salam gauge symmetry broken at a relatively low scale of $1000\ TeV$ or even less[@km; @vw; @volkas]. The purpose of this letter is to point out that there exists an interesting alternative Pati-Salam type model which can be broken at a much lower scale of the order of a TeV.
The gauge symmetry of the model is $$SU(4) \otimes SU(2)_L \otimes SU(2)_R.
\label{1}$$ Under this gauge symmetry the fermions of each generation transform in the anomaly free representations: $$Q_L \sim (4,2,1),\ Q_R \sim (4, 1, 2), \ f_L \sim (1,2,2).
\label{2}$$ The minimal choice of scalar multiplets which can both break the gauge symmetry correctly and give all of the charged fermions mass is $$\chi_L \sim (4, 2, 1), \ \chi_R \sim (4, 1, 2),\ \phi \sim (1,2,2).
\label{3}$$ These scalars couple to the fermions as follows: $${\cal L} = \lambda_1 \bar Q_L (f_L)^c \tau_2 \chi_R
+ \lambda_2 \bar Q_R f_L \tau_2 \chi_L
+ \lambda_3 \bar Q_L \phi \tau_2 Q_R +
\lambda_4 \bar Q_L \phi^c \tau_2 Q_R
+ H.c.,
\label{4}$$ where the generation index has been suppressed and $\phi^c = \tau_2
\phi^* \tau_2$. Under the $SU(3)_c \otimes U(1)_T$ subgroup of $SU(4)$, the $4$ representation has the branching rule, $4 = 3(1/3) + 1(-1)$. We will assume that the $T=-1, I_{3R} = -1/2 \ (I_{3L}=1/2)$ components of $\chi_{R} (\chi_L)$ gain non-zero Vacuum Expectation Values (VEVs) as well as the $I_{3L} = I_{3R} = -1/2$ and $I_{3L} = I_{3R} = 1/2$ components of the $\phi$. We denote these VEVs by $w_{R,L}, u_{1,2}$ respectively. In other words, $$\begin{aligned}
\langle \chi_R (T = -1, I_{3R} = -1/2) \rangle = w_R, \
\langle \chi_L (T = -1, I_{3L} = 1/2) \rangle = w_L,
\nonumber \\
\langle \phi (I_{3L} = I_{3R} = -1/2)\rangle = u_1,\
\langle \phi (I_{3L} = I_{3R} = 1/2)\rangle = u_2.\end{aligned}$$ We will assume that the VEVs satisfy $w_R > u_{1,2}, w_L$ so that the symmetry is broken as follows: $$\begin{aligned}
&SU(4)\otimes SU(2)_L \otimes SU(2)_R&
\nonumber \\
&\downarrow \langle \chi_R \rangle&
\nonumber \\
&SU(3)_c \otimes SU(2)_L \otimes U(1)_Y &
\nonumber \\
&\downarrow \langle \phi \rangle, \langle \chi_L \rangle
\nonumber \\
&SU(3)_c \otimes U(1)_Q&\end{aligned}$$ where $Y = T -2I_{3R}$ is the linear combination of $T$ and $I_{3R}$ which annihilates $\langle \chi_R \rangle$ (i.e. $Y\langle \chi_R \rangle = 0$). Observe that in the limit where $w_R \gg w_L, u_1, u_2$, the model reduces to the standard model. The VEV $w_R$ breaks the gauge symmetry to the standard model subgroup. This VEV also gives large (electroweak invariant) masses to an $SU(2)_L$ doublet of exotic fermions, which have electric charges $-1, 0$. We will denote these exotic fermions with the notation $E^-, E^0$. These exotic fermions must have masses greater than $M_Z/2$ otherwise they would contribute to the $Z$ width. Observe that the right-handed chiral components of the usual charged leptons are contained in $Q_R$. They are the $T=-1, I_{3R} = 1/2$ components. The usual left-handed leptons are contained in $f_L$ along with the right-handed components (CP conjugated) of $E^0, E^-$. It is instructive to write out the fermion multiplets explicitly. For the first generation, $$\begin{aligned}
Q_L = \left(\begin{array}{cc}
d_y & u_y \\
d_g & u_g \\
d_b & u_b \\
E^- & E^0
\end{array}
\right)_L,\
Q_R = \left(\begin{array}{cc}
u_y & d_y \\
u_g & d_g \\
u_b & d_b \\
\nu & e
\end{array}
\right)_R, \
f_L = \left(\begin{array}{cc}
(E^-_R)^c & \nu_L \\
(E^0_R)^c & e_L
\end{array}
\right),\end{aligned}$$ and similarly for the second and third generations. In the above matrices the first column of $Q_L$ $(f_L,\ Q_R)$ is the $I_{3L} (I_{3R}) = -1/2$ component while the second column is the $I_{3L} (I_{3R}) = 1/2$ component. The four rows of $Q_L, Q_R$ are the four colours and the rows of $f_L$ are the $I_{3L} = \pm 1/2$ components. Observe that the VEVs $w_L, u_{1,2}$ have the quantum numbers $I_{3L} = -1/2, Y = 1$ (or equivalently $I_{3L} = 1/2, Y = -1$). This means that the standard model subgroup, $SU(3)_c \otimes SU(2)_L \otimes U(1)_Y$ is broken to $SU(3)_c \otimes U(1)_Q$ in the usual way (with $Q = I_{3L} + Y/2$).
Having established that the model is a phenomenologically viable extension to the standard model, we now comment on various features of the model.
Observe that the model has the rather unusual feature that the scalar multiplets required to break the gauge symmetry and give the fermions masses have precisely the same quantum numbers as the fermion multiplets of a generation \[compare Eq.(\[2\]) and Eq.(\[3\])\].
In the model the ordinary neutrinos are naturally light. The neutrino masses vanish at tree-level given the particle content of the theory. The model does however have a light singlet neutrino, $\nu_R$. This electroweak singlet occupies the $T =-1, I_{3R} = -1/2$ component of the $Q_R$ multiplet. Note however that with the minimal Higgs content, this field cannot couple to the ordinary left-handed neutrinos (at tree level). Observe that Majorana neutrino masses arise from the $W_{L,R}$ gauge interactions at the one loop level. Assuming diagonal couplings and examining $\nu_e$ for definiteness, the Feynman diagram for the $\nu_e$ mass is shown in Figure 1. Calculating this finite 1-loop diagram we find that $$m_{\nu} \simeq {2g_R g_L \over (4\pi)^2} \left[
{g_R g_L u_1 u_2 \over M^2_{W_R}}\right]
\left[ {m_e m_d M_E \over M_E^2 - M_{W_L}^2}\right]
\log\left({M_E^2 \over M_{W_L}^2}\right),$$ where $g_{L,R}$ are the $SU(2)_{L,R}$ gauge coupling constants and we have assumed that $M_E^2 \ll M_{W_R}^2$. Clearly the neutrino masses are naturally light given that $M_{W_R},
M_E \gg m_e, m_d$.
The gauge interactions of the model conserve an unbroken baryon number symmetry. This baryon charge is defined as $B = B' + T$ where the $B'$ charges of $Q_L, Q_R, \chi_{L,R}$ are $1$ and the $B'$ charges of $f_L, \phi$ are $0$. The existence of the baryon number symmetry implies that protons and neutrons are absolutely stable in the model.
Because the right-handed charged leptons belong to the same multiplet as the right-handed quarks there will be gauge interactions of the form $${\cal L} = {g_s \over \sqrt{2}}\bar D_R^i
W'_{\mu}\gamma^{\mu} K'^{ij} l_R^j + H.c.,$$ where $i,j = 1,...,3$ are family indices, that is $D_R^1 = d_R,
D_R^2 = s_R, D_R^3 = b_R$, $l_R^1 = e_R, l_R^2 = \mu_R,
l_R^3 = \tau_R$ and $W'_{\mu}$ are gauge bosons with electric charge $2/3$ (which gain masses from $\chi_R$ at the first step of symmetry breaking). The matrix $K'$ is a Cabibbo-Kobayashi-Maskawa type matrix. The most stringent bound on the symmetry breaking scale $w_R$ is expected to arise from $K_L \to \mu^{\pm} e^{\mp}$ decays. The decay $\bar K^0 \to \mu^- e^+$ arises from a Feynman diagram with a $t$-channel exchange of a $W'$ gauge boson. This diagram corresponds (after a Fierz rearrangement) to the effective four fermion Lagrangian density, $${\cal L}_{eff} = {G_X \over \sqrt{2}}
\bar d\gamma_{\mu} (1 + \gamma_5)s
\bar \mu \gamma^{\mu}(1 + \gamma_5) e
+ H.c.,$$ where $G_X = \sqrt{2}g_s^2(M_{W'})/8M^2_{W'}$. Using this effective Lagrangian it is straightforward to calculate the decay rate. We find, $$\Gamma (\bar K^0 \to \mu^- e^+) \simeq {G_X^2
f_K^2 \over 8\pi} M_K m_{\mu}^2,
\label{ii}$$ where $f_K$ is the $K$ meson decay constant, $M_K$ is the $K$ meson mass and we have assumed that the mixing matrix $K'_{ij}$ is approximately diagonal. Evaluating the above equation we find that $$Br(\bar K^0 \to \mu^- e^+) \simeq
10^{-2} \left( {TeV \over M_{W'}}\right)^4.$$ The current experimental bound, $Br(\bar K^0 \to \mu^{\pm} e^{\mp}) <
3.3 \times 10^{-11}$[@pdg], implies the limit $$M_{W'} \stackrel{>}{\sim} 140 \ TeV,
\label{100TeV}$$ assuming that the mixing matrix $K'^{ij}$ is approximately diagonal. This bound is the most stringent bound on the model in the case where $K'^{ij}$ is diagonal. However, in the model there is no relationship between the charged lepton mass matrix and the quark mass matrices. Indeed, in the model they are proportional to the VEVs of different scalar multiplets and have independent Yukawa couplings. One consequence of this is that the mixing matrix $K'^{ij}$ connecting the right-handed quarks with the right-handed leptons is theoretically unconstrained (except, of course, for the unitarity requirement). For example, the $W'$ could couple $s_R$ predominantly with $\tau_R$ (this possibility was discussed in the context of the usual Pati-Salam model in Ref.[@vw]). If this is the case then $K_L$ decays do not give any stringent bounds on the model. In order to explore this scenario further, we will assume for definiteness that $W'$ couples $s_R$ with $\tau_R$, $d_R$ with $e_R$ and $b_R$ with $\mu_R$. This corresponds to a $K'^{ij}$ matrix of the form $$\begin{aligned}
K'= \left(
\begin{array}{ccc}
1&0&0 \\
0&0&1 \\
0&1&0
\end{array}
\right).
\label{zz}\end{aligned}$$ Of course it would not seem natural for the zero elements of this mixing matrix to be exactly zero. However, we will assume that they are zero to illustrate a point. In the case of the usual Pati-Salam model with the ansatz Eq.(\[zz\]), the most stringent bound on $M_{W'}$ comes from the $W'$ contribution to the decay $\pi^+ \to e^+ \nu_L$[@vw]. This process leads to a quite stringent bound of $M_{W'} \stackrel{>}{\sim} 250 \ TeV$[@vw] for that model. This bound arises by calculating the interference term between the amplitudes arising from the standard model contribution and the Pati-Salam contribution. However, the decay $\pi^+ \to e^+ \nu$ does not provide a stringent constraint for the alternative Pati-Salam model. There are two reasons for this. First, the $W'$ mediates the decay $\pi^+ \to e^+ \nu_R$ (rather than $\pi^+ \to e^+ \nu_L$). Because the final state is distinct from the standard model process $\pi^+ \to e^+ \nu_L$, there will obviously be no interference term between the amplitudes of the two processes. Second, the $W'$ of the alternative model only couples to right-handed quarks and leptons. This means that the decay $\pi^+ \to e^+ \nu_R$ is helicity suppressed by a factor $m_e^2/m^2_{\pi}$ (which is also the case for the standard model contribution). In fact, $${\Gamma (\pi^+ \to e^+ \nu_R) \over
\Gamma(\pi^+ \to e^+ \nu_L)} \simeq {G_X^2 \over G_F^2},$$ where $G_F$ is the usual Fermi constant. The above contribution to $\pi^+ \to
e^+ \nu$ decay leads to a violation of lepton universality and implies a small modification to the ratio $R$ where $R \equiv \Gamma (\pi^+ \to e^+ \nu)/\Gamma (\pi^+ \to
\mu^+ \nu)$. Using $\alpha_s (M_{W'}) \sim 1/10$, we find, $${\delta R \over R}
\simeq 4 \times 10^{-4}\left( {TeV \over M_{W'}}\right)^4.
\label{ew}$$ The theoretical prediction for $R$ agrees within errors to the experimental measurement and thus Eq.(\[ew\]) can be compared to the experimental error $\delta R/R \sim 0.003$[@pdg]. Clearly then, $\pi^+$ decay does not lead to any significant bound for the model.
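To see this quantitatively (an illustrative back-of-the-envelope step on our part, taking the quoted experimental precision $\delta R/R \sim 0.003$ at face value), Eq.(\[ew\]) only requires $$\left({TeV \over M_{W'}}\right)^4 \stackrel{<}{\sim} {3\times 10^{-3} \over 4\times 10^{-4}} \simeq 7.5, \qquad {\rm i.e.}\quad M_{W'} \stackrel{>}{\sim} 0.6\ TeV,$$ which is far weaker than the kaon bound of Eq.(\[100TeV\]).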
A more stringent bound on the model \[assuming the ansatz Eq.(\[zz\])\] arises from the $W'$ mediated rare $B_d^0$ decay, $\bar B_d^0 \to \mu^- e^+$. In the case of the usual Pati-Salam model with the ansatz Eq.(\[zz\]), the bound $M_{W'} \stackrel{>}{\sim} 16 \ TeV$ was derived in Ref.[@vw]. However, in the present model the bound arising from this process is much less stringent. The main difference is that the $W'$ of the usual Pati-Salam model couples vectorially, whereas the $W'$ of the alternative Pati-Salam model couples only to right-handed quarks and leptons, so that $\bar B_d^0$ decays are helicity suppressed by a factor $\sim m^2_{\mu}/M^2_B$. In fact the width for the decay $\bar B_d^0 \to \mu^- e^+$ \[assuming the ansatz Eq.(\[zz\])\] is given by Eq.(\[ii\]) with the replacement $K \to B$. Evaluating the resulting equation we find that $$Br(\bar B_d^0 \to \mu^- e^+) \simeq
3\times 10^{-6} \left( {TeV \over M_{W'}}\right)^4.$$ The current experimental bound, $Br(\bar B_d^0 \to \mu^{\pm} e^{\mp}) <
6 \times 10^{-6}$[@pdg], implies the limit $$M_{W'} \stackrel{>}{\sim} 800 \ GeV.
\label{1TeV}$$ If the $W'$ gauge boson is light, then how large can the zero elements of $K'_{ij}$ be? The most stringently constrained element is the $K'_{s\mu}$ entry \[which is the $K'_{22}$ element of Eq.(\[zz\])\]. This entry is constrained to be $K'_{s\mu} \stackrel{<}{\sim} 10^{-4}$ if $M_{W'}\simeq 1\ TeV$ given the experimental bound $Br(K_L \to \mu^{\pm}e^{\mp})
< 3.3 \times 10^{-11}$[@pdg].
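The scaling behind these kaon limits is simple; we sketch it here for orientation, assuming the decay rate scales as $|K'_{s\mu}|^2$ with $K'_{de}\simeq 1$. The branching ratio estimate above, confronted with the experimental bound, requires $$10^{-2}\,|K'_{s\mu}|^2 \left({TeV \over M_{W'}}\right)^4 \stackrel{<}{\sim} 3.3\times 10^{-11}.$$ For $K'_{s\mu}=1$ this reproduces $M_{W'} \stackrel{>}{\sim} \left(10^{-2}/3.3\times 10^{-11}\right)^{1/4}\ TeV \approx 130\ TeV$, consistent with Eq.(\[100TeV\]); conversely, for $M_{W'} \simeq 1\ TeV$ it requires $|K'_{s\mu}| \stackrel{<}{\sim} 6\times 10^{-5}$, at the $10^{-4}$ level quoted above.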
Note that so-called vector lepto-quarks with properties similar to the $W'$ gauge bosons have been studied[@ab]. However, it is usually assumed that vector lepto-quarks must couple to only one generation if they are to be light enough to be seen in collider experiments. One result of this paper is that it is possible to have light vector lepto-quarks coupling chirally to all three generations of quarks and leptons. Furthermore, the model provides a concrete renormalizable framework where vector lepto-quarks with chiral couplings are gauge fields and may thus be fundamental particles.
In addition to the exotic $W'$ gauge bosons, the model contains $W_R^{\pm}$ and $Z'$ gauge bosons. The exotic gauge boson mass matrix arises from the Lagrangian density terms (in the limit where $w_R \gg w_L, u_{1,2}$) $${\cal L} = \left(D_{\mu} \langle \chi_R \rangle\right)^{\dagger}
D^{\mu} \langle \chi_R \rangle,
\label{ee}$$ where the covariant derivative is given by $$D_{\mu} = \partial_{\mu} + ig_s G_{\mu}^a \Lambda_a +
ig_L W_{L\mu}^i \tau^i_L/2 + ig_R W_{R\mu}^i \tau^i_R/2,$$ where $a = 1,...,15, i=1,...,3$ and $G_{\mu}^a$, $W_{L\mu}$, $W_{R\mu}$ ($\Lambda_a$, $\tau^i_L/2$, $\tau^i_R/2$) are the $SU(4)$, $SU(2)_L$, $SU(2)_R$ gauge bosons (generators) respectively. By examining the exotic gauge boson mass matrix Eq.(\[ee\]), it is possible to obtain the usual weak mixing angle, $\sin\theta_w \equiv e/g_L$, as a function of the couplings $g_s, g_L, g_R$. We find that $$\sin^2 \theta_w (M_{W'}) =
{g_s^2 (M_{W'}) g_R^2 (M_{W'}) \over
g_s^2 (M_{W'}) g_R^2 (M_{W'}) +
g_s^2 (M_{W'}) g_L^2 (M_{W'}) +
{2 \over 3}g_L^2 (M_{W'}) g_R^2 (M_{W'})}.
\label{jj}$$ Assuming that $\sin^2 \theta_w (M_{W'}) \simeq 1/4$ which is appropriate for $M_{W'} \sim 1 \ TeV$, Eq.(\[jj\]) implies that $g_R (M_{W'}) \simeq
g_L(M_{W'})/\sqrt{3}$. Also it is easy to show that $M_{W'} \simeq \sqrt{2/3}M_{Z'} \simeq (g_s/g_R)M_{W_R}$. Furthermore the $Z'$ gauge boson couples to fermions via the interaction Lagrangian density, $${\cal L} = -g_s Z'_{\mu} J^{\mu},$$ where the current is given by $$J^{\mu} = \bar \psi Q' \gamma^{\mu} \psi.$$ In the above equation, the summation of fermion fields is implied and the generator $Q'$ is given approximately by $$Q'\simeq \sqrt{{3 \over 8}}\left( T +
{4\over 9}{g_L^2 \over g_s^2}I_{3R} \right),$$ where we have again assumed that $\sin^2 \theta_w (M_{W'}) \simeq 1/4$. From the contributions of the $Z', W_R^{\pm}$ to low energy experiments, a limit of $M_{Z'}, M_{W_R} \stackrel{>}{\sim}
0.5 - 1 \ TeV$ is expected[@rizzo]. Thus, we argue that the model is quite weakly constrained given that the exotic symmetry breaking scale can be as low as a TeV or so.
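For completeness, the relation $g_R(M_{W'}) \simeq g_L(M_{W'})/\sqrt{3}$ quoted above follows from Eq.(\[jj\]) by elementary algebra: setting $\sin^2\theta_w(M_{W'}) = 1/4$ and solving for $g_R^2$ gives $$g_R^2 = {g_L^2 \over 3 - {2\over 3}\,g_L^2/g_s^2}.$$ With $\alpha_s(M_{W'}) \sim 1/10$ as used earlier and, purely for illustration, $\alpha(M_{W'}) \approx 1/128$, one has $g_L^2/g_s^2 = \alpha/(\alpha_s \sin^2\theta_w) \approx 0.3$, so that $g_R^2 \approx g_L^2/2.8 \simeq g_L^2/3$.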
The model contains scalar lepto-quarks which could be relatively light (e.g. a few hundred GeV). In particular, $\chi_L$ contains $SU(3)_c$ triplet, $SU(2)_L$ doublet scalars coupling the left-handed leptons with the right-handed $d$-type quarks. From Eq.(\[4\]), we can deduce that $${\cal L}_{\chi} = \lambda_2 \bar d_R L_L \chi + H.c.,
\label{y}$$ where $\chi$ is the colour triplet component of $\chi_L$, and $L_L = (\nu_L, \ e_L)^T$. Note that $\chi$-type lepto-quarks coupling to $d_R$ with masses of around $200$ GeV have been put forward as a possible explanation of the excess high $Q^2$ HERA events[@hr]. (However, the HERA anomaly is only a 2-3 sigma excess and may disappear when more data is taken.)
Observe that the Yukawa Lagrangian density, Eq.(\[4\]) implies that the right-handed charged leptons will mix slightly with the $E^-$ exotic fermions. For one generation, the mixing has the form, $$\begin{aligned}
{\cal L}_{mass} =
(\bar e_L \bar E_L)\left(
\begin{array}{cc}
\lambda_2 w_L & 0 \\
\lambda_3 u_1 + \lambda_4 u_2 & \lambda_1 w_R
\end{array}
\right)
\left( \begin{array}{c}
e_R \\
E_R
\end{array} \right) + H.c.\end{aligned}$$ Note however that this mixing is expected to be small because $u_{1,2} \ll w_R$ \[given the bound Eq.(\[1TeV\])\]. Because the exotic $E^-$ fermions do not have canonical $Z$ couplings, the mixing will induce small flavour changing neutral current (FCNC) couplings in the general case of three generation mixing. The mixing will be constrained by processes such as $\mu^- \to e^+ e^- e^-$.
Finally, I would like to comment briefly on a cosmological issue. Within the context of the standard big bang model, the phase transition at the temperature scale $T \sim w_R$ will generate monopoles. Monopoles occur because a semi-simple gauge theory is broken down to a gauge symmetry with a $U(1)$ factor. Monopoles in the early Universe can be a problem if they are too abundant. This issue has been examined in Ref.[@volkas] for the case of the usual Pati-Salam model broken at a low scale. It was concluded that there is no problem if the symmetry breaking scale is low, which is the case that is being considered in the present paper.
In conclusion, an alternative Pati-Salam type gauge model has been proposed. The model allows quarks and leptons to be unified with gauge group $SU(4) \otimes SU(2)_L \otimes SU(2)_R$ at a relatively low scale. We argue that present data does not constrain this model very stringently (c.f. the usual Pati-Salam model). As a consequence the exotic gauge boson masses (and thus the symmetry breaking scale) can be as low as about $1$ TeV. Neutrino masses arise radiatively in the model and are naturally light.
0.8cm [**Acknowledgements**]{} 0.4cm The author would like to thank Ray Volkas for the usual discussions and for pointing out some relevant papers. He would also like to thank X-G. He and J. P. Ma for a discussion and J. Bowes for help with the Figure. The author is supported by an Australian Research Fellowship.
[99]{} For a review, see e.g. V. Barger and R. J. N. Phillips, Collider Physics, (Addison-Wesley, 1987).
J. Pati and A. Salam, Phys. Rev. D10, 275 (1974).
R. Foot and H. Lew, Phys. Rev. D41, 3502 (1990).
R. Foot, H. Lew and R. R. Volkas, Phys. Rev. D44, 1531 (1991).
A. Kuznetsov and M. Mikheev, Phys. Lett. B329, 295 (1994).
G. Valencia and S. Willenbrock, Phys. Rev. D50, 6843 (1994).
R. R. Volkas, Phys. Rev. D53, 2681 (1996).
Particle data group, Phys. Rev. D54, 1 (1996).
See for example, J. L. Hewett et al., Argonne Accel. Phys. 539 (1993); S. Davidson, D. Bailey and B. Campbell, Z. Phys. C61, 613 (1994); M. Leurer, Phys. Rev. D50, 536 (1994).
For a review, see for example, T. G. Rizzo, hep-ph/9501261 (1995).
See for example, J. L. Hewett and T. G. Rizzo, hep-ph/9703337, and references therein.
[**Figure Caption**]{} 0.5cm Figure 1: 1-loop Feynman diagram which leads to small electron neutrino Majorana mass. There will be similar diagrams for the other neutrinos. (The $W_L W_R$ mixing mass squared is obtained from ${\cal L} = (D_{\mu} \langle \phi \rangle )^{\dagger}
D^{\mu} \langle \phi \rangle$ and is given by $\mu^2 = g_R g_L u_1 u_2$).
---
abstract: '[In population studies on the etiology of disease, one goal is the estimation of the fraction of cases attributable to each of several causes. For example, pneumonia is a clinical diagnosis of lung infection that may be caused by viral, bacterial, fungal, or other pathogens. The study of pneumonia etiology is challenging because directly sampling from the lung to identify the etiologic pathogen is not standard clinical practice in most settings. Instead, measurements from multiple peripheral specimens are made. This paper introduces the statistical methodology designed for estimating the *population etiology distribution* and the *individual etiology probabilities* in the Pneumonia Etiology Research for Child Health (PERCH) study of $9,500$ children for $7$ sites around the world. We formulate the scientific problem in statistical terms as estimating the mixing weights and latent class indicators under a partially-latent class model (pLCM) that combines heterogeneous measurements with different error rates obtained from a case-control study. We introduce the pLCM as an extension of the latent class model. We also introduce graphical displays of the population data and inferred latent-class frequencies. The methods are tested with simulated data, and then applied to PERCH data. The paper closes with a brief description of extensions of the pLCM to the regression setting and to the case where conditional independence among the measures is relaxed.]{}'
address:
- |
Department of Biostatistics, Johns Hopkins University, Baltimore, MD 21205, USA\
Email: zhwu@jhu.edu
- 'Department of International Health, Johns Hopkins University, Baltimore, MD 21205, USA'
- 'Department of International Health, Johns Hopkins University, Baltimore, MD 21205, USA'
- 'Department of Biostatistics, Johns Hopkins University, Baltimore, MD 21205, USA'
author:
- Zhenke Wu
- 'Maria Deloria-Knoll'
- 'Laura L. Hammitt'
- 'Scott L. Zeger'
- for the PERCH Core Team
bibliography:
- 'refs.bib'
title: |
Partially-Latent Class Models (pLCM) for Case-Control\
Studies of Childhood Pneumonia Etiology
---
Introduction
============
Identifying the pathogens responsible for infectious diseases in a population poses significant statistical challenges. Consider the measurement problem in the Pneumonia Etiology Research for Child Health (PERCH), a case-control study that has enrolled $9,500$ children from 7 sites around the world. Pneumonia is a clinical syndrome that develops because of an infection of the lung tissue by bacteria, viruses, mycobacteria or fungi [@Levine2012]. The appropriate treatment and public health control measures vary by pathogen. Which pathogen is infecting the lung usually cannot be directly observed and must therefore be inferred from multiple peripheral measurements with differing error rates. The primary goals of the PERCH study are to integrate the multiple sources of data to: (1) attribute a particular case’s lung infection to a pathogen, and (2) estimate the prevalences of the etiologic pathogens in a population of children that met clinical pneumonia definitions.
The basic statistical framework of the problem is pictured in Figure \[fig:basicstructure\]. The disease status is determined by clinical examination including chest X-ray [@Deloria2012]. The known pneumonia status (case-control) is directly caused by the presence or absence of a pathogen-caused infection in the lung. For controls, the lung is known to be sterile and has no infection. For a child clinically diagnosed with pneumonia, the pathogen causing the infection in the child’s lung is the scientific target of interest. Among the candidate pathogens being tested, we assume only one is the primary cause. Extensions to multiple pathogens are straightforward. Because, for most cases, it is not possible to directly sample the lung, we do not know with certainty which pathogen infected the lung, so we seek to infer the infection status based upon a series of laboratory measurements of specimens from various body fluids and body sources.
PERCH was originally designed with three sources of measurements relevant to the lung infection: directly from the lung by lung aspirate; from blood culture; and from the nasopharyngeal cavity (by swab). Therefore, our model was designed to accommodate all three sources. As the study progressed, less than $1\%$ of cases had direct lung measurements and this sampled group was unrepresentative of all cases. The model and software here include all three sources of measurements for application to other etiology studies, but the analysis of the motivating PERCH data below uses only blood culture and nasopharyngeal swab data.
The measurement error rates differ by type of measurement. Here, an error rate or *epidemiologic* error rate is the probability of the pathogen’s presence/absence in a specimen test given presence/absence of infection in the lung. For this application, it is convenient to categorize measures into three subgroups referred to as “gold", “silver", and “bronze" standard measurements. A gold-standard (GS) measurement is assumed to have both perfect sensitivity and specificity. Lung aspirate data would have been gold-standard. A silver-standard (SS) measurement is assumed to have perfect specificity, but imperfect sensitivity. Culturing bacteria from blood samples (B-cX) is an example of silver-standard measurements in PERCH. Finally, bronze-standard (BrS) measurements are assumed to have imperfect sensitivity and specificity. Polymerase chain reaction (PCR) evaluation of bacteria and viruses from nasopharyngeal samples is an example.
In the PERCH study, both SS and BrS measurements are available for all cases. BrS, but not SS measures are available for controls. Our goal was to develop a statistical model that combines GS and SS measurements from cases, with BrS data from cases and controls to estimate the distribution of pathogens in the population of pneumonia cases, and the conditional probability that each of the $J$ pathogens is the primary cause of an individual child’s pneumonia given her or his set of measurements. Even in applications where GS data is not available, a flexible modeling framework that can accommodate GS data is useful for both the evaluation of statistical information from BrS data (Section \[sec:simulation\]) and the incorporation of GS data if it becomes available as measurement technology improves.
Latent class models (LCM) [@Goodman1974] have been successfully used to integrate multiple diagnostic tests or raters’ assessments to estimate a binary latent status for all study subjects [@hui1980estimating; @Qu1998; @albert2001latent; @albert2008estimating]. In the LCM framework, conditional distributions of measurements given latent status are specified. Then the marginal likelihood of the multivariate measurements are maximized as a function of the disease prevalence, sensitivities and specificities. This framework has also been extended to infer ordinal latent status [@wang2011evaluation].
There are three salient features of the PERCH childhood pneumonia problem that require extension of the typical LCM approach. First, we have [*partial*]{} knowledge of the latent lung state for some subjects as a result of the case-control design. In the standard LCM approach, the study population comprises subjects with completely unknown class membership. In this study, controls are known to have no pathogen infecting the lung. Also, were gold standard measurements available from the lung for some cases, their latent variable would be directly observed. As the latent state is known for a non-trivial subset of the study population, we refer to this model as a partially-Latent Class Model or pLCM.
Second, in most LCM applications, the number of observed measurements on a subject is much larger than the number of latent state categories. Here, the number of observations is of the same order as the number of categories that the latent status can assume. For example, if we consider only the PERCH study BrS data, we simultaneously observe the presence/absence of each member from a list of possible pathogens for each child. Even with additional control data, this large number of latent categories leads to weak model identifiability, as is discussed in more detail in Section \[sec:identifiability\].
Lastly, measurements with differing error rates (i.e. GS, SS, BrS) need to be integrated. Note that the modeling framework introduced here is general and can be applied to studies where multiple BrS measurements are available, each with a different set of error rates. Understanding the relative value of each level of measurements is important to optimally invest resources into data collection (number of subjects, type of samples) and laboratory assays. An important goal is therefore to estimate the relative information from each type of measurements about the population and individual etiology distributions.
[@albert2008estimating] studied a model where some subjects are selected to verify their latent status (i.e. collect GS measurements) with the probability of verification either depending on the previous test results or being completely at random. They showed GS data can make model estimates more robust to model misspecifications. We further quantify how much GS data reduces the variance of model parameter estimates for design purposes. Also, they considered binary latent status and did not have available control data. Another related literature that uses both GS and BrS data is on verbal autopsy (VA) in the setting where no complete vital registry system is established in the community [@King2008]. Quite similar to the goal of inferring pneumonia etiology from lab measurements, the VA goal is to infer the cause of death from a pre-specified list by asking close family members questions about the presence/absence of several symptoms. [@King2008] proposed estimating the cause-of-death distribution in a community using data on dichotomous symptoms and GS data from the hospital where cause-of-death and symptoms are both recorded. However, their method is nonparametric and requires a sizable sample of GS data, especially when the number of symptoms is large. In addition, a key difference between VA and most infectious disease etiology studies is that the VA studies are by definition case-only.
Another approach previously used with case and control data is to perform logistic regression of case status on laboratory measurements and then to calculate point estimates of population attributable risks for each pathogen [@Bruzzi1985; @Blackwelder2012]. This method does not account for imperfect laboratory measurements and cannot use GS or SS data if available. Also, the population attributable fraction method assigns zero etiology for the subset of pathogens that have estimated odds ratios smaller than $1$, without taking account of the statistical uncertainty for the odds ratio estimates.
In this paper, we define and apply a partially-latent class model (pLCM) to incorporate these three features: known infection status for controls; a large number of latent classes; and multiple types of measurements. We use a hierarchical Bayesian formulation to estimate: (1) the [*population etiology distribution*]{} or [*etiology fraction*]{} —the frequency with which each pathogen “causes" clinical pneumonia in the case population; and (2) the [*individual etiology probabilities*]{}—the probabilities that a case is “caused" by each of the candidate pathogens, given observed specimen measurements for that individual.
In Section \[sec:results\], to facilitate communication with scientists, we introduce graphical displays that put data, model assumptions, and results together. They enable the scientific investigators to better understand the various sources of evidence from data and their contribution to the final etiology estimates. The remainder of this paper proceeds as follows. In Section \[sec:models\], we formulate the pLCM and the Gibbs sampling algorithms for implementation. In Section \[sec:simulation\], we evaluate our method through simulations tailored for the childhood pneumonia etiology study. Section \[sec:results\] presents the analysis of PERCH data. Lastly, Section \[sec:discussion\] concludes with a discussion of results and limitations, a few natural extensions of the pLCM also motivated by the PERCH data, as well as future directions of research.
A partially-latent class model for multiple indirect measurements {#sec:models}
=================================================================
We develop pLCM to address two characteristics of the motivating pneumonia problem: (1) a partially-latent state variable because the pathogen infection status is known for controls but not cases; and (2) multiple categories of measurements with different error rates across classes. As shown in Figure \[fig:basicstructure\], let $I^L_i$, taking values in $\{0,1,2,...J\}$, represent the true state of child $i$’s lung ($i=1,...,N$) where $0$ represents no infection (control) and $I^L_i=j$, $ j=1,...,J$, represents the $j$th pathogen from a pre-specified cause-of-pneumonia list that is assumed to be exhaustive. $I^L_i$ is the scientific target of inference for individual diagnosis. Let $\bm{M}^S_i$ represent the $J\times 1$ vector of binary indicators of the presence/absence of each pathogen in the measurement at site $S$, where, in our childhood pneumonia etiology study $S$ can be nasopharyngeal (NP), blood (B), or lung (L). Let $\bm{m}_i^{S}$ be the actual observed values. In the following, we replace $S$ with BrS, SS, or GS, because they correspond to the measurement types at NP, B, and L, respectively.
Let $Y_i=y_i\in \{0,1\}$ represent the indicator of whether child $i$ is a healthy control or a clinically diagnosed case. Note $I^L_i=0$ given $Y_i=0$. To formalize the pLCM, we define three sets of parameters:
- $\bm{\pi}=(\pi_1,...,\pi_J)'$ , the vector of compositional probabilities for each of $J$ pathogen causes, that is, $\text{Pr}(I_i^L =j \mid Y_i=1, \bm{\pi}), j= 1, ..., J$;
- $\psi^S_j =\text{Pr}(M_{ij}^S =1|I_i^L=0)$, the false positive rate (FPR) for measurement $j$ ($j=1,...,J$) at site $S$. Note that the FPRs $\{\psi^S_j\}_{j=1}^J$ can be estimated from the control data at site $S$, because $I_i^L=0$ denotes that the $i$th subject has no infection in the lung, i.e. a control;
- $\theta^S_j = \text{Pr}(M_{ij}^S =1|I_i^L=j)$, the true positive rate (TPR) for measurement $j$ at site $S$ for a person whose lung is infected by pathogen $j$, $j=1,...J$.
We further let $\bm{\psi}^S=(\psi_1^S,...,\psi_J^S)'$ and $\bm{\theta}^S=(\theta_1^S,...,\theta_J^S)'$. Using these definitions, we have FPR $\psi^{{\ensuremath{\mbox{\scriptsize \sf GS } }}}_j=0$ and TPR $\theta^{{\ensuremath{\mbox{\scriptsize \sf GS } }}}_j=1$ for GS measurements, so that $M^{{\ensuremath{\mbox{\scriptsize \sf GS } }}}_{ij}=1$ if and only if $I_i^L=j$, $j = 1, ..., J$ (perfect sensitivity and specificity). For SS measurements, FPR $\psi_j^{{\ensuremath{\mbox{\scriptsize \sf SS } }}}=0$ so that $M^{{\ensuremath{\mbox{\scriptsize \sf SS } }}}_j=0$ if $I_i^L\neq j$ (perfect specificity).
We formalize the model likelihood for each type of measurement. We first describe the model for BrS measurement $\bm{M}^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}$ for a control or a case. For control $i$, positive detection of the $j$th pathogen is a false positive representation of the non-infected lung. Therefore, we assume $M^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}_{ij} \mid \bm{\psi}^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}} \sim \text{Bernoulli}(\psi_j^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}), j=1,...,J$, with conditional independence, or equivalently, $$\begin{aligned}
P^{0,{\ensuremath{\mbox{\scriptsize \sf BrS } }}}_{i} =
\text{Pr}(\bm{M}^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}_i= \bm{m}\mid \bm{\psi}^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}})& = & \prod_{j=1}^J\left(\psi_j^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}\right)^{m_j}\left(1-\psi_j^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}\right)^{1-m_j},
\label{eq:BrS_lkd_ctrl}\end{aligned}$$ where $\bm{m}=\bm{m}_i^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}$. For a case infected by pathogen $j$, the positive detection rate for the $j$th pathogen in BrS assays is $\theta^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}_j$. Since we assume a single cause for each case, detection of pathogens other than $j$ will be false positives with probability equal to FPR as in controls: $\psi_l^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}$, $l\neq j$. This nondifferential misclassification across the case and control populations is the essential assumption of the latent class approach because it allows us to borrow information from control BrS data to distinguish the true cause from background colonization. We further discuss it in the context of the pneumonia etiology problem in the final section. Then, $$\begin{aligned}
\lefteqn{P^{1,{\ensuremath{\mbox{\scriptsize \sf BrS } }}}_{i} =\text{Pr}(\bm{M}^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}_{i}=\bm{m}\mid\bm{\pi}, \bm{\theta}^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}, \bm{\psi}^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}})}\nonumber\\
& = & \sum_{j=1}^J\pi_{j}\cdot \left(\theta_j^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}\right)^{m_j}\left(1-\theta^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}_j\right)^{1-m_j}\prod_{l\neq j}\left(\psi_l^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}\right)^{m_l}\left(1-\psi_l^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}\right)^{1-m_l}, \label{eq:casebrs_lkd}
\label{eq:BrS_lkd_case}\end{aligned}$$ is the likelihood contributed by BrS measurements from case $i$, where $\bm{m}=\bm{m}_{i}^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}$.
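To make these two BrS likelihood components concrete, the following minimal sketch (plain Python/NumPy, written for illustration only; it is not part of the PERCH software and the function and variable names are ours) evaluates the control likelihood and the single-cause mixture likelihood for a case:

```python
import numpy as np

def brs_control_lik(m, psi):
    """Control likelihood: independent Bernoulli false positives."""
    return np.prod(psi**m * (1.0 - psi)**(1 - m))

def brs_case_lik(m, pi, theta, psi):
    """Case likelihood: mixture over the J candidate causes (single-cause assumption)."""
    total = 0.0
    for j in range(len(pi)):
        # TPR factor for the putative cause j ...
        lik_j = theta[j]**m[j] * (1.0 - theta[j])**(1 - m[j])
        # ... times FPR factors for every other pathogen
        for l in range(len(pi)):
            if l != j:
                lik_j *= psi[l]**m[l] * (1.0 - psi[l])**(1 - m[l])
        total += pi[j] * lik_j
    return total

# tiny made-up example with J = 2 pathogens
pi, theta, psi = np.array([0.7, 0.3]), np.array([0.9, 0.8]), np.array([0.3, 0.05])
m = np.array([1, 0])
print(brs_control_lik(m, psi), brs_case_lik(m, pi, theta, psi))
```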
Similarly, the likelihood contribution from case $i$’s SS measurements can be written as $$\begin{aligned}
P^{1,\text{SS}}_{i} =
\text{Pr}(\bm{M}^{{\ensuremath{\mbox{\scriptsize \sf SS } }}}_{i}=\bm{m}\mid \bm{\pi}, \bm{\theta}^{{\ensuremath{\mbox{\scriptsize \sf SS } }}})= \sum_{j=1}^{J'}\pi_{j}\cdot \left(\theta_j^{{\ensuremath{\mbox{\scriptsize \sf SS } }}}\right)^{m_j}(1-\theta^{{\ensuremath{\mbox{\scriptsize \sf SS } }}}_j)^{1-m_j}\mathbf{1}_{\left\{\sum_{l=1}^{J'}m_l\leq 1\right\}},
\label{eq:GnS_lkd}\end{aligned}$$ for $\bm{m}=\bm{m}_{i}^{{\ensuremath{\mbox{\scriptsize \sf SS } }}}$, noting the perfect specificity of SS measurements, where $J'\leq J$ represents the number of actual SS measurements on each case, and $\bm{\theta}^{{\ensuremath{\mbox{\scriptsize \sf SS } }}} = \left(\theta_1^{{\ensuremath{\mbox{\scriptsize \sf SS } }}},...\theta_{J'}^{{\ensuremath{\mbox{\scriptsize \sf SS } }}}\right)$. SS measurements only test for a subset of all $J$ pathogens, e.g., blood culture only detects bacteria and $J'$ is the number of bacteria that are potential causes. Finally, for completeness, GS measurement is assumed to follow a multinomial distribution with likelihood: $$\begin{aligned}
P^{1,\text{GS}}_{i}=\text{Pr}\left(M^{{\ensuremath{\mbox{\scriptsize \sf GS } }}}_{i} = \bm{m}\mid \bm{\pi}\right)
& = & \prod_{j=1}^J \pi_j^{\mathbf{1}\left\{m_j=1\right\}}\mathbf{1}_{\{\sum_j{m_{j}=1}\}},\label{eq:GS_lkd}\end{aligned}$$ where $\bm{m}=\bm{m}_{i}^{{\ensuremath{\mbox{\scriptsize \sf GS } }}}$, and $\mathbf{1}_{\{\cdot\}}$ is the indicator function and equals one if the statement in $\{\cdot\}$ is true; otherwise, zero.
Let $\delta_i$ be the binary indicator of a case $i$ having GS measurements; it equals $1$ if the case has available GS data and $0$ otherwise. Combining likelihood components (\[eq:BrS\_lkd\_ctrl\])—(\[eq:GS\_lkd\]), the total model likelihood for BrS, SS, and GS data across independent cases and controls is $$\begin{aligned}
L(\bm{\gamma}; \mathcal{D}) & = &
\prod_{i: Y_i=0}P_{i}^{0,{\ensuremath{\mbox{\scriptsize \sf BrS } }}}
\prod_{i:Y_{i}=1, \delta_{i}=1}P_{i}^{1,{\ensuremath{\mbox{\scriptsize \sf BrS } }}}
\cdot P_{i}^{1,{\ensuremath{\mbox{\scriptsize \sf SS } }}}
\cdot P_{i}^{1,{\ensuremath{\mbox{\scriptsize \sf GS } }}}
\prod_{i:Y_{i}=1, \delta_{i}=0}P_{i}^{1,{\ensuremath{\mbox{\scriptsize \sf BrS } }}}
\cdot P_{i}^{1,{\ensuremath{\mbox{\scriptsize \sf SS } }}}
,\label{eq:model}\end{aligned}$$ where $\bm{\gamma}=(\bm{\theta}^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}},\bm{\psi}^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}, \bm{\theta}^{{\ensuremath{\mbox{\scriptsize \sf SS } }}},\bm{\pi})'$ stacks all unknown parameters, and data $\mathcal{D}$ is $$\left\{\left\{\bm{m}_{i}^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}\right\}_{i: Y_i=0}\right\}\cup \left\{\left\{\bm{m}_{i}^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}},\bm{m}_{i}^{{\ensuremath{\mbox{\scriptsize \sf GS } }}},\bm{m}_{i}^{{\ensuremath{\mbox{\scriptsize \sf SS } }}}\right\}_{i: Y_{i}=1, \delta_{i}=1}\right\} \cup \left\{\left\{\bm{m}_{i^{''}}^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}},\bm{m}_{i^{''}}^{{\ensuremath{\mbox{\scriptsize \sf SS } }}}\right\}_{i^{''}: Y_{i^{''}}=1,\delta_{i^{''}}=0}\right\}$$ collects all the available measurements on study subjects. Our primary statistical goal is to estimate the posterior distribution of the population etiology distribution $\bm{\pi}$, and to obtain individual etiology ($I^L_{*}$) prediction given a case’s measurements.
To enable Bayesian inference, prior distributions on model parameters are specified as follows: $\bm{\pi} \sim \text{Dirichlet}(a_1,\dots,a_{J})$, $\psi_j^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}} \sim \text{Beta}(b_{1j},b_{2j})$, $\theta_j^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}} \sim \text{Beta}(c_{1j},c_{2j}), j=1,...,J$, and $\theta_j^{{\ensuremath{\mbox{\scriptsize \sf SS } }}} \sim \text{Beta}(d_{1j},d_{2j})$, $j=1,...,J'$. Hyperparameters for etiology prior, $a_1, ..., a_J$, are usually $1$s to denote equal and non-informative prior weights for each pathogen if expert prior knowledge is unavailable. The FPR for the $j$th pathogen, $\psi_j^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}$, generally can be well estimated from control data, thus $b_{1j}=b_{2j}=1$ is the default choice. For TPR parameters $\theta_j^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}$ and $\theta_j^{{\ensuremath{\mbox{\scriptsize \sf SS } }}}$, if prior knowledge on TPRs is available, we choose $(c_{1j},c_{2j})$ so that the $2.5\%$ and $97.5\%$ quantiles of Beta distribution with parameter $(c_{1j},c_{2j})$ match the prior minimum and maximum TPR values elicited from pneumonia experts . Otherwise, we use default value $1$s for the Beta hyperparameters. Similarly we choose values of $(d_{1j},d_{2j})$ either by prior knowledge or default values of $1$. We finally assume prior independence of the parameters as $[\bm{\gamma}]=[\bm{\pi}][\bm{\psi}^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}][\bm{\theta}^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}][\bm{\theta}^{{\ensuremath{\mbox{\scriptsize \sf SS } }}}]$, where $[A]$ represents the distribution of random variable or vector $A$. These priors represent a balance between explicit prior knowledge about measurement error rates and the desire to be as objective as possible for a particular study. As described in the next section, the identifiability constraints on the pLCM require specifying a reasonable subset of parameter values to identify parameters of greatest scientific interest.
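One convenient way to carry out this quantile matching numerically is sketched below (our own illustration using SciPy; it is not the elicitation procedure used by the PERCH team, and the upper end of the elicited range is pulled slightly below $1$ to keep the match well defined):

```python
import numpy as np
from scipy.stats import beta
from scipy.optimize import minimize

def beta_params_from_range(lo, hi, q_lo=0.025, q_hi=0.975):
    """Return (c1, c2) such that the Beta(c1, c2) quantiles at q_lo and q_hi
    approximately equal the elicited minimum (lo) and maximum (hi) TPR."""
    def loss(log_c):
        c1, c2 = np.exp(log_c)                     # keeps both parameters positive
        return ((beta.ppf(q_lo, c1, c2) - lo) ** 2 +
                (beta.ppf(q_hi, c1, c2) - hi) ** 2)
    res = minimize(loss, x0=np.log([2.0, 2.0]), method="Nelder-Mead")
    return tuple(np.exp(res.x))

# e.g. a viral BrS TPR elicited to lie roughly between 50% and 100%
c1, c2 = beta_params_from_range(0.50, 0.99)
print(c1, c2, beta.ppf([0.025, 0.975], c1, c2))
```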
Model identifiability {#sec:identifiability}
---------------------
Potential non-identifiability of LCM parameters is well-known [@Goodman1974]. For example, an LCM with four observed binary indicators and three latent classes is not identifiable despite providing $15$ degrees of freedom to estimate $14$ parameters [@Goodman1974]. In principle, the Bayesian framework avoids the non-identifiability problem in LCMs by incorporating prior information about unidentified parameter subspaces (e.g., @garrett2000latent). Many authors point out that the posterior variance for non-identifiable parameters does not decrease to zero as sample size approaches infinity (e.g., [@Kadane1974; @Gustafson2001; @Gustafson2005]). Even when data are not fully informative about a parameter, the set of parameter values consistent with the observed data can nevertheless be valuable in a complex scientific investigation [@gustafson2009limits] like PERCH.
When GS data is available, the pLCM is identifiable; when it is not, the two sets of parameters, $\bm{\pi}$ and $\{\theta_j^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}\}_{j=1}^J$ are not both identified and prior knowledge must be incorporated. Here we restrict attention to the scenario with only BrS data for simplicity but similar arguments pertain to the BrS + SS scenario. The problem can be understood from the form of the positive measurement rates for pathogens among cases. In the pLCM likelihood for the BrS data (only retaining components in (\[eq:model\]) with superscripts ${\ensuremath{\mbox{\scriptsize \sf BrS } }}$), the positive rate for pathogen $j$ is a convex combination of the TPR and FPR: $$\text{Pr}\left(M^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}_{ij}=1\mid \pi_j,\theta_j^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}},\psi_j^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}\right)=\pi_j\theta^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}_j+(1-\pi_j)\psi^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}_j,\label{eq:convex_comb}$$ where the left-hand side of the above equation can be estimated by the observed positive rate of pathogen $j$ among cases. Although the control data provide $\psi_j^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}$ estimates, the two parameters, $\pi_j$ and $\theta_j^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}$, are not both identified. GS data, if available, identifies $\pi_j$ and resolves the lack of identifiability. Otherwise, we need to incorporate prior scientific information on one of them, usually the TPR ($\theta_j^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}$). In PERCH, prior knowledge about $\theta_j^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}$ is obtained from infectious disease and laboratory experts [@Murdoch2012] based upon vaccine probe studies [@cutts2005efficacy; @Madhi15052005]. If the observed case positive rate is much higher than the rate in controls ($\psi_j^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}$), only large values of TPR ($\theta_j^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}$) are supported by the data making etiology estimation more precise (Section \[sec:ind.pred\]).
The full model identification can be generally characterized by inspecting the Jacobian matrix of the transformation $F$ from model parameters $\bm{\gamma}$ to the distribution $\bm{p}$ of the observables, $\bm{p} = F(\bm{\gamma})$. Let $\bm{\gamma}=(\bm{\theta}^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}},\bm{\psi}^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}, \pi_1,...,\pi_{J-1})'$ represent the $3J-1$-dimensional unconstrained model parameters. The pLCM defines the transformation $(\bm{p}_1,\bm{p}_0)'=F(\bm{\gamma})$, where $\bm{p}_1$ and $\bm{p}_0$ are the two contingency probability distributions for the BrS measurements in the case and control populations, each with dimension $2^J-1$. It can be shown that the Jacobian matrix has $J-1$ of its singular values being zero, which means model parameters $\bm{\gamma}$ are not fully identified from the data. The FPRs ($\psi_j^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}, j=1,...,J$) in pLCM are, however, identifiable parameters that can be estimated from control data. Therefore, pLCM is termed partially identifiable [@Jones2010].
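The rank statement above can be verified numerically for a small $J$; the sketch below (our own illustration, using the parameter values of the simulation study in Section \[sec:simulation\]) builds the map $F$ from $\bm{\gamma}$ to the case and control cell probabilities of all $2^J$ BrS patterns and prints the singular values of a finite-difference Jacobian, $J-1$ of which should be numerically zero:

```python
import numpy as np
from itertools import product

J = 3  # number of candidate pathogens

def cell_probs(gamma):
    """Map gamma = (theta_1..J, psi_1..J, pi_1..J-1) to the stacked
    case/control probabilities of all 2^J binary BrS patterns."""
    theta, psi = gamma[:J], gamma[J:2 * J]
    pi = np.append(gamma[2 * J:], 1.0 - gamma[2 * J:].sum())
    p_case, p_ctrl = [], []
    for pattern in product([0, 1], repeat=J):
        m = np.array(pattern)
        p_ctrl.append(np.prod(psi**m * (1 - psi)**(1 - m)))
        p_case.append(sum(
            pi[j]
            * theta[j]**m[j] * (1 - theta[j])**(1 - m[j])
            * np.prod([psi[l]**m[l] * (1 - psi[l])**(1 - m[l])
                       for l in range(J) if l != j])
            for j in range(J)))
    return np.array(p_case + p_ctrl)

gamma0 = np.array([0.9, 0.9, 0.9,      # theta (TPRs)
                   0.6, 0.02, 0.05,    # psi (FPRs)
                   0.67, 0.26])        # pi_1, pi_2 (pi_3 implied)
eps = 1e-6
jac = np.column_stack([
    (cell_probs(gamma0 + eps * e) - cell_probs(gamma0 - eps * e)) / (2 * eps)
    for e in np.eye(len(gamma0))])
print(np.round(np.linalg.svd(jac, compute_uv=False), 6))
```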
Parameter estimation and individual etiology prediction {#sec:ind.pred}
-------------------------------------------------------
The parameters in likelihood (\[eq:model\]) include the population etiology distribution ($\bm{\pi}$), TPRs and FPRs for BrS measurements ($\bm{\psi}^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}$ and $\bm{\theta}^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}$), and TPRs for SS measurements ($\bm{\theta}^{{\ensuremath{\mbox{\scriptsize \sf SS } }}}$). The posterior distribution of these parameters can be estimated by constructing approximating samples from the joint posterior via a Markov chain Monte Carlo (MCMC) Gibbs sampler. The full conditional distributions for the Gibbs sampler are detailed in Section A of the supplementary material.
We develop a Gibbs sampler with two essential steps:
1. Multinomial sampling of lung infection state among cases:\
$I^L_{i}\mid \bm{\pi}, Y_{i}=1\sim \text{Multinomial}(\bm{\pi})$;
2. Measurement stage given lung infection state:
$M^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}_{ij}\mid I^L_{i}, \bm{\theta}^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}, \bm{\psi}^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}} \sim\text{Bernoulli}\left(\mathbf{1}_{\{I^L_{i}=j\}}\theta^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}_j+\left(1-\mathbf{1}_{\{I^L_{i}=j\}}\right)\psi^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}_j\right), j = 1,...,J$, conditionally independent.
This is readily implemented using freely available software `WinBUGS 1.4`. In the application below, convergence was monitored using auto-correlations, kernel density plots, and Brooks-Gelman-Rubin statistics [@brooks1998general] of the MCMC chains. The statistical results below are based on $10,000$ iterations of burn-in followed by $50,000$ production samples from each of three parallel chains.
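For readers who want the sampler written out explicitly, the following is a minimal self-contained sketch of the BrS-only version in plain Python/NumPy (an illustration rather than the PERCH implementation; the conjugate full conditionals are the standard ones implied by the two steps above, priors default to uniform Beta and Dirichlet, and all function and variable names are ours):

```python
import numpy as np

def gibbs_plcm_brs(M_case, M_ctrl, n_iter=2000, a=1.0, b=(1.0, 1.0), c=(1.0, 1.0), seed=0):
    """Minimal Gibbs sampler for the BrS-only pLCM.
    M_case, M_ctrl: 0/1 arrays of shape (n_case, J) and (n_ctrl, J)."""
    rng = np.random.default_rng(seed)
    n_case, J = M_case.shape
    pi, theta, psi = np.full(J, 1.0 / J), np.full(J, 0.5), np.full(J, 0.5)
    draws = {"pi": [], "theta": [], "psi": []}
    for _ in range(n_iter):
        # 1. latent cause of each case, from its full conditional:
        #    log Pr(I_i = j) = log pi_j + TPR term for j + FPR terms for l != j
        logp = (M_case * np.log(theta) + (1 - M_case) * np.log(1 - theta)
                - M_case * np.log(psi) - (1 - M_case) * np.log(1 - psi))
        logp += (M_case @ np.log(psi) + (1 - M_case) @ np.log(1 - psi))[:, None]
        logp += np.log(pi)
        prob = np.exp(logp - logp.max(axis=1, keepdims=True))
        prob /= prob.sum(axis=1, keepdims=True)
        I = np.array([rng.choice(J, p=p) for p in prob])
        cause = np.eye(J, dtype=bool)[I]            # (n_case, J): indicator of I_i = j
        # 2. conjugate updates for pi, theta (TPRs) and psi (FPRs)
        pi = rng.dirichlet(a + np.bincount(I, minlength=J))
        theta = rng.beta(c[0] + (M_case * cause).sum(0),
                         c[1] + ((1 - M_case) * cause).sum(0))
        fp_pos = M_ctrl.sum(0) + (M_case * ~cause).sum(0)
        fp_neg = (1 - M_ctrl).sum(0) + ((1 - M_case) * ~cause).sum(0)
        psi = rng.beta(b[0] + fp_pos, b[1] + fp_neg)
        for key, val in (("pi", pi), ("theta", theta), ("psi", psi)):
            draws[key].append(val)
    return {k: np.array(v) for k, v in draws.items()}
```

Informative TPR priors can be passed through `c`; as discussed in Section \[sec:identifiability\], this matters because the etiology fractions and the BrS TPRs are not separately identified from BrS data alone.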
The Bayesian framework naturally allows individual within-sample classification (infection diagnosis) and out-of-sample prediction. This section describes how we calculate the etiology probabilities for an individual with measurements $\bm{m}_{*}$. We focus on the more challenging inference scenario when only BrS data are available; the general case follows directly.
The within-sample classification for case $i$ is based on the posterior distribution of latent indicators given the observed data, i.e. $\text{Pr}(I^L_{i}=j \mid \mathcal{D})$, $j=1,...,J$, which can be obtained by averaging along the cause indicator ($I^L_{i}$) chain from MCMC samples. For a case with new BrS measurements $\bm{m}_*$, we have $$\begin{aligned}
\Pr(I^L_{i}=j\mid\bm{m}_*, \mathcal{D}) & = &\int \Pr(I^L_{i}=j\mid \bm{m}_*,\bm{\gamma})\Pr(\bm{\gamma} \mid \bm{m}_*,\mathcal{D}) \mathrm{d}\bm{\gamma}, \ j=1,...,J,\label{eq:outofsamplepred}\end{aligned}$$ where the second factor in the integrand can be approximated by the posterior distribution given current data, i.e., $\text{Pr}(\bm{\gamma} \mid \mathcal{D})$. For the first term in the integrand, we explicitly obtain the model-based, one-sample conditional posterior distribution, $\text{Pr}(I^L_{i}=j \mid \bm{m}_*,\bm{\gamma}) = \pi_j \ell_j(\bm{m}_*; \bm{\gamma})\bigg /\sum_{j'=1}^{J} \pi_{j'}\ell_{j'}(\bm{m}_*; \bm{\gamma})$, $j=1,...,J$, where $$\ell_j(\bm{m}_*; \bm{\gamma})=\left(\theta_j^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}\right)^{m_{*j}}\left(1-\theta^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}_j\right)^{1-m_{*j}}\prod_{l\neq j}\left(\psi_l^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}\right)^{m_{*l}}\left(1-\psi_l^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}\right)^{1-m_{*l}}$$ is the $j$th mixture component likelihood function evaluated at $\bm{m}_*$. The log relative probability of $I^L_i=j$ versus $I^L_i=l$ is $$\begin{aligned}
\lefteqn{R_{jl}=\log \left(\frac{\pi_j}{\pi_l}\right)+
\log \left \{\left(\frac{\theta^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}_j}{\psi^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}_j}\right)^{m_{*j}}\left(\frac{1-\theta^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}_j}{1-\psi^{BrS}_j}\right)^{1-m_{*j}}\right \}}\\
&&\quad\quad\quad\quad\quad\quad\quad\quad\quad+\log\left\{\left(\frac{\psi^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}_l}{\theta^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}_l}\right)^{m_{*l}}\left(\frac{1-\psi^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}_l}{1-\theta^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}_l}\right)^{1-m_{*l}} \right \}.\end{aligned}$$ The form of $R_{jl}$ informs us about what is required for correct diagnosis of an individual. Suppose $I^L_i=j$, then averaging over $\bm{m}_{*}$, we have $E[R_{jl}]= \log\left({\pi_j}/{\pi_l}\right)+I(\theta^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}_j; \psi^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}_j)+I(\psi^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}_l;\theta^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}_l)$, where $I(v_1;v_2)=v_1\log(v_1/v_2)+(1-v_1)\log\left((1-v_1)/(1-v_2)\right)$ is the information divergence [@kullback2012information] that represents the expected amount of information in $m_{*j}\sim \text{Bernoulli}(v_1)$ for discriminating against $m_{*j}\sim \text{Bernoulli}(v_2)$. If $v_1=v_2$, then $I(v_1;v_2)=0$. The form of $E[R_{jl}] $ shows that there is only additional information from BrS data about an individual’s etiology in the person’s data when there is a difference between $\theta_j^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}$ and $\psi_j^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}$, $j=1,...,J$.
Following equation (\[eq:outofsamplepred\]), we average $\text{Pr}(I^L_{i}=j \mid \bm{m}_*,\bm{\gamma})$ over MCMC iterations to obtain the individual prediction for the $j$th pathogen, $\hat{p}_{ij}$, with $\bm{\gamma}$ replaced by its simulated values $\bm{\gamma}^*$ at each iteration. Repeating for $j=1,...,J$, we obtain a $J\times 1$ probability vector, $\hat{\bm{p}}_{i}=(\hat{p}_{i1},...,\hat{p}_{iJ})'$, that sums to one. This scheme is especially useful when a newly examined case has a BrS measurement pattern not observed in $\mathcal{D}$, which often occurs when $J$ is large. The final decisions regarding which pathogen to treat can then be based upon $\hat{\bm{p}}_{i}$. In particular, the pathogen with the largest posterior value might be selected; this choice is Bayes optimal under mean misclassification loss. Individual etiology predictions described here generalize the positive/negative predictive value (PPV/NPV) from single to multivariate binary measurements and can aid diagnosis of case subjects under other user-specified misclassification loss functions.
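As an illustration of this averaging step, the posterior predictive probabilities for a new case can be computed directly from stored MCMC draws; a small sketch (our own code, assuming BrS data only and posterior draws such as those returned by the sampler sketched earlier) is:

```python
import numpy as np

def predict_etiology(m_star, pi_draws, theta_draws, psi_draws):
    """Approximate Pr(I = j | m_star, D), j = 1..J, for a new case with BrS
    pattern m_star by averaging the per-draw conditional posterior."""
    m = np.asarray(m_star)
    per_draw = []
    for pi, theta, psi in zip(pi_draws, theta_draws, psi_draws):
        fp = psi**m * (1 - psi)**(1 - m)        # false-positive factors, all pathogens
        tp = theta**m * (1 - theta)**(1 - m)    # true-positive factors, all pathogens
        ell = np.prod(fp) * tp / fp             # ell_j: swap in the TPR factor for slot j
        w = pi * ell
        per_draw.append(w / w.sum())
    return np.mean(per_draw, axis=0)            # J-vector that sums to one

# e.g. a case in whom none of the pathogens was detected:
# p_hat = predict_etiology([0, 0, 0], draws["pi"], draws["theta"], draws["psi"])
```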
Simulation for three pathogens case with GS and BrS data {#sec:simulation}
========================================================
To demonstrate the utility of the pLCM for studies like PERCH, we simulate BrS data sets with $500$ cases and $500$ controls for three pathogens, A, B, and C using known pLCM specifications. We focus on three states to facilitate viewing of the $\bm{\pi}$ estimates and individual predictions in the 3-dimensional simplex $\mathcal{S}^2$. We use the ternary diagram [@Aitchison1986] representation where the vector $\bm{\pi}=(\pi_A,\pi_B,\pi_C)'$ is encoded as a point with each component being the perpendicular distance to one of the three sides. The parameters involved are fixed at $\text{TPR}=\bm{\theta}=(\theta_A,\theta_B,\theta_C)'=(0.9,0.9,0.9)'$, $\text{FPR}=\bm{\psi}=(\psi_A,\psi_B,\psi_C)'=(0.6,0.02,0.05)'$, and $\bm{\pi}=(\pi_A,\pi_B,\pi_C)'=(0.67,0.26,0.07)'$. We focus on BrS and GS data here and have dropped the “${\ensuremath{\mbox{\scriptsize \sf BrS } }}$" superscript on the parameters for simplicity. We further let the fraction of cases with GS measurements ($\Delta$) be either $1\%$ as in PERCH or $10\%$. Although GS measurements are rare in the PERCH study, we investigate a large range of $\Delta$ to understand in general how much statistical information is contained in BrS measurements relative to GS measurements.
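The data-generating step of this simulation is easy to reproduce; a possible sketch (our own code, using the parameter values just stated) is:

```python
import numpy as np

def simulate_brs(n_case, n_ctrl, pi, theta, psi, seed=2023):
    """Simulate case/control BrS data under the single-cause pLCM."""
    rng = np.random.default_rng(seed)
    J = len(pi)
    causes = rng.choice(J, size=n_case, p=pi)                     # latent causes I_i
    p_case = np.where(np.eye(J, dtype=bool)[causes], theta, psi)  # TPR for the cause, FPR otherwise
    M_case = rng.binomial(1, p_case)
    M_ctrl = rng.binomial(1, np.broadcast_to(psi, (n_ctrl, J)))
    return M_case, M_ctrl, causes

pi    = np.array([0.67, 0.26, 0.07])   # (pi_A, pi_B, pi_C)
theta = np.array([0.90, 0.90, 0.90])   # TPRs
psi   = np.array([0.60, 0.02, 0.05])   # FPRs
M_case, M_ctrl, causes = simulate_brs(500, 500, pi, theta, psi)
print(M_case.mean(axis=0), M_ctrl.mean(axis=0))   # observed positive rates
```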
For any given data set, three distinct subsets of the data can be used: BrS-only, GS-only, and BrS+GS, each producing its posterior mean of $\bm{\pi}$, and $95\%$ credible region (Bayesian confidence region) by transformed Gaussian kernel density estimator for compositional data [@Chacon2011]. To study the relative importance of the GS and BrS data, the primary quantity of interest in the simulations is the relative sizes of the credible regions for each data mix. Here, we use uniform priors on $\bm{\theta}$, $\bm{\psi}$, and $\text{Dirichlet}(1,...,1)$ prior for $\bm{\pi}$. The results are shown in Figure \[fig:BSvalue.simulation\].
First, in Figures \[fig:BSvalue\_a\] ($1\%$ GS) and \[fig:BSvalue\_b\] ($10\%$ GS), each region covers the true etiology $\bm{\pi}$. In data not shown here, the nominal $95\%$ credible regions cover the true value in slightly more than $95\%$ of $200$ simulations. Credible regions narrow in on the truth as we combine BrS and GS data, and as the fraction of subjects with GS data ($\Delta$) increases. Also, the posterior mean from the BrS+GS analysis is an optimal balance of information contained in the GS and BrS data.
Using the same simulated data sets, Figures \[fig:BSvalue\_a\_ind\] and \[fig:BSvalue\_b\_ind\] also show individual etiology predictions for each of the $8 (=2^3)$ possible BrS measurements $(m_A,m_B,m_C)', m_j=0,1$, obtained by the methods from Section \[sec:ind.pred\]. Consider the example of a newly enrolled case without GS data and with no pathogen observed in her BrS data: $\bm{m}=(0,0,0)'$. Suppose she is part of a case population with $10\%$ GS data. In the case illustrated in Figure \[fig:BSvalue\_b\_ind\], her posterior predictive distribution has highest posterior probability ($0.76$) on pathogen A reflecting two competing forces: the FPRs that describe background colonization (colonization among the controls) and the population etiology distribution. Given other parameters, $\bm{m}=(0,0,0)'$ gives the smallest likelihood for $I^L_i=A$ because of its high background colonization rate (FPR $\psi_A=0.6$). However, prior to observing $(0,0,0)'$, $\pi_A$ is well estimated to be much larger than $\pi_B$ and $\pi_C$. Therefore the posterior distribution for this case is heavily weighted towards pathogen A.
Because it is rare to observe pathogen $B$ in a case whose pneumonia is not caused by B, for a case with observation $(1,1,1)'$, the prediction favors B. Although B is not the most prevalent cause among cases, the presence of B in the BrS measurements gives the largest likelihood when $I_i^L=B$. For any measurement pattern with a single positive, the case is always classified into that category in this example.
Most predictions are stable with increasing gold-standard percentage, $\Delta$. Only $000$ cases have predictions that move from near the center to the corner of A. This is mainly because the TPRs $\bm{\theta}$ and etiology fractions $\bm{\pi}$ are not as precisely estimated in GS-scarce scenarios as in GS-abundant ones. Averaging over a wider range of $\bm{\theta}$ and $\bm{\pi}$ produces $000$ case predictions that are ambiguous, i.e. near the center. As $\Delta$ increases, parameters are well estimated, and precise predictions result.
Analysis of PERCH data {#sec:results}
======================
The Pneumonia Etiology Research for Child Health (PERCH) study is an on-going standardized and comprehensive evaluation of etiologic agents causing severe and very severe pneumonia among hospitalized children aged $1$-$59$ months in seven low and middle income countries [@Levine2012]. The study sites include countries with a significant burden of childhood pneumonia and a range of epidemiologic characteristics. PERCH is a case-control study that has enrolled over $4,000$ patients hospitalized for severe or very severe pneumonia and over $5,000$ controls selected randomly from the community frequency-matched on age in each month. More details about the PERCH design are available in [@Deloria2012].
To analyze PERCH data with the pLCM model, we have focused on preliminary data from one site with good availability of both SS and BrS laboratory results (no missingness). Final analyses of all $7$ countries will be reported elsewhere upon study completion. Included in the current analysis are BrS data (nasopharyngeal specimen with PCR detection of pathogens) for $432$ cases and $479$ frequency-matched controls on $11$ species of pathogens ($7$ viruses and $4$ bacteria; representing a subset of pathogens evaluated; their abbreviations shown on the right margin in Figure \[fig:site02GAM\], and full names in Section B of the supplementary material), and SS data (blood culture results) on the $4$ bacteria for only the cases.
In PERCH, prior scientific knowledge of measurement error rates is incorporated into the analysis. Based upon microbiology studies [@Murdoch2012], the PERCH investigators selected priors for the TPRs of our BrS measurements, $\theta_j^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}$, in the range of $50\%-100\%$ for viruses and $0-100\%$ for bacteria. Priors for the SS TPRs were based on observations from vaccine probe studies—randomized clinical trials of pathogen-specific vaccines where the total number of clinical pneumonia cases prevented by the vaccine is much larger than the few SS laboratory-confirmed cases prevented. Comparing the total preventable disease burden to the number of blood culture (SS) positive cases prevented provides information about the TPR of the bacterial blood culture measurements, $\theta_j^{{\ensuremath{\mbox{\scriptsize \sf SS } }}},j=1,...,4$. Our analysis used the range $5-15\%$ for the SS TPRs of the four bacteria consistent with the vaccine probe studies [@cutts2005efficacy; @Madhi15052005]. We set Beta priors that match these ranges (Section \[sec:models\]) and assumed Dirichlet($1,...,1$) prior on etiology fractions $\bm{\pi}$.
In latent variable models like the pLCM, key variables are not directly observed. It is therefore essential to picture the model inputs and outputs side-by-side to better understand the analysis. In this spirit, Figure \[fig:site02GAM\] displays for each of the 11 pathogens, a summary of the BrS and SS data in the left two columns, along with some of the intermediate model results; and the prior and posterior distributions for the etiology fractions on the right (rows ordered by posterior means). The observed BrS rates (with $95\%$ confidence intervals) for cases and controls are shown on the far left with solid dots. The conditional odds ratio contrasting the case and control rates given the other pathogens is listed with $95\%$ confidence interval in the box to the right of the BrS data summary. Below the case and control observed rates is a horizontal line with a triangle. From left to right, the line starts at the estimated false positive rate (FPR, $\hat{\psi}_j^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}$) and ends at the estimated true positive rate (TPR, $\hat{\theta}_j^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}$), both obtained from the model. Below the TPR are two boxplots summarizing its posterior (top) and prior (bottom) distributions for that pathogen. These box plots show how the prior assumption influences the TPR estimate as expected given the identifiability constraints discussed in Section \[sec:identifiability\]. The triangle on the line is the model estimate of the case rate to compare to the observed value above it. As discussed in Section \[sec:identifiability\], the model-based case rate is a linear combination of the FPR and TPR with mixing fraction equal to the estimated etiology fraction. Therefore, the location of the triangle, expressed as a fraction of the distance from the FPR to the TPR, is the model-based point estimate of the etiologic fraction for each pathogen. The SS data are shown in a similar fashion to the right of the BrS data. By definition, the FPR is $0.0\%$ for SS measures and there is no control data. The observed rate for the cases is shown with its $95\%$ confidence interval. The estimated SS TPR ($\hat{\theta}^{{\ensuremath{\mbox{\scriptsize \sf SS } }}}_j$) with prior and posterior distributions is shown as for the BrS data, except that we plot $95\%$ and $50\%$ credible intervals for SS TPR above its prior $95\%$ and $50\%$ intervals.
On the right side of the display are the marginal posterior and prior distributions of the etiologic fraction for each pathogen. We appropriately normalized each density to match the height of the prior and posterior curves. The posterior mean, $50\%$ and $95\%$ credible intervals are shown above the density.
Figure \[fig:site02GAM\] shows that respiratory syncytial virus (RSV), *Streptococcus pneumoniae* (PNEU), rhinovirus (RHINO), and human metapneumovirus (HMPV$\_$A$\_$B) occupy the greatest fractions of the etiology distribution, from $15\%$ to $30\%$ each. That RSV has the largest estimated mean etiology fraction reflects the large discrepancy between case and control positive rates in the BrS data: $25.1\%$ versus $0.8\%$ (marginal odds ratio $38.5$, $95\%$ CI $(18.0,128.7)$). RHINO has case and control rates that are close to each other, yet its estimated mean etiology fraction is $16.7\%$. This is because the model considers the joint distribution of the pathogens, not the marginal rates. The conditional odds ratio of case status with RHINO given all the other pathogen measures is estimated to be $1.5$ $(1.1,2.1)$, in contrast to the marginal odds ratio close to $1$ $(0.8,1.3)$.
As discussed in Section \[sec:identifiability\], the data alone cannot precisely estimate both the etiologic fractions and the TPRs absent prior knowledge. This is evidenced by comparing the prior and posterior distributions for the TPRs in the BrS boxes for pathogens such as HMPV$\_$A$\_$B and PARA1 (left-hand column of Figure \[fig:site02GAM\]). The posteriors are similar to their priors, indicating that little about the TPRs is learned from the data. The posteriors for some components of $\bm{\pi}$ (right-hand column of Figure \[fig:site02GAM\]) are therefore likely to be sensitive to the prior specifications of the TPRs.
We performed sensitivity analyses using multiple sets of priors for the TPRs. At one extreme, we ignored background scientific knowledge and let the priors on the FPR and TPR be uniform for both the BrS and SS data. Ignoring prior knowledge about error rates lowers the etiology estimates of the bacteria PNEU and *Staphylococcus aureus* (SAUR). The substantial reduction in the etiology fraction for PNEU, for example, is a result of the difference in the TPR prior for the SS measurements. In the original analysis (Figure \[fig:site02GAM\]), the informative prior on the SS sensitivity (TPR) places $95\%$ mass between $5-15\%$. Hence the model assumes almost $90\%$ of the PNEU infections are being missed in the SS sampling. When a uniform prior is substituted, the fraction assumed missed is greatly reduced. For RSV, its posterior mean etiology fraction is stable ($29.4\%$ to $30.0\%$). The etiology estimates for other pathogens are fairly stable, with changes in posterior means between $-2.3\%$ and $3.4\%$.
Under the original priors for the TPRs, PARA1 has an estimated etiologic fraction of $6.4\%$, even though its conditional odds ratio is $5.9~(2.6,15.0)$. In general, pathogens with larger conditional odds ratios have larger etiology fraction estimates. But a pathogen also needs a reasonably high observed case positive rate to be allocated a high etiology fraction. The posterior etiology fraction estimate of $6.4\%$ for PARA1 results because the prior for the TPR takes values in the range of $50-99\%$. By Equation (\[eq:convex\_comb\]), with the FPR around $1.5\%$, the weight on the TPR in the convex combination has to be very small to explain the small observed case rate of $5.6\%$. When a uniform prior is placed on the TPR instead, the PARA1 etiology fraction increases to $9.4\%$ with a wider $95\%$ credible interval.
We believe that RHINO’s etiologic fraction may be inflated as a result of its negative association with RSV among cases. Under the conditional independence assumption of the pLCM, this dependence can only be explained by the multinomial correlation between the latent cause indicators for $I^L_i=\text{\footnotesize RSV}$ and $I^L_i=\text{\footnotesize RHINO}$, whose covariance is $-\pi_{\text{\footnotesize RSV}}\pi_{\text{\footnotesize RHINO}}$. There is strong evidence that RSV is a common cause, with a stable estimate $\hat{\pi}_{\text{\footnotesize RSV}}$ around $30\%$. The strong negative association in the cases’ measurements between RHINO and RSV is therefore being explained by a larger etiologic fraction estimate $\hat{\pi}_{\text{\footnotesize RHINO}}$ relative to other pathogens that have less or no association with RSV among the cases. The conditional independence assumption is leveraging information from the associations between pathogens in the estimation of the etiologic fractions. If this is the explanation, the issue can be addressed by extending the pLCM to allow for alternate sources of correlation among the measurements, for example, competition among pathogens within the NP space.
We have checked the model in two ways, each comparing a characteristic of the observed joint distribution of the measurements with the same characteristic computed from data sets of the same size generated under the model. By computing these characteristics for new data generated at every iteration of the MCMC chain, we obtain their posterior predictive distribution, integrating over the posterior distribution of the parameters [@garrett2000latent].
Among the cases, the $95\%$ predictive interval includes the observed values for all but two of the BrS patterns, and even there the fits are reasonable. Among the controls, there is evidence of lack of fit for the most common BrS pattern, in which only PNEU and HINF are positive (Figure S1 in supplementary materials): fewer controls with this pattern are observed than predicted under the pLCM. This lack of fit is likely due to associations of pathogen measurements in control subjects. Note that the FPR estimates remain consistent as the number of controls increases regardless of such correlation; however, their posterior variances may be underestimated.
A second model-checking procedure targets the conditional independence assumption. We estimated standardized log odds ratios (SLORs) for cases and controls (see Figure S2 in supplementary materials). Each value is the observed log odds ratio (LOR) for *a pair* of BrS measurements, minus the mean of its posterior predictive distribution under the model’s independence assumption, divided by the standard deviation of the same posterior predictive distribution. We find two large deviations among the cases: RSV with RHINO and RSV with HMPV. These are likely caused by strong seasonality in RSV that is out of phase with weaker seasonality in the other two. Otherwise, the number of SLORs greater than $2$ ($8$ out of $110$ pairwise associations) is only slightly larger than expected under the assumed model ($6$ expected).
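A minimal sketch of this check is given below. It assumes that posterior predictive replicate data sets have already been drawn from the fitted model (the array names `M_obs` and `M_rep` are illustrative), and the continuity correction for empty cells is our own choice.

``` python
import numpy as np

def log_odds_ratio(x, y, eps=0.5):
    """Empirical log odds ratio between two binary vectors, with a 0.5
    continuity correction to avoid empty cells."""
    n11 = np.sum((x == 1) & (y == 1)) + eps
    n10 = np.sum((x == 1) & (y == 0)) + eps
    n01 = np.sum((x == 0) & (y == 1)) + eps
    n00 = np.sum((x == 0) & (y == 0)) + eps
    return np.log(n11 * n00 / (n10 * n01))

def slor(M_obs, M_rep, j, k):
    """Standardized LOR for pathogen pair (j, k): observed LOR minus the
    posterior predictive mean, divided by the posterior predictive sd.
    M_obs: (n_subjects, J) binary matrix; M_rep: iterable of replicate matrices."""
    obs = log_odds_ratio(M_obs[:, j], M_obs[:, k])
    rep = np.array([log_odds_ratio(M[:, j], M[:, k]) for M in M_rep])
    return (obs - rep.mean()) / rep.std()
```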
An attractive feature of using MCMC to estimate posterior distributions is the ease of estimating posteriors for functions of the latent variables and/or parameters. One interesting question from a clinical perspective is whether viruses or bacteria are the major cause and among each subgroup, which species predominate. Figure \[fig:category\_pie\_WF\] shows the posterior distribution for the rate of viral pneumonia on the top, and then the conditional distributions of the two leading viruses (bacteria) among viral (bacterial) causes below on the right (left). The posterior distribution of the viral etiologic fraction has mode around $70.0\%$ with $95\%$ credible interval $(57.0\%,79.2\%)$. As shown at the bottom left in Figure \[fig:category\_pie\_WF\], PNEU accounts for most bacterial cases ($47.2\%$ $(24.9\%,71.1\%)$), and SAUR accounts for $25.5\%$ $(8.7\%,49.9\%)$. Of all viral cases (bottom right), RSV is estimated to cause about $42.9\%$ $(32.8\%,54.8\%)$, and RHINO about $24.2\%$ $(13.7\%,37.2\%)$.
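The sketch below illustrates this kind of post-processing: given posterior draws of $\bm{\pi}$ (one row per MCMC iteration) and an indicator of which causes are viral, it computes the posterior of the overall viral fraction and the conditional etiology distribution among viral causes. The variable names are illustrative.

``` python
import numpy as np

def group_summaries(pi_draws, is_virus):
    """pi_draws: (n_iter, J) posterior draws of etiology fractions.
    is_virus: boolean array of length J marking the viral causes."""
    is_virus = np.asarray(is_virus, dtype=bool)
    viral_total = pi_draws[:, is_virus].sum(axis=1)        # P(viral cause), per draw
    # conditional etiology distribution among viral causes, per draw
    within_viral = pi_draws[:, is_virus] / viral_total[:, None]
    ci = np.percentile(viral_total, [2.5, 50, 97.5])
    return viral_total, within_viral, ci
```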
Discussion {#sec:discussion}
==========
In this paper, we estimated the frequency with which pathogens cause disease in a case population using a partially-latent class model (pLCM) to allow for known states for a subset of subjects and for multiple types of measurements with different error rates. In a case-control study of disease etiology, measurement error will bias estimates from traditional logistic regression and attributable fraction methods. The pLCM avoids this pitfall and more naturally incorporates multiple sources of data. Here we formulated the model with three levels of measurement error rates.
Absent GS data, we show that the pLCM is only partially identified because of the relationship between the estimated TPR and prevalence of the associated pathogen in the population. Therefore, the inferences are sensitive to the assumptions about the TPR. Uncertainty about their values persists in the final inferences from the pLCM regardless of the number of subjects studied.
The current model provides a novel solution to the analytic problems raised by the PERCH Study. This paper introduces and applies the pLCM to a preliminary set of data from one PERCH study site. Confirmatory laboratory testing, incorporation of additional pathogens, and adjustment for potential confounders may change the scientific findings that will be reported in the final, complete analysis of the study results when the study is completed.
An essential assumption relied upon in the pLCM is that the probability of detecting one pathogen at a peripheral body site depends on whether that pathogen is infecting the child’s lung, but is unaffected by the presence of other pathogens in the lung, that is, the non-differential misclassification error assumption. We have formulated the model to include GS measures even though they are available only for a small and unrepresentative subset of the PERCH cases. In general, the availability of GS measures makes it possible to test this assumption as has been discussed by [@albert2008estimating].
Several extensions have the potential to improve the quality of inferences drawn and are being developed for PERCH. First, because the control subjects have known class, we can model the dependence structure among the BrS measurements and use this to relax aspects of the conditional independence assumption central to most LCM methods. The approach is to extend the pLCM to have $K$ subclasses within each of the current disease classes. These subclasses can introduce correlation among the BrS measurements given the true disease state. An interesting question concerns the bias-variance trade-off for different values of $K$. This idea follows previous work on the PARAFAC decomposition of probability distributions for multivariate categorical data [@dunson2009nonparametric]. This extension will enable model-based checking of the standard pLCM.
Second, in our analyses to date, we have assumed that the pneumonia case definition is error-free. Given new biomarkers and availability of chest radiographs that can improve upon the clinical diagnosis of pneumonia, one can introduce an additional latent variable to indicate true disease status and use these measurements to probabilistically assign each subject as a case or control. Finally, regression extensions of the pLCM would allow PERCH investigators to study how the etiology distributions vary with HIV status, age group, and season.
Acknowledgments {#acknowledgments .unnumbered}
===============
We thank the members of the larger PERCH Study Group for discussions that helped shape the statistical approach presented herein, and the study participants. We also thank the members of PERCH Expert Group who provided external advice.
Full conditional distributions in Gibbs sampler {#appendix:fullconditionals}
===============================================
In this section, we provide the analytic forms of the full conditional distributions that are essential for the Gibbs sampling algorithm. We use a data augmentation scheme, introducing the latent lung state $I^L_i$ into the sampling chain, and we have the following full conditional distributions (a schematic one-sweep implementation is sketched after this list):
- $\left[I^L_i\mid \text{others}\right]$. If $M^{{\ensuremath{\mbox{\scriptsize \sf GS } }}}_i$ is available, $\text{Pr}\left(I_i^L=j \mid \text{others}\right)= 1$ if $M^{{\ensuremath{\mbox{\scriptsize \sf GS } }}}_{ij}=1$ and $M^{{\ensuremath{\mbox{\scriptsize \sf GS } }}}_{il}=0$ for $l\neq j$, and zero otherwise. If $M^{{\ensuremath{\mbox{\scriptsize \sf GS } }}}_i$ is missing, then, depending on whether $M^{{\ensuremath{\mbox{\scriptsize \sf SS } }}}_i$ is available, the full conditional is given by $$\begin{aligned}
\text{Pr}(I^L_i=j\mid \text{others})& \propto &\left(\theta_j^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}\right)^{M^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}_{ij}}\left(1-\theta^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}_j\right)^{1-M^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}_{ij}}\prod_{l\neq j}\left(\psi_l^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}\right)^{M^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}_{il}}\left(1-\psi_l^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}\right)^{1-M^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}_{il}}\nonumber\\
&&\cdot \left[\left(\theta_j^{{\ensuremath{\mbox{\scriptsize \sf SS } }}}\right)^{M^{{\ensuremath{\mbox{\scriptsize \sf SS } }}}_{ij}}(1-\theta^{{\ensuremath{\mbox{\scriptsize \sf SS } }}}_j)^{1-M^{{\ensuremath{\mbox{\scriptsize \sf SS } }}}_{ij}}\mathbf{1}_{\left\{\sum_{l\neq j}M^{{\ensuremath{\mbox{\scriptsize \sf SS } }}}_{il}=0\right\}}\right]^{\mathbf{1}_{\{j\leq J'\}}}\cdot \pi_j;\end{aligned}$$ if SS measurement is not available for case $i$, we remove terms involving $M^{{\ensuremath{\mbox{\scriptsize \sf SS } }}}_{ij}$.
- $\left[\psi_j^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}\mid\text{others}\right]\sim \text{Beta}\left(N_j+b_{1j},
n_1-\sum_{i:Y_i=1}\mathbf{1}_{\{I^L_i=j\}}+n_0-N_j+b_{2j}\right)$, where $n_1$ and $n_0$ are the numbers of cases and controls, respectively, and $N_j = \sum_{i:Y_i=1, I^L_i\neq j}M_{ij}^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}+\sum_{i:Y_i=0}M^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}_{ij}$ is the number of positives at position $j$ for cases with $I^L_i\neq j$ and all controls.
- $\left[\theta_j^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}\mid\text{others}\right]\sim \text{Beta}\left( S_j+c_{1j}, \sum_{i:Y_i=1} \mathbf{1}_{\{I^L_i=j\}} - S_j+c_{2j}\right)$, where $S_j=\sum_{i:Y_i=1,I^L_i=j}M^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}_{ij}$ is the number of positives for cases with the $j$th pathogen as their cause.
- $\left[\theta_j^{{\ensuremath{\mbox{\scriptsize \sf SS } }}}\mid\text{others}\right]\sim \text{Beta}\left(T_j+d_{1j},\sum_{i:Y_i=1, {\ensuremath{\mbox{\scriptsize \sf SS } }}\text{available}}\mathbf{1}_{\{I^L_i=j\}}-T_j+d_{2j}\right)$, where $T_j = \sum_{i:Y_i=1, I^L_i=j, {\ensuremath{\mbox{\scriptsize \sf SS } }}\text{available}}M^{{\ensuremath{\mbox{\scriptsize \sf SS } }}}_{ij}.$ When no SS data is available, this conditional distribution reduces to $\text{Beta}(d_{1j},d_{2j})$, the prior.
- $\left[\bm{\pi}\mid I_{i}^L,i:Y_i=1\right] \sim
\text{Dirichlet}(a_1+U_1,...,a_J+U_J)$, where $U_j=\sum_{i:Y_i=1}\mathbf{1}_{\{I^L_i=j\}}$.
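The following minimal sketch implements one sweep of these updates in a simplified setting that uses BrS data only (SS and GS measurements are omitted for brevity). The array names, the default hyperparameters, and the restriction to BrS data are our own illustrative choices.

``` python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_sweep(M_case, M_ctrl, I, theta, psi, pi,
                a=1.0, b=(1.0, 1.0), c=(1.0, 1.0)):
    """One sweep of a simplified Gibbs sampler using BrS data only.
    M_case: (n1, J) binary case measurements; M_ctrl: (n0, J) controls.
    I: current latent causes for cases (length-n1 int array, values 0..J-1).
    theta, psi, pi: length-J numpy arrays of current TPRs, FPRs, etiology fractions."""
    n1, J = M_case.shape
    n0 = M_ctrl.shape[0]

    # update each latent cause I_i from its full conditional
    for i in range(n1):
        logp = np.log(pi) + M_case[i] * np.log(theta) + (1 - M_case[i]) * np.log(1 - theta)
        fp = M_case[i] * np.log(psi) + (1 - M_case[i]) * np.log(1 - psi)
        logp += fp.sum() - fp                 # false-positive terms for l != j
        p = np.exp(logp - logp.max())
        I[i] = rng.choice(J, p=p / p.sum())

    # update false positive rates psi_j (cases with I != j plus all controls)
    for j in range(J):
        not_j = I != j
        pos = M_case[not_j, j].sum() + M_ctrl[:, j].sum()
        trials = not_j.sum() + n0
        psi[j] = rng.beta(pos + b[0], trials - pos + b[1])

    # update true positive rates theta_j (cases whose cause is j)
    for j in range(J):
        is_j = I == j
        pos = M_case[is_j, j].sum()
        theta[j] = rng.beta(pos + c[0], is_j.sum() - pos + c[1])

    # update etiology fractions pi
    pi[:] = rng.dirichlet(a + np.bincount(I, minlength=J))

    return I, theta, psi, pi
```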
Pathogen names and their abbreviations {#appendix:pathname}
======================================
**Bacteria**: HINF- *Haemophilus influenzae*; PNEU-*Streptococcus pneumoniae*; SASP-*Salmonella* species; SAUR-*Staphylococcus aureus*. **Viruses**: ADENO-adenovirus; COR$\_$43-coronavirus OC43; FLU$\_$C-influenza virus type C; HMPV$\_$A$\_$B-human metapneumovirus type A or B; PARA1-parainfluenza type 1 virus; RHINO-rhinovirus; RSV$\_$A$\_$B-respiratory syncytial virus type A or B.
![Directed acyclic graph (DAG) illustrating relationships among lung infection state ($I^L$), imperfect lab measurements on the presence/absence of each of a list of pathogens at each site($M^{NP}$, $M^{B}$ and $M^{L}$), disease outcome ($Y$), and covariates ($X$).[]{data-label="fig:basicstructure"}](basicstructure.png){width="\textwidth"}
[.5]{} ![Population (top) and individual (bottom) etiology estimations for a single sample with $500$ cases and $500$ controls with true $\bm{\pi}=(0.67,0.26,0.07)'$ and either $1\%(N=5)$ or $10\%(N=50)$ GS data on cases. In (a) or (b), *red circled plus* shows the true population etiology distribution $\bm{\pi}$. The closed curves are $95\%$ credible regions for analysis using BrS data only (*blue dashed lines* “- - -"), BrS+GS data (*light green solid lines* “—"), GS data only (*black dotted lines* “$\cdots$"); *Solid square/dot/triangle* are the corresponding posterior means of $\bm{\pi}$; The $95\%$ highest density region of uniform prior distribution is also visualized by red “$\cdot - \cdot -$" for comparison. In (c) or (d), $8 (=2^3)$ BrS measurement patterns and predictions for individual children are shown with measurement patterns attached. The numbers at the vertices show empirical frequencies of GS measurements.[]{data-label="fig:BSvalue.simulation"}](realGSpercent=1_ncase=500_ncontrol=500_combined_triangle "fig:"){width="\linewidth"}
[.5]{} ![](realGSpercent=10_ncase=500_ncontrol=500_combined_triangle "fig:"){width="\linewidth"}
[.5]{} ![](realGSpercent=1_ncase=500_ncontrol=500_combined_triangle_individual "fig:"){width="\linewidth"}
[.5]{} ![](realGSpercent=10_ncase=500_ncontrol=500_combined_triangle_individual "fig:"){width="\linewidth"}
![The observed BrS rates (with $95\%$ confidence intervals) for cases and controls are shown on the far left. The conditional odds ratio given the other pathogens is listed with $95\%$ confidence interval in the box to the right of the BrS data summary. In the left box, below the case and control observed rates is a horizontal line with a triangle. The line starts on the left at the model estimated false positive rate (FPR, $\hat{\psi}_j^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}$) and ends on the right at the estimated true positive rate (TPR, $\hat{\theta}_j^{{\ensuremath{\mbox{\scriptsize \sf BrS } }}}$). Below the TPR are two boxplots summarizing its posterior (top) and prior (bottom) distributions. The location of the triangle, expressed as a fraction of the distance from the estimated FPR to the TPR, is the point estimate of the etiologic fraction for each pathogen. The SS data are shown in a similar fashion to the right of the BrS data using support intervals rather than boxplots.[]{data-label="fig:site02GAM"}](02GAM_three_panel_plot){width="\textwidth"}
![Summary of posterior distribution of pneumonia etiology estimates. Top: posterior distribution of viral etiology; bottom left (right): posterior etiology distribution for top two causes given a bacterial (viral) infection. The blue circles are the $95\%$ credible regions *within* the bacterial or viral groups.[]{data-label="fig:category_pie_WF"}](02GAM_group_triangle_plot){width="\linewidth"}
Web-based supplementary materials for “Partially-Latent Class Models (pLCM) for Case-Control Studies of Childhood Pneumonia Etiology" {#web-based-supplementary-materials-for-partially-latent-class-models-plcm-for-case-control-studies-of-childhood-pneumonia-etiology .unnumbered}
======================================================================================================================================
Z.Wu [*et al.*]{}
![Posterior predictive checking for 10 most frequent BrS measurement patterns among cases and controls with expert priors on TPRs.[]{data-label="fig:freq.check"}](frequent_pattern_fit_check){width=".8\textwidth"}
![Posterior predictive checking for pairwise odds ratios separately for cases (lower triangle) and controls (upper triangle) with expert priors on TPRs. Each entry is a standardized log odds ratio (SLOR): the observed log odds ratio for a pair of BrS measurements minus the mean LOR for the posterior predictive distribution divided by the standard deviation of the posterior predictive distribution. The first significant digit of absolute SLORs are shown in red for positive and blue for negative values, and only those greater than 2 are shown.[]{data-label="fig:or.check"}](02GAMppd_stat_excess_2_case_control){width=".9\textwidth"}
---
abstract: 'We investigate properties of the entropy density related to a generalized extensive statistics and derive the thermodynamic Bethe ansatz equation for a system of relativistic particles obeying such a statistics. We investigate the conformal limit of such a system. We also derive a generalized Y-system. The Gentile intermediate statistics and the statistics of $\gamma$-ons are considered in detail. In particular, we observe that certain thermodynamic quantities for the Gentile statistics majorize those for the Haldane-Wu statistics. Specifically, for the effective central charges related to affine Toda models we obtain nontrivial inequalities in terms of dilogarithms.'
---
November 2000\
hep-th/0011004\
Introduction
============
Although all experimentally observed particles are either bosons or fermions, the general principles of quantum mechanics do not prohibit the existence of particles obeying other types of statistics [@GM]. The motivation to consider an exotic statistics is that it may provide an [*effective*]{} description of the dynamics of particles if their interaction is so strong that the available occupancy for a given state depends on the number of particles already present in the state [@Ha]. An exotic statistics can also arise if particles possess a hidden internal degree of freedom that is invisible in the Hamiltonian but that can become dynamically relevant. Finally, an exotic statistics can emerge in the description of physical models in one or two spatial dimensions. The reason is that in this case the multi-particle wave function is not necessarily even or odd, and particles (anyons) may obey a fractional statistics [@any]. Moreover, in low-dimensional theories the exchange statistics of the fields present in a Hamiltonian is not directly related to the exclusion statistics of the corresponding particles. For instance, although particles in the one-dimensional real coupling affine Toda field theories (ATFT) are bosons, one has to impose the Pauli principle on their momenta in the thermodynamic Bethe ansatz analysis in order to obtain the correct values of the corresponding central charges [@TBAKM]. Another example is conformal field theory, where quasi-particle spectra can be constructed with the use of an exotic exclusion statistics [@quasi].
The first attempt to introduce an exotic statistics is attributed to G. Gentile [@Gent], who proposed an intermediate statistics in which at most $G$ indistinguishable particles can occupy a given state. This intermediate statistics interpolates between fermions and bosons, which are recovered for $G \= 1$ and $G \= \infty$, respectively. A decade after Gentile’s work, H.S. Green introduced the parafermi statistics [@Gr], which also fixes the maximal occupancy of a given state. Since then, various aspects of systems governed by the Gentile or parafermi statistics have been discussed in the literature, among them possible applications to elementary particle theory [@gen] and to the quark confinement problem in QCD [@qu].
Many authors discussed thermodynamic properties of an ideal gas obeying the Gentile or parafermi statistics [@Gent; @gas; @app]. In the present paper we will study the thermodynamic limit of one-dimensional relativistic integrable field models governed by a Gentile-like statistics. More precisely, the main purpose of this manuscript is to implement systematically a generalized extensive exclusion statistics into the thermodynamic Bethe ansatz (TBA) analysis of systems where the interaction of particles is relativistic, short-range and characterized by a factorizable scattering matrix.
The TBA was originally developed in the seminal papers by Yang and Yang [@Yang] for treating a one-dimensional ideal gas. In later works [@Zam; @Y] this technique was adapted to one-dimensional relativistic models. The boundary condition for a many-particle wave function leads to what is commonly referred to as the Bethe ansatz equation, which provides the quantization condition for the possible momenta of the system. The TBA constitutes an interface between massive integrable models and conformal field theories. From the TBA one can extract information about the ultraviolet limit of the underlying massive model, in particular, find the corresponding (effective) central charge.
In the derivation of the TBA equation the underlying exclusion statistics is usually taken to be either of bosonic or fermionic type [@TBAKM; @Yang; @Zam]. In the present paper the TBA equation will be derived and studied for a significantly more general situation. Namely, let $W(N,n)$ denote the total dimension of the Hilbert space for a system of $n$ indistinguishable particles that can occupy $N$ different states in the Fock-space. Assume that there exists a function $f(t)$ such that its $N$-th power is a generating function for the dimensions $W(N,n)$, i.e. $$\label{fgen}
\bigl( f(t) \bigr)^N = \sum_{k \geq 0} W(N,k) \, t^k \,.$$ In statistical mechanics, if the variable $t$ is understood as the fugacity, the sum on the right hand side is the grand partition function of the system. The property of a grand partition function to be the $N$-th power of a function independent of $N$ is called [*extensivity*]{}. As we are going to demonstrate below, this property allows us to develop the TBA analysis for a rather general choice of $f(t)$. Let us stress that for the purpose of deriving the TBA equation even asymptotic extensivity suffices, i.e., it is enough that equation (\[fgen\]) holds in the large-$N$ limit.
Of course, if the explicit form of $f(t)$ is known, we can use it to simplify the general formulae. An instructive example is the Gentile statistics of order $G$, which is defined by the following choice of $f(t)$: $$\label{Gf}
f_{\scriptscriptstyle G}(t) = 1+t+t^2+\ldots + t^G \,.$$ For $G \= 1$ and $G \= \infty$ this statistics describes ordinary fermions and bosons, respectively. Although $f_{\scriptscriptstyle G}(t)$ is the simplest non-trivial choice of $f(t)$, it has all the features of a generalized extensive statistics. Therefore, the Gentile statistics will be our main working example below.
The paper is organized as follows. In Section 2 we describe possible types of a generalized extensive statistics and study properties of the corresponding entropy densities in the thermodynamic limit. The Gentile statistics and the $\gamma$-ons statistics are discussed in this context. In Section 3 we compare properties of the Gentile statistics and the Haldane-Wu statistics. In Section 4 we derive the thermodynamic Bethe ansatz equation and the finite-size scaling function for a relativistic multi-particle system in which statistical interaction is governed by a generalized extensive statistics. In Section 5 we derive the Y-system related to such a generalized statistics. In Section 6 we study the ultraviolet limit of the generalized TBA equation and obtain an expression for the corresponding effective central charge. In Section 7 we compute finite-size scaling functions and central charges related to some affine Toda models for the Gentile statistics and the $\gamma$-ons statistics and compare them with their counterparts for the Haldane-Wu statistics. Our conclusions are stated in Section 8.
Types of extensive statistics and entropy density
=================================================
Consider a system of $n$ indistinguishable particles that can occupy $N$ different states in the Fock-space. Assume that the particles obey the Gentile statistics of order $G$, i.e., that each state can be occupied by at most $G$ particles (with $G$ being a positive integer number). There are several combinatorial ways to compute the total dimension of the Hilbert space for such a system. For instance, we first choose $m_1$ states which are occupied by at least one particle, then among these $m_1$ states we choose $m_2 \leq m_1$ states which are occupied by at least two particles, etc. This way of counting yields the following expression for the total dimension of the Hilbert space $$\label{Gcount}
W_G(N,n) = \sum_{N \geq m_1 \geq \ldots \geq m_{G-1}
\geq 0} C_N^{m_1} \, C_{m_1}^{m_2} \ldots C_{m_{G-2}}^{m_{G-1}}
\, C_{\, m_{G-1}}^{n-m_1- \ldots - m_{G-1}} \,,$$ where $C_k^m=k!/(m!(k-m)!)$ are binomial coefficients (and $C_k^m=0$ if $m \> k$ or if $m \< 0$). Now, let us compute the $N$-th power of the polynomial $f_{\scriptscriptstyle G}(t)$ defined in (\[Gf\]). Thanks to the obvious recursive relation $f_{\scriptscriptstyle G}(t)=1+t\, f_{\scriptscriptstyle G-1}(t)$, we can do this by $G$ consecutive applications of the binomial formula. Then it is easy to see that the $N$-th power of $f_{\scriptscriptstyle G}(t)$ is a generating function for the dimensions $W_G(N,n)$, i.e. $$\label{gen}
\bigl( f_{\scriptscriptstyle G}(t) \bigr)^N =
\sum_{n=0}^{G N} W_G(N,n) \, t^n \,.$$ Thus, the Gentile statistics is a particular case of an extensive statistics.
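As a quick numerical illustration of (\[gen\]), the following minimal sketch expands $\bigl(f_{\scriptscriptstyle G}(t)\bigr)^N$ by repeated polynomial multiplication and reads off the dimensions $W_G(N,n)$; the check that they sum to $f_{\scriptscriptstyle G}(1)^N=(G+1)^N$ is our own addition.

``` python
import numpy as np

def gentile_dimensions(N, G):
    """Coefficients W_G(N, n), n = 0..G*N, of (1 + t + ... + t^G)^N,
    obtained by repeated polynomial multiplication (discrete convolution)."""
    f = np.ones(G + 1, dtype=np.int64)
    w = np.array([1], dtype=np.int64)
    for _ in range(N):
        w = np.convolve(w, f)
    return w

W = gentile_dimensions(N=4, G=2)
print(list(W))            # W_2(4, n) for n = 0, ..., 8
print(W.sum(), 3**4)      # total dimension equals f_G(1)^N = (G + 1)^N
```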
Let us stress that the order $G$ of the Gentile statistics must be a positive integer. Although we could try to extend it to other values by formal replacement of (\[Gf\]) with $$\label{Gff}
f_{\scriptscriptstyle G}(t) = \frac{1-t^{G+1}}{1-t} \,,$$ the resulting function $f_{\scriptscriptstyle G}(t)$ will not satisfy the important positivity property (see below).
Consider now a more general system of $n$ indistinguishable particles that possesses, at least for large $N$, the extensivity property (\[fgen\]) with $$\label{f}
f(t) = \sum_{k=0}^{d} P_k t^k \,,$$ where $d$ can be a finite positive number or infinity. The Taylor coefficients $P_k$ are fractional dimensions of levels in a single-state Hilbert subspace and can be regarded as probabilities of occupation of the given state by $k$ particles (see related discussions in [@NW; @Pn; @Po]). The total dimension of the single-state subspace is $f(1)$. It is natural to require [*positivity*]{} of the quantities $P_k$: $$\label{posit}
P_0=1 \qquad {\rm and} \qquad P_k \geq 0 \quad
{\rm for} \quad k \geq 1 \,.$$ The first condition here implies that the vacuum is realized with probability one independently of the size $N$ of the system. Furthermore, we assume that the series (\[f\]) converges in the complex plane for $0 \leq |z| < R$, where $R \leq \infty$. This implies that $f(t)$ belongs to one of the following types: $$\label{types}
\begin{array}{rl}
{\rm I.} & \quad d < \infty \,, \qquad R=\infty \,; \\
{\rm II.} & \quad d = \infty \,, \qquad R=\infty \,; \\
{\rm IIIa.} &\quad d = \infty \,, \qquad 1< R < \infty \,; \\
{\rm IIIb.} & \quad d = \infty \,, \qquad 0< R \leq 1 \,.
\end{array}$$ The types I, II, and III consist of finite degree polynomials, analytic functions with infinite Taylor series, and meromorphic functions, respectively. For example, a finite order Gentile statistics belongs to the type I, the Boltzmann statistics ($f(t) \= \exp t$) belongs to the type II, and the bosonic statistics ($f(t) \= (1 \- t)^{-1}$) is of the type IIIb. The reason we divided the type III into two subtypes will become clear below.
Let us remark that, in addition to (\[posit\]), one may require that $P_1 \= 1$ with the motivation that the statistics should not be deformed if only one particle is present in the system. Although all the concrete cases which we consider below do fulfill this requirement, it does not seem to be crucial for the general considerations.
The entropy of the system under consideration is $S=k \ln W(N,n)$, where $k$ is the Boltzmann constant. We need to compute the entropy in the thermodynamic limit, i.e., when $N\rightarrow \infty$ and $n/N$ is fixed. The extensivity property (\[fgen\]) allows us to express $W(N,n)$ as the following contour integral $$\label{cint}
W(N,n) = \frac{1}{2\pi i} \, \oint_{|z|=\rho} \, dz \,
\bigl( f(z) \bigr)^N \, z^{-n-1} =
\frac{1}{2\pi i} \, \oint_{|z|=\rho} \frac{dz}{z} \,
\exp \{ N ( \ln f(z) - \mu \ln z ) \} \,,$$ where $\rho < R$, and we denoted $\mu = n/N$. Now we can find an asymptotic expression for the entropy in the thermodynamic limit by application of the saddle point method to the integral (\[cint\]). The saddle point of the exponent in (\[cint\]), denote it $x$, is found as the positive root of the equation (prime stands for a derivative) $$\label{xeq}
x \, f^{\prime}(t)|_{t=x} = \mu \, f(x) \,.$$ Choosing $\rho=x$ in (\[cint\]), so that the saddle point belongs to the integration contour, we apply the standard result (see, e.g., [@asym]) for the Laplace integral: $$\label{expr}
W(N,\mu N) =
\frac{ \exp \{ N (\ln f(x) - \mu \ln x) \} }
{ \sqrt{ 2\pi N h_\mu(x) } } \,
\Bigl( \frac{1}{x} + O(1/N) \Bigr)
\qquad {\rm as} \quad N\rightarrow \infty \,,$$ where $h_\mu (t)=f^{\prime\prime}(t)/(\mu f(t))+(1-\mu)/t^2$. For any $f(t)$ satisfying (\[posit\]) and $x$ obeying (\[xeq\]) we have $h_\mu (x) \> 0$ due to the Hadamard theorem [@asym; @Tit]. From (\[expr\]) we obtain the entropy density $s(\mu)$ for a given value of $\mu$ $$\label{lim}
\lim_{N \rightarrow \infty} \frac{1}{N} \ln W(N,\mu N)
\equiv s(\mu) = \ln f(x) - \mu \, \ln x \,,$$ where $x$ is positive and satisfies (\[xeq\]).
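In practice, the entropy density (\[lim\]) is easy to evaluate numerically: for a given $\mu$ one solves the saddle point equation (\[xeq\]) for $x>0$ and substitutes the result into (\[lim\]). The minimal sketch below does this for the Gentile statistics; the use of a bracketing root finder and the chosen bracket are our own implementation choices.

``` python
import numpy as np
from scipy.optimize import brentq

def entropy_density(mu, f, fprime, x_hi=1e6):
    """s(mu) = ln f(x) - mu ln x, with x > 0 solving x f'(x) = mu f(x)."""
    x = brentq(lambda t: t * fprime(t) - mu * f(t), 1e-12, x_hi)
    return np.log(f(x)) - mu * np.log(x)

def gentile(G):
    """Single-state partition function f_G(t) = 1 + t + ... + t^G and its derivative."""
    f = lambda t: sum(t**k for k in range(G + 1))
    fp = lambda t: sum(k * t**(k - 1) for k in range(1, G + 1))
    return f, fp

f2, f2p = gentile(2)
print([entropy_density(mu, f2, f2p) for mu in (0.25, 0.5, 1.0, 1.5, 1.75)])
# for G = 2 and mu = 1 the saddle point is x = 1, so s = ln 3 exactly
print(entropy_density(1.0, f2, f2p), np.log(3.0))
```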
Two remarks are in order here. Strictly speaking, if $f(t)$ is degenerate, i.e., if there exists an integer $p>1$ such that $f(t) = \sum_{k} P_{pk} t^{pk}$, then the above derivation is modified since the integrand in (\[cint\]) has $p$ saddle points. However, the corresponding physical interpretation is simply that particles can be added only in $p$-tuples. Therefore, it is natural to introduce cluster variables $t^\prime = p t$, $n^\prime = p n$, etc. In terms of these new variables formulae (\[cint\])-(\[expr\]) remain valid.
A more important remark is that the property $P_k \geq 0$ is crucial for deriving (\[lim\]). In general, without the positivity property, (\[cint\]) does not have a limit that is real and uniform in $\mu$. Moreover, if we allow $f(t)$ to have negative Taylor coefficients, then some $W(N,n)$ in the expansion of $(f(t))^N$ may also be negative. This leads to an immediate problem with the definition of the entropy as the logarithm of $W(N,n)$. However, if the first negative coefficient in the expansion of $(f(t))^N$ appears only at a sufficiently large power $n_0$ (of order $O(N)$), then an $f(t)$ violating the positivity condition can still have a well defined entropy density for a certain range of $\mu$. For instance, such is the case of the Haldane-Wu statistics with statistical interaction $g$ (see Section 3), where $n_0 \approx N/g$. However, in the present paper we will restrict our consideration, for simplicity, to partition functions strictly obeying the positivity condition.
The entropy density is a key quantity for developing the TBA analysis. From the definition of $\mu$ it is clear that $0 \!\leq\! \mu \!\leq\! d$ for $f(t)$ of the type I and $0 \!\leq\! \mu \!\leq\! \infty$ for the other types. Formally this follows from equation (\[xeq\]) which shows that $\mu$ varies from $\mu(x \= 0)$ to $\mu(x \= R)$. Moreover, $\mu(x)$ is a strictly increasing function because $\partial_x \mu = x \mu h_\mu (x) \> 0$ for $x \> 0$.
Since $f(t)$ is regular at zero, it is easy to see that $\mu \ln x$ tends to zero for small $\mu$. Together with the condition $P_0=1$ this yields $s(0)=0$. In order to analyze further behaviour of $s(\mu)$, we employ (\[xeq\]) and derive from (\[lim\]) that $$\label{der}
\partial_\mu s(\mu) = -\ln x \,.$$ Since $\partial_x \mu \> 0$, we conclude that $\partial^2_\mu s(\mu) \< 0$, that is $s(\mu)$ is a convex up function. Furthermore, (\[der\]) implies that $\max s(\mu) = \ln f( \min \{1,R\} )$. Note, however, that $s(\mu)$ is not guaranteed to be positive for the whole range of $\mu$. More precisely, $s(\mu)$ may have one root at the interval $\mu(1) < \mu \leq \mu(R)$. Indeed, for $f(t)$ of the type I we derive from (\[xeq\]) and (\[lim\]) that $$\label{sd}
s(d)=\ln P_d \,.$$ That is, $s(\mu)$ is non-negative everywhere at $0 \!\leq\! \mu \!\leq\! d$ only if the last Taylor coefficient $P_d$ is greater or equal to one. For the other types of $f(t)$ we have the following properties $$\begin{aligned}
& s(\infty) = \infty \quad
\hbox{for type IIIb} \,, & \label{sinf1} \\
& s(\infty) = -\infty \quad \hbox{\rm for types II and IIIa}
\,. & \label{sinf2}\end{aligned}$$ Indeed, in the first case $x \!\leq\! R \!\leq\! 1 $, hence, according to (\[der\]), $s(\mu)$ is strictly increasing. In the second case we have $s(\infty)=\ln f(1)-\int_{\mu(1)}^{\infty} {\rm d}\mu \,\ln x$. The integral here apparently diverges since it exceeds $\int_{\mu(R_0)}^\infty {\rm d}\mu \, \ln R_0$, where we can choose any $R_0$ such that $1 \< R_0 \< R$.
Investigation of thermodynamic properties of models in which the entropy density becomes negative is rather problematic. Therefore eqs. (\[sd\])-(\[sinf2\]) suggest that we should restrict the choice of $f(t)$ to the type I with $P_d \geq 1$ (the Gentile-like statistics) and to the type IIIb (the bose-like statistics). However, let us notice that in the TBA framework the variable $x$ depends on the rapidity $\theta$. For some relativistic models with factorizable scattering the typical picture [@TBAKM] is that $x(\theta)$ falls off as $|\theta|$ grows (for instance, $x(\theta)=\exp\{-mr \cosh \theta\}$ for the trivial S-matrix). In this case $x(\theta)<x(0)$ and we can consider $f(t)$ of the types II and IIIa if $s(\mu)$ remains non-negative for $\mu \leq \mu(x(0))$.
In the context of this discussion it is instructive to consider the so-called $\gamma$-ons [@al; @app]. These are particles with statistics interpolating between fermions ($\gamma \= 1$) and bosons ($\gamma \= -1$) in the following simple way $$\label{mga}
\mu(x) = \frac{x}{1+\gamma x} \,.$$ Solving equation (\[xeq\]) for this choice of $\mu$ it is easy to find the corresponding single-state partition function: $$\label{fga}
f_\gamma(t) = (1+\gamma t)^{1/\gamma} =
1 + t + \sum_{k\geq 2} [k \- 1]_\gamma \frac{t^k}{k!} \,,$$ where $[k]_\gamma \= \prod_{m=1}^{k} (1 \- m\gamma)$. For $\gamma \> 0$ the positivity condition (\[posit\]) is fulfilled only if $\gamma \= 1/d$, with $d$ positive integer. In this case $f_\gamma(t)$ is of the type I, the corresponding maximal occupancy is $\mu \= d$. Since $\ln P_d \< 0$ for $d \> 1$, the entropy density becomes negative for certain range of $\mu$. The case of $\gamma \= 0$ describes the Boltzmann statistics. Here $f_\gamma(t) \= \exp{t}$ belongs to the type II and $s(\mu)$ is negative for $\mu \> e$. Finally, for $\gamma \< 0$ the positivity condition (\[posit\]) is always fulfilled and $f_\gamma(t)$ is of the type IIIa or IIIb depending on the value of $\gamma$.
Returning to the Gentile statistics, we can summarize the properties of the corresponding entropy density $s_{\scriptscriptstyle G}(\mu)$ as follows (see Fig. 1 for illustration). $s_{\scriptscriptstyle G}(\mu)$ is a convex up function defined for $0 \!\leq\! \mu \!\leq\! G$ and vanishing at the end points of this interval. It attains the maximum at $\mu=G/2$: $$\label{Gmax}
s_{\scriptscriptstyle G} (G/2) = \ln (G+1) \,.$$ Furthermore, $s_{\scriptscriptstyle G}(\mu)$ possesses the following symmetry $$\label{sym}
s_{\scriptscriptstyle G} (G-\mu) =
s_{\scriptscriptstyle G} (\mu) \,.$$ This is a consequence of the equality $W_G(N,GN \- n)= W_G(N,n)$ which, in turn, follows from the relation (\[gen\]).
Comparison with Haldane-Wu statistics
=====================================
It is interesting to compare thermodynamic properties of a system of particles obeying the Gentile statistics with those of a system obeying the so-called Haldane-Wu statistics. For the latter, the total dimension of the Hilbert space is postulated to be [@Wu] $$\label{Wahr}
\hat{W}_g(N,n) =
\frac{(N+ (1-g)n +g-1)!}{n! \, ( N - g n + g-1)!} \,.$$ This expression interpolates between the bosonic ($g=0$) and the fermionic ($g=1$) statistics. Introduction of such an interpolating statistics was motivated by Haldane’s generalization [@Ha] of the Pauli exclusion principle to the form $$\label{Pauli}
\Delta D/\Delta n=-g \,.$$ Here $D$ is the number of available states (holes) before the $n$-th particle has been added to the system. The parameter $g$ is called the statistical interaction. Properties of an ideal gas obeying the Haldane-Wu statistics were actively discussed in the literature [@NW; @Wu; @HW]. The corresponding TBA equation for relativistic integrable models was obtained in [@BF] and investigated in [@BF; @F].
A direct comparison of the quantities $W_G(N,n)$ and $\hat{W}_g(N,n)$ is problematic if the system contains a finite number of particles. Indeed, formula (\[Gcount\]) has a bulky form for generic $G$ (i.e., for $2 \!\leq\! G \< \infty$). Moreover, it requires additional conventions to make exact sense of expression (\[Wahr\]) for generic $g$ (i.e., for $0 \< g \< 1$). Actually, there exists no prescription for counting of states on the microscopical level that would lead to (\[Wahr\]) (see related discussions in [@NW; @Pn; @Po]). To overcome these difficulties we can compare the two statistics in the thermodynamic limit. More precisely, let us compare the entropy density $s_{\scriptscriptstyle G}(\mu)$ with its counterpart $\hat{s}_g(\mu)\equiv \lim_{N \rightarrow\infty}
\frac{1}{N} \ln\hat{W}_g(N,\mu N)$ that is readily derived from (\[Wahr\]) with the help of the Stirling formula $$\label{limhw}
\hat{s}_g(\mu) = (1+\mu(1-g))\ln (1+\mu(1-g)) -
\mu\ln \mu - (1-g\mu) \ln (1-g\mu) \,.$$
Let us notice that the Haldane-Wu statistics is extensive but the corresponding function $f_g(t)$ does not satisfy the positivity property (\[posit\]). Indeed, we can reconstruct $f_g(t)$ first as a function of $\mu$ by the following formula $$\label{le}
f(\mu) = \exp \{ s(\mu) - \mu \, \partial_\mu s(\mu) \} \,,$$ that follows from (\[lim\]) and (\[der\]). For $\hat{s}_g(\mu)$ given by (\[limhw\]) this yields $$\label{fhw}
f_g(\mu) = \frac{1+(1-g)\mu}{1-g\mu} \,,$$ which is a well-defined function. But reexpressing it in terms of $t$ with the help of (\[xeq\]) we will always obtain a function that breaks the positivity property (\[posit\]). For instance, for $g=1/2$ we find (which coincides with the result of [@NW]) . However, the first negative term in expansion of the $N$-th power of this series appears for $n= 2N\+ 3$. This implies that in the large $N$ limit thermodynamic quantities are well-defined for $\mu \leq 2$.
For our purposes we can regard (\[limhw\]) as [*a definition*]{} of the Haldane-Wu statistics. Then we have a finite maximal number of particles, $N/g$, for a given number of states $N$ simply because $\hat{s}_g(\mu)$ is defined for $0 \!\leq\! \mu \!\leq\! 1/g$. Furthermore, $\hat{s}_g(\mu)$ is a convex up function vanishing at the end points of this interval. Thus, we see that the properties of the entropy density $\hat{s}_g(\mu)$ are similar to those of a type I extensive statistics and, specifically, to those of the Gentile statistics. Therefore, it is natural to ask how much the behaviour of $s_{\scriptscriptstyle G}(\mu)$ differs from $\hat{s}_g(\mu)$ if $g=1/G$ (when the corresponding maximal occupancies coincide).
First, notice that the function $\hat{s}_{\scriptscriptstyle 1/G}(\mu)$ cannot coincide with $s_{\scriptscriptstyle G}(\mu)$ identically for generic $G$ since $\hat{s}_g(\mu)$ does not possess a symmetry like (\[sym\]) if $g\neq 0,1$. Next, we can compare the mid-point value $\hat{s}_{\scriptscriptstyle 1/G}(G/2)=\frac{G+1}{2}\ln (G+1) -
\frac{G}{2}\ln G$ with the corresponding value of $s_{\scriptscriptstyle G}(\mu)$ given by (\[Gmax\]). Taking into account the inequalities $$\label{Gin}
\begin{array}{l}
t \ln t > (t-1)\ln(t+1) \qquad {\rm for} \quad t>1 \,, \\[0.5mm]
t \ln t < (t-1)\ln(t+1) \qquad {\rm for} \quad 0<t<1 \,,
\end{array}$$ we establish that $ s_{\scriptscriptstyle G}(G/2) >
\hat{s}_{\scriptscriptstyle 1/G}(G/2) $ for $G \> 1$. Actually, numerical computations for various values of $G \> 1$ show that $s_{\scriptscriptstyle G}(\mu)$ majorizes $\hat{s}_{\scriptscriptstyle 1/G}(\mu)$ [*everywhere*]{} at $0 \< \mu \< G$, $$\label{ineq2}
s_{\scriptscriptstyle G}(\mu) >
\hat{s}_{\scriptscriptstyle 1/G}(\mu)
\qquad {\rm for} \quad G > 1 \,.$$ For illustration, the case of $G=2$ is presented in Fig. 1.
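A minimal numerical check of (\[ineq2\]) is sketched below; it evaluates $s_{\scriptscriptstyle G}(\mu)$ via the saddle point equation (\[xeq\]) and $\hat{s}_{1/G}(\mu)$ via (\[limhw\]) on a grid of $\mu$ values. The grid and the root-finding bracket are our own choices.

``` python
import numpy as np
from scipy.optimize import brentq

def s_gentile(mu, G):
    """Entropy density of the Gentile statistics of order G, via (xeq) and (lim)."""
    f  = lambda t: sum(t**k for k in range(G + 1))
    fp = lambda t: sum(k * t**(k - 1) for k in range(1, G + 1))
    x = brentq(lambda t: t * fp(t) - mu * f(t), 1e-12, 1e6)
    return np.log(f(x)) - mu * np.log(x)

def s_haldane_wu(mu, g):
    """Entropy density of the Haldane-Wu statistics, eq. (limhw)."""
    return ((1 + mu * (1 - g)) * np.log(1 + mu * (1 - g))
            - mu * np.log(mu) - (1 - g * mu) * np.log(1 - g * mu))

G = 2
for mu in np.linspace(0.1, 1.9, 7):
    print(round(mu, 2), s_gentile(mu, G) > s_haldane_wu(mu, 1.0 / G))  # expect True
```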
Thus, we see that for generic $G$ the difference between $\hat{s}_{\scriptscriptstyle 1/G}(\mu)$ and $s_{\scriptscriptstyle G}(\mu)$ is not small if $\mu$ is not close to $\mu=0$ or $\mu=G$. It is instructive to compare these functions also near the end points (in particular, this will provide an additional support to the assertion (\[ineq2\])).
In a vicinity of $\mu=0$ equation (\[xeq\]) is solved as $x=\mu +(2\delta_{G,1} -1)\mu^2 + O(\mu^3)$, and we find that $ s_{\scriptscriptstyle G}(\mu) =
(1-\ln\mu) \mu +(1/2-\delta_{G,1}) \mu^2 + O(\mu^3) $, where $\delta_{m,n}$ stands for the Kronecker symbol. Comparing this expansion with $ \hat{s}_g(\mu) = (1-\ln\mu) \mu + (1/2-g)\mu^2 + O(\mu^3) $ which follows from (\[limhw\]), we infer that for small $\mu$ functions $\hat{s}_g(\mu)$ and $s_{\scriptscriptstyle G}(\mu)$ take close values (up to the order $\mu^2$) for [*any choice*]{} of $G$ and $g$. Actually, employing (\[xeq\]), it is easy to derive the following expansion for the entropy density of an arbitrary extensive statistics with $f(t)$ satisfying (\[posit\]): $$\label{exp}
s(\mu) = (1 + \ln P_1 - \ln \mu) \, \mu + O(\mu^2) \,.$$ Thus, for small $\mu$ in the thermodynamic limit, the Haldane-Wu and Gentile statistics are close not only to each other but to any extensive statistics for which the two first Taylor coefficients of $f(t)$ are $P_0 = P_1 =1$. For instance, for small $\mu$ the Haldane-Wu and Gentile statistics are close also to the statistics of $\gamma$-ons (\[fga\]).
Consider now $\mu=G-\epsilon$ for small positive $\epsilon$. Then, employing (\[exp\]) and the symmetry (\[sym\]), we derive: $ s_{\scriptscriptstyle G}(G-\epsilon) =
(1-\ln\epsilon) \,\epsilon + O(\epsilon^2) $. On the other hand, setting $\mu=1/g-\epsilon$ in (\[limhw\]), we obtain $ \hat{s}_g(1/g-\epsilon) =
(1-2\ln g -\ln\epsilon) \,g\,\epsilon +O(\epsilon^2) $. We see that in a vicinity of $\mu=G$ the difference between $s_{\scriptscriptstyle G}(\mu)$ and $\hat{s}_{\scriptscriptstyle 1/G}(\mu)$ is not even of first order in $\epsilon$ but of order $\epsilon\ln\epsilon$ (so that the corresponding plots are visibly different here, see Fig. 1).
Summarizing, it appears plausible that the Gentile statistics of order $G>1$ [*majorizes*]{} the Haldane-Wu statistics with parameter $g=1/G$ and these statistics are [*not close*]{} except for small values of $\mu \= n/N$.
Thermodynamic Bethe ansatz equation
===================================
Now we will consider a relativistic multi-particle system of $l$ different species of particles confined to a finite region of size $L$. We denote by $n_a$ the number of particles and by $N_a$ the dimension of the Fock-space related to the species $a$. In the thermodynamic limit both the dimensions of the Fock-space and the size $L$ of the region where the system is confined approach infinity, but the ratios $n_a/L$ remain finite [@Yang]. For the fraction of particles with rapidities between $\theta \- \Delta \theta /2$ and $\theta \+ \Delta \theta /2$ it is convenient to introduce densities $$\label{dens}
\Delta N_a = \rho_a(\theta )\,\Delta \theta \,L \,,\qquad
\Delta n_a = \rho_a^{r}(\theta )\,\Delta \theta \,L \,.$$ As usual, the rapidity $\theta $ parameterizes the two-momentum $P=m\left( \cosh \theta ,\sinh \theta \right) $ and the total energy of the system is given by $$\label{ener}
E \left[ \rho^r \,\right] = L \sum_{a=1}^l
\int_{-\infty}^{\infty} {\rm d}\theta \,
\rho_a^r (\theta ) \, m_a \cosh \theta \,.$$ According to (\[dens\]) the variable $\mu$ in (\[lim\]) is now related to the particle densities $$\label{mu}
\mu_a (\theta) = \rho_a^r (\theta)/\rho_a (\theta) \,.$$ Thus, we can express the entropy as the following functional $$\label{entro}
S[ \rho ,\rho^r \,] = kL \sum_{a=1}^l
\int_{-\infty }^{\infty } {\rm d}\theta \,
\rho_a(\theta) s(\mu_a)
= kL \sum_{a=1}^l \int_{-\infty }^{\infty } {\rm d}\theta \,
\bigl\{ \rho_a(\theta) \ln f_a(x_a(\theta)) -
\rho_a^r(\theta) \ln x_a(\theta) \bigr\} \,,$$ where each $x_a(\theta)$ satisfies (\[xeq\]) and hence it is a function of $\rho_a^r (\theta)/\rho_a (\theta)$. The subscript of $f_a$ means that different species may have different single-state partition functions.
In the formulation of the TBA equation for ordinary statistics it is common to introduce the so-called pseudo-energies $\epsilon_a(\theta)$ such that bosonic (upper sign) and fermionic (lower sign) distributions are $ \rho^r_a(\theta)/\rho_a(\theta) =
\left( \exp (\epsilon_a (\theta))\mp 1 \right)^{-1}$. In view of (\[xeq\]), it is natural to define the pseudo-energies for an arbitrary extensive statistics as follows $$\label{Id1}
\epsilon_a (\theta ) = - \ln x_a (\theta) \,,$$ so that $ \rho^r_a(\theta)/\rho_a(\theta) =
-\partial_{\epsilon_a(\theta)}
\ln f_a( e^{-\epsilon_a(\theta)})$. For instance, the distribution for the Gentile statistics acquires the following form in terms of the pseudo-energies (with a slightly different meaning it was introduced by Gentile [@Gent] for an ideal gas) $$\label{Id2}
\frac{\rho^r_a (\theta) }{ \rho_a (\theta) }=
\frac{ \sum_{k=1}^{G_a} k \exp\{ (G_a-k)\epsilon_a (\theta) \} }{
\sum_{k=0}^{G_a} \exp\{ k \epsilon_a (\theta) \} } \,.$$
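The sketch below evaluates the occupation (\[Id2\]) numerically and checks the two limiting cases: for $G_a=1$ it reproduces the Fermi-Dirac form $1/(e^{\epsilon}+1)$, and for large $G_a$ at positive pseudo-energy it approaches the Bose-Einstein form $1/(e^{\epsilon}-1)$. The chosen numerical values are illustrative.

``` python
import numpy as np

def gentile_occupation(eps, G):
    """Mean occupation rho^r/rho of a state with pseudo-energy eps,
    for the Gentile statistics of order G (eq. (Id2))."""
    k = np.arange(1, G + 1)
    num = np.sum(k * np.exp((G - k) * eps))
    den = np.sum(np.exp(np.arange(0, G + 1) * eps))
    return num / den

eps = 0.7
print(gentile_occupation(eps, 1), 1.0 / (np.exp(eps) + 1.0))    # Fermi-Dirac
print(gentile_occupation(eps, 200), 1.0 / (np.exp(eps) - 1.0))  # ~ Bose-Einstein
```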
According to the fundamental principles of thermodynamics the equilibrium state of a system is found by minimizing the free energy $F$. Hence, keeping the temperature constant we obtain the equilibrium condition by minimizing $F\left[ \rho ,\rho^r \,\right] =E\left[ \rho^r \,\right]
-TS\left[ \rho ,\rho^r \,\right] $ with respect to $\rho_a^r$. The equilibrium condition reads $$\label{equ}
\frac{\delta F}{\delta \rho^r_a } =
\frac{\delta E}{\delta \rho^r_a}
- T \frac{\delta S}{\delta \rho^r_a}-
T \sum_{b=1}^{l}\frac{\delta S}{\delta \rho_b }
\frac{ \delta \rho_b }{ \delta \rho^r_a }=0 \,.$$
The admissible momenta in the system are restricted by the boundary conditions. To derive the corresponding quantization equations one takes a particle in the multi-particle wave function on a trip through the whole system [@Yang; @Zam]. The particle will scatter with all other particles, which yields the following set of equations determining admissible rapidities $$\label{BA}
\exp (i L m_i \sinh \theta_i )\prod_{j\neq i}^{N}S_{ij}
(\theta_i - \theta_j )= \kappa_i \,, \qquad i=1,\ldots,N \,,$$ where $N \= \sum\nolimits_a n_a$ is the total number of particles, and $S_{ij}(\theta)$ is a two-particle scattering matrix. The constant phases $\kappa_i$ on the r.h.s. of (\[BA\]) may depend on the particle’s species; for our purposes their exact values are irrelevant. Taking the logarithmic derivative of (\[BA\]) and employing densities as in (\[dens\]), we obtain $$\label{Bou}
m_a \cosh \theta + 2\pi \sum_{b=1}^l \,
\bigl( \varphi_{ab} * \rho^r_b \bigr) (\theta)
= 2\pi \, \rho_a (\theta ) \,.$$ Here $(u*v) (\theta) \equiv 1/(2\pi )\int {\rm d}\theta^\prime
u(\theta -\theta^\prime) v(\theta^\prime)$ stands for the convolution and we introduced as usual the notation $\varphi_{ab} (\theta)= -i\partial_\theta ( \ln S_{ab}(\theta) )$.
Employing equations (\[xeq\]), (\[lim\]) and (\[mu\]), we compute variations of the entropy (\[entro\]): $$\delta S/\delta \rho^r_a = -\ln x_a (\theta) \,, \qquad
\delta S/\delta \rho_a = \ln f_{a} (x_a(\theta)) \,.$$ Substitution of these relations into the equilibrium condition (\[equ\]) together with (\[Bou\]) yields the desired thermodynamic Bethe ansatz equations for a system in which statistical interaction between particles of $a$-th species is described by a single-state partition function $f_a$, $$\label{GTBA}
\frac{1}{kT} m_a \cosh \theta =
- \ln x_a (\theta) + \sum_{b=1}^l \,
\bigl( \varphi_{ab} * \ln f_b (x_b) \bigr) (\theta) \,.$$
In general, this set of integral equations cannot be solved analytically. However, one can try to solve these TBA equations numerically with the help of the iteration method (as we will do for some examples in Section 7). For the ordinary statistics and the Haldane-Wu case it is known that such an iterative procedure converges provided that $\varphi_{ab} (\theta)$ falls off sufficiently fast. The most complete proof of this assertion is presented in [@F; @CK] and is based on the fixed point theorem. Presumably this proof extends to the case of (\[GTBA\]) (although some further restrictions on $f_a(t)$ may be required), but we will not discuss this here. Let us notice only that if (\[GTBA\]) can be solved iteratively and $\varphi_{ab}(\theta)$ is symmetric in $\theta$ (which is always the case for a unitary S-matrix), then $x_a(\theta)$ is also symmetric in $\theta$.
We now substitute the expressions for the total energy (\[ener\]) and the entropy (\[entro\]) together with equations (\[Bou\]) and (\[GTBA\]) back into the expression for the free energy, and obtain $$\label{Free}
F(T)=-\frac{LkT}{2\pi } \sum_{a=1}^l m_a \int_{-\infty}^{\infty}
{\rm d}\theta \, \cosh \theta \, \ln f_{a} (x_a (\theta)) \, .$$ The relation between the free energy and the finite-size scaling function is well-known to be $c(T)=-6F(T)/(\pi LT^{2})$ [@Cardy]. As usual we introduce a variable $r=1/T$ and set now the Boltzmann constant to be one. Then the finite-size scaling function acquires the form $$\label{c(r)}
c(r)= \sum_{a=1}^l \frac{6 m_a r}{\pi^2} \int_0^{\infty}
{\rm d}\theta \, \cosh \theta \, \ln f_{a} (x_a (\theta)) \,.$$ Here we assumed that $x_a(\theta) \= x_a(-\theta)$. Once more for $f(t)=1+t$ and $f(t)=1/(1-t)$ we recover the well-known expressions for the fermionic and bosonic scaling functions.
It is instructive to compare the above TBA equation (\[GTBA\]) and the finite-size scaling function (\[c(r)\]) with those for the Haldane-Wu statistics [@BF]: $$\label{hwTBA}
\frac{1}{kT} m_a \cosh \theta = \ln(1+y_a (\theta)) +
\sum_{b=1}^l \, \bigl( \Phi_{ab} * \ln (1+y^{-1}_b)
\bigr) (\theta) \,,$$ $$\label{hwcr}
c_g(r) = \sum_{a=1}^l \frac{6 m_a r}{\pi^2} \int_0^{\infty}
{\rm d}\theta \, \cosh \theta \, \ln (1+y^{-1}_a (\theta)) \,.$$ Here $\Phi_{ab} (\theta) \= \varphi_{ab}(\theta) - 2\pi
g_{ab} \delta(\theta)$, and $g$ is a matrix which appears on the r.h.s. of (\[Pauli\]) if we consider a system of several species. As was discussed in [@BF], the TBA equation (\[hwTBA\]) leads to an equivalence principle: two systems having the same mass spectra and identical quantities $\Phi_{ab}(\theta)$ are thermodynamically equivalent (as seen from (\[hwcr\]), their finite-size scaling functions coincide). Our TBA equation (\[GTBA\]) does not possess such a feature. More precisely, for (\[GTBA\]) there exists no way to compensate a difference in statistics by a change of $\varphi_{ab}(\theta)$ independent of the explicit form of the S-matrix.
Y-systems
=========
For some classes of models it is possible to carry out certain manipulations on the TBA equations such that the original integral TBA equations acquire the form of a set of functional equations in new variables $Y_a$ [@Y]. These functional equations (commonly referred to as Y-systems) have the further virtue that, unlike the original TBA equations, they [*do not*]{} involve the mass spectrum. In the $Y$-variables certain periodicities in the rapidities are exhibited more clearly. These periodicities may then be utilized in order to express the quantity $Y_a(\theta)$ as a Fourier series, which in turn is useful for finding solutions of the TBA equations and expanding the scaling function as a power series in the scaling parameter $r$. We will now demonstrate that similar equations may be derived for a multi-particle system in which the dynamical scattering is governed by the scattering matrix related to the ADE-affine Toda field theories and the statistical interaction is of the general type considered above.
Consider the minimal part (i.e., independent of the coupling constant) of the scattering matrix of the ADE-affine Toda field theories. As was shown in [@Rav], upon appropriate choice of CDD-ambiguities, these S-matrices satisfy the identity $$\varphi_{ab} \Bigl( \theta +\frac{i\pi }{h} \Bigr) +
\varphi_{ab} \Bigl( \theta -\frac{i\pi }{h} \Bigr)
=\sum_{c=1}^{l} I_{ac} \varphi_{cb} ( \theta ) -
2\pi I_{ab} \delta ( \theta ) \, ,
\label{RAD}$$ where $h$ denotes the Coxeter number, $l$ the rank and $I$ the incidence matrix of the corresponding Lie algebra. It is then straightforward to derive the “Y-system” $$\label{YS}
Y_a \Bigl( \theta +\frac{i\pi }{h} \Bigr) \,
Y_a \Bigl( \theta -\frac{i\pi }{h} \Bigr)
=\prod_{b=1}^{l} \Bigl( z_b(Y_b (\theta)) \Bigr)^{I_{ab}} \,,$$ where $Y_a(\theta)=x_a^{-1}(\theta)$ and $$\label{Y}
z_a(t) = t \, f_a(t^{-1}) \,.$$ Equations (\[YS\]) follow upon first adding (\[GTBA\]) at $\theta + \frac{i\pi }{h}$ and $\theta - \frac{i\pi }{h}$ and subtracting $I$ times (\[GTBA\]) at $\theta $ from the sum. Thereafter we employ the fact [@Mass] that the masses of an ADE affine Toda field theory are proportional to the Perron-Frobenius vector of the corresponding Cartan matrix, i.e., $\sum_b C_{ab} m_{b} =4\sin^{2} (\pi/(2h)) m_a$. Then, with the help of (\[RAD\]), equations (\[YS\]) follow.
In comparison with (\[GTBA\]), equations (\[YS\]) already have the virtue that they are simple functional equations and do not involve the mass spectrum. For the Gentile statistics we have $z(t)= t + 1 + t^{-1} + \ldots + t^{1-G}$. In particular, for $G=1$ we have $z(Y) \= 1 \+ Y$ and recover the known fermionic Y-system. The bosonic case ($G \= \infty$) corresponds to $z(Y) \= Y^{2}/(Y \- 1)$. The case of $G \= 2$ seems to be particularly interesting, since here $z(Y) \= 1+Y+Y^{-1}$ possesses an additional symmetry, $z(Y) \= z(Y^{-1})$ or, equivalently, invariance under $\epsilon_a(\theta) \to -\epsilon_a(\theta)$.
Ultraviolet limit
=================
In the small $r$ limit the scaling function $c(r)$ becomes the effective central charge of a conformal field theory [@Cardy] which describes the ultraviolet limit of a given massive model. That is, $\lim_{r\rightarrow 0}c(r)=c_{\rm eff} \equiv c-24h^\prime$, where $c$ is the conformal anomaly and $h^\prime$ is the lowest conformal weight in the corresponding conformal field theory.
In order to evaluate $c(r)$ in the $r\rightarrow 0$ limit we substitute the derivative of (\[GTBA\]) into (\[c(r)\]), replacing in both equations $\cosh \theta$ by $\frac 12 e^\theta$. Then we integrate the resulting equation by parts, assuming that $\varphi_{ab}(\theta)$ is symmetric and falls off sufficiently fast (typically, $\varphi_{ab}(\theta)= O(e^{-\theta})$ as $\theta\rightarrow\infty$). Next we substitute back equation (\[GTBA\]). Finally we change variables from $\theta$ to $x(\theta)$, taking into account that $x(\infty)=0$ (as follows from (\[GTBA\])), and again integrate by parts. All this yields $$\label{c0}
c_{\rm eff} = \frac{6}{\pi^2} \sum_{a=1}^l c_a \,,$$ $$\label{c2}
c_a = - \frac{1}{2} \ln x_a \ln f_a (x_a)
+ \int_0^{x_a} \frac{ {\rm d}t }{t} \ln f_a(t) \,.$$ Here we denoted $x_a \equiv x_a(\theta \= 0)$. These quantities can be determined from the [*constant*]{} TBA equations that follow from (\[GTBA\]) when $r \rightarrow 0$ (this derivation requires that $x_a(\theta)$ become constant in a region of $\theta$ of order $-\ln r$ when $r$ is small; see more detailed explanations in [@TBAKM; @CK]) $$\label{cTBA}
\ln x_a = - \sum_{b=1}^l N_{ab} \ln f_b (x_b) \,,
\qquad a=1 ,\ldots ,l \,,$$ where $2\pi N_{ab}=
-\int_{-\infty}^{\infty} {\rm d}\theta \varphi_{ab}(\theta)$.
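The constant TBA equations can be solved with a standard root finder. The sketch below is a minimal illustration (Python with NumPy/SciPy; it assumes the $N$-matrix $N_{11}=N_{22}=\frac13$, $N_{12}=N_{21}=\frac23$ of the scaling Potts model considered in Section 7 below, and the helper names are ad hoc). It solves (\[cTBA\]) for the Gentile statistics and evaluates $c_{\rm eff}$ from (\[c0\])-(\[c2\]); for $G=1$ it should reproduce the familiar fermionic value $4/5$ of the scaling Potts model.

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.integrate import quad

def f_gentile(t, G):
    """Single-state partition function of the Gentile statistics of order G."""
    return (1.0 - t**(G + 1)) / (1.0 - t)

def constant_tba(N, G):
    """Solve ln x_a = -sum_b N_ab ln f_b(x_b), Eq. (cTBA), for x_a."""
    res = lambda u: u + N @ np.log(f_gentile(np.exp(u), G))
    return np.exp(fsolve(res, -0.5 * np.ones(N.shape[0])))

def c_eff(N, G):
    """Effective central charge from Eqs. (c0) and (c2)."""
    total = 0.0
    for xa in constant_tba(N, G):
        integral, _ = quad(lambda t: np.log(f_gentile(t, G)) / t, 0.0, xa)
        total += -0.5 * np.log(xa) * np.log(f_gentile(xa, G)) + integral
    return 6.0 / np.pi**2 * total

N_potts = np.array([[1.0, 2.0], [2.0, 1.0]]) / 3.0   # assumed N-matrix of the A_2 (Potts) case
print(c_eff(N_potts, 1))   # fermionic scaling Potts model: expected 0.8
print(c_eff(N_potts, 2))   # G = 2 Gentile statistics
```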
As a simple example we consider the statistics of $\gamma$-ons (\[fga\]), assuming for simplicity that $\gamma$ is the same for all species (generalization to different $\gamma_a$ is obvious). Here the integral in (\[c2\]) can be computed explicitly in terms of the Euler dilogarithm which is defined for $t\leq 1$ as $Li_2(t)= \sum_{k\geq 1} t^k/k^2$. We have $$\int_0^{x} \frac{ {\rm d}t }{t} \ln f_{\gamma}(t) =
\frac{1}{\gamma} \int_0^{x} \frac{ {\rm d}t }{t}
\ln ( 1 +\gamma t) = - \frac{1}{\gamma} \sum_{k=1}^{\infty}
\frac{(-\gamma x)^k}{k^2} = -\frac{1}{\gamma} Li_2(-\gamma x)$$ and hence $$\label{cga1}
c_a = -\frac{1}{\gamma} Li_2(-\gamma x_a)
- \frac{1}{2\gamma} \ln x_a \ln (1+\gamma x_a) \,,$$ where $x_a$ satisfies (\[cTBA\]). Since $Li_2(-t)=
\frac 12 Li_2(t^2) - Li_2(t)$, we can rewrite (\[cga1\]) for positive $\gamma$ employing the Rogers dilogarithm which is defined as $L(t) = Li_2(t) + \frac 12 \ln t \ln (1-t) $ for $0 \!\leq\! t \!\leq\! 1$. Then, using the Abel identity $L(t^2)=2L(t)-2L(\frac{t}{1+t})$, we obtain $$\label{cga2}
c_a = \frac{1}{\gamma}
L\Bigl( \frac{\gamma x_a}{1+\gamma x_a} \Bigr)
+ \frac{1}{2\gamma} \ln \gamma \ln (1+\gamma x_a) \,.$$ Notice that this expression does not diverge for small $\gamma$; $\lim_{\gamma\rightarrow 0} c_a= x_a(1- \frac 12 \ln x_a)$ is the value corresponding to the Boltzmann statistics.
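As a consistency check, the equality of (\[cga1\]) and (\[cga2\]) can be verified numerically. The snippet below is a minimal sketch (Python/SciPy; the dilogarithm is evaluated by direct numerical integration, and the values of $\gamma$ and $x_a$ are arbitrary illustrative choices).

```python
import numpy as np
from scipy.integrate import quad

def Li2(x):
    """Euler dilogarithm Li2(x) = -int_0^x ln(1-t)/t dt."""
    val, _ = quad(lambda t: -np.log1p(-t) / t, 0.0, x)
    return val

def rogers_L(t):
    """Rogers dilogarithm L(t) = Li2(t) + (1/2) ln t ln(1-t)."""
    return Li2(t) + 0.5 * np.log(t) * np.log1p(-t)

gamma, x = 0.5, 0.7                  # illustrative values with 0 < gamma, x < 1
c_from_cga1 = -Li2(-gamma * x) / gamma - np.log(x) * np.log1p(gamma * x) / (2 * gamma)
c_from_cga2 = (rogers_L(gamma * x / (1 + gamma * x)) / gamma
               + np.log(gamma) * np.log1p(gamma * x) / (2 * gamma))
print(c_from_cga1, c_from_cga2)      # the two expressions should agree
```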
Consider now the Gentile statistics, assuming that the order $G$ is the same for all species. In this case we also can compute the integral in (\[c2\]) explicitly: $$\int_0^{x} \frac{ {\rm d}t }{t} \ln f_{\scriptscriptstyle G}(t)
= \int_0^{x} \frac{ {\rm d}t }{t}
\ln \Bigl( \frac{1-t^{G+1}}{1-t} \Bigr)
=\sum_{k=1}^{\infty} \frac{1}{k^2} \Bigl( x^k -
\frac{ x^{k(G+1)} }{ G+1} \Bigr) = Li_2(x) -
\frac{ Li_2 \bigl( x^{(G+1)} \bigr) }{ G+1 } \,,$$ and we see that for the Gentile statistics expression (\[c2\]) acquires the following nice form involving only the Rogers dilogarithms: $$\label{cG}
c_a = L \left( x_a \right) - {\textstyle \frac{1}{ G+1} }
L \left( x_a^{G+1} \right) \,.$$ Introducing, in agreement with (\[Id1\]), the constants $\epsilon_a= -\ln x_a$, we obtain from (\[cG\]) for $G=\infty$ and $G=1$ $$\label{cbcf}
c_a= L(e^{-\epsilon_a}) \qquad {\rm and} \qquad
c_a= L(e^{-\epsilon_a}) - {\textstyle \frac 12 }
L( e^{-2\epsilon_a}) \,.$$ These are the well-known values for the bosonic and fermionic statistics [@TBAKM]. Notice that in the latter case one usually derives $c_a= L(\frac{1}{1+ e^{\epsilon_a} })$. This is equivalent to our formula (\[cbcf\]) due to the Abel identity. In fact, the Abel identity can be used in a similar way for any $G$ of the form $G = 2^m \- 1$. In this case we can rewrite (\[cG\]) as follows $$\label{c2G}
c_a = \sum_{k=1}^{ m } \frac{1}{ 2^{k-1} } \, L \Bigl(
\frac{ (x_a)^{2^{k-1}} }{ 1+ (x_a)^{ 2^{k-1} } } \Bigr) \,.$$ In this form $c_a$ is a manifestly positive and monotonic function of $x_a$ (the dilogarithm $L(t)$ grows monotonically for $0 \!\leq\! t \!\leq\! 1$). Actually, this property holds not only for the Gentile statistics but in general case as well. To prove this statement we take a derivative of (\[c2\]) and use (\[xeq\]) and the formula (\[lim\]) for the entropy density. This yields $$\label{dc}
\partial_{x_a} c_a = \frac{ s(\mu_a(x_a)) }{2 x_a} \,.$$ As we discussed in Section 2, we should consider only such models where $s(\mu(x)) \> 0$. In this case the r.h.s. of (\[dc\]) is positive. Hence, $c_a$ is always a monotonically increasing function of $x_a$ (and therefore a monotonically decreasing function of $\epsilon_a$). Moreover, taking into account (\[posit\]), we infer from (\[c2\]) that $c_a(0) \= 0$, thus we also proved positivity of $c_a(x_a)$. Since these properties are crucial for a physical interpretation of $c_a$, let us underline that they hold only if the first condition in (\[posit\]) is fulfilled. Let us notice also that the properties of $c_a$ together with (\[cTBA\]) imply that for any model such that $N_{ab} \geq 0$ (which is the case, for instance, for all the ADE-affine Toda models [@TBAKM]) the value of $c_a$ does not exceed that of the corresponding free model (where all $N_{ab} = 0$ and consequently all $x_a =1$). Hence, (\[c2\]) evaluated for $x_a =1$ yields the upper bound on $c_a$, $$\label{cmax}
c_a \leq \int_0^1 \frac{ {\rm d}t }{t} \ln f_a(t) \,.$$ Notice, however, that for a statistics of the type IIIb with $R \< 1$ we have $x_a \< 1$, so that in this case the upper bound on $c_a$ will be lower than (\[cmax\]). From a physical point of view this implies that such a statistics cannot emerge in models with weak interaction and, in particular, in a free model.
It was argued in [@TBAKM] that the quantity $\frac{6}{\pi^2} c_a$ (for the ordinary statistics) can be interpreted as a massless degree of freedom associated with the $a$-th species in the massive model. The properties of $c_a$ which we proved above show that the interpretation of $\frac{6}{\pi^2} c_a$ can be extended to the case of a generalized extensive statistics as well. In this context, it is important to remark that for the ordinary statistics (actually, for the Gentile statistics of any order) we have $\frac{6}{\pi^2}c_a \leq 1$ as follows from (\[cG\]) (recall that $L(1) \= Li_2(1) \= \pi^2/6$) if all $N_{ab}$ are non-negative. However, for a general statistics the upper bound (\[cmax\]) on $\frac{6}{\pi^2} c_a$ depends on the choice of $f_a$ and may not be equal to one. For instance, taking $f_a(t)=(1-t)^{-m}$ with $m > 1$, we obtain $\frac{6}{\pi^2} c_a = m$ for the corresponding free theory. In such a case, the massless degree of freedom of the corresponding free particle exceeds that of a free boson. This can possibly imply that we have to restrict the choice of statistics to such $f_a$ that $c_a \leq \frac{\pi^2}{6}$.
Examples related to affine Toda models
======================================
7.1 Ising and Klein-Gordon models
-----------------------------------
The simplest examples which illustrate the features outlined above more concretely are the Ising model (the $A_{1}$-minimal affine Toda field theory) with $S(\theta )=-1$ and the Klein-Gordon model with $S(\theta )=1$. In both cases equation (\[GTBA\]) is solved trivially, $ x(\theta )= \exp \{ - rm\cosh \theta \} $. Then for the Gentile statistics (\[Gf\]) we can compute the entire scaling function (\[c(r)\]): $$\begin{aligned}
c_{\scriptscriptstyle G}(r) &=&
\frac{6rm}{\pi^2} \int_0^{\infty} {\rm d}\theta \,
\cosh \theta \, \Bigl( \ln(1- e^{-(G+1) \, rm\cosh \theta} ) -
\ln(1- e^{-rm\cosh \theta}) \Bigr) \nonumber \\
&=& \frac{6rm}{\pi^2} \sum_{k=1}^{\infty} \frac{1}{k}
\Bigl( K_{1}(krm) - K_{1}((G+1) krm) \Bigr) \,, \label{cf}\end{aligned}$$ where $K_{1}(t)$ is the modified Bessel function. We depict $c_{\scriptscriptstyle G}(r)$ in Fig. 2, referring to these models, by a slight abuse of notation, as $A_1$. One observes that the difference in statistics affects the ultraviolet region most severely. The scaling functions for different $G$ converge relatively fast towards each other in the infrared regime. These appear to be common features of a general deformed statistics.
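The Bessel sum (\[cf\]) is straightforward to evaluate numerically; the following sketch (Python/SciPy; the truncation order kmax is an arbitrary numerical choice) computes $c_{\scriptscriptstyle G}(r)$ for several $G$ and illustrates that for small $mr$ it approaches the value $G/(G+1)$ derived in (\[cGt\]) below.

```python
import numpy as np
from scipy.special import k1        # modified Bessel function K_1

def c_gentile_A1(mr, G, kmax=20000):
    """Scaling function (cf) of the A_1 models for the Gentile statistics of order G."""
    k = np.arange(1, kmax + 1)
    return 6.0 * mr / np.pi**2 * np.sum((k1(k * mr) - k1((G + 1) * k * mr)) / k)

for G in (1, 2, 3, 4):
    print(G, c_gentile_A1(0.01, G), G / (G + 1.0))   # approaches G/(G+1) as mr -> 0
```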
As seen from Fig. 2, the behaviour of $c_{\scriptscriptstyle G}(r)$ in the ultraviolet region (there is no first order term in the small $r$ expansion of $c_{\scriptscriptstyle G}(r)$) is similar for any finite $G$ to that of the fermionic scaling function and is different from the bosonic case (where the first order term is present). To explain this we observe that, as follows from (\[cf\]), $$\label{cb}
c_{\scriptscriptstyle G}(r) = c_{\rm b}(r) -
{\textstyle \frac{1}{G+1} } c_{\rm b} ((G+1)r) \,,$$ where $c_{\rm b}(r)$ is the bosonic scaling function. Now it is obvious that the first order terms on the r.h.s. of (\[cb\]) cancel each other for any finite $G$.
The well-known property of the asymptotic behaviour of the modified Bessel function, $\lim_{t\rightarrow 0} tK_{1}(t)=1$, allows us to derive from (\[cf\]) $$\label{cGt}
c^{\rm eff}_{\scriptscriptstyle G} =
\lim_{r\rightarrow 0} c_{\scriptscriptstyle G}(r)=
\frac{6}{\pi^2} \sum_{k=1}^{\infty} \frac{1}{k^2}
\Bigl( 1 - \frac{1}{G+1} \Bigr) = \frac{G}{G+1} \,,$$ which is in agreement with (\[cG\]) since $x_a \= 1$ now. It is worth noticing that $c^{\rm eff}_{\scriptscriptstyle G}$ can be identified for every positive integer $G$ as the effective central charge of a minimal conformal model ${\cal M}(s,t)$ (such that $st=6(G+1)$). In particular, $G=1,2,3,4$ correspond to the ${\cal M}(3,4)$, ${\cal M}(2,9)$, ${\cal M}(3,8)$, ${\cal M}(5,6)$ minimal models, respectively.
For the $\gamma$-ons (\[fga\]) the scaling function related to the $A_1$-models can also be computed: $$\label{cfga}
c_{\gamma}(r) = \frac{6rm}{\gamma\pi^2} \int_0^{\infty}
{\rm d}\theta \, \cosh \theta \,
\ln (1+ \gamma e^{- rm\cosh \theta} ) =
\frac{6 rm}{ \pi^2} \sum_{k=1}^{\infty} \frac{1}{k}
(-\gamma)^{k-1} K_{1}(krm) \,.$$ Using again the asymptotic behaviour of the modified Bessel function, we obtain $c^{\rm eff}_{\gamma} = \lim_{r\rightarrow 0} c_{\gamma}(r)=
- \frac{6}{ \gamma \pi^2} Li_2(-\gamma)$ in agreement with (\[cga1\]). Here, unlike in the Gentile case, it is not easy to see whether the effective central charge $c^{\rm eff}_{\gamma}$ has a rational value for a given $\gamma$. As we discussed in Section 2, a positive $\gamma$ must be of the form $\gamma = 1/d$ with $d$ a positive integer. It appears then that the only $d$ leading to a rational value of $c^{\rm eff}_{\gamma}$ is $d=1$. On the other hand, $\gamma$ varies continuously in the range $-1 \leq \gamma \leq 0$, so that any value of $c^{\rm eff}_{\gamma}$ between $\frac{6}{\pi^2}$ and $1$, in particular any rational value in this range, occurs here. For the Boltzmann statistics $c^{\rm eff}_{0}=\frac{6}{\pi^2}$ is irrational. Finally, $\gamma \< -1$ is the case of a type IIIb statistics with $R \< 1$, which, as we discussed above, cannot emerge in a free model.
7.2 Scaling Potts and Yang-Lee models
--------------------------------------
Next we consider the scaling Potts model, which was studied previously in the TBA framework for the fermionic statistics in [@Zam] and for the Haldane-Wu statistics in [@BF]. The two particles in the model are conjugate to each other and consequently their masses are the same, $m_{1} \= m_{2} \= m$. The conjugate particle occurs as a bound state when two particles of the same species scatter. The S-matrix of the scaling Potts model equals the minimal S-matrix of the $A_{2}$-affine Toda field theory. The corresponding quantities $\varphi_{ab}(\theta)$ that enter the TBA equation are given by $$\label{SPphi}
\varphi_{11}(\theta) = \varphi_{22}(\theta) =
\frac{-\sqrt{3}}{2\cosh \theta +1} \qquad {\rm and} \qquad
\varphi_{12}(\theta) = \frac{\sqrt{3}}{1-2\cosh \theta } \,.$$
Consider the scaling Potts model with both species of particles obeying the Gentile statistics of the same order $G$. In this case the $Z_{2}$-symmetry of the model is preserved so that $x_{1}(\theta )=x_{2}(\theta) \equiv x(\theta)$ and (\[GTBA\]) reduces to a single integral equation. It appears to be impossible to find an analytic solution $x(\theta)$ to this equation, but it is straightforward to solve it numerically. Taking $x^{[0]}(\theta ) = \exp\{-rm\cosh \theta\} $ as the first approximation, we can iterate (\[GTBA\]) as follows $$\label{a2}
\ln \left( x^{[n+1]}(\theta )\right) = -rm \cosh \theta -
\frac{2\sqrt{3}}{\pi} \int_{-\infty }^{\infty}
{\rm d}\theta^\prime \frac{\cosh (\theta -\theta^\prime)}{
1+2 \cosh 2(\theta -\theta^\prime)}
\ln \left( f_{\scriptscriptstyle G}
\bigl( x^{[n]}(\theta^\prime) \bigr) \right).$$ Convergence is achieved relatively quickly (it depends on the value of $mr$). The results for $G \= 1$ (which are in complete agreement with the calculation in [@Zam]) and for $G \= 2$ are shown in Fig. 3. To make contact with the literature, we introduced the quantity $L(\theta )=\ln f_{\scriptscriptstyle G}(x(\theta))$. One observes the typical plateau $L(\theta)=const$ for some region of $\theta$ when $mr \rightarrow 0$, which is required to derive (\[cTBA\]). We see also that for the same value of $mr$ the functions $L(\theta)$ corresponding to different $G$ have similar profiles but different heights of the plateau.
Having solved the generalized TBA equation, we can substitute $x(\theta)$ into (\[c(r)\]) and compute the entire scaling function $c_{\scriptscriptstyle G}(mr)$. The result of the numerical computation is shown in Fig. 4. The behaviour of the scaling functions is similar to what we have already observed in the $A_1$ case.
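A bare-bones implementation of this computation is sketched below (Python with NumPy; the rapidity cutoff, grid size, stopping tolerance and the rectangle-rule discretization of the convolution are arbitrary numerical choices). It iterates (\[a2\]) starting from $x^{[0]}$ and then evaluates the scaling function (\[c(r)\]) for the two species of equal mass.

```python
import numpy as np

def f_gentile(t, G):
    """Single-state partition function of the Gentile statistics of order G."""
    return (1.0 - t**(G + 1)) / (1.0 - t)

def potts_scaling_function(mr, G, theta_max=15.0, n=1201, itmax=300, tol=1e-10):
    """Iterate Eq. (a2) for x(theta) and evaluate Eq. (c(r)) for the scaling Potts model."""
    th = np.linspace(-theta_max, theta_max, n)
    dth = th[1] - th[0]
    diff = th[:, None] - th[None, :]
    kernel = np.cosh(diff) / (1.0 + 2.0 * np.cosh(2.0 * diff))
    lnx = -mr * np.cosh(th)                      # first approximation x^[0]
    for _ in range(itmax):
        lnf = np.log(f_gentile(np.exp(lnx), G))
        lnx_new = -mr * np.cosh(th) - 2.0 * np.sqrt(3.0) / np.pi * dth * (kernel @ lnf)
        if np.max(np.abs(lnx_new - lnx)) < tol:
            lnx = lnx_new
            break
        lnx = lnx_new
    lnf = np.log(f_gentile(np.exp(lnx), G))
    half = th >= 0.0
    # two species of equal mass with x_1 = x_2 = x, rectangle rule on theta >= 0
    return 2.0 * 6.0 * mr / np.pi**2 * dth * np.sum(np.cosh(th[half]) * lnf[half])

for mr in (0.1, 0.01, 0.001):
    print(mr, potts_scaling_function(mr, G=2))
```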
Due to the fact that the scattering matrices of the scaling Potts model and the scaling Yang-Lee model are related as $S^{YL}(\theta) = S_{11}^{A_{2}}(\theta )S_{12}^{A_{2}}(\theta )$, the generalized TBA equations for the two models coincide. The only difference is that the scaling Yang-Lee model has only one particle. Therefore its scaling function equals half of that for the scaling Potts model.
7.3 Comparison with Haldane-Wu statistics
-------------------------------------------
In Section 3 we have seen that the entropy density corresponding to the Gentile statistics majorizes that for the Haldane-Wu statistics. It appears (at least for certain models) that an analogous relation holds for the corresponding finite-size scaling functions. Specifically, if $c_{\scriptscriptstyle G}(r)$ is the scaling function (\[c(r)\]) of $A_1$ or $A_2$ minimal affine Toda model where all species obey order $G$ Gentile statistics and $c_g(r)$ is the scaling function (\[hwcr\]) of the same model where the Haldane-Wu statistical interaction is of the form $g_{ab} = \frac{1}{G} \delta_{ab}$, then for all $r \geq 0$ we have $$\label{crin}
c_{\scriptscriptstyle G}(r) > c_g(r) \,, \qquad {\rm where}
\qquad g_{ab} = {\textstyle \frac{1}{G} } \delta_{ab}
\quad {\rm and} \quad G>1 \,.$$ For the $A_1$ case we can rigorously prove relation (\[crin\]) since the solutions of the generalized TBA equations (\[GTBA\]) and (\[hwTBA\]) are found explicitly: $$\label{xy0}
g \ln y(\theta) + (1-g) \ln (1+y(\theta)) =
mr \, \cosh\theta = - \ln x(\theta) \,.$$ This equation allows us to express $x(\theta)$ in terms of $y(\theta)$: $$\label{xy1}
x(\theta) = y^{-g}(\theta) \, (1+y(\theta))^{g-1} \,.$$ Moreover, since $mr \cosh\theta \> 0$, we infer from (\[xy0\]) that $$\label{yy0}
y(\theta) > y_0 \,, \qquad {\rm where} \qquad
y_0^g = (1+y_0)^{g-1} \,.$$ As seen from (\[c(r)\]) and (\[hwcr\]), in order to establish (\[crin\]) it suffices to show that $f_{\scriptscriptstyle G =1/g}(x(\theta)) > 1+y^{-1}(\theta)$. The latter relation is equivalent to the inequality $$\label{yy}
1- (1+y(\theta))^{-\frac 1g} >
y^g(\theta) \, (1+y(\theta))^{-g} \,,$$ as can be verified by simple algebraic manipulations using formula (\[xy1\]). Now it is elementary to check that (\[yy\]) is valid provided that (\[yy0\]) holds. Thus, we proved the assertion (\[crin\]) in the $A_1$ case. For illustration, the scaling function $c_g(r)$ for $g \= 1/2$ Haldane-Wu statistics is shown in Fig. 2. It lies everywhere below $c_{\scriptscriptstyle G}(r)$ for $G \= 2$ Gentile statistics with the maximal difference reached in the ultraviolet region.
For a higher rank case, already for $A_2$, the generalized TBA equations are integral and it is not clear whether we can compare $x_a(\theta)$ and $y_a(\theta)$ analytically. However, numerical computations for various $G$ show that (\[crin\]) holds in the $A_2$ case as well. For illustration, we present in Fig. 4 the scaling function $c_g(r)$ for the Haldane-Wu statistics with $g_{ab} \= \frac 12 \delta_{ab}$.
A particular consequence of (\[crin\]) is the following inequality for the corresponding central charges: $$\label{cin}
c^{\rm eff}_{\scriptscriptstyle G} > c^{\rm eff}_g
\,, \qquad {\rm for}
\quad g_{ab} = {\textstyle \frac{1}{G} } \delta_{ab}
\quad {\rm and} \quad G>1 \,.$$ Here $c^{\rm eff}_{\scriptscriptstyle G}$ is found from (\[c0\]), (\[cTBA\]) and (\[cG\]) whereas $c^{\rm eff}_g$ is the effective central charge related to the Haldane-Wu statistics with $g_{ab} = \frac{1}{G} \delta_{ab}$ and is given by [@BF] $$\label{chw1}
c^{\rm eff}_g = \frac{6}{\pi^2} \sum_{a=1}^l
L\Bigl( \frac{1}{1+y_a} \Bigr) \,, \qquad {\rm where}
\quad \ln(1+y_a) = \sum_{b=1}^l ( N_{ab} + g \delta_{ab} )
\ln (1+y_a^{-1}) \,.$$ Again, in the $A_1$ case we can prove relation (\[cin\]) directly. Here $c^{\rm eff}_{\scriptscriptstyle G}$ has a simple form (\[cGt\]) and therefore (\[cin\]) reduces to $$\label{din1}
\frac{1}{1+g} > \frac{6}{\pi^2}
L\Bigl( \frac{1}{1+y_0} \Bigr) \,,$$ where $y_0$ is defined as in (\[yy0\]). Notice that $g \< y_0 \< 1$ for $g\neq 0,1$ (since $y_0 \leq g$ would contradict (\[Gin\])). Therefore, $\frac 12 \< \frac{1}{1+y_0} \< \frac{1}{1+g}$. Using now a property of the dilogarithm, $$\label{Lin}
L(t) < \frac{\pi^2}{6} t \qquad {\rm for} \quad
{\textstyle \frac 12 < t < 1 } \,; \qquad
L(t) > \frac{\pi^2}{6} t \qquad {\rm for} \quad
{\textstyle 0 < t < \frac 12 } \,,$$ we establish the desired inequality (\[din1\]).
As we discussed above, the order $G$ of the Gentile statistics must be a positive integer. However, if we regard (\[c0\]), (\[cTBA\]) and (\[cG\]) as a [*definition*]{} of the quantity $c^{\rm eff}_{\scriptscriptstyle G}$, then (\[din1\]) extends to any $G \>1 $ (since the proof does not require that $G$ be an integer) as shown in Fig. 5. Furthermore, considering $G$ in the range $0 \< G \< 1$, we can prove with the help of (\[Gin\]) that here $y_0 < g$ and hence $\frac{6}{\pi^2} L(\frac{1}{1+y_0}) \> \frac{1}{1+g}$ due to (\[Lin\]). Thus, for this range of $G$ we have a [*reverse*]{} inequality, i.e., $$\label{cin2}
c^{\rm eff}_{\scriptscriptstyle G} < c^{\rm eff}_g
\,, \qquad {\rm for}
\qquad g_{ab} = {\textstyle \frac{1}{G} } \delta_{ab}
\quad {\rm and} \quad 0<G<1 \,.$$ As we will see below, this inequality holds for higher rank cases as well. But let us stress again that $c^{\rm eff}_{\scriptscriptstyle G}$ in (\[cin2\]) is a formally defined quantity which is not related directly to the Gentile statistics.
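Both the inequality (\[din1\]) and its reversal for $0<G<1$ are easy to probe numerically. The sketch below is a minimal check (Python/SciPy; the dilogarithm is computed by numerical integration and the chosen values of $G$ are arbitrary). It solves (\[yy0\]) for $y_0$ and compares the two sides of (\[din1\]).

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad

def Li2(x):
    """Euler dilogarithm Li2(x) = -int_0^x ln(1-t)/t dt."""
    val, _ = quad(lambda t: -np.log1p(-t) / t, 0.0, x)
    return val

def rogers_L(t):
    """Rogers dilogarithm L(t) for 0 < t < 1."""
    return Li2(t) + 0.5 * np.log(t) * np.log1p(-t)

def y0_of(g):
    """Root of y^g = (1+y)^(g-1), Eq. (yy0)."""
    return brentq(lambda y: g * np.log(y) - (g - 1.0) * np.log1p(y), 1e-12, 1e6)

for G in (2.0, 4.0, 0.5):                        # g = 1/G; the last value has 0 < G < 1
    g = 1.0 / G
    lhs = 1.0 / (1.0 + g)
    rhs = 6.0 / np.pi**2 * rogers_L(1.0 / (1.0 + y0_of(g)))
    print(G, lhs > rhs)                          # True for G > 1, False for 0 < G < 1
```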
The inequality for the central charges (\[cin\]) is a weaker statement than the inequality for the scaling functions (\[crin\]). Furthermore, (\[cin\]) involves dilogarithms, which makes its proof more complicated (already in the $A_1$ case we used a rather subtle property (\[Lin\])). But the advantage of (\[cin\]) for a numerical verification is that it relates numbers and not functions as (\[crin\]). Moreover, since $c(r)$ is a continuous function, validity of (\[cin\]) implies that (\[crin\]) holds at least in some ultraviolet region. Therefore, we will discuss below validity of (\[cin\]) for several affine Toda models. As a mathematical by-product, this provides us with interesting dilogarithm inequalities.
In the $A_2$ case, eq. (\[cin\]) contains dilogarithms in a more involved way. As seen from (\[cTBA\]), (\[cG\]) and (\[chw1\]), it is equivalent to the following inequality $$\label{din2}
L ( \tilde{x}_0 ) - \frac{g}{1+g}
L\Bigl( \tilde{x}_0^{1+\frac 1g} \Bigr) >
L\Bigl(\frac{1}{ 1+\tilde{y}_0 } \Bigr) \,,
\qquad {\rm for} \quad 0 < g < 1 \,.$$ Here $\tilde{x}_0$ and $\tilde{y}_0$ are determined from the following equations that follow from (\[cTBA\]) and (\[chw1\]) upon noticing that $N_{11} \+ N_{12} \= 1$ $$\label{din3}
\tilde{x}_0 \, f_{1/g} ( \tilde{x}_0) = 1 \qquad
{\rm and} \qquad \tilde{y}_0^{1+g} = ( 1+\tilde{y}_0 )^g \,.$$ It is worth remarking that, as was noticed in [@BF], $\frac{6}{\pi^2} L(\frac{1}{ 1+ \tilde{y}_0} )$ coincides with the effective central charge of the Calogero-Sutherland model with the coupling constant $\lambda =g$.
Numerical computations show that (\[din2\]) indeed holds for any $0 \< g \< 1$, that is for any $G \>1$ (here again we need not restrict $G$ to be an integer). Furthermore, as in the $A_1$ case, inequality (\[din2\]) reverses for $0 \< G \< 1$, i.e., for $g \> 1$. For illustration, we depict the corresponding $c^{\rm eff}_{\scriptscriptstyle G}$ and $c^{\rm eff}_g$ in Fig. 5. This result provides additional support for our claim that in the $A_2$ case the entire scaling function $c_{\scriptscriptstyle G}(r)$ majorizes $c_g(r)$ for $G \> 1$.
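A minimal sketch of such a computation is given below (Python/SciPy; the values of $g$ are arbitrary illustrative choices and the helper names are ad hoc). It solves the two equations (\[din3\]) and compares the two sides of (\[din2\]).

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad

def Li2(x):
    val, _ = quad(lambda t: -np.log1p(-t) / t, 0.0, x)
    return val

def rogers_L(t):
    return Li2(t) + 0.5 * np.log(t) * np.log1p(-t)

def f_gentile(t, G):
    return (1.0 - t**(G + 1)) / (1.0 - t)

for g in (0.25, 0.5, 0.75, 1.5, 2.0):            # g < 1 corresponds to G = 1/g > 1
    G = 1.0 / g
    x0 = brentq(lambda x: x * f_gentile(x, G) - 1.0, 1e-9, 1.0 - 1e-9)          # first eq. of (din3)
    y0 = brentq(lambda y: (1.0 + g) * np.log(y) - g * np.log1p(y), 1e-9, 1e9)   # second eq. of (din3)
    lhs = rogers_L(x0) - g / (1.0 + g) * rogers_L(x0**(1.0 + 1.0 / g))
    rhs = rogers_L(1.0 / (1.0 + y0))
    print(g, lhs > rhs)                          # True for g < 1, False for g > 1
```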
It is interesting to remark that, as follows from (\[cG\]) and (\[chw1\]), the bosonic version ($G \= \infty$ or, equivalently, $g \= 0$) of the $A_2$-minimal affine Toda model has $\frac{6}{\pi^2} c_a \= \frac 12$. That is, each species in the scaling Potts model (or the only species in the scaling Yang-Lee model) appears to be a free fermion, whereas the entire central charge is that of a free boson (see also related comments in [@BF]). That is why all curves in Fig. 5 converge to the same value $c \= 1$.
7.4 Higher rank cases
-----------------------
Now we discuss briefly the simplest minimal affine Toda models which have particle species with different masses. We will compare numerical values of $c^{\rm eff}_{\scriptscriptstyle G}$ and $c^{\rm eff}_{g}$ again setting $g_{ab} \= \frac 1G \delta_{ab}$. For this purpose we use (\[cTBA\]) and (\[chw1\]), taking into account [@TBAKM] that the $N$-matrix in these equations is given by $N_{\rm\bf g} \= I_{\rm\bf g} (2-I_{\rm\bf g})^{-1}$, where $I_{\rm\bf g}$ stands for the incidence matrix of the corresponding Lie algebra [**g**]{}.
In the $A_3$ case the masses of the three species are $m_1 \= m_3 \= m_2/\sqrt{2}$. Then, computing $c^{\rm eff}_{\scriptscriptstyle G}$ by formulae (\[cTBA\]) and (\[cG\]), we find: $c^{\rm eff}\approx 0.77$, $c^{\rm eff}=1$, $c^{\rm eff}\approx 1.12$, $c^{\rm eff}\approx 1.16$ for $G=\frac 12, 1, 2, \infty$, respectively. The values of $c^{\rm eff}_{g}$ corresponding to the Haldane-Wu statistics with $g_{ab} = \frac{1}{G} \delta_{ab}$ were found in [@BF]: $c^{\rm eff}\approx 0.89$, $c^{\rm eff}=1$, $c^{\rm eff}\approx 1.07$, $c^{\rm eff}\approx 1.16$ for $g=2, 1, \frac 12, 0$, respectively. Carrying out analogous computations for the $A_4$ case, where the masses of the four species are $m_1 \= m_4$, $m_2 \= m_3 \= m_1 (\sqrt{5} \+ 1)/2$, we find (Gentile statistics): $c^{\rm eff}\approx 0.92$, $c^{\rm eff}=8/7$, $c^{\rm eff}\approx 1.25$, $c^{\rm eff}\approx 1.28$ for $G=\frac 12, 1, 2, \infty$, respectively; and (Haldane-Wu statistics): $c^{\rm eff}\approx 1.05$, $c^{\rm eff}=8/7$, $c^{\rm eff}\approx 1.20$, $c^{\rm eff}\approx 1.28$ for $g=2, 1, \frac 12, 0$, respectively.
One can also consider a folded algebra case with two different masses, namely $A_4^{(2)}$, where $m_2 \= m_1 (\sqrt{5} \+ 1)/2$. In general, one assigns to the $A_{2n}^{(2)}$ minimal affine Toda model a tad-pole graph [@TBAKM; @Rav] such that the corresponding incidence matrix differs from that of $A_n$ only by 1 in the lower-right entry. Then one can check that $x_a$ in (\[cTBA\]) and $y_a$ in (\[chw1\]) coincide with those for the $A_{2n}$ case (a well-known fact for the fermionic statistics [@TBAKM; @Zam]). Consequently, for any statistics we have $c^{\rm eff}(A_{2n}^{(2)}) \= \frac 12
c^{\rm eff}(A_{2n})$. Therefore, the data for $A_4^{(2)}$ follow from the above results for $A_4$.
Finally, in the $D_4$ case, where the masses of the four species are $m_1 \= m_3 \= m_4 \= m_2/\sqrt{3}$, we obtain for the Gentile statistics: $c^{\rm eff}\approx 0.82$, $c^{\rm eff}=1$, $c^{\rm eff}\approx 1.08$, $c^{\rm eff}\approx 1.09$ for $G=\frac 12, 1, 2, \infty$, respectively; and for the Haldane-Wu statistics: $c^{\rm eff}\approx 0.93$, $c^{\rm eff}=1$, $c^{\rm eff}\approx 1.04$, $c^{\rm eff}\approx 1.09$ for $g=2, 1, \frac 12, 0$, respectively.
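The numbers quoted in this subsection can be reproduced along the following lines. The sketch below (Python with NumPy/SciPy; the incidence matrices are entered by hand and a generic root finder is used, so this is only one possible implementation) builds $N_{\rm\bf g} \= I_{\rm\bf g} (2-I_{\rm\bf g})^{-1}$, solves the constant TBA equations (\[cTBA\]) and evaluates $c^{\rm eff}_{\scriptscriptstyle G}$ from (\[c0\]) and (\[cG\]); for $G=1$ the $A_3$ and $A_4$ cases should give $1$ and $8/7$ as quoted above.

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.integrate import quad

def rogers_L(t):
    """Rogers dilogarithm via numerical integration of the Euler dilogarithm."""
    li2, _ = quad(lambda u: -np.log1p(-u) / u, 0.0, t)
    return li2 + 0.5 * np.log(t) * np.log1p(-t)

def f_gentile(t, G):
    return (1.0 - t**(G + 1)) / (1.0 - t)

def c_eff_gentile(inc, G):
    """Effective central charge from (cTBA), (c0) and (cG) for a given incidence matrix."""
    N = inc @ np.linalg.inv(2.0 * np.eye(len(inc)) - inc)
    u = fsolve(lambda w: w + N @ np.log(f_gentile(np.exp(w), G)), -0.5 * np.ones(len(inc)))
    return 6.0 / np.pi**2 * sum(rogers_L(xa) - rogers_L(xa**(G + 1)) / (G + 1)
                                for xa in np.exp(u))

I_A3 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
I_A4 = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
I_D4 = np.array([[0, 1, 0, 0], [1, 0, 1, 1], [0, 1, 0, 0], [0, 1, 0, 0]], float)

for name, inc in (("A3", I_A3), ("A4", I_A4), ("D4", I_D4)):
    print(name, [round(c_eff_gentile(inc, G), 3) for G in (0.5, 1, 2)])
```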
We see that all the above results are in agreement with (\[cin\]) and (\[cin2\]). This, together with the more detailed results we obtained in the $A_1$ and $A_2$ cases, allows us to conjecture that these inequalities and the inequality (\[crin\]) for the scaling functions hold actually for all simply-laced minimal affine Toda models.
We remark also that in all the computations of $c^{\rm eff}$ in this subsection we always find that the value of $c_a$ is smaller for the species with the heavier particle (this was known for the fermionic statistics [@TBAKM]). This observation provides additional support for the interpretation, discussed above, of $\frac{6}{\pi^2}c_a$ as a massless degree of freedom of the corresponding particle.
Conclusion
==========
Summarizing, we considered possible types of a generalized extensive statistics and established properties of the corresponding entropy densities. In particular, we established numerically (and gave some supporting analytical arguments) that the entropy density for the Gentile statistics of order $G \> 1$ majorizes that for the Haldane-Wu statistics if they correspond to the same maximal occupancy. Further, we derived the thermodynamic Bethe ansatz equation and the finite-size scaling function $c(r)$ for a relativistic multi-particle system obeying a generalized extensive exclusion statistics. We put particular emphasis on the ultraviolet limit of such a system. We derived an expression for an effective central charge in the general case and showed that for the Gentile statistics it acquires an elegant form involving dilogarithms. We discussed a physical interpretation of the ‘partial’ central charges $c_a$ and argued that it possibly leads to restrictions on the choice of statistics. Finally, we observed (and partially proved) that the majorizing properties of the Gentile statistics with respect to the Haldane-Wu statistics extend also to finite-size scaling functions and central charges related to the minimal simply-laced affine Toda models.
In the presented analysis of thermodynamic properties of a relativistic multi-particle system the single-state partition function $f(t)$ played a key role. For a given model, if we know $f(t)$ (at least in the thermodynamic limit) and the quantities $\varphi_{ab}(\theta)$, then we can compute, at least in principle, the entire scaling function, which provides a complete thermodynamic description of the system. In principle, $f(t)$ should be determinable from the corresponding Hamiltonian. However, this might be a non-trivial task since the exclusion statistics of particles can differ from the exchange statistics of the fields present in the Hamiltonian. Therefore, it may be more practical for a given system to fix an exotic statistics a priori and investigate whether it leads to realistic physical properties. In particular, one can obtain information about the ultraviolet limit and see if it corresponds to an appropriate conformal field theory. As we have seen above, the different types (\[types\]) of $f(t)$ lead to significantly different thermodynamic properties. The question which of these statistics can emerge in physical models remains open.
Another open problem is to prove rigorously the assertion (\[ineq2\]) that the entropy density for the Gentile statistics majorizes that for the Haldane-Wu statistics. Such a proof will probably provide a deeper insight into general properties of exotic exclusion statistics. Also it would be interesting to verify the majorizing inequalities for the scaling functions (\[crin\]) and central charges (\[cin\]) related to the affine Toda models and to understand whether such a majorization is a model independent feature of the two involved statistics.
I am grateful to A. Fring for proposing the problem and participation in the initial stage of the research. This work was supported in part by INTAS grant 99-01459 and by Russian Fund for Fundamental Investigations grant 99-01-00101.
[10]{} O.W. Greenberg and A.M.L. Messiah, [*Phys. Rev.*]{} [**B136**]{} (1964) 248.
F.D.M. Haldane, [*Phys. Rev. Lett.* ]{} [**67**]{} (1991) 937.
F. Wilczek, [*Phys. Rev. Lett.* ]{} [**48**]{} (1982) 1114, [**49**]{} (1982) 957;\
Y.S. Wu, [*Phys. Rev. Lett.* ]{} [**52**]{} (1984) 2103.
T.R. Klassen and E. Melzer, [*Nucl. Phys.* ]{} [**B338**]{} (1990) 485.
K. Schoutens, [*Phys. Rev. Lett.*]{} [**79**]{} (1997) 2608;\
A.G. Bytsko and A. Fring, [*Nucl. Phys.*]{} [**B521**]{} (1998) 573, [*Phys. Lett.*]{} [**B454**]{} (1999) 59;\
P. Bouwknegt and K. Schoutens, [*Nucl. Phys.*]{} [**B547**]{} (1999) 501.
G. Gentile, [*Nuovo Cim.*]{} [**17**]{} (1940) 493; [*Nuovo Cim.*]{} [**19**]{} (1942) 109.
H.S. Green, [*Phys. Rev.*]{} [**90**]{} (1953) 270.
H.S. Green, [*Prog. Theor. Phys.*]{} [**47**]{} (1972) 1400;\
Y. Ohnuki, S. Kamefuchi, [*Prog. Theor. Phys.*]{} [**52**]{} (1974) 1369;\
R.Y. Levine, Y. Tomozawa, [*Phys. Lett.*]{} [**B128**]{} (1983) 189.
A.J. Bracken and H.S. Green, [*J. Math. Phys.*]{} [**14**]{} (1973) 1784;\
P.G.O. Freund, [*Phys. Rev.*]{} [**D13**]{} (1976) 2322;\
M. Cattani and N.C. Fernandes, [*Nuovo Cim.*]{} [**B87**]{} (1985) 70; [*Phys. Lett.*]{} [**A124**]{} (1987) 229.
J.D. Anand, S.N. Biswas and M. Hasan, [*Phys. Rev.*]{} [**D18**]{} (1978) 4529;\
M.C. de Sousa Vieira and C. Tsallis, preprint CBPF-NF-038/86 (1986);\
P. Suranyi, [*Phys. Rev. Lett.*]{} [**65**]{} (1990) 2329;\
A.K. Rajagopal, [*Phys. Lett.*]{} [**A214**]{} (1996) 127.
K. Byczuk, J. Spalek, G.S. Joyce and S. Sarkar, [*Acta Phys. Polon.*]{} [**B26**]{} (1995) 2167.
C.N. Yang and C.P. Yang, [*Phys. Rev.*]{} [**147**]{} (1966) 303; [*J. Math. Phys.*]{} [**10**]{} (1969) 1115.
Al.B. Zamolodchikov, [*Nucl. Phys.*]{} [**B342**]{} (1990) 695; [**B358**]{} (1991) 524.
Al.B. Zamolodchikov, [*Phys. Lett.*]{} [**B253**]{} (1991) 391.
C. Nayak, F. Wilczek, [*Phys. Rev. Lett.*]{} [**73**]{} (1994) 2740.
K.N. Ilinski, J.M.F. Gunn and A.V. Ilinskaia, [*Phys. Rev.*]{} [**B53**]{} (1996) 2615.
A.P. Polychronakos, [*Phys. Lett.*]{} [**B365**]{} (1996) 202.
M.V. Fedoruk, [*Asymptotics: integrals and series*]{} (in Russian), (Nauka, 1987).
E.C. Titchmarsh, [*The theory of functions*]{}, (Oxford University Press, Oxford, 1939).
R. Acharya and P. Narayana Swamy, [*J. Phys.*]{} [**A27**]{} (1994) 7247;\
A.V. Ilinskaia, K.N. Ilinski and J.M.F. Gunn, [*Nucl. Phys.*]{} [**B458**]{} (1996) 562.
Y.S. Wu, [*Phys. Rev. Lett.* ]{} [**73**]{} (1994) 922.
D. Bernard and Y.S. Wu, cond-mat/9404025 (1994);\
M.V.N. Murthy and R. Shankar, [*Phys. Rev. Lett.*]{} [**73**]{} (1994) 3331;\
K. Hikami, [*Phys. Lett.*]{} [**A205**]{} (1995) 364;\
S.B. Isakov, D.P. Arovas, J. Myrheim and A.P. Polychronakos, [*Phys. Lett.* ]{} [**A212**]{} (1996) 299.
A.G. Bytsko and A. Fring, [*Nucl. Phys.* ]{} [**B532**]{} (1998) 588.
A. Fring, C. Korff and B.J. Schulz, [*Nucl. Phys.*]{} [**B549**]{} (1999) 579.
C. Korff, hep-th/0008200 (2000).
H.W.J. Blöte, J.L. Cardy and M.P. Nightingale, [*Phys. Rev. Lett.*]{} [**56**]{} (1986) 742;\
I. Affleck, [*Phys. Rev. Lett.* ]{} [**56**]{} (1986) 746.
F. Ravanini, A. Valleriani and R. Tateo, [*Int. J. Mod. Phys.*]{} [**A8**]{} (1993) 1707;\
M.J. Martins, [*Nucl. Phys.*]{} [**B394**]{} (1993) 339.
M.D. Freeman, [*Phys. Lett.*]{} [**B261**]{} (1991) 57;\
A. Fring, H.C. Liao and D.I. Olive, [*Phys. Lett.*]{} [**B266**]{} (1991) 82.
![image](plot1.eps){width="62mm"}
[Fig. 1: Entropy densities for $G=2$ Gentile statistics (the upper curve), for $g=1/2$ Haldane-Wu statistics (the middle curve) and for the fermionic statistics (the lower curve). ]{}
![image](plot2.eps){width="62mm"}
[Fig. 2: Scaling functions of the $A_1$-minimal affine Toda model for the Gentile statistics of order $G=1,2,3,4$ (solid curves, from bottom to top), for the $g=1/2$ Haldane-Wu statistics (the dashed curve) and for the bosonic statistics (the dotted curve). ]{}
![image](plot3.eps){width="62mm"}
[Fig. 3: Solution $L(\theta)$ of the TBA equation for the $A_2$-minimal affine Toda model for the scaling parameter $mr=0.1$ (dotted curves), $mr=0.01$ (dashed curves), and $mr=0.001$ (solid curves). For the same value of $mr$ the lower curve corresponds to the fermionic statistics and the upper curve to $G \= 2$ Gentile statistics.]{}
![image](plot4.eps){width="62mm"}
[Fig. 4: Scaling functions of the $A_2$-minimal affine Toda model for the Gentile statistics of order $G = 1,2,3,4$ (solid curves, from bottom to top) and for the Haldane-Wu statistics with $g_{ab} = \frac 12 \delta_{ab}$ (the dashed curve).]{}
![image](plot5.eps){width="62mm"}
[Fig. 5: Effective central charge of the $A_1$ and $A_2$-minimal affine Toda models for the Gentile statistics of order $G$ (lower and upper solid curves) and for the Haldane-Wu statistics with $g_{ab} = \frac{1}{G} \delta_{ab}$ (lower and upper dashed curves).]{}
---
---
[Retarded Electromagnetic Interaction and Dynamic ]{}\
[Foundation of Classical Statistical Mechanics and]{}\
[Elimination of Reversibility Paradox of Time Reversal]{}\
[Mei Xiaochun]{}\
(Department of Physics, Fuzhou University, Fuzhou, 350025, China, E-mail: fzbgk@pub3.fz.fj.cn )
It is proved in this paper that non-conservative dissipative forces and the asymmetry of time reversal can be introduced naturally into classical statistical mechanics once the retarded electromagnetic interaction between charged micro-particles is taken into account. In this way a rational dynamic foundation for classical statistical physics can be established and a revised Liouville equation is obtained. The micro-canonical ensemble, the canonical ensemble, the distribution of nearly independent subsystems, the Maxwell-Boltzmann distribution law and the Maxwell distribution of velocities are derived directly from this Liouville equation without using the hypothesis of equal probability. The micro-canonical ensemble is no longer considered suitable as the foundation of the equilibrium-state theory, for most equilibrium states of isolated systems are actually not states of equal probability. The reversibility paradoxes arising in the non-equilibrium evolution of macro-systems can be eliminated completely, and a unified statistical description of equilibrium and non-equilibrium states is reached. The revised BBGKY hierarchy and hydrodynamic equations are derived, the non-equilibrium entropy of general systems is defined, and the principle of entropy increase for this non-equilibrium entropy is proved at last.\
\
PACS numbers:0520, 0570\
\
[1. The fundamental problems existing in classical statistical physics ]{}\
Though classical statistical mechanics has been highly developed, its foundation has not yet been well established $^{(1)}$. The first problem concerns the rationality of the equal probability hypothesis, or the micro-canonical ensemble hypothesis, which is currently used as the foundation of equilibrium theory. The hypothesis has attracted much criticism since it was put forward. In order to provide the hypothesis with a rational basis, Boltzmann proposed the ergodic theory, proving that as long as a system is ergodic, the hypothesis of equal probability is tenable. However, studies show that the evolution of a system is in general not ergodic $^{(2)}$. So the hypothesis of equal probability can only be regarded as a useful working principle without strict proof. As for non-equilibrium statistical systems, we have no unified and complete theory at present. We do not yet know how a system transforms from a non-equilibrium state to an equilibrium one, nor how to define the non-equilibrium entropy of general systems. Besides, there still exists the so-called reversibility paradox in the theory. Though a lot of research has been done, a really rational solution still remains to be found.
The reversibility paradox has been a long-standing problem. There exist two forms of the reversibility paradox at present. The first is the so-called Poincaré recurrence. It was put forward by Zermelo in 1896 based on a theorem proved by Poincaré in 1890 $^{(3)}$. According to this form, a conservative system in a limited space would return to an infinitely near neighborhood of its initial state. The basic idea of the proof is as follows. For a conservative system the Liouville theorem holds, so the volume of phase space is unchanged in the evolution process. Because the volume of phase space is limited, the system should recur after a long enough time of evolution. That is to say, the process is reversible. The second form was put forward by Loschmidt in 1876. Loschmidt argued that the micro-dynamic equations of motion do not change under time reversal, i.e., the motion of any single micro-particle is reversible, so if the velocities of all particles in a macro-system were reversed at the same time, the system would evolve in the completely opposite direction. Therefore the process would be reversible. However, in the evolution processes of isolated macro-systems what is actually observed is that the processes are always irreversible. Therefore there exists the so-called reversibility paradox. Though a large number of explanations have been given up to now, none of them are satisfactory.\
\
[2. The Lorentz retarded force and the basic hypotheses of classical statistical physics ]{}\
It is well known that there exist two kinds of equations of motion: one is for single particles, the other is for statistical systems composed of a large number of particles. The so-called micro-equation of motion that Loschmidt referred to in his time was actually the Newtonian equation $md^2\vec{r}/{d}t^2=\vec{F}$; quantum mechanics had not yet been discovered. Whether the Newtonian equation remains unchanged under time reversal depends on the form of the force $\vec{F}$. Only when $\vec{F}$ is conservative does it do so. In the general situation, when $\vec{F}$ depends on time $t$ and momentum $\vec{p}$, the Newtonian equation does not remain unchanged under time reversal. In common statistical systems composed of charged micro-particles, the interaction forces between micro-particles are electromagnetic forces. In systems composed of neutral atoms and molecules, the atoms and molecules can be regarded as electromagnetic dipole and quadrupole moments, owing to the deformations caused by their interactions, and the interaction between them can be considered as one between electromagnetic dipole and quadrupole moments. In this paper we only discuss the interaction between charged particles, but the principle is the same for neutral atoms and molecules.
If the retarded interaction is not considered, the Lorentz force between two particles with charges $q$ and $q'$ and velocities $\vec{v}$ and $\vec{v}'$ is $$\vec{F}={{qq'\vec{r}}\over{r^3}}+{{qq'\vec{v}'\times(\vec{v}\times\vec{r})}\over{c^2r^2}}$$ In current statistical physics we use this formula to describe the interactions between micro-particles. Because the force is unchanged when $\vec{v}\rightarrow-\vec{v}$ and $\vec{v}'\rightarrow-\vec{v}'$ under time reversal, it is impossible to introduce the irreversibility of time reversal into the theory in order to solve the problems of non-equilibrium statistical mechanics. However, it should be noted that according to special relativity instantaneous interaction does not exist: electromagnetic interaction propagates at the speed of light. So the retarded interaction between micro-particles in statistical systems should be considered. It is proved below that once the retarded interaction is considered, the Lorentz forces no longer remain unchanged under time reversal, a rational dynamic foundation can be established for classical statistical mechanics, and the reversibility paradox can be resolved.
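The invariance of Eq.(1) under time reversal can be confirmed with a few lines of code. The following minimal sketch (Python with NumPy; the charges, separation and velocities are arbitrary illustrative numbers in units with $c=1$) evaluates Eq.(1) before and after reversing both velocities.

```python
import numpy as np

def lorentz_force(q, qp, r, v, vp, c=1.0):
    """Instantaneous force of Eq.(1): Coulomb part plus velocity-dependent part."""
    rn = np.linalg.norm(r)
    return q * qp * r / rn**3 + q * qp * np.cross(vp, np.cross(v, r)) / (c**2 * rn**2)

r = np.array([1.0, 0.5, -0.2])       # relative position (illustrative numbers)
v = np.array([0.01, -0.02, 0.005])   # velocity of the first particle
vp = np.array([-0.015, 0.01, 0.02])  # velocity of the second particle

F = lorentz_force(1.0, -1.0, r, v, vp)
F_rev = lorentz_force(1.0, -1.0, r, -v, -vp)
print(np.allclose(F, F_rev))         # True: Eq.(1) is invariant under v -> -v, v' -> -v'
```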
Let $t',\vec{r}',\vec{v}'$ and $\vec{a}'$ represent the retarded time, coordinate, velocity and acceleration, and let $t,\vec{r},\vec{v}$ and $\vec{a}$ represent the non-retarded time, coordinate, velocity and acceleration. A particle with charge $q_j$, velocity $\vec{v}'_j$ and acceleration $\vec{a}'_j$ at space point $\vec{r}'_j(t')$ and time $t'$ produces the following retarded potentials at space point $\vec{r}_i(t)$ and time $t$ $$\varphi_{ij}={{q_j}\over{(1-{{\vec{v}'_j\cdot\vec{n}'_{ij}}\over{c}})r'_{ij}}}~~~~~~~~~~~~\vec{A}_{ij}={{q_j\vec{v}'_j}\over{c(1-{{\vec{v}'_j\cdot\vec{n}'_{ij}}\over{c}})r'_{ij}}}$$ In these formulas $\vec{r}'_{ij}(t,t')=\vec{r}_i(t)-\vec{r}'_j(t')$, $r'_{ij}=\mid\vec{r}'_{ij}\mid$, $\vec{n}'_{ij}=\vec{r}'_{ij}/{r}'_{ij}$. Let $v'_{jn}=\vec{v}'_j\cdot\vec{n}'_{ij}$ and $a'_{jn}=\vec{a}'_j\cdot\vec{n}'_{ij}$; then the intensities of the electromagnetic fields caused by the $j$-th particle at space point $\vec{r}_i$ and time $t$ are $$\vec{E}'_{ij}={{q_j(1-{{v'^2_j}\over{c^2}})(\vec{n}'_{ij}-{{\vec{v}'_j}\over{c}})}\over{(1-{{\vec{v}'_j\cdot\vec{n}'_{ij}}\over{c}})^3r'^2_{ij}}}+{{q_j\vec{n}'_{ij}\times[(\vec{n}'_{ij}-{{\vec{v}'_j}\over{c}})\times\vec{a}'_j]}\over{c^2(1-{{\vec{v}'_j\cdot\vec{n}'_{ij}}\over{c}})^3r'_{ij}}}~~~~~~~~\vec{B}'_{ij}=\vec{n}'_{ij}\times\vec{E}'_{ij}$$ We call the resulting forces, in which the retarded interaction has been taken into account, the retarded Lorentz forces. Suppose there are $N$ particles in the system and that the $i$-th particle with charge $q_i$ at space point $\vec{r}_i(t)$ and time $t$ moves with velocity $\vec{v}_i$; then the retarded Lorentz force acting on the $i$-th particle caused by the $j$-th particle is $$\vec{F}'_{Rij}={{q_i{q}_j(1-{{v'^2_j}\over{c^2}})}\over{(1-{{v'_{jn}}\over{c}})^3{r}'^2_{ij}}}[\vec{n}_{ij}(1-{{\vec{v}_i\cdot\vec{v}'_j}\over{c^2}})-{{\vec{v}'_j}\over{c}}(1-{{v_{in}}\over{c}})]$$ $$+{{q_i{q}_j}\over{c^2(1-{{v'_{jn}}\over{c}})^3r'_{ij}}}\{\vec{n}_{ij}[a'_{jn}(1-{{\vec{v}_i\cdot\vec{v}'_j}\over{c^2}})-{{\vec{v}_i\cdot\vec{a}'_j}\over{c}}(1-{{v'_{jn}}\over{c}})]$$ $$-{{\vec{v}'_j{a}'_{jn}}\over{c}}(1-{{v_{in}}\over{c}})-\vec{a}'_j(1-{{v_{in}}\over{c}})(1-{{v'_{jn}}\over{c}})\}$$ In this formula, $v_{in}=\vec{n}_{ij}\cdot\vec{v}_i$, $v'_{jn}=\vec{n}_{ij}\cdot\vec{v}'_j$, $a'_{jn}=\vec{n}'_{ij}\cdot\vec{a}'_j$. In order to make a rational approximate calculation, let us estimate the order of magnitude of the acceleration. According to the formula $a=v^2/{r}'$, $a/{c}^2$ has the order of magnitude of $1/{r}'$. Suppose a hydrogen atom can be regarded as a harmonic oscillator with amplitude $b$ and angular frequency $\omega$, so that the acceleration of the oscillator is $b\omega^2$. The order of magnitude of the energy of a hydrogen atom is $E=hc{/}\lambda=mv^2=mb^2\omega^2$. Supposing the wavelength of photons emitted by a hydrogen atom is $\lambda=4\times{10}^{-7}{M}$, it can be calculated that $a/{c}^2=1/{r}'\simeq{60}$. But the order of magnitude of the distance between atoms is $r\simeq{10}^{-10}{M}$, i.e., $1/{r}\simeq{10}^{10}$. According to Eq.(4), the first item of the retarded Lorentz force is directly proportional to $1/{r}^2$. So for the near-neighbor interaction we have $a/{c}^2{r}=1/{r}r'=6\times{10}^{-9}/{r}^2<<{1}/{r}^2$, and the item containing the acceleration can be omitted. For the distant interaction, $a/{c}^2\geq{1}/{r}^2$, and the items containing the acceleration cannot be omitted. On the other hand, according to the Maxwell distribution of velocities, the average speed of hydrogen atoms is $\bar{v}=\sqrt{8kT/\pi{m}}$. At the common temperature $T=300K$ we have $\bar{v}=6.3\times{10}^6{M}$, $\bar{v}/{c}=2.1\times{10}^{-2}$.
So the item containing $v/{c}$ is much bigger than the item containing the acceleration; even the items containing $v^2/{c}^2$ are bigger than those containing the acceleration for the near-neighbor interaction. So we retain the items containing $v/{c}$, $v^2/{c}^2$ and $va/{c}^3$, omit higher-order items, and write Eq.(4) as $$\vec{F}_{Rij}={{q_i{q}_j}\over{r'^2_{ij}}}[\vec{n}'_{ij}(1+{{3v'_{jn}}\over{c}}+{{6v'^2_{jn}}\over{c^2}}-{{v'^2_j}\over{c^2}}-{{\vec{v}_i\cdot\vec{v}'_j}\over{c^2}})-{{\vec{v}'_j}\over{c}}(1-{{v'_{in}}\over{c}}+{{3v'_{jn}}\over{c}})]$$ $$+{{q_i{q}_j}\over{c^2{r}'_{ij}}}\{\vec{n}_{ij}[a'_{ij}(1+{{3v'_{jn}}\over{c}})-{{\vec{v}_i\cdot\vec{a}'_j}\over{c}}]-{{\vec{v}'_j{a}'_{jn}}\over{c}}-\vec{a}'_j(1-{{v_{in}}\over{c}}+{{2v'_{jn}}\over{c}})\}$$ For the convenience of the later discussion we write $\vec{F}'_{Rij}=\vec{F}'_{0ij}+\vec{F}'_{ij}$, where $\vec{F}'_{0ij}=q_i{q}_j\vec{n}'_{ij}/r'^2_{ij}$ represents the conservative part and $\vec{F}'_{ij}$ represents the non-conservative part. The total force acting on the $i$-th particle caused by the other particles is $\vec{F}'_{Ri}=\sum\vec{F}'_{0ij}+\sum\vec{F}'_{ij}=\vec{F}'_{0i}+\vec{F}'_i$. For macro-systems composed of large numbers of neutral atoms and molecules, we can consider the atoms and molecules as electromagnetic dipole moments and quadrupoles and obtain the retarded Lorentz forces in the same way.
On the other hand, according to classical electromagnetic theory there exists a radiation damping force $\vec{G}$ acting on accelerated particles. The radiation damping force depends on the time derivative of the acceleration. Suppose $l$ is the dimension of the particles, the particle's speed $v<<{c}$, the acceleration $a<<{c}^2/{l}$ and its time derivative $\dot{a}<<{c}^3/{l}^2$; when the particle's charge distribution is spherically symmetric, we have $\vec{G}=\kappa\dot{\vec{a}}$, $\kappa=2q^2/{3}c^3$.
In classical electromagnetic theory, the Hamiltonian equations of motion can be written using the space coordinates and the ordinary momenta $\vec{p}_i$, or using the space coordinates and the canonical momenta $\vec{p}_{zi}$ with the relation $\vec{p}_{zi}=\vec{p}_i+q_i\vec{A}_i/c$. Both descriptions are equivalent. In statistical mechanics it is more convenient to use the ordinary momenta. The accelerations and their time derivatives should be regarded as functions of coordinate, momentum and time. On the other hand, the acceleration of the $i$-th particle is caused by the interactions with the other particles, and it depends on the retarded velocities and distances, i.e., $\vec{a}'_i=\vec{a}'_i(\vec{r}_i,\vec{p}_i,\vec{r}'_j,\vec{p}'_j)$, $\dot{\vec{a}}'_i=\dot{\vec{a}}'_i(\vec{r}_i,\vec{p}_i,\vec{r}'_j,\vec{p}'_j)$. So the total Hamiltonian of the non-conservative system composed of $N$ charged particles can be written as $H=H_0+H'$, in which $H_0$ is the conservative Hamiltonian $$H_0=\sum_{i}{{\vec{p}^2_i}\over{2m_i}}+\sum_{i<j}{U}_{0ij}(r'_{ij})$$ Here $U_{0ij}(r'_{ij})$ is the conservative interaction energy, and $H'$ is the non-conservative Hamiltonian $$H'=\sum_{i}{U}_i(\vec{r}_i,t)+\sum_{i<j}{U}_{ij}(r'_{ij},\vec{p}_i,\vec{p}'_j)$$ $U_i(\vec{r}_i,t)$ is the interaction caused by the external force, and $U_{ij}(r'_{ij},\vec{p}_i,\vec{p}'_j)$ is the interaction caused by the non-conservative force. The equation of motion of the $i$-th particle is $$\dot{x}_{i\sigma}={{\partial{H}_0}\over{\partial{p}_{i\sigma}}}~~~~~~~~~~~~~~~~~~~\dot{p}_{i\sigma}=-{{\partial{H}_0}\over{\partial{x}_{i\sigma}}}+Q'_{i\sigma}=F'_{0i\sigma}+Q'_{i\sigma}$$ $$Q'_{i\sigma}=F_{ei\sigma}(\vec{r}_i,t)+F'_{i\sigma}(\vec{r}_i,\vec{r}'_j,\vec{p}_i,\vec{p}'_j)+G'_{i\sigma}(\vec{r}_i,\vec{r}'_j,\vec{p}_i,\vec{p}'_j)$$ In these formulas, $F'_{0i\sigma}$ is the conservative force, $Q'_{i\sigma}$ is the total non-conservative force, $F_{ei\sigma}$ is the external force, $F'_{i\sigma}$ is the total retarded non-conservative force, and $G'_{i\sigma}$ is the radiation damping force. Because the time derivative of a particle's acceleration is caused by the other particles, we can in general write $G'_{i\sigma}=\sum_{j\neq{i}}{G}'_{ij\sigma}$, where $G'_{ij\sigma}$ contains the $j$-th particle's influence on the acceleration of the $i$-th particle. We can express $\vec{r}'$, $r'$, $\vec{v}'$ and $\vec{a}'$ in terms of $\vec{r}$, $r$, $\vec{v}$ and $\vec{a}$ as shown in Eqs.(35)-(38). When $t\rightarrow-t$ we have $\vec{v}\rightarrow-\vec{v}$ and $\vec{a}\rightarrow\vec{a}$, so the retarded Lorentz forces and the radiation damping forces do not remain unchanged under time reversal, and irreversibility appears in the theory. This will be shown in more detail later.
Two basic hypotheses can be regarded as the foundation of classical statistical physics as shown below.
1\. The interaction forces between charged micro-particles in macro-systems are the retarded Lorentz forces and the radiation damping forces shown in Eq.(9). For neutral atoms and molecules, the interaction forces can be considered as the retarded Lorentz forces between electromagnetic dipole moments and quadrupoles.
2\. A classical statistical system can be described by the normalized distribution function of ensemble probability density $\rho=\rho(x_{i\sigma},p_{i\sigma},t)$. The average value of the physical quantity $u$ in the ensemble is $$\bar{u}=\int{u}(x_{i\sigma},p_{i\sigma},t)\rho(x_{i\sigma},p_{i\sigma},t)\prod^{N}_{i=1}\prod^{3}_{\sigma=1}{d}x_{i\sigma}{d}p_{i\sigma}$$
We will establish the dynamic equations of classical statistical physics based on these two hypotheses, and then discuss the problems of equilibrium and non-equilibrium states below.\
\
[3. The basic dynamic equation of classical statistical physics ]{}\
After the retarded Lorentz force and radiation damping force are introduced, the time rate of change of the distribution function of ensemble probability density $\rho$ is $${{d\rho}\over{dt}}={{\partial\rho}\over{\partial{t}}}+\sum_{i\sigma}[{{\partial\rho}\over{\partial{x}_{i\sigma}}}\dot{x}_{i\sigma}+{{\partial\rho}\over{\partial{p}_{i\sigma}}}\dot{p}_{i\sigma}]={{\partial\rho}\over{\partial{t}}}+\sum_{i\sigma}[{{\partial\rho}\over{\partial{x}_{i\sigma}}}\dot{x}_{i\sigma}+{{\partial\rho}\over{\partial{p}_{i\sigma}}}(F_{ei\sigma}+F'_{0i\sigma}+F'_{i\sigma}+G'_{i\sigma})]$$ By using Eq.(6) and the continuity equation $${{\partial\rho}\over{\partial{t}}}+\sum_{i\sigma}[{{\partial(\rho\dot{x}_{i\sigma})}\over{\partial{x}_{i\sigma}}}+{{\partial(\rho\dot{p}_{i\sigma})}\over{\partial{p}_{i\sigma}}}]=0$$ Eq.(11) can be written as $${{d\rho}\over{dt}}=-\rho\sum_{i\sigma}({{\partial\dot{x}_{i\sigma}}\over{\partial{x}_{i\sigma}}}+{{\partial\dot{p}_{i\sigma}}\over{\partial{p}_{i\sigma}}})=-\rho\sum_{i\sigma}({{\partial{F}'_{i\sigma}}\over{\partial{p}_{i\sigma}}}+{{\partial{G}'_{i\sigma}}\over{\partial{p}_{i\sigma}}})$$ From Eq.(4) we can obtain $${{\partial{F}'_{ij\sigma}}\over{\partial{p}_{i\sigma}}}=0$$ Substituting Eq.(14) into Eq.(12), we get $${{d\rho}\over{dt}}={{\partial\rho}\over{\partial{t}}}+\sum_{i\sigma}[{{p_{i\sigma}}\over{m_i}}{{\partial\rho}\over{\partial{x}_{i\sigma}}}+(F_{ei\sigma}+F'_{0i\sigma}+F'_{i\sigma}+G'_{i\sigma}){{\partial\rho}\over{\partial{p}_{i\sigma}}}]=-\rho\sum_{i\sigma}{{\partial{G}'_{i\sigma}}\over{\partial{p}_{i\sigma}}}$$ But $\partial{G}'_{i\sigma}/\partial{p}_{i\sigma}\neq{0}$ in general; for example, for a charged oscillating dipole we have $x=A\sin(\omega{t}+\delta)$, $p=m\dot{x}=Am\omega\cos(\omega{t}+\delta)$, $\dot{a}=-A\omega^3\cos(\omega{t}+\delta)=-\omega^2{p}/{m}$, so we have $\partial{G}/\partial{p}=-\kappa\omega^2/{m}\neq{0}$ when the retarded effect is not considered. Eq.(15) can also be written as $${{\partial\rho}\over{\partial{t}}}+\sum_{i\sigma}[{{p_{i\sigma}}\over{m_i}}{{\partial\rho}\over{\partial{x}_{i\sigma}}}+(F_{ei\sigma}+F'_{0i\sigma}+F'_{i\sigma}){{\partial\rho}\over{\partial{p}_{i\sigma}}}]=-{{\partial(G'_{i\sigma}\rho)}\over{\partial{p}_{i\sigma}}}$$
Eq.(15), or equivalently Eq.(16), is just the equation of motion that the distribution function of ensemble probability density $\rho$ satisfies after the retarded Lorentz forces and radiation damping forces are introduced; it can be regarded as the basic equation of classical statistical mechanics. When $F'_{i\sigma}=G'_{i\sigma}=0$, the equation reduces to the usual Liouville equation. Because $F'_{i\sigma}$ and $G'_{i\sigma}$ cannot remain unchanged under time reversal, Eq.(15) and (16) are also not invariant under time reversal.\
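The oscillating-dipole example used above to argue that $\partial G'_{i\sigma}/\partial p_{i\sigma}\neq 0$ can also be checked symbolically. The sketch below is our own verification; it assumes the damping force has the form $G=\kappa\dot{a}$ with a constant coefficient $\kappa$ and confirms that $G=-\kappa\omega^2 p/m$, so that $\partial G/\partial p=-\kappa\omega^2/m$.

```python
import sympy as sp

A, m, omega, delta, t, kappa = sp.symbols('A m omega delta t kappa', positive=True)

# Charged oscillating dipole with retardation neglected (the example in the text):
x = A * sp.sin(omega * t + delta)
p = m * sp.diff(x, t)                  # p = m*A*omega*cos(omega*t + delta)
a_dot = sp.diff(x, t, 3)               # da/dt = -A*omega**3*cos(omega*t + delta)

# Radiation damping force modelled (assumption) as G = kappa * da/dt.
G = kappa * a_dot
print(sp.simplify(G - (-kappa * omega**2 * p / m)))   # 0: hence dG/dp = -kappa*omega^2/m
```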
\
[4. The statistical distribution of equilibrium states ]{}\
Let us first discuss the definition of equilibrium states and the hypothesis of equal probability in statistical physics. The equilibrium states in thermodynamics have a strict definition: they are the states whose macroscopic properties do not change with time in the absence of external influence. But in classical statistical theory the equilibrium states have no independent and strict definition. For the equilibrium states of isolated systems, the definition depends on the equal probability hypothesis. This hypothesis states that for the equilibrium states of isolated systems, the ensemble probability density is a constant between the energy surfaces $E$ and $E+\Delta{E}$ in phase space. So in the current theory, the equilibrium states of isolated systems are identified with the states of equal probability. For the equilibrium states of non-isolated systems, there is no strict definition at present. Some texts take them to be the states in which the average values of physical quantities do not change with time. But this kind of definition depends on which physical quantities are chosen, so it is improper: equilibrium is a property of the system itself and has nothing to do with other physical quantities. As for the hypothesis of equal probability, it is an a priori hypothesis and cannot be proved within the theory. The reason usually offered for it is the so-called "law of non-difference" (the principle of indifference). The hypothesis has received much criticism since it was put forward. In order to provide the hypothesis with a rational basis, Boltzmann proposed the ergodic theory, proving that as long as a system is ergodic, the hypothesis of equal probability is tenable. However, studies show that the evolution of a system is in general not ergodic $^{(2)}$. So the ergodic hypothesis cannot be used as the foundation of classical statistical physics, and we have to look for another way out.
For this purpose, let us first discuss the definition of equilibrium states in statistical physics. We define the equilibrium states of isolated systems as the states in which there are no external forces acting on the system and the ensemble probability density does not change with time. This definition coincides with the thermodynamic definition of equilibrium states. According to the definition, for isolated systems we have $\vec{F}_{ei}(\vec{r},t)=0$. When the equilibrium states are reached, we have $d\rho{/}{d}t=0$ and $\partial\rho{/}\partial{t}=0$. So, in light of Eq.(15), in equilibrium states we have $$\sum_{i\sigma}{{\partial{G}'_{i\sigma}}\over{\partial{p}_{i\sigma}}}=0~~~~~~~~~~\sum_{i\sigma}[{{p_{i\sigma}}\over{m_i}}{{\partial\rho}\over{\partial{x}_{i\sigma}}}+(F_{ei\sigma}+F'_{0i\sigma}+F'_{i\sigma}+G'_{i\sigma}){{\partial\rho}\over{\partial{p}_{i\sigma}}}]=0$$ These are just the equations that the ensemble probability density functions satisfy in equilibrium states. There are many kinds of equilibrium states of isolated systems, as shown below.
1\. The particles in the system are free, with $\vec{F}'_{0i}=\vec{F}'_i=\vec{G}'_i=0$, and the ensemble probability density satisfies $\partial\rho{/}\partial{x}_{i\sigma}=0$ and $\partial\rho{/}\partial{p}_{i\sigma}=0$. In this case the ensemble probability density function $\rho=$ constant, i.e., $\rho$ has nothing to do with $x_{i\sigma}$, $p_{i\sigma}$ and $t$. This is just the state of equal probability, or the so-called micro-canonical ensemble of the current theory.
2\. The particles in the system are acted on only by conservative forces, $\vec{F}'_{0i}\neq{0}$, $\vec{F}'_i=\vec{G}'_i=0$, with $\partial\rho{/}\partial{x}_{i\sigma}\neq{0}$, $\partial\rho{/}\partial{p}_{i\sigma}\neq{0}$. The retarded effect is not considered. In this case, we have $${{\partial\rho}\over{\partial{x}_{i\sigma}}}{{p_{i\sigma}}\over{m_i}}+{{\partial\rho}\over{\partial{p}_{i\sigma}}}{F}_{i\sigma}={{\partial\rho}\over{\partial{x}_{i\sigma}}}{{\partial{H}_0}\over{\partial{p}_{i\sigma}}}-{{\partial\rho}\over{\partial{p}_{i\sigma}}}{{\partial{H}_0}\over{\partial{x}_{i\sigma}}}=0$$ In this kind of equilibrium state, the ensemble probability density function does not change with time, but the probability differs from point to point in phase space; that is, these are equilibrium states with unequal probabilities. It is easy to verify that the solution of Eq.(18) is $$\rho=Ae^{-\beta{E}}=e^{-\psi-\beta{E}}$$ Here $A$, $\beta$ and $\psi$ are constants and $E=H_0$ is the total energy of the system. In fact, according to Eq.(19), we have $${{\partial\rho}\over{\partial{x}_{i\sigma}}}=-\beta\rho{{\partial{H}_0}\over{\partial{x}_{i\sigma}}}~~~~~~~~~~~{{\partial\rho}\over{\partial{p}_{i\sigma}}}=-\beta\rho{{\partial{H}_0}\over{\partial{p}_{i\sigma}}}$$ Substituting these into Eq.(18), we can verify that Eq.(19) is indeed a solution (see also the symbolic check sketched below). Eq.(19) is just the canonical ensemble of the current statistical theory. Note that this result has nothing to do with whether or not the retarded Lorentz forces and the radiation damping forces are introduced: based on the current Liouville equation and the definition of equilibrium states of isolated systems, we can obtain it directly. But unfortunately, this simple route is neglected by the current theory. Perhaps it is just because of this neglect that the hypothesis of equal probability became necessary and possible.
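The verification referred to above can be carried out symbolically. The following sketch is our own check, with a one-dimensional harmonic Hamiltonian standing in for $H_0$; it shows that the bracket in Eq.(18) vanishes identically for $\rho=Ae^{-\beta H_0}$.

```python
import sympy as sp

x, p, beta, A, m, k = sp.symbols('x p beta A m k', positive=True)

# 1-D conservative system used as a stand-in: H0 = p^2/2m + k x^2/2.
H0 = p**2 / (2 * m) + k * x**2 / 2
rho = A * sp.exp(-beta * H0)                 # candidate canonical density, Eq.(19)

# Left side of Eq.(18): (d rho/dx)(dH0/dp) - (d rho/dp)(dH0/dx)
bracket = sp.diff(rho, x) * sp.diff(H0, p) - sp.diff(rho, p) * sp.diff(H0, x)
print(sp.simplify(bracket))                  # 0, so Eq.(19) solves Eq.(18)
```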
If the particles in the system are identical, and the force acting on each particle depends only on its own coordinates, or the particles are independent of each other without mutual interactions, we have $E=\sum{E}_i$, $E_i=T_i+U_i$. Here $T_i$ is the particle's kinetic energy and $U_i$ its potential energy, and Eq.(19) becomes $$\rho=Ae^{-\beta(E_1+\cdot\cdot\cdot{E}_i+\cdot\cdot\cdot+{E}_N)}=Ae^{-\beta{E}_1}\cdot\cdot\cdot{e}^{-\beta{E}_i}\cdot\cdot\cdot{e}^{-\beta{E}_N}$$ This formula is just the distribution of nearly independent subsystems. By integrating over the whole phase space and normalizing the $\rho$ function, we get $$\int^{+\infty}_{-\infty}\rho{d}\Omega=\int^{+\infty}_{-\infty}{A}^{1/{N}}{e}^{-\beta{E}_1}{d}\Omega_1\cdot\cdot\cdot\int^{+\infty}_{-\infty}{A}^{1/{N}}{e}^{-\beta{E}_i}{d}\Omega_i\cdot\cdot\cdot\int^{+\infty}_{-\infty}{A}^{1/{N}}{e}^{-\beta{E}_N}{d}\Omega_N=1$$ Because the particles are identical, the forms of their energies $E_i$ are the same, so Eq.(22) can be written as $$\int^{+\infty}_{-\infty}\rho{d}\Omega=(\int^{+\infty}_{-\infty}{A}^{1/{N}}{e}^{-\beta{E}_i}{d}\Omega_i)^N=1~~~~~~~or~~~~~~~~\int^{+\infty}_{-\infty}{A}^{1/{N}}{e}^{-\beta{E}_i}{d}\Omega_i=1$$ Letting $A^{1/N}=e^{-\alpha}$, the formula can be written in the form of a sum $$\sum_{i}{f}_i\Delta\Omega_i=\sum_{i}{e}^{-\alpha-\beta{E}_i}\Delta\Omega_i=1~~~~~~~~~~~~f_i\Delta\Omega_i=e^{-\alpha-\beta{E}_i}\Delta\Omega_i$$ This is just the Maxwell-Boltzmann distribution law, or the distribution law of the most probable values.
If the particles are free without potential forces, we have $U_i=0$ and $E_{i}\rightarrow{m}v^2/{2}$. So the formula above becomes the Maxwell distribution law of velocities. $$fd\Omega=Be^{-{{mv^2}\over{2KT}}}{d}xdydzdv_x{d}v_y{d}v_z$$ Here $KT=1/\beta$, $B=e^{-\alpha}$. In this case, the distribution function has nothing to do with space coordinates, showing that the distribution of particles in space is uniform.
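Numerically, the Maxwell distribution of Eq.(25) is easy to sample: each Cartesian velocity component is Gaussian with variance $KT/m$. The sketch below, an illustration of our own with assumed values of $m$ and $KT$, recovers the mean kinetic energy $3KT/2$ expected from this distribution.

```python
import numpy as np

rng = np.random.default_rng(1)
m, KT = 1.0, 0.5                             # assumed particle mass and temperature (KT = 1/beta)

# Under Eq.(25) each velocity component is Gaussian with variance KT/m.
v = rng.normal(0.0, np.sqrt(KT / m), size=(500_000, 3))

kin = 0.5 * m * np.sum(v**2, axis=1)
print(kin.mean(), 1.5 * KT)                  # mean kinetic energy ~ (3/2) KT, as the Maxwell law predicts
```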
When all particles in the system have the same speed, according to Eq.(25) we have $f=$ constant and the system becomes a micro-canonical ensemble. So the micro-canonical ensemble represents systems composed of free particles whose speeds are all the same and whose spatial distribution is uniform. Such systems, of course, have no significance in practice.
On the other hand, if there are two systems with particle numbers $N_1$ and $N_2$ and energies $E_1$ and $E_2$ respectively, we can also obtain the grand canonical ensemble as in the current theory.
So it is obvious that we can obtain all kinds of equilibrium distributions from the canonical ensemble, including the micro-canonical ensemble, but we cannot obtain the canonical ensemble from the micro-canonical ensemble. The situation is just the opposite of the traditional theory. According to the current theory, the canonical ensemble is deduced from the micro-canonical ensemble, and represents the equilibrium state of a small system in contact with a large heat reservoir. So according to traditional statistical theory the canonical ensemble describes the equilibrium states of non-isolated systems, whereas according to the Liouville equation it describes the equilibrium states of isolated systems. So the micro-canonical ensemble contradicts the result of the Liouville equation. On the other hand, the micro-canonical ensemble is only one of the possible equilibrium states; the other equilibrium states do not have equal probabilities. So it is improper to take the micro-canonical ensemble as the foundation of equilibrium-state theory. For systems acted on only by conservative forces, without considering retarded effects, the canonical ensemble is more fundamental; for general statistical systems, the revised Liouville equation is fundamental. It can be seen from the discussion above that it is simpler and more rational to treat the problems of equilibrium states directly from the Liouville equation, both in physical concepts and in mathematical calculation.
3\. When the non-conservative forces are considered and the system satisfies $$\sum_{i\sigma}[{{\partial\rho}\over{\partial{x}_{i\sigma}}}{{p_{i\sigma}}\over{m_i}}+{{\partial\rho}\over{\partial{p}_{i\sigma}}}(F_{0i\sigma}+F_{i\sigma})]=0$$ this is another kind of equilibrium state. Still other equilibrium states exist.
For non-isolated systems, we can define the equilibrium states as the states in which the forces acting on the system and the ensemble probability density function do not change with time; an example is the equilibrium state of a system located in the gravitational field of the Earth. The distribution function can still be described by Eq.(19), but in this case the total energy $E$ contains the potential energy. In all these equilibrium states the ensemble probability density varies with the phase-space coordinates but does not change with time.
Therefore, it is clear that we can deal with all the problems of statistical physics directly from the revised Liouville equation; it is unnecessary to introduce extra hypotheses such as the equal probability hypothesis. At present, chaos theory is also invoked in discussions of the foundations of statistical mechanics. Being based on conservative interactions, chaos theory may be related to the origin of the random nature of statistical systems or to the ensemble hypothesis, but it has no effect on the fundamental form of the equation of motion of statistical mechanics and has nothing to do with the origin of irreversibility in macro-systems. So we cannot solve the fundamental problems of statistical physics by relying on chaos theory. In fact, the revised Liouville equation is in principle enough to solve all problems of statistical physics. By the method provided in this paper, we obtain a unified description of equilibrium and non-equilibrium states without using concepts such as ergodicity, coarse graining and mixing flows, which easily cause dispute.\
\
[5. The statistical distributions of non-equilibrium states and the BBGKY series equations ]{}\
For non-equilibrium states, direct integration of Eq.(15) gives $$\rho(t)=\exp[-\sum_{i\sigma}\int{{\partial{G}'_{i\sigma}}\over{\partial{p}_{i\sigma}}}dt]=\exp[-\sum^3_{\sigma=1}\sum^N_{i=1}\sum^N_{j\neq{i}}\int{{\partial{G}'_{ij\sigma}[\vec{r}_i,\vec{r}'_j,\vec{p}_i,\vec{p}'_j,\vec{a}'_j(\vec{r}_i,\vec{p}_i,\vec{r}'_j,\vec{p}'_j)]}\over{\partial{p}_{i\sigma}}}dt]$$ In order to make the integration possible, the retarded quantities should be expressed in terms of the non-retarded quantities, or conversely. It is known that the relation between the retarded time and the particles' distance is $$r_{ij}=c(t-t')=\sqrt{[x_i(t)-x'_j(t')]^2+[y_i(t)-y'_j(t')]^2+[z_i(t)-z'_j(t')]^2}$$ Supposing the functional relations $x_i=x_i(t)$ and $x'_j=x'_j(t')$ are known, we obtain from the formula above $$t=f[x'_j(t'),y'_j(t'),z'_j(t'),t']~~~~~~~~~~~or~~~~~~~~~~t'=f'[x_i(t),y_i(t),z_i(t),t]$$ From these relations, we have $$dt={{\partial{f}}\over{\partial{t'}}}{d}t'+{{\partial{f}}\over{\partial{x}'_j}}{{dx'_j}\over{dt'}}{d}t'+{{\partial{f}}\over{\partial{y}'_j}}{{dy'_j}\over{dt'}}{d}t'+{{\partial{f}}\over{\partial{z}'_j}}{{dz'_j}\over{dt'}}{d}t'=({{\partial{f}}\over{\partial{t}}}+\vec{\nu}'_i\cdot\nabla'{f})dt'$$ Substituting this into Eq.(27), we get the probability function expressed in terms of the retarded time $$\rho(t)=\exp[-\sum^3_{\sigma=1}\sum^N_{i=1}\sum^N_{j\neq{i}}\int{{\partial{G}'_{ij\sigma}(\vec{r}_i,\vec{r}'_j,\vec{p}_i,\vec{p}'_j)}\over{\partial{p}_{i\sigma}}}({{\partial{f}}\over{\partial{t}'}}+\vec{\nu}'_i\cdot\nabla'{f})dt']$$ On the other hand, by using the relations $x_i=x_i(t)$, $y_i=y_i(t)$, $z_i=z_i(t)$ and Eq.(29), we have $$r'_{ij}(t,t')=c[t-f'(x_i,y_i,z_i,t)]=r'_{ij}(t)$$ $$\vec{v}'_j=\vec{v}'_j[f'(x_i,y_i,z_i,t)]=\vec{\nu}'_j(t)~~~~~~~~~~~~~\vec{a}'_j=\vec{a}'_j[f'(x_i,y_i,z_i,t)]=\vec{a}'_j(t)$$ Substituting these into Eq.(27), we get the probability function expressed in terms of the non-retarded time $$\rho(t)=\exp[-\sum_{i\sigma}\int{{\partial{G}'_{i\sigma}}\over{\partial{p}_{i\sigma}}}dt]$$
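The implicit relation Eq.(28) between $t$ and $t'$ can be solved numerically by simple fixed-point iteration, which converges when the particle speed is below $c$. The sketch below is our own illustration with an assumed uniform trajectory for the $j$-particle; it returns the retarded time and the retarded distance $r'_{ij}=c(t-t')$.

```python
import numpy as np

c = 1.0                                      # speed of light in the chosen units

def r_j(tp):
    """Assumed trajectory of the j-particle: uniform motion along x at speed 0.3c."""
    return np.array([0.3 * c * tp, 0.0, 0.0])

def retarded_time(r_i, t, tol=1e-12):
    """Solve t' = t - |r_i - r_j(t')| / c by fixed-point iteration (cf. Eq.(28))."""
    tp = t
    for _ in range(200):
        tp_new = t - np.linalg.norm(r_i - r_j(tp)) / c
        if abs(tp_new - tp) < tol:
            break
        tp = tp_new
    return tp

r_i, t = np.array([10.0, 0.0, 0.0]), 0.0
tp = retarded_time(r_i, t)
print(tp, np.linalg.norm(r_i - r_j(tp)))     # retarded time and retarded distance r'_ij = c(t - t')
```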
When the particle speeds satisfy $v<<{c}$, the retarded distance $r'_{ij}(t')$ at the retarded time $t'$ can be replaced approximately by the non-retarded distance $r_{ij}(t)$, i.e., we can let $$t'=t-r'_{ij}(t')/{c}\rightarrow{t}-r_{ij}(t)/{c}$$ $$r'_{ij}(t,t')=\mid\vec{r}_i(t)-\vec{r}'_j(t')\mid=\mid\vec{r}_i(t)-\vec{r}'_j[t-r'_{ij}(t')/{c}]\mid\rightarrow\mid\vec{r}_i(t)-\vec{r}'_j[t-r_{ij}(t)/{c}]\mid=r'_{ij}(t)$$ It should be noted that $r'_{ij}(t)\neq{r}_{ij}(t)$, for $r'_{ij}(t)$ is the approximate retarded distance whereas $r_{ij}(t)$ is not a retarded distance. In this case, we can expand the retarded quantities in series in the small quantity ${r}_{ij}/{c}$. By the relation $\vec{v}_j=d\vec{r}_j/{d}t$, we get $^{(4)}$: $$\vec{r}'_{ij}(t,t')\simeq\vec{r}_i(t)-\vec{r}'_j(t-r_{ij}/{c})=\vec{r}_i(t)-\vec{r}_j(t)+{{r_{ij}(t)}\over{c}}\vec{v}_j(t)-{{r^2_{ij}(t)}\over{2c^2}}\vec{a}_j+{{r^3_{ij}(t)}\over{6c^3}}\dot{\vec{a}}_j+\cdot\cdot\cdot$$ $$=\vec{r}_{ij}(t)+{{r_{ij}(t)}\over{c}}\vec{v}_j(t)-{{r^2_{ij}(t)}\over{2c^2}}\vec{a}_j+{{r^3_{ij}(t)}\over{6c^3}}\dot{\vec{a}}_j+\cdot\cdot\cdot$$ $$r'_{ij}(t')\simeq{r}_{ij}(t)\{1+{{\vec{r}_{ij}(t)\cdot\vec{\nu}_j(t)}\over{cr_{ij}(t)}}-{{\vec{r}_{ij}(t)\cdot\vec{a}_j(t)}\over{2c^2}}+{{\vec{r}_{ij}(t)\cdot\dot{\vec{a}}_j(t)}\over{6c^3}}{r}_{ij}(t)+\cdot\cdot\cdot\}$$ $$=r_{ij}\{1+{{v_{jn}}\over{c}}-{{a_{jn}{r}_{ij}}\over{2c^2}}+{{\dot{a}_{jn}{r}^2_{ij}}\over{6c^3}}+\cdot\cdot\cdot\}$$ $$\vec{v}'_j(t')\simeq\vec{v}_j(t)-{{r_{ij}(t)}\over{c}}\vec{a}_j(t)+{{r^2_{ij}(t)}\over{2c^2}}\dot{\vec{a}}_j(t)+\cdot\cdot\cdot~~~~~~~~~~~~~\vec{a}'_j(t')\simeq\vec{a}_j(t)-{{r_{ij}(t)}\over{c}}\dot{\vec{a}}_j(t)+\cdot\cdot\cdot$$ From Eqs.(36) and (37) it is obvious that $\vec{r}'_{ij}(t')\neq\vec{r}_{ji}(t')$, $r'_{ij}(t')\neq{r}'_{ji}(t')$. The relations below are useful in later calculations: $${1\over{r'_{ij}}}\simeq{1\over{r_{ij}}}\{1-{{v_{jn}}\over{c}}+{{a_{jn}}\over{2c^2}}(1-{{2v_{jn}}\over{c}})r_{ij}-{{\dot{a}_{jn}}\over{6c^3}}(1-{{2v_{jn}}\over{c}})r^2_{ij}+\cdot\cdot\cdot\}$$ $${1\over{r'^2_{ij}}}\simeq{1\over{r^2_{ij}}}\{1-{{2v_{jn}}\over{c}}+{{3v^2_{jn}}\over{c^2}}+{{a_{jn}}\over{c^2}}(1-{{3v_{jn}}\over{c}})r_{ij}-{{\dot{a}_{jn}}\over{3c^3}}(1-{{3v_{jn}}\over{c}})r^2_{ij}+\cdot\cdot\cdot\}$$ $${1\over{r'^3_{ij}}}\simeq{1\over{r^3_{ij}}}\{1-{{3v_{jn}}\over{c}}+{{6v^2_{jn}}\over{c^2}}+{{3a_{jn}}\over{2c^2}}(1-{{5v_{jn}}\over{c}})r_{ij}-{{\dot{a}_{jn}}\over{2c^3}}(1-{{5v_{jn}}\over{c}})r^2_{ij}+\cdot\cdot\cdot\}$$
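The leading term of the expansion (38), $r'_{ij}\simeq r_{ij}(1+v_{jn}/c)$, can be compared against the exact retarded distance obtained from Eq.(28). The sketch below is our own numerical check with an assumed slow ($v=0.05c$), unaccelerated trajectory, for which the two values agree to first order in $v/c$.

```python
import numpy as np

c = 1.0

def r_j(tp):
    """Assumed slow trajectory of the j-particle (speed 0.05c along x, no acceleration)."""
    return np.array([1.0 + 0.05 * c * tp, 0.2, 0.0])

def v_jvec(tp):
    return np.array([0.05 * c, 0.0, 0.0])

def retarded_distance(r_i, t):
    """Exact r'_ij = c(t - t'), with t' obtained from the implicit relation Eq.(28)."""
    tp = t
    for _ in range(200):
        tp = t - np.linalg.norm(r_i - r_j(tp)) / c
    return np.linalg.norm(r_i - r_j(tp))

r_i, t = np.zeros(3), 0.0
r_ij_vec = r_i - r_j(t)
r_ij = np.linalg.norm(r_ij_vec)
v_jn = np.dot(r_ij_vec, v_jvec(t)) / r_ij            # component of v_j along r_ij

exact = retarded_distance(r_i, t)
first_order = r_ij * (1.0 + v_jn / c)                 # leading term of Eq.(38)
print(exact, first_order)                             # agree to first order in v/c
```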
The BBGKY series equations are discussed now. The volume element of the $i$-particle's phase space is written as $d\Omega_i=\Pi_{\sigma}{d}x_{i\sigma}{d}p_{i\sigma}$ and the reduced distribution functions of ensemble probability density are $$f_s(x_{1\sigma},p_{1\sigma}\cdot\cdot\cdot{x}_{s\sigma},p_{s\sigma},t)=\int\rho(x_{1\sigma},p_{1\sigma}\cdot\cdot\cdot{x}_{n\sigma},p_{n\sigma},t)d\Omega_{S+1}\cdot\cdot\cdot{d}\Omega_N$$ In the current theory, by considering the identity of the particles, the BBGKY series equations are $${{\partial{f}_S}\over{\partial{t}}}=-\sum^{3}_{\sigma=1}\sum^{S}_{i=1}({{p_{i\sigma}}\over{m_i}}{{\partial{f}_S}\over{\partial{x}_{i\sigma}}}+F_{ei\sigma}{{\partial{f}_S}\over{\partial{p}_{i\sigma}}})-\sum^{3}_{\sigma=1}\sum^{S}_{i=1}\sum^S_{j\neq{i}}F_{ij\sigma}{{\partial{f}_S}\over{\partial{p}_{i\sigma}}}$$ $$-(N-S)\sum^3_{\sigma=1}\sum^S_{i=1}\int{F}_{iS+1\sigma}{{\partial{f}_{S+1}}\over{\partial{p}_{i\sigma}}}d\Omega_{s+1}$$ These equations are equivalent to the Liouville equation. Considering that $F_{ei\sigma}$ depends only on the $i$-particle's coordinates, whereas $F'_{0i\sigma}$, $F'_{i\sigma}$ and $G'_{i\sigma}$ depend on both the $i$- and $j$-particles' coordinates once the retarded Lorentz forces and radiation damping forces are introduced, we obtain the new BBGKY series equations in the same way: $${{\partial{f}_S}\over{\partial{t}}}+\sum^{3}_{\sigma=1}\sum^{S}_{i=1}({{p_{i\sigma}}\over{m_i}}{{\partial{f}_S}\over{\partial{x}_{i\sigma}}}+F_{ei\sigma}{{\partial{f}_S}\over{\partial{p}_{i\sigma}}})+\sum^{3}_{\sigma=1}\sum^{S}_{i=1}\sum^S_{j\neq{i}}(F'_{0ij\sigma}+F'_{ij\sigma}){{\partial{f}_S}\over{\partial{p}_{i\sigma}}}$$ $$+(N-S)\sum^3_{\sigma=1}\sum^S_{i=1}\int(F'_{0iS+1\sigma}+F'_{iS+1\sigma}){{\partial{f}_{S+1}}\over{\partial{p}_{i\sigma}}}d\Omega_{s+1}$$ $$+\sum^3_{\sigma=1}\sum^S_{i=1}\sum^S_{j\neq{i}}{{\partial(G'_{ij\sigma}{f}_S)}\over{\partial{p}_{i\sigma}}}+(N-S)\sum^3_{\sigma=1}\sum^S_{i=1}\int{{\partial(G'_{iS+1\sigma}{f}_{S+1})}\over{\partial{p}_{i\sigma}}}d\Omega_{s+1}=0$$ In these formulas, $F'_{0iS+1,\sigma}$, $F'_{iS+1,\sigma}$ and $G'_{iS+1,\sigma}$ represent the $j=S+1$ terms. Because $N-1\simeq{N}$, the first equation, with $S=1$, is $${{\partial{f}_1}\over{\partial{t}}}+{{\vec{p}_1}\over{m}}\cdot\nabla_{\vec{r}_1}{f}_1+\vec{F}_{e1}\cdot\nabla_{\vec{p}_1}{f}_1+N\int\vec{F}'_{012}\cdot\nabla_{\vec{p}_1}{f}_2{d}^3\vec{r}_2{d}^3\vec{p}_2$$ $$=-N\int\vec{F}'_{12}\cdot\nabla_{\vec{p}_1}{f}_2{d}^3\vec{r}_2{d}^3\vec{p}_2-N\int\nabla_{\vec{p}_1}\cdot(\vec{G}'_{12}{f}_2)d^3\vec{r}_2{d}^3\vec{p}_2$$ In this formula, $\vec{F}'_{012}$ is the conservative retarded force and $\vec{F}'_{12}$ is the non-conservative retarded force.
By using Eqs.(36)--(41), $\vec{F}_{0ij}$ can be expressed in terms of the non-retarded quantities as $$\vec{F}_{0ij}={{q_i{q}_j\vec{r}'_{ij}}\over{r'^3_{ij}}}={{q_i{q}_j\vec{r}_{ij}}\over{r^3_{ij}}}+\vec{K}_{ij}=-\nabla_{\vec{r}_i}{U}(r_{ij})+\vec{K}_{ij}$$ $$\vec{K}_{ij}={{q_i{q}_j\vec{r}_{ij}}\over{r^3_{ij}}}K_{ij1}+{{q_i{q}_j\vec{v}_j}\over{cr^2_{ij}}}K_{ij2}+{{q_i{q}_j\vec{a}_j}\over{c^2r_{ij}}}K_{ij3}+{{q_i{q}_j\dot{\vec{a}}_j}\over{c^3}}K_{ij4}$$ $$K_{ij1}=-{{3v_{jn}}\over{c}}+{{6v^2_{jn}}\over{c^2}}+{{3a_{jn}}\over{2c^2}}(1-{{5v_{jn}}\over{c}})r_{ij}-{{\dot{a}_{jn}}\over{2c^3}}(1-{{5v_{jn}}\over{c}})r^2_{ij}+\cdot\cdot\cdot$$ $$K_{ij2}=1-{{3v_{jn}}\over{c}}+{{3a_{jn}{r}_{ij}}\over{2c^2}}-{{\dot{a}_j{r}^2_{ij}}\over{2c^3}}$$ $$K_{ij3}=-{1\over{2}}+{{3v_{jn}}\over{2c}}~~~~~~~K_{ij4}={1\over{6}}-{{v_{jn}}\over{2c}}$$ It is obvious that after the retarded quantities are expressed in terms of the non-retarded quantities, the conservative forces become non-conservative. In the same way, $\vec{G}'_{ij}$ can also be expressed in terms of the non-retarded quantities. By using Eq.(46), Eq.(45) can be written as $${{\partial{f}_1}\over{\partial{t}}}+{{\vec{p}_1}\over{m}}\cdot\nabla_{\vec{r}_1}{f}_1+\vec{F}_{e1}\cdot\nabla_{\vec{p}_1}{f}_1-N\int\nabla_{\vec{r}_1}{U}(r_{12})\cdot\nabla_{\vec{p}_1}{f}_2{d}^3\vec{r}_2{d}^3\vec{p}_2$$ $$=-N\int(\vec{K}_{12}+\vec{F}'_{12})\cdot\nabla_{\vec{p}_1}{f}_2{d}^3\vec{r}_2{d}^3\vec{p}_2-N\int\nabla_{\vec{p}_1}\cdot(\vec{G}'_{12}{f}_2)d^3\vec{r}_2{d}^3\vec{p}_2$$ The left side of this equation is the result given by the Liouville equation; the right side is new and arises from the present revision. Similarly, the second equation of the BBGKY series, with $S=2$, can be written as $${{\partial{f}_2}\over{\partial{t}}}+{{\vec{p}_1}\over{m}}\cdot\nabla_{\vec{r}_1}{f}_2+{{\vec{p}_2}\over{m}}\cdot\nabla_{\vec{r}_2}{f}_2+\vec{F}_{e1}\cdot\nabla_{\vec{p}_1}{f}_2+\vec{F}_{e2}\cdot\nabla_{\vec{p}_2}{f}_2-\nabla_{\vec{r}_1}{U}(r_{12})\cdot\nabla_{\vec{p}_1}{f}_2$$ $$-\nabla_{\vec{r}_2}{U}(r_{12})\cdot\nabla_{\vec{p}_2}{f}_2-N\int[\nabla_{\vec{r}_1}{U}(r_{13})\cdot\nabla_{\vec{p}_1}{f}_3+\nabla_{\vec{r}_2}{U}(r_{23})\cdot\nabla_{\vec{p}_2}{f}_3]d^3\vec{r}_3{d}^3\vec{p}_3$$ $$=-\nabla_{\vec{p}_1}\cdot(\vec{G}'_{12}{f}_2)-\nabla_{\vec{p}_2}\cdot(\vec{G}'_{21}{f}_2)+\vec{K}_{12}\cdot\nabla_{\vec{p}_1}{f}_2+\vec{K}_{21}\cdot\nabla_{\vec{p}_2}{f}_2$$ $$-N\int[(\vec{K}_{13}+\vec{F}'_{13})\cdot\nabla_{\vec{p}_1}{f}_3+(\vec{K}_{23}+\vec{F}'_{23})\cdot\nabla_{\vec{p}_2}{f}_3]{d}^3\vec{r}_3{d}^3\vec{p}_3$$ $$-N\int[\nabla_{\vec{p}_1}(\vec{G}'_{13}{f}_3)+\nabla_{\vec{p}_2}(\vec{G}'_{23}{f}_3)]d^3\vec{r}_3{d}^3\vec{p}_3$$ The definition of $\vec{K}_{ij}$ is given in Eq.(47). We will use these equations below to discuss the equations of motion of hydromechanics.\
\
[6. The equations of motion of hydromechanics ]{}\
How to derive the equations of motion of hydromechanics from statistical mechanics is still an unsolved problem. Because the current statistical mechanics is based on the Liouville equation, it is suitable only for the equilibrium states of conservative systems, or for ideal fluids. The dissipative phenomena of real fluids, such as heat conduction and viscosity, can only be explained rationally by introducing non-conservative forces. Let us discuss this problem now. By writing the right-hand terms of Eq.(50) in hydrodynamic form and adding them to the current results, we obtain the results that hold after the retarded Lorentz forces and radiation damping forces are introduced. Following the current theory, we define the normalization of the functions $f_1(\vec{r}_1,\vec{p}_1,t)$ and $f_2(\vec{r}_1,\vec{p}_1,\vec{r}_2,\vec{p}_2,t)$ as $$V_0=\int{f}_1(\vec{r}_1,\vec{p}_1,t)d^3\vec{r}_1{d}^3\vec{p}_1$$ $$V^2_0=\int{f}_2(\vec{r}_1,\vec{p}_1,\vec{r}_2,\vec{p}_2,t){d}^3\vec{r}_1{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ In the following formulas, $\rho_0$ is the macroscopic mass density, $\vec{V}$ is the fluid velocity, $u_k$ is the kinetic energy density, and $u_{v}$ is the potential energy density $^{(5)}$: $$\rho_0(\vec{r}_1,t)={{mN}\over{V_0}}\int{f}_1(\vec{r}_1,\vec{p}_1,t)d^3\vec{p}_1~~~~~~~~\vec{V}(\vec{r}_1,t)={{N}\over{\rho_0{V}_0}}\int\vec{p}_1{f}_1(\vec{r}_1,\vec{p}_1,t)d^3\vec{p}_1$$ $$u_k(\vec{r}_1,t)={{N}\over{2m\rho_0{V}_0}}\int(\vec{p}-m\vec{V})^2{f}_1(\vec{r}_1,\vec{p}_1,t)d^3\vec{p}_1$$ $$u_{v}(\vec{r}_1,t)={1\over{2\rho_0}}({N\over{V_0}})^2\int{U}(\vec{r}_1,\vec{r}'_2){f}_2(\vec{r}_1,\vec{p}_2,\vec{r}'_2,\vec{p}'_2,t)d^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$
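The moments in Eqs.(54) and (55) can be illustrated numerically by replacing the momentum integrals of $f_1$ with averages over a sample of momenta. The sketch below is our own toy example: the sample is drawn from a drifting Maxwellian (an assumed form of $f_1$ at a fixed point), with the overall normalization factors $N/(\rho_0 V_0)$ absorbed into the sample average; it recovers the assumed fluid velocity and a peculiar kinetic energy of $(3/2)KT$ per particle.

```python
import numpy as np

rng = np.random.default_rng(2)
m, KT = 1.0, 0.4                              # assumed particle mass and local temperature
V_true = np.array([0.3, 0.0, 0.0])            # assumed local fluid velocity

# Momenta sampled from a drifting Maxwellian playing the role of f_1 at a fixed point r_1.
p = m * (V_true + rng.normal(0.0, np.sqrt(KT / m), size=(200_000, 3)))

V_est = p.mean(axis=0) / m                                     # fluid velocity, cf. Eq.(54)
e_k = np.mean(np.sum((p - m * V_est) ** 2, axis=1)) / (2 * m)  # peculiar kinetic energy per particle, cf. Eq.(55)
print(V_est, e_k, 1.5 * KT)                                    # recovers the drift and (3/2) KT
```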
The balance equation of mass density, that is, the continuity equation, is discussed first. Multiplying Eq.(50) by $mN/{V}_0$ and carrying out the integral over $d^3\vec{p}_1$, the right side of the equation can be written as $$-{{N^2}\over{\rho_0{V}_0}}\int(\vec{K}_{12}+\vec{F}'_{12})\cdot\nabla_{\vec{p}_1}{f}_2{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2-{{N^2}\over{\rho_0{V}_0}}\int\nabla_{\vec{p}_1}\cdot(\vec{G}'_{12}{f}_2){d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ According to Eq.(14), we have $\nabla_{\vec{p}_1}\cdot\vec{F}'_{12}=0$. Because $\vec{K}_{12}$ has nothing to do with $\vec{p}_1$, we also have $\nabla_{\vec{p}_1}\cdot\vec{K}_{12}=0$. By using the boundary conditions of the probability density functions, the expression above vanishes. So the continuity equation of hydromechanics keeps its current form, $${{\partial\rho_0}\over{\partial{t}}}+\nabla\cdot\rho_0\vec{V}=0$$
In order to get the equation of motion of hydromechanics, we multiply Eq.(50) by $N\vec{p}_1/{V}_0$ and integrate over $d^3\vec{p}_1$; from the left side of Eq.(50) we then get $$\rho_0({{\partial\vec{V}}\over{\partial{t}}}+\vec{V}\cdot\nabla\vec{V})+\nabla\cdot(\vec{\vec{P}}_k+\vec{\vec{P}}_{\nu})=\rho_0\vec{\Gamma}_1$$ In this formula, $\vec{\Gamma}_1$, $\vec{\vec{P}}_k$ and $\vec{\vec{P}}_v$ are, respectively, the average external force, the kinetic energy tensor and the potential energy tensor per unit mass $^{(5)}$: $$\vec{\Gamma}_1={N\over{\rho_0{V}_0}}\int{f}_1\vec{F}_{e1}{d}^3\vec{p}_1~~~~~~~~~\vec{\vec{P}}_k={N\over{V_0}}\int{f}_1{{(\vec{p}_1-m\vec{V})(\vec{p}_1-m\vec{V})}\over{m}}{d}^3\vec{p}_1$$ $$\vec{\vec{p}}_{\nu}=-{1\over{2}}({N\over{V_0}})^2\int^1_0{d}\lambda\int{{\vec{r}''_{12}\vec{r}''_{12}}\over{r''_{12}}}{{dU(r''_{12})}\over{dr''_{12}}}{d}^3{r}''_{12}{d}^3\vec{p}_1{d}^3\vec{p}_2$$ For the right side of Eq.(50), because $\nabla_{\vec{p}_1}\cdot(\vec{K}_{12}+\vec{F}'_{12})=0$, we get $$-{N^2\over{V_0}}\int\vec{p}_1(\vec{K}_{12}+\vec{F}'_{12})\cdot\nabla_{\vec{p}_1}{f}_2{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2-{N\over{V_0}}\int\vec{p}_1\nabla_{\vec{p}_1}\cdot(\vec{G}'_{12}{f}_2){d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ $$={N^2\over{V_0}}\int{f}_2(\vec{K}_{12}+\vec{F}'_{12})\cdot\nabla_{\vec{p}_1}\vec{p}_1{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2+{N^2\over{V_0}}\int{f}_2\vec{G}'_{12}\cdot\nabla_{\vec{p}_1}\vec{p}_1{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ $$={N^2\over{V_0}}\int{f}_2(\vec{K}_{12}+\vec{F}'_{12})d^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2+{{N^2}\over{V_0}}\int{f}_2\vec{G}'_{12}{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ On the other hand, by using Eqs.(36)--(41), Eq.(5) can be expressed in terms of the non-retarded quantities: $$\vec{F}'_{ij}={{q_i{q}_j\vec{r}_{ij}}\over{r^3_{ij}}}{Q}_{ij1}+{{q_i{q}_j\vec{v}_{ij}}\over{cr^2_{ij}}}{Q}_{ij2}+{{q_i{q_j}\vec{a}_{ij}}\over{c^2{r}_{ij}}}{Q}_{ij3}+{{q_i{q}_j\dot{\vec{a}}_{ij}}\over{c^3}}{Q}_{ij4}$$ Letting $R_{ijk}=K_{ijk}+Q_{ijk}$, we can write $\vec{K}_{12}+\vec{F}'_{12}$ as $$\vec{K}_{12}+\vec{F}'_{12}={{q_1{q}_2\vec{r}_{12}}\over{r^3_{12}}}{R}_{121}+{{q_1{q}_2\vec{v}_2}\over{cr^2_{12}}}{R}_{122}+{{q_1{q}_2\vec{a}_2}\over{c^2{r}_{12}}}{R}_{123}+{{q_1{q}_2\dot{\vec{a}}_2}\over{c^3}}{R}_{124}$$ In this formula, $R_{ijk}$ no longer contains retarded quantities.
By the relation $q_1{q}_2\vec{r}_{12}/r^3_{12}=-\nabla_{\vec{r}_1}{U}(r_{12})$, we have $$-{N^2\over{V_0}}\int(\vec{K}_{12}+\vec{F}'_{12}){f}_2{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ $$={{N^2}\over{V_0}}\int\nabla_{\vec{r}_1}{U}(r_{12})R_{121}{f}_2{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2-{{N^2}\over{V_0}}\int{{q_1{q_2}\vec{v}_2}\over{cr^2_{12}}}R_{122}{f}_2{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ $$-{{N^2}\over{V_0}}\int{{q_1{q}_2\vec{a}_2}\over{cr^2_{12}}}R_{123}{f}_2{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2-{{N^2}\over{V_0}}\int{{q_1{q_2}\vec{a}_2}\over{cr^2_{12}}}R_{124}{f}_2{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ By using the formula $^{(5)}$ $$\int{f}(\vec{r}_1,\vec{r}_2)\nabla_{r_1}{U}(r_{12}){d}^3\vec{r}_2=\nabla_{r_1}\cdot\{-{1\over{2}}\int^1_0{d}\lambda\int{d}^3\vec{r}''_{12}{{\vec{r}''_{12}\vec{r}''_{12}}\over{r''_{12}}}{{dU(r''_{12})}\over{dr''_{12}}}{f}(\vec{r}_1+(1-\lambda)\vec{r}''_{12},\vec{r}_1-\lambda\vec{r}''_{12})\}$$ we have $${{N^2}\over{V_0}}\int\nabla_{\vec{r}_1}{U}(r_{12}){R}_{121}{f}_2{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2=-\nabla_{\vec{r}_1}\cdot\vec{\vec{P}}_s$$ $$\vec{\vec{p}}_s={{N^2}\over{2V_0}}\int^1_0{d}\lambda\int{{\vec{r}''_{12}\vec{r}''_{12}}\over{r''_{12}}}{{dU(r''_{12})}\over{dr''_{12}}}{R}_{121}[\vec{r}_1+(1-\lambda)\vec{r}''_{12},\vec{p}_1,\vec{r}_1-\lambda\vec{r}''_{12},\vec{p}_2,t]$$ $$\times{f}_2[\vec{r}_1+(1-\lambda)\vec{r}''_{12},\vec{p}_1,\vec{r}_1-\lambda\vec{r}''_{12},\vec{p}_2,t]{d}^3\vec{r}''_{12}{d}^3\vec{p}_1{d}^3\vec{p}_2$$ We call $\vec{\vec{P}}_s$ the dissipative energy tensor. Letting $$\vec{\Gamma}_2=-{{q_1{q}_2{N}^2}\over{c\rho_0{V}_0}}\int({{\vec{v}_2}\over{r^2_{12}}}{R}_{122}+{{\vec{a}_2}\over{cr_{12}}}{R}_{123}+{{\dot{\vec{a}}_2}\over{c^2}}{R}_{124}){f}_2{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ represent the average dissipative force acting on unit mass, and $$\vec{\Gamma}_3={{N^2}\over{\rho_0{V}_0}}\int{f}_2\vec{G}'_{12}{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ represent the average radiation damping force per unit mass, the equation of motion of hydromechanics is $$\rho_0({{\partial\vec{V}}\over{\partial{t}}}+\vec{V}\cdot\nabla\vec{V})+\nabla\cdot(\vec{\vec{P}}_k+\vec{\vec{P}}_v+\vec{\vec{P}}_s)=\rho_0(\vec{\Gamma}_1+\vec{\Gamma}_2+\vec{\Gamma}_3)$$ Multiplying Eq.(50) by $N(\vec{p}_1-m\vec{V})^2/{2}mV_0$ and integrating over $d^3\vec{p}_1$, we can get the balance equation of kinetic energy.
From the left side of the equation, we have $${{\partial(\rho_0{u}_k)}\over{\partial{t}}}+\nabla\cdot(\rho_0{u}_k\vec{V}+\vec{J}_k)=-\vec{\vec{P}}_k:\nabla\vec{V}$$ Here $\vec{J}_k=\vec{J}_{k1}+\vec{J}_{k2}$ with $$\vec{J}_{k1}={N\over{V_0}}\int{{\vec{p}-m\vec{V}}\over{m}}{{(\vec{p}_1-m\vec{V})^2}\over{2m}}{f}_1{d}^3\vec{p}_1$$ $$\vec{J}_{k2}=\int^1_0{d}\lambda{{\vec{r}''_{12}\vec{r}''_{12}\cdot(\vec{p}_1-m\vec{V})}\over{2mr''_{12}}}{{dU(r''_{12})}\over{dr''_{12}}}{f}_2(\vec{r}_1+(1-\lambda)\vec{r}''_{12},\vec{p}_2,\vec{r}_1-\lambda\vec{r}''_{12},\vec{p}_2,t)d^3\vec{r}''_{12}{d}^3\vec{p}_1{d}^3\vec{p}_2$$ From the right side of the equation, we get $$-{{N^2}\over{2mV_0}}\int(\vec{p}_1-m\vec{V})^2\nabla_{\vec{p}_1}\cdot[(\vec{K}+\vec{F}'_{12})f_2]d^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ $$-{{N}\over{2mV_0}}\int(\vec{p}_1-m\vec{V})^2\nabla_{\vec{p}_1}\cdot(\vec{G}'_{12}{f}_2)d^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ $$={{N^2}\over{mV_0}}\int{f}_2(\vec{K}_{12}+\vec{F}'_{12})\cdot(\vec{p}_1-m\vec{V})d^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ $$+{{N^2}\over{mV_0}}\int{f}_2\vec{G}'_{12}\cdot(\vec{p}_1-m\vec{V})d^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ By using Eq.(65), the first term of the formula above can be written as $${{q_1{q}_2{N}^2}\over{mV_0}}\int{f}_2({{\vec{r}_{12}}\over{r^3_{12}}}{R}_{121}+{{\vec{\nu}_2}\over{cr^2_{12}}}{R}_{122}+{{\vec{a}_2}\over{c^2{r}_{12}}}{R}_{123}+{{\dot{\vec{a}_2}}\over{c^3}}{R}_{124})\cdot(\vec{p}_1-m\vec{V}){d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ By using Eq.(67) again, the first term of this expression can be written as $$\nabla\cdot\vec{J}_{k3}={{N^2}\over{mV_0}}\int{f}_2{R}_{121}{{q_1{q}_2\vec{r}_{12}}\over{r^3_{12}}}\cdot(\vec{p}_1-m\vec{V})d^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ $$\vec{J}_{k3}={{N^2}\over{2mV_0}}\int^1_0{d}\lambda\int{{\vec{r}''_{12}\vec{r}''_{12}\cdot(\vec{p}_1-m\vec{V})}\over{r''_{12}}}{{dU(r''_{12})}\over{dr''_{12}}}{R}_{121}[\vec{r}_1+(1-\lambda)\vec{r}''_{12},\vec{p}_1,\vec{r}_1-\lambda\vec{r}''_{12},\vec{p}_2,t]$$ $$\times{f}_2[\vec{r}_1+(1-\lambda)\vec{r}''_{12},\vec{p}_1,\vec{r}_1-\lambda\vec{r}''_{12},\vec{p}_2,t]d^3\vec{r}''_{12}{d}^3\vec{p}_1{d}^3\vec{p}_2$$ $\vec{J}_{k3}$ can be called the dissipative flux of kinetic energy per unit mass. The second, third and fourth terms in Eq.(77) and the last term in Eq.(76) can be collected and written as $$\rho_0\sigma_k={{q_1{q}_2{N}^2}\over{cmV_0}}\int{f}_2({{\vec{\nu}_2}\over{r^2_{12}}}Q_{122}+{{\vec{a}_2}\over{cr_{12}}}Q_{123}+{{\dot{\vec{a}}_2}\over{c^2}}Q_{124}+{{cm}\over{q_1{q_2}}}\vec{G}'_{12})\cdot(\vec{p}_1-m\vec{V})d^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ We can call $\sigma_k$ the dissipative kinetic-energy production per unit mass. There is no such term in the current theory; it appears only when the retarded Lorentz forces and radiation damping forces are introduced. Letting $\vec{J}_k=\vec{J}_{k1}+\vec{J}_{k2}+\vec{J}_{k3}$, Eq.(73) can finally be written as $${{\partial(\rho_0{u}_k)}\over{\partial{t}}}+\nabla\cdot(\rho_0{u}_k\vec{V}+\vec{J}_k)=-\vec{\vec{p}}_k:\nabla\vec{V}+\rho_0\sigma_k$$
Multiplying Eq.(51) by $N^2{U}(r_{12})/{V}^2_0$ and integrating over $d^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$, the balance equation of potential energy can be obtained. From the left side of the equation, we get $${{\partial(\rho_0{u}_{\nu})}\over{\partial{t}}}+\nabla\cdot(\rho_0{u}_{\nu}\vec{V}+\vec{J}_{\nu})=-\vec{\vec{p}}_{\nu}:\nabla\vec{V}$$ Here $\vec{J}_v=\vec{J}_{v1}+\vec{J}_{v2}$ with $$\vec{J}_{\nu{1}}={1\over{2}}({N\over{V_0}})^2\int{{\vec{p}-m\vec{V}}\over{m}}U(r_{ij})f_2{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ $$\vec{J}_{\nu{2}}=-\int^1_0{d}\lambda{{\vec{r}''_{12}\vec{r}''_{12}\cdot(\vec{p}_1-\vec{p}_2)}\over{4mr''_{12}}}{{dU(r''_{12})}\over{dr''_{12}}}{f}_2(\vec{r}_1+(1-\lambda)\vec{r}''_{12},\vec{p}_2,\vec{r}_1-\lambda\vec{r}''_{12},\vec{p}_2,t){d}^3\vec{r}''_{12}{d}^3\vec{p}_1{d}^3\vec{p}_2$$ The right side of the equation is $${{N^2}\over{V^2_0}}\int{U}(r_{12})\vec{K}_{12}\cdot\nabla_{\vec{p}_1}{f}_2{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2+{{N^2}\over{V^2_0}}\int{U}(r_{12})\vec{K}_{21}\cdot\nabla_{\vec{p}_2}{f}_2{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ $$-{{N^3}\over{V^2_0}}\int{U}(r_{12})[(\vec{K}_{13}+\vec{F}'_{13})\cdot\nabla_{\vec{p}_1}{f}_3+(\vec{K}_{23}+\vec{F}'_{23})\cdot\nabla_{\vec{p}_2}{f}_3]{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2{d}^3\vec{r}_3{d}^3\vec{p}_3$$ $$-{{N^2}\over{V^2_0}}\int{U}(r_{12})\nabla_{\vec{p}_1}\cdot(\vec{G}'_{12}{f}_2){d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2-{{N^2}\over{V^2_0}}\int{U}(r_{12})\nabla_{\vec{p}_2}\cdot(\vec{G}'_{21}{f}_2){d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ $$-N\int{U}(r_{12})[\nabla_{\vec{p}_1}\cdot(\vec{G}'_{13}{f}_3)+\nabla_{\vec{p}_2}\cdot(\vec{G'_{23}{f}_3})]d^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2{d}^3\vec{r}_3{d}^3\vec{p}_3$$ As mentioned before, we have $\nabla_{\vec{p}_i}\cdot\vec{K}_{ij}=0$ and $\nabla_{\vec{p}_i}\cdot\vec{F}'_{ij}=0$. By considering the boundary conditions, all these integrals are zero. So the potential-energy terms receive no revision after the retarded Lorentz forces and the radiation damping forces are introduced.
We still have to consider the contributions of the non-conservative dissipative energies. It is known from electromagnetic theory that when the ordinary coordinates and momenta are used to describe a particle's motion, the interaction energy does not contain the magnetic field's action. The retarded interaction between the $i$-particle and the $j$-particle is $$U'_{ij}(r'_{ij},\vec{p}'_j)={{q_i{q}_j}\over{r'_{ij}(1-\nu'_{jn}/{c})}}={{q_i{q}_j}\over{r'_{ij}}}(1+{{\nu'_{jn}}\over{c}}+{{\nu'^2_{jn}}\over{c^2}})$$ $U'_{ij}$ is asymmetric in the indices $i$ and $j$ once the retarded effect is considered, so the non-conservative retarded total interaction can be written as $$U'(r'_{ij},\vec{p}'_j)={1\over{2}}\sum^N_{i=1}\sum^N_{j\neq{i}}{{q_i{q}_j}\over{r'_{ij}}}({{\nu'_{jn}}\over{c}}+{{\nu'^2_{jn}}\over{c^2}})={1\over{2}}\sum^N_{i=1}\sum^N_{j\neq{i}}{{q_i{q}_j}\over{r_{ij}}}$$ $$\times\{{{v_{jn}}\over{c}}-{{a_{ij}{r}_{ij}}\over{c^2}}(1-{{v_{jn}}\over{c}})+{{\dot{a}_{jn}{r}^2_{ij}}\over{2c^3}}(1+{{4v_{jn}}\over{3c}})-{{3\vec{v}_j\cdot\vec{a}_j{r}_{ij}}\over{2c^3}}+{{2\vec{v}_j\cdot\dot{\vec{a}}_j{r}^2_{ij}}\over{3c^4}}\}$$ Multiplying the second equation of the BBGKY series by $N^2{U}'(r'_{12},\vec{p}_1,\vec{p}'_2)/{2}V^2_0$ and integrating over $d^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$, we get $${{\partial}\over{\partial{t}}}{{N^2}\over{2V^2_0}}\int{U'}f_2{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2+{{N^2}\over{2mV^2_0}}\int{U'}(\vec{p}_1\cdot\nabla_{\vec{r}_1}+\vec{p}_2\cdot\nabla_{\vec{r}_2}){f}_2{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ $$+{{N^2}\over{2V^2_0}}\int{U'}(\vec{F}_{e1}\cdot\nabla_{\vec{p}_1}+\vec{F}_{e2}\cdot\nabla_{\vec{p}_2})f_2{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ $$-{{N^2}\over{2V^2_0}}\int{U'}[\nabla_{\vec{r}_1}{U}(r_{12})\cdot\nabla_{\vec{p}_1}+\nabla_{\vec{r}_2}{U}(r_{12})\cdot\nabla_{\vec{p}_2}]f_2{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ $$-{{N^3}\over{2V^2_0}}\int{U'}[\nabla_{\vec{r}_1}{U}(r_{13})\cdot\nabla_{\vec{p}_1}+\nabla_{\vec{r}_2}{U}(r_{23})\cdot\nabla_{\vec{p}_2}]f_3{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2{d^3}\vec{r}_3{d}^3\vec{p}_3$$ $$=-{{N^2}\over{2V^2_0}}\int{U'}[\nabla_{\vec{p}_1}\cdot(\vec{G}'_{12}{f}_2)+\nabla_{\vec{p}_2}\cdot(\vec{G}'_{21}{f}_2)]{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ $$+{{N^2}\over{2V^2_0}}\int{U'}(\vec{K}_{12}\cdot\nabla_{\vec{p}_1}+\vec{K}_{21}\cdot\nabla_{\vec{p}_2})f_2{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ $$-{{N^3}\over{2V^2_0}}\int{U'}[(\vec{K}_{13}+\vec{F}'_{13})\cdot\nabla_{\vec{p}_1}f_3+(\vec{K}_{23}+\vec{F}'_{23})\cdot\nabla_{\vec{p}_2}f_3]{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2{d}^3\vec{r}_3{d}^3\vec{p}_3$$ $$-{{N^3}\over{2V^2_0}}\int{U'}[\nabla_{\vec{p}_1}\cdot(\vec{G}'_{13}{f}_3)+\nabla_{\vec{p}_2}\cdot(\vec{G}'_{23}{f}_3)]d^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2{d}^3\vec{r}_3{d}^3\vec{p}_3$$ The calculation is lengthy, and we only give the result for the first term on the right side of Eq.(86), $$U_1={{q_1{q}_2}\over{2c}}({{v_{2n}}\over{r_{12}}}+{{v_{1n}}\over{r_{21}}})={{q_1{q}_2}\over{2c}}({{\vec{r}_{12}\cdot\vec{v}_2}\over{r^2_{12}}}+{{\vec{r}_{21}\cdot\vec{v}_1}\over{r^2_{21}}})={{q_1{q}_2}\over{2cm}}{{\vec{r}_{12}\cdot(\vec{p}_2-\vec{p}_1)}\over{r^2_{12}}}$$ Similarly to Eq.(56), we define $$u_s={1\over{2\rho_0}}{{N^2}\over{V^2_0}}\int{U}_1{f}_2{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ $$\vec{J}_{s1}={1\over{2}}{{N^2}\over{V^2_0}}\int{{{\vec{p}_1-m\vec{V}}\over{m}}}{U}_1{f}_2{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ as the energy density and the energy flux density of the
non-conservative interactions per unit mass, and obtain $$\rho_0{u}_s\vec{V}={1\over{2}}({N\over{V_0}})^2\int\vec{V}{U}_1{f}_2{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ $$\nabla_{\vec{r}_i}\cdot(\rho_0{u}_s\vec{V}+\vec{J}_{s1})=\nabla_{\vec{r}_1}\cdot\{{1\over{2}}{{N^2}\over{V^2_0}}\int{{\vec{p}_1}\over{m}}{U}_1{f}_2{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2\}$$ $$={1\over{2}}{{N^2}\over{V^2_0}}\int({U}_1{{\vec{p}_1}\over{m}}\cdot\nabla_{\vec{r}_1}{f}_2+{f}_2{{\vec{p}_1}\over{m}}\cdot\nabla_{\vec{r}_1}{U}_1){d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ By considering the boundary conditions, we can get $${{N^2}\over{2mV^2_0}}\int{U}_1\vec{p}_2\cdot\nabla_{\vec{r}_2}{f}_2{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2=-{{N^2}\over{2mV^2_0}}\int{f}_2\vec{p}_2\cdot\nabla_{\vec{r}_2}U_1{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ Therefore, we have $${{N^2}\over{2mV^2_0}}\int{U}_1(\vec{p}_1\cdot\nabla_{\vec{r}_1}+\vec{p}_2\cdot\nabla_{\vec{r}_2}){f}_2{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ $$=\nabla_{\vec{r}_1}\cdot(\rho_0{u}_s\vec{V}+\vec{J}_{s1})-{1\over{2}}{{N^2}\over{V^2_0}}\int{f}_2{{\vec{p}_1-\vec{p}_2}\over{m}}\cdot\nabla_{\vec{r}_1}U_1{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ For Eq.(88), because $${{\partial}\over{\partial{x}_1}}{{(x_1-x_2)(p_{2x}-p_{1x})}\over{r^2_{12}}}={{p_{2x}-p_{1x}}\over{r^2_{12}}}-{{(x_1-x_2)^2(p_{2x}-p_{1x})}\over{r^4_{12}}}$$ by considering symmetry we let $$\nabla_{\vec{r}_1}{{\vec{r}_{12}\cdot(\vec{p}_2-\vec{p}_1)}\over{r^2_{12}}}\simeq{{\vec{p}_2-\vec{p}_1}\over{r^2_{12}}}-{{\vec{p}_2-\vec{p}_1}\over{3r^2_{12}}}=-{{2(\vec{p}_1-\vec{p}_2)}\over{3r^2_{12}}}$$ The second term on the left side of Eq.(94) can then be written as $$-{{q_1{q}_2{N}^2}\over{4cmV^2_0}}\int{f}_2{{(\vec{p}_1-\vec{p}_2)}\over{m}}\nabla_{\vec{r}_1}{{\vec{r}_{12}\cdot(\vec{p}_2-\vec{p}_1)}\over{r^2_{12}}}d^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ $$={{q_1{q}_2{N}^2}\over{6cm^2V^2_0}}\int{f}_2{{(\vec{p}_1-\vec{p}_2)^2}\over{r^2_{12}}}{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2=-\rho_0\sigma_s$$ $\sigma_{s}$ can be called the production of dissipative energy.
Because $$\nabla_{\vec{p}_1}{U}_1={{q_1{q}_2}\over{2cm}}\nabla_{\vec{p}_1}{{\vec{r}_{12}\cdot(\vec{p}_2-\vec{p}_1)}\over{r^2_{12}}}=-{{q_1{q}_2}\over{2cm}}{{\vec{r}_{12}}\over{r^2_{12}}}~~~~~~~~~~~~~\nabla_{\vec{p}_2}U_1={{q_1{q}_2}\over{2cm}}{{\vec{r}_{12}}\over{r^2_{12}}}$$ the third term on the left side of Eq.(88) can be written as $$-{{N^2}\over{2V^2_0}}\int{f}_2(\vec{F}_{e1}\cdot\nabla_{\vec{p}_1}+\vec{F}_{e2}\cdot\nabla_{\vec{p}_2})U_1{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2={{q_1{q}_2{N}^2}\over{4cmV^2_0}}\int{f}_2{{\vec{r}_{12}\cdot(\vec{F}_{e1}-\vec{F}_{e2})}\over{r^2_{12}}}{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ $$=-{{N^2}\over{4cmV^2_0}}\int{f}_2\nabla_{\vec{r}_1}{U}(r_{12})\cdot{r}_{12}(\vec{F}_{e1}-\vec{F}_{e2}){d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2=\nabla\cdot\vec{J}_{s2}$$ It can be proved that Eq.(67) still holds when $f$ is a vector, so, letting $\vec{f}=f(\vec{F}_{e1}-\vec{F}_{e2})$, we have $$\vec{J}_{s2}={{N^2}\over{8cmV^2_0}}\int^{1}_{0}{d}\lambda\int{{\vec{r}''_{12}\vec{r}''_{12}\cdot{r}''_{12}[\vec{F}_{e1}(\vec{r}_1+(1-\lambda)\vec{r}''_{12},t)-\vec{F}_{e2}(\vec{r}_1-\lambda\vec{r}''_{12},t)]}\over{r''_{12}}}{{dU(r''_{12})}\over{dr''_{12}}}$$ $$\times{f}_2(\vec{r}_1+(1-\lambda)\vec{r}''_{12},\vec{p}_2,\vec{r}_1-\lambda\vec{r}''_{12},\vec{p}_2,t)d^3r''_{12}{d}^3\vec{p}_1{d}^3\vec{p}_2$$ $$={{N^2}\over{8cmV^2_0}}\int^1_0{d}\lambda\int\vec{r}''_{12}\vec{r}''_{12}\cdot[\vec{F}_{e1}(\vec{r}_1+(1-\lambda)\vec{r}''_{12},t)-\vec{F}_{e2}(\vec{r}_1-\lambda\vec{r}''_{12},t)]{{dU(r''_{12})}\over{dr''_{12}}}$$ $$\times{f}_2(\vec{r}_1+(1-\lambda)\vec{r}''_{12},\vec{p}_2,\vec{r}_1-\lambda\vec{r}'_{12},\vec{p}_2,t)d^3r''_{12}{d}^3\vec{p}_1{d}^3\vec{p}_2$$ By using Eqs.(98) and (67) as well as the relation $\nabla_{\vec{r}_1}{U}(r_{12})=-\nabla_{\vec{r}_2}{U}(r_{12})$, the fourth term on the left side of Eq.(87) is $$-{{N^2}\over{2V^2_0}}\int{U}_1[\nabla_{\vec{r}_1}{U}(r_{12})\cdot\nabla_{\vec{p}_1}+\nabla_{\vec{r}_2}U(r_{12})\cdot\nabla_{\vec{p}_2}]{f}_2{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ $$=-{{N^2{q}_1{q}_2}\over{cmV^2_0}}\int{f}_2{{\vec{r}_{12}}\over{r^2_{12}}}\cdot\nabla_{\vec{r}_1}{U}(r_{12}){d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2=\nabla\cdot\vec{J}_{s3}$$ The transformation on the right side of Eq.(67) means that $\vec{r}_1\rightarrow\vec{r}_1+(1-\lambda)\vec{r}''_{12}$, $\vec{r}_2\rightarrow\vec{r}_1-\lambda\vec{r}''_{12}$, so we have $\vec{r}_{12}=\vec{r}_1-\vec{r}_2\rightarrow\vec{r}''_{12}$ and get $$\vec{J}_{s3}={{q_1{q}_2{N}^2}\over{2cmV^2_0}}\int^1_0{d}\lambda\int{{\vec{r}''_{12}\cdot\vec{r}''_{12}\vec{r}''_{12}}\over{r''^2_{12}{r}''_{12}}}{{dU(r''_{12})}\over{dr''_{12}}}{f}_2[\vec{r}_1+(1-\lambda)\vec{r}''_{12},\vec{p}_1,\vec{r}_1-\lambda\vec{r}''_{12},\vec{p}_2,t]d^3\vec{r}''_{12}{d}^3\vec{p}_1{d}^3\vec{p}_2$$ $$={{q_1{q}_2{N}^2}\over{2cmV^2_0}}\int^1_0{d}\lambda\int{{\vec{r}''_{12}}\over{r''_{12}}}{{dU(r''_{12})}\over{dr''_{12}}}{f}_2[\vec{r}_1+(1-\lambda)\vec{r}''_{12},\vec{p}_1,\vec{r}_1-\lambda\vec{r}''_{12},\vec{p}_2,t]d^3\vec{r}''_{12}{d}^3\vec{p}_1{d}^3\vec{p}_2$$ By using Eq.(98) and the relation $\nabla_{\vec{r}_1}{U}(r_{12})=-\nabla_{\vec{r}_2}{U}(r_{12})$ again, the fifth term on the left side of Eq.(87) becomes $$-{{N^3}\over{2V^2_0}}\int{U}_1[\nabla_{\vec{r}_1}{U}(r_{13})\cdot\nabla_{\vec{p}_1}+\nabla_{\vec{r}_2}{U}(r_{23})\cdot\nabla_{\vec{p}_2}]{f}_3{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2{d}^3\vec{r}_3{d}^3\vec{p}_3$$
$$={{q_1{q}_2{N}^3}\over{4cmV^2_0}}\int{f}_3{{\vec{r}_{12}}\over{r^2_{12}}}\cdot[\nabla_{\vec{r}_1}{U}(r_{13})-\nabla_{\vec{r}_2}{U}(r_{23})]{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2{d}^3\vec{r}_3{d}^3\vec{p}_3=\nabla\cdot(\vec{J}_{s4}+\vec{J}_{s5})$$ $$\vec{J}_{s4}=-{{q_1{q}_2{N}^2}\over{8cmV^2_0}}\int^1_0{d}\lambda\int{{\vec{r}''_{13}\vec{r}''_{13}\cdot(\vec{r}_1+(1-\lambda)\vec{r}''_{13}-\vec{r}_2)}\over{r''^2_{13}(\vec{r}_1+(1-\lambda)\vec{r}''_{13}-\vec{r}_2)^2}}{{dU(r''_{13})}\over{dr''_{13}}}$$ $$\times{f}_3[\vec{r}_1+(1-\lambda)\vec{r}''_{13},\vec{p}_1,\vec{r}_2,\vec{p}_2,\vec{r}_1-\lambda\vec{r}''_{13},\vec{p}_3,t]d^3\vec{r}''_{13}{d}^3\vec{p}_1{d}^3\vec{p}_2{d}^3\vec{r}_3{d}^3\vec{p}_3$$ $$\vec{J}_{s5}={{q_1{q}_2{N}^2}\over{8cmV^2_0}}\int^1_0{d}\lambda\int{{\vec{r}''_{23}\vec{r}''_{23}\cdot[\vec{r}_1-\vec{r}_2-(1-\lambda)\vec{r}''_{23}]}\over{r''^2_{23}[\vec{r}_1-\vec{r}_2-(1-\lambda)\vec{r}''_{23}]^2}}{{dU(r''_{23})}\over{dr''_{23}}}$$ $$\times{f}_3[\vec{r}_1,\vec{p}_1,\vec{r}_2+(1-\lambda)\vec{r}''_{23},\vec{p}_2,\vec{r}_2-\lambda\vec{r}''_{23},\vec{p}_3,t]d^3\vec{r}''_{23}{d}^3\vec{p}_1{d}^3\vec{p}_2{d}^3\vec{r}_3{d}^3\vec{p}_3$$ Similarly to Eq.(99), the first term on the right side of Eq.(87) is $$-{{N^2}\over{2V^2_0}}\int{U}_1[\nabla_{\vec{p}_1}\cdot(\vec{G}'_{12}{f}_2)+\nabla_{\vec{p}_2}\cdot(\vec{G}'_{21}{f}_2)]{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ $$=-{{N^2{q}_1{q}_2}\over{4cmV^2_0}}\int{f}_2{{\vec{r}_{12}\cdot{r}_{12}(\vec{G}'_{12}-\vec{G}'_{21})}\over{r^3_{12}}}{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2=\nabla\cdot\vec{J}_{s6}$$ Similarly to Eq.(98), we have $$\vec{J}_{s6}=-{{N^2}\over{4cmV^2_0}}\int^1_0{d}\lambda\int\vec{r}''_{12}\vec{r}''_{12}\cdot[\vec{G}'_{12}(\vec{r}_1+(1-\lambda)\vec{r}''_{12},t)-\vec{G}'_{21}(\vec{r}_1-\lambda\vec{r}''_{12},t)]{{dU(r''_{12})}\over{dr''_{12}}}$$ $$\times{r}''_{12}{f}_2(\vec{r}_1+(1-\lambda)\vec{r}''_{12},\vec{p}_2,\vec{r}_1-\lambda\vec{r}''_{12},\vec{p}_2,t)d^3{r}''_{12}{d}^3\vec{p}_1{d}^3\vec{p}_2$$ The second term on the right side of Eq.(87) is $${{N^2}\over{2V^2_0}}\int{U}_1(\vec{K}_{12}\cdot\nabla_{\vec{p}_1}+\vec{K}_{21}\cdot\nabla_{\vec{p}_2}){f}_2{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ $$={{N^2{q}_1{q}_2}\over{4cmV^2_0}}\int{f}_2{{\vec{r}_{12}\cdot{r}_{12}(\vec{K}_{12}-\vec{K}_{21})}\over{r^3_{12}}}{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2=-\nabla\cdot\vec{J}_{s7}$$ $$\vec{J}_{s7}=-{{N^2}\over{4cmV^2_0}}\int^1_0{d}\lambda\int\vec{r}''_{12}\vec{r}''_{12}\cdot(\vec{K}_{12}-\vec{K}_{21}){{dU(r''_{12})}\over{dr''_{12}}}$$ $$\times{f}_2(\vec{r}_1+(1-\lambda)\vec{r}''_{12},\vec{p}_2,\vec{r}_1-\lambda\vec{r}''_{12},\vec{p}_2,t)d^3\vec{r}''{d}^3\vec{p}_1{d}^3\vec{p}_2$$ In this formula, we should let $\vec{r}_1\rightarrow\vec{r}_1+(1-\lambda)\vec{r}''_{12}$, $\vec{r}_2\rightarrow\vec{r}_1-\lambda\vec{r}''_{12}$ in the functions $\vec{K}_{12}$ and $\vec{K}_{21}$.
Similarly, the third term is $$-{{N^2}\over{2V^2_0}}\int{U}_1[(\vec{K}_{13}+\vec{F}'_{13})\cdot\nabla_{\vec{p}_1}+(\vec{K}_{23}+\vec{F}'_{23})\cdot\nabla_{\vec{p}_2}]{f}_3{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2{d}^3\vec{r}_3{d}^3\vec{p}_3$$ $$=-{{q_1{q}_2{N}^2}\over{4cmV^2_0}}\int{f}_2{{\vec{r}_{12}\cdot{r}_{12}(\vec{K}_{13}+\vec{F}'_{13}-\vec{K}_{23}-\vec{F}'_{23})}\over{r^3_{12}}}{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2{d}^3\vec{r}_3{d}^3\vec{p}_3=\nabla\cdot\vec{J}_{s8}$$ $$\vec{J}_{s8}={{N^2}\over{8cmV^2_0}}\int^1_0{d}\lambda\vec{r}''_{12}\vec{r}''_{12}\cdot(\vec{K}_{13}+\vec{F}'_{13}-\vec{K}_{23}-\vec{F}'_{23}){{dU(r''_{12})}\over{dr''_{12}}}$$ $$\times{f}_2[\vec{r}_1+(1-\lambda)\vec{r}''_{12},\vec{p}_2,\vec{r}_1-\lambda\vec{r}''_{12},\vec{p}_2,t]d^3\vec{r}''_{12}{d}^3\vec{p}''_1{d}^3\vec{p}_2$$ We should also let $\vec{r}_1\rightarrow\vec{r}_1+(1-\lambda)\vec{r}''_{12}$, $\vec{r}_2\rightarrow\vec{r}_1-\lambda\vec{r}''_{12}$ in $\vec{K}_{13}$, $\vec{K}_{23}$, $\vec{F}'_{13}$ and $\vec{F}'_{23}$. For the last term on the right side of Eq.(87), we have $$-{{N^3}\over{2V^2_0}}\int{U}_1[\nabla_{\vec{p}_1}\cdot(\vec{G}'_{13}{f}_3)+\nabla_{\vec{p}_2}\cdot(\vec{G}'_{23}{f}_3)]{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2{d^3}\vec{r}_3{d}^3\vec{p}_3$$ $$=-{{q_1{q}_2{N}^3}\over{4cmV^2_0}}\int{f}_2{{\vec{r}_{12}\cdot{r}_{12}(\vec{G}'_{13}+\vec{G}'_{23})}\over{r^3_{12}}}{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2{d}^3\vec{r}_3{d}^3\vec{p}_3=\nabla_{\vec{r}_1}\cdot\vec{J}_{s9}$$ $$\vec{J}_{s9}=-{{N^2}\over{4cmV^2_0}}\int^1_0{d}\lambda\int\vec{r}''_{12}\vec{r}''_{12}\cdot(\vec{G}'_{13}+\vec{G}'_{23}){{dU(r''_{12})}\over{dr''_{12}}}$$ $$\times{f}_2[\vec{r}_1+(1-\lambda)\vec{r}''_{12},\vec{p}_2,\vec{r}_1-\lambda\vec{r}''_{12},\vec{p}_2,t]d^3\vec{r}''_{12}{d}^3\vec{p}_1{d}^3\vec{p}_2$$ Letting $\vec{J}_s=\vec{J}_{s1}+\vec{J}_{s2}+\vec{J}_{s3}+\vec{J}_{s4}+\vec{J}_{s5}+\vec{J}_{s6}+\vec{J}_{s7}+\vec{J}_{s8}+\vec{J}_{s9}$, Eq.(87) can finally be written as $${{\partial(\rho_0{u}_s)}\over{\partial{t}}}+\nabla\cdot(\rho_0{u}_s\vec{V}+\vec{J}_s)=\rho_0\sigma_s$$
Finally, let us discuss the interaction caused by the radiation damping forces. Supposing that the charges of the particles are distributed with spherical symmetry, we have $$U''(\vec{r}_1,\vec{p}_1,t)={{2q^2\dot{\vec{a}_1}\cdot\vec{p}_1}\over{3c^2m}}$$ Similarly, multiplying Eq.(50) by $NU''(\vec{r}_1,\vec{p}_1,t)/2V_0$ and integrating over $d^3\vec{p}_1$, we get $${{\partial}\over{\partial{t}}}{{N^2}\over{2V^2_0}}\int{U}''{f}_1{d}^3\vec{p}_1+{{N^2}\over{2V^2_0}}\int{U}''{{\vec{p}_1}\over{m}}\cdot\nabla_{\vec{r}_1}{f}_1{d}^3\vec{p}_1+{{N^2}\over{2V^2_0}}\int{U}''\vec{F}_{e1}\cdot\nabla_{\vec{p}_1}{f}_1{d}^3\vec{p}_1$$ $$-{{N^3}\over{2V^2_0}}\int{U}''\nabla_{\vec{r}_1}{U}(r_{12})\cdot\nabla_{\vec{p}_1}{f}_2{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2=-{{N^2}\over{2V^2_0}}\int{U}''\nabla_{\vec{p}_1}\cdot(\vec{G}'_{12}{f}_2){d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ $$-{{N^3}\over{2V^2_0}}\int{U}''(\vec{k}_{12}+\vec{F}'_{12})\cdot\nabla_{\vec{p}_1}{f}_2{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ Let the energy density and the energy flux density of the radiation damping force per unit mass be $$u_f={{N^2}\over{2\rho_0{V}^2_0}}\int{U}''{f}_1{d}^3\vec{p}_1$$ $$\vec{J}_{f1}={1\over{2}}{{N^2}\over{V^2_0}}\int{{\vec{p}_1-m\vec{V}}\over{m}}{U}''{f}_1{d}^3\vec{p}_1$$ We then get $$\rho_0{u}_f\vec{V}={1\over{2}}{{N^2}\over{V^2_0}}\int\vec{V}{U}''{f}_2{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ $$\nabla_{\vec{r}_1}\cdot(\rho_0{u}_f\vec{V}+\vec{J}_{f1})=\nabla_{\vec{r}_1}\cdot\{{1\over{2}}({N\over{V_0}})^2\int{{\vec{p}_1}\over{m}}{U}''{f}_1{d}^3\vec{p}_1\}$$ $$={1\over{2}}{{N^2}\over{V^2_0}}\int(U''{{\vec{p}_1}\over{m}}\cdot\nabla_{\vec{r}_1}{f}_1+f_1{{\vec{p}_1}\over{m}}\cdot\nabla_{\vec{r}_1}{U}'')d^3\vec{p}_1$$ Therefore, the second term on the left side of Eq.(116) can be written as $${{N^2}\over{2mV^2_0}}\int{U}''\vec{p}_1\cdot\nabla_{\vec{r}_1}{f}_1{d}^3\vec{p}_1=\nabla_{\vec{r}_1}\cdot(\rho_0{u}_f\vec{V}+\vec{J}_{f1})-{1\over{2}}{{N^2}\over{2mV^2_0}}\int{f}_1\vec{p}_1\cdot\nabla_{\vec{r}_1}{U}''{d}^3\vec{p}_1$$ Here $$\nabla_{\vec{r}_1}{U}''(\vec{r}_1,\vec{p}_1,t)={{2q^2}\over{3c^2m}}\nabla_{\vec{r}_1}(\dot{\vec{a}_1}\cdot\vec{p}_1)={{2q^2}\over{3c^2m}}[\vec{p}_1\times(\nabla_{\vec{r}_1}\times\dot{\vec{a}})+(\vec{p}_1\cdot\nabla_{\vec{r}_1})\dot{\vec{a}_1}]$$ Let $${{q^2{N}^2}\over{3c^2m^2V^2_0}}\int{f}_1\vec{p}_1\cdot[\vec{p}_1\times(\nabla_{\vec{r}_1}\times\dot{\vec{a}})+(\vec{p}_1\cdot\nabla_{\vec{r}_1})\dot{\vec{a}_1}]d^3\vec{p}_1=\rho_0\sigma_{f1}$$ By using the boundary condition, the third term on the left side of Eq.(116) can be written as $${{N^2}\over{2V^2_0}}\int{f}_1\vec{F}_{e1}\cdot\nabla_{\vec{p}_1}{U}''{d}^3\vec{p}_1={{q^2{N}^2}\over{3c^2{m}^2{V}^2_0}}\int{f}_1\vec{F}_{e1}\cdot[\vec{p}_1\times(\nabla_{\vec{p}_1}\times\dot{\vec{a}})+(\vec{p}_1\cdot\nabla_{\vec{p}_1})\dot{\vec{a}}_1]d^3\vec{p}_1=\rho_0\sigma_{f2}$$ The fourth term on the left side of Eq.(116) can also be written as $$\nabla\cdot\vec{J}_{f2}={{q^2N^2}\over{3c^2m^2V^2_0}}\int{f}_2\nabla_{\vec{r}_1}{U}(r_{12})\cdot\nabla_{\vec{p}_1}{U}''{d}\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ $$={{q^2{N}^2}\over{3c^2m^2V^2_0}}\int{f}_2\nabla_{\vec{r}_1}{U}(r_{12})\cdot[\vec{p}_1\times(\nabla_{\vec{p}_1}\times\dot{\vec{a}})+(\vec{p}_1\cdot\nabla_{\vec{p}_1})\dot{\vec{a}}_1]d^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$
$$\vec{J}_{f2}=-{{q^2{N}^2}\over{6c^2mV^2_0}}\int^1_0{d}\lambda\int{d}^3{r}''_{12}{{\vec{r}''_{12}\vec{r}''_{12}\cdot[\vec{p}_1\times(\nabla_{\vec{p}_1}\times\dot{\vec{a}})+(\vec{p}_1\cdot\nabla_{\vec{p}_1})\dot{\vec{a}_1}]}\over{r''_{12}}}{{dU(r''_{12})}\over{dr''_{12}}}$$ $$\times{f}_2[\vec{r}_1+(1-\lambda)\vec{r}''_{12},\vec{p}_1,\vec{r}_1-\lambda\vec{r}''_{12},\vec{p}_2,t]d^3\vec{r}''_{12}{d}^3\vec{p}_1{d}^3\vec{p}_2$$ Similarly, the first and second terms on the right side of Eq.(116) can be written as $$\rho_0\sigma_{f3}={{N^2}\over{2V^2_0}}\int{f}_2\vec{G}_{12}\cdot\nabla_{\vec{p}_1}{U}''{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ $$={{N^2}\over{3c^2m^2V^2_0}}\int{f}_2\vec{G}_{12}\cdot[\vec{p}_1\times(\nabla_{\vec{p}_1}\times\dot{\vec{a}})+(\vec{p}_1\cdot\nabla_{\vec{p}_1})\dot{\vec{a}}_1]d^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ $$\rho_0\sigma_{f4}={{N^3}\over{2V^2_0}}\int{f}_2(\vec{K}_{12}+\vec{F}'_{12})\cdot\nabla_{\vec{p}_1}{U}''{d}^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ $$={{q^2N^3}\over{3c^2mV^2_0}}\int{f}_2(\vec{K}_{12}+\vec{F}'_{12})\cdot[\vec{p}_1\times(\nabla_{\vec{p}_1}\times\dot{\vec{a}})+(\vec{p}_1\cdot\nabla_{\vec{p}_1})\dot{\vec{a}}_1]d^3\vec{p}_1{d}^3\vec{r}_2{d}^3\vec{p}_2$$ Letting $\vec{J}_f=\vec{J}_{f1}+\vec{J}_{f2}$ and $\sigma_f=\sigma_{f1}+\sigma_{f2}+\sigma_{f3}+\sigma_{f4}$, the energy balance equation of the radiation damping forces is $${{\partial(\rho_0{u}_f)}\over{\partial{t}}}+\nabla\cdot(\rho_0{u}_f\vec{V}+\vec{J}_f)=\rho_0\sigma_f$$ Finally, the energy transport equation of hydromechanics can be written as $${{\partial(\rho_0{u})}\over{\partial{t}}}+\nabla\cdot(\rho_0{u}\vec{V}+\vec{J})=-\vec{\vec{P}}:\nabla\vec{V}+\rho_0\sigma$$ In this formula, $u=u_k+u_v+u_s+u_f$, $\vec{J}=\vec{J}_k+\vec{J}_v+\vec{J}_s+\vec{J}_f$, $\vec{\vec{P}}=\vec{\vec{P}}_k+\vec{\vec{P}}_v+\vec{\vec{P}}_s+\vec{\vec{P}}_f$, and $\sigma=\sigma_k+\sigma_s+\sigma_f$. How to obtain the transport coefficients from Eq.(130) remains to be studied.\
\
[7. The definition of non-equilibrium entropy for general systems ]{}\
As is well known, although the thermodynamics of equilibrium states has been mature since the end of the nineteenth century, a general theory of non-equilibrium systems has not yet been established. The key to establishing a general thermodynamic theory for non-equilibrium systems is to define the correct non-equilibrium entropy, and this is still an unsolved problem. Although some special theories for particular non-equilibrium processes have defined their own non-equilibrium entropies (for example, the irreversible thermodynamics based on the hypothesis of local equilibrium $^{(6)}$, the extended thermodynamics $^{(7)}$, the rational thermodynamics $^{(8)}$, as well as the famous Boltzmann non-equilibrium entropy), none of them has general significance.
In equilibrium thermodynamics, the first law of thermodynamics, or the law of energy conservation, is $$dQ=dE+PdV=dE-\vec{F}\cdot{d}\vec{r}$$ In this formula, $Q$ is the total heat, $E$ is the total internal energy, $P$ is the pressure, $V$ is the volume and $\vec{F}$ is the external force. For a single-component system, the entropy function $S$ is defined by $$TdS=dE+PdV=dE-\vec{F}\cdot{d}\vec{r}=dQ$$ Because the concept of absolute temperature is based on equilibrium states, Eq.(132) is suitable only for equilibrium states. According to the irreversible thermodynamics based on the hypothesis of local equilibrium, a system can be divided into many small cells, and each cell can be regarded as a small equilibrium system in which a local equilibrium temperature $T(\vec{r},t)$ can still be defined. Let $S_m(\vec{r},t)$, $E_m(\vec{r},t)$, $Q_m(\vec{r},t)$ and $\vec{F}_m(\vec{r},t)$ represent the entropy density, internal energy density, heat density and external force density respectively; the total entropy $S$, total internal energy $E$, total heat $Q$ and total external force $\vec{F}$ of a system can then be written as $$S(t)=\int{S}_m(\vec{r},t){d}^3\vec{r}~~~~~~~~~~~~~~~~~~~E(t)=\int{E}_m(\vec{r},t){d}^3\vec{r}$$ $$Q(t)=\int{Q}_m(\vec{r},t){d}^3\vec{r}~~~~~~~~~~~~~~~~~~~\vec{F}(t)=\int\vec{F}_m(\vec{r},t){d}^3\vec{r}$$ The first law of thermodynamics and the definition of entropy can be written as $$dQ_m=dE_m-\vec{F}_m\cdot{d}\vec{r}$$ $$TdS_m=dQ_m=dE_m-\vec{F}_m\cdot{d}\vec{r}$$ As we know, if the interactions between micro-particles are known, the ensemble probability function can be obtained; if the ensemble probability function is known, the heat density or internal energy density is also known. In this case, the local equilibrium entropy depends on the local equilibrium temperature. In the current theory of local equilibrium thermodynamics, the functional form of the local equilibrium temperature $T(\vec{r},t)$ cannot be determined theoretically, but in principle we can determine it point by point by measurement with a small enough thermometer. Then we can determine the functional form of the non-equilibrium entropy according to Eq.(136).
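As a small numerical illustration of the local-equilibrium definition Eq.(136), the sketch below (our own toy example, with an assumed constant heat capacity $C$) accumulates $dS_m=dQ_m/T$ along a heating path from $T_0$ to $T_1$ and compares the result with the exact value $C\ln(T_1/T_0)$.

```python
import numpy as np

# Local-equilibrium toy example: a cell with constant heat capacity C is
# heated from T0 to T1; Eq.(136) gives dS_m = dQ_m / T.
C, T0, T1 = 2.0, 300.0, 360.0

T = np.linspace(T0, T1, 100_001)
dQ = C * np.diff(T)                           # dQ_m = C dT at constant volume
dS = dQ / T[:-1]                              # dS_m = dQ_m / T

print(dS.sum(), C * np.log(T1 / T0))          # numerical integral vs. the exact C ln(T1/T0)
```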
Because the concepts of temperature and entropy are defined on the basis of equilibrium, when the hypothesis of local equilibrium does not hold there is at present debate about whether non-equilibrium temperature and non-equilibrium entropy can exist at all, even though the concepts of energy, heat and force remain meaningful. It should be recognized that the concept of entropy was put forward only for the purpose of describing the irreversibility of a macro-system's evolution, so that the second law of thermodynamics can be expressed in a clear mathematical form. So it is necessary for us to define the non-equilibrium entropy from this angle. The problem is how to define it, not whether it exists. The definition of a non-equilibrium entropy should satisfy two conditions. The first is that when a system reaches an equilibrium state, the non-equilibrium entropy should coincide with the equilibrium entropy defined in equilibrium thermodynamics. The second is that the definition should satisfy the principle of entropy increase in non-equilibrium processes. Conversely, if a function satisfies these two conditions, it can be regarded as a non-equilibrium entropy.
On the other hand, entropy is not a physical quantity that can be measured directly. The concept of entropy depends on other quantities, by means of which entropy is constructed. If the concepts used to construct entropy are meaningful, entropy is also meaningful. Entropy is also an extensive quantity with additivity. So we can define a non-equilibrium entropy density, just as we can define energy density, heat density and force density without invoking the concept of equilibrium.
As for non-equilibrium temperature, the situation is different. Temperature is a quantity that can be measured directly, and the concept of temperature depends closely on equilibrium. If equilibrium does not exist, there is no concept of temperature, for we cannot use a thermometer to measure temperature in a non-equilibrium system in which the hypothesis of local equilibrium loses its meaning. Meanwhile, temperature is an intensive quantity: we cannot define a temperature density, and the concept of temperature always reflects the nature of a large enough system as a whole. So we can say that the concept of non-equilibrium temperature is meaningless when the hypothesis of local equilibrium does not hold.
In this way, we do not use the concept of non-equilibrium temperature in general non-equilibrium systems. In order to define the non-equilibrium entropy, in analogy with the local equilibrium entropy and by the simplest method, we introduce an unknown function $R(x,t)$ and define the non-equilibrium entropy of a single component system by the relation $$RdS_m=dQ_m=dE_m-\vec{F}_m\cdot{d}\vec{r}$$ Because $R$ is not a temperature, it cannot be determined by point-by-point measurement in the system. It is proved below that we can determine the form of the function $R$ by the methods of statistical mechanics, and prove the principle of entropy increment for general non-equilibrium systems by connecting statistical mechanics and thermodynamics. When a system reaches an equilibrium state, we have $R=T=$ constant. So the function $S_m$ defined in Eq.(137) satisfies the two conditions mentioned above and can be used as the definition of the non-equilibrium entropy of general systems. It is also proved at the end that the forms of the non-equilibrium entropy and of the function $R$ are not unique; this is another reason why $R$ cannot be regarded as a non-equilibrium temperature. Therefore, according to Eq.(137), we have $$S_{m1}(\vec{r},t_1)-S_{m0}(\vec{r},t_0)=\int^{C_1}_{C_0}{{dQ_m(\vec{r},t)}\over{R(\vec{r},t)}}=\int^{C_1}_{C_0}{{1}\over{R(\vec{r},t)}}{{dQ_m(\vec{r},t)}\over{dt}}{d}t=\int^{t_1}_{t_0}{W}(\vec{r},t)dt$$ Here $W(\vec{r},t)=\dot{Q}_m(\vec{r},t)/{R}(\vec{r},t)$ and $\dot{Q}_m(\vec{r},t)=\partial{Q}_m{/}\partial{t}+\nabla{Q}_m\cdot{d}\vec{r}{/}{d}t$. This means that the system evolves from the state $C_0$ at moment $t_0$ and reaches the state $C_1$ at moment $t_1$. By using Eq.(133), the increase of the non-equilibrium entropy is $$S_1(t_1)-S_0(t_0)=\int\int{{dQ_m(\vec{r},t)}\over{R(\vec{r},t)}}{d}^3\vec{r}=\int[S_{m1}(\vec{r},t_1)-S_{m0}(\vec{r},t_0)]d^3\vec{r}$$ Now let us discuss how to determine the form of the function $R$. Let $f_1(x_1,p_1,t)=f(x,p,t)$ satisfy Eq.(50). In the $6N$-dimensional phase space, the non-equilibrium statistical entropy density is written as $S_N(\vec{r}_1\cdot\cdot\cdot\vec{r}_N,\vec{p}_1\cdot\cdot\cdot\vec{p}_N,t)$. In the 6-dimensional phase space, the non-equilibrium statistical entropy density is written as $S_1(x_1,p_1,t)=S_p(x,p,t)$. We define their relation as $$S_p(\vec{r},\vec{p},t)=f^{-1}\int{f}_N(\vec{r}_1\cdot\cdot\cdot\vec{r}_N,\vec{p}_1\cdot\cdot\cdot\vec{p}_N,t)S_N(\vec{r}_1\cdot\cdot\cdot\vec{r}_N,\vec{p}_1\cdot\cdot\cdot\vec{p}_N,t)d\Omega_2\cdot\cdot\cdot\Omega_N$$ The total entropy is $$S(t)=\int{f}_N(\vec{r}_1\cdot\cdot\cdot\vec{r}_N,\vec{p}_1\cdot\cdot\cdot\vec{p}_N,t)S_N(\vec{r}_1\cdot\cdot\cdot\vec{r}_N,\vec{p}_1\cdot\cdot\cdot\vec{p}_N,t)d\Omega_1\cdot\cdot\cdot\Omega_N$$ $$=\int{f}(\vec{r},\vec{p},t)S_p(\vec{r},\vec{p},t)d^3\vec{r}{d}^3\vec{p}$$ Comparing Eq.(144) with (133), the relation between the non-equilibrium thermodynamic entropy density $S_m$ and the statistical entropy density $S_p$ is $$S_m(\vec{r},t)=\int{f}(\vec{r},\vec{p},t)S_p(\vec{r},\vec{p},t)d^3\vec{p}$$ For non-equilibrium states, let $dS=dS_i+dS_e$. 
$dS_i$ is the entropy generated inside the system and $dS_e$ is the entropy flowing from the outside into the system, with the relations $^{(6)}$ $${{dS_i}\over{dt}}=\int\sigma_m{d}^3\vec{r}~~~~~~~~~~~~~{{dS_e}\over{dt}}=-\int\vec{j}_m\cdot{d}\vec{\Sigma}$$ The balance equation for the entropy is $${{\partial{S}_m}\over{\partial{t}}}+\nabla\cdot\vec{j}_m=\sigma_m$$ Similarly to the local equilibrium theory, for a single component system the entropy flux and the entropy generation can also be written as $$\vec{j}_m=S_m\vec{V}+{1\over{R}}\vec{J}~~~~~~~~~~~~~~\sigma_m=\vec{J}\cdot\nabla{1\over{R}}-{1\over{R}}\vec{\vec{\Pi}}:\nabla\vec{V}$$ In this formula, $\vec{J}=\vec{J}_k+\vec{J}_v+\vec{J}_s+\vec{J}_f$ is the heat flux discussed above and $\vec{\vec{\Pi}}$ is the viscosity stress tensor. By using Eq.(50) and (141), we get $${{dS(t)}\over{dt}}=\int({{\partial{f}}\over{\partial{t}}}{S}_p+f{{\partial{S}_p}\over{\partial{t}}})d^3\vec{r}{d}^3\vec{p}=\int(-S_p{{\vec{p}}\over{m}}\cdot\nabla_{\vec{r}}{f}+S_p{K}+f{{\partial{S}_p}\over{\partial{t}}})d^3\vec{r}{d}^3\vec{p}$$ $$=\int\{\int[-\nabla_{\vec{r}}\cdot(S_p{{\vec{p}}\over{m}}{f})+f{{\vec{p}}\over{m}}\cdot\nabla_{\vec{r}}{S}_p+S_p{K}+f{{\partial{S}_p}\over{\partial{t}}}]d^3\vec{p}\}d^3\vec{r}$$ In the formula $$K=-\vec{F}_{e1}\cdot\nabla_{\vec{p}_1}{f}_1-N\int(\vec{K}_{12}+\vec{F}'_{012}+\vec{F}'_{12})\cdot\nabla_{\vec{p}_1}{f}_s{d}^3\vec{r}_2{d}^3\vec{p}_s-N\int\nabla_{\vec{p}_1}\cdot(\vec{G}_{12}{f}_2){d}^3\vec{r}_2{d}^3\vec{p}_2$$ Eq.(146) can be written as $${{\partial{S}_m}\over{\partial{t}}}=\int[-\nabla_{\vec{r}}\cdot(S_p{{\vec{p}}\over{m}}{f})+f{{\vec{p}}\over{m}}\cdot\nabla_{\vec{r}}{S}_p+S_p{K}+f{{\partial{S}_p}\over{\partial{t}}}]d^3\vec{p}$$ Comparing it with Eq.(144), we have $$\vec{j}_m=\int{f}{{\vec{p}}\over{m}}S_p{d}^3\vec{p}~~~~~~~~~~~~~\sigma_m=\int(f{{\vec{p}}\over{m}}\cdot\nabla_{\vec{r}}{S}_p+KS_p+f{{\partial{S_p}}\over{\partial{t}}})d^3\vec{p}$$ By using Eq.(145), we get: $$S_m\vec{V}+{1\over{R}}\vec{J}=\int{f}{{\vec{p}}\over{m}}S_p{d}^3\vec{p}$$ $$\vec{J}\cdot\nabla{1\over{R}}-{1\over{R}}\vec{\vec{\Pi}}:\nabla\vec{V}=\int(f{{\vec{p}}\over{m}}\cdot\nabla_{\vec{r}}{S}_p+KS_p+f{{\partial{S}_p}\over{\partial{t}}})d^3\vec{p}$$ Putting Eq.(142) and $\vec{J}=\vec{J}_k+\vec{J}_v+\vec{J}_s+\vec{J}_f$ into Eq.(150), we can write $$\vec{J}(\vec{r},t)=\int\vec{J}_q(\vec{r},\vec{p},t){d}^3\vec{p}$$ Then, by comparing the integrands of the integrals over $d^3\vec{p}$, we get the form of the non-equilibrium statistical entropy $$S_p(\vec{r},\vec{p},t)={{G(\vec{r},\vec{p},t)}\over{R(\vec{r},t)}}$$ $$G(\vec{r},\vec{p},t)={1\over{f}}{{(\vec{p}-m\vec{V})}\over{(\vec{p}-m\vec{V})^2}}\cdot\vec{J}_q(\vec{r},\vec{p},t)$$ Putting Eq.(153) into Eq.(151), we get $${{\partial{R}}\over{\partial{t}}}+\vec{A}\cdot\nabla{R}=BR$$ In this formula $$\vec{A}={{\int{f}G\vec{p}{d}^3\vec{p}}\over{m\int{f}Gd^3\vec{p}}}~~~~~~~~~~~B={{\vec{\vec{\Pi}}:\nabla\vec{V}+\int(f{{\vec{p}}\over{m}}\cdot\nabla_{\vec{r}}{G}+KG+f{{\partial{G}}\over{\partial{t}}})d^3\vec{p}}\over{\int{f}Gd^3\vec{p}}}$$ On the other hand, the viscosity stress tensor in Eq.(145) can be written as $\vec{\vec{\Pi}}=P\delta_{ij}+\varepsilon_{ij}$, where $P$ is the pressure, which is known or can be calculated in principle if $f$ is known, and $\varepsilon_{ij}$ is the viscosity tensor, which can also be calculated if $f$ is known. So $\vec{\vec{\Pi}}$ can be regarded as a known quantity. 
Therefore, as long as the probability function $f$ is known by means of statistical physics, we can in principle obtain the form of the $R$ function from Eq.(155). Thus the form of the non-equilibrium statistical entropy in the 6-dimensional phase space can be determined by Eq.(153), and the form of the non-equilibrium thermodynamic entropy can be determined by Eq.(142). The form of the non-equilibrium statistical entropy in the $6N$-dimensional phase space can also be determined by Eq.(140) in principle.
The equilibrium states are discussed now. If we define the states with $R=$ constant as the equilibrium states, then according to Eq.(155) the equilibrium condition is $B=0$. From Eq.(156), considering the fact that $\vec{V}=$ constant when a system reaches an equilibrium state, the equilibrium condition becomes $$\int(f{{\vec{p}}\over{m}}\cdot\nabla_{\vec{r}}{G}+KG+f{{\partial{G}}\over{\partial{t}}})d^3\vec{p}=0$$ Comparing with Eq.(149), this means $\sigma_m=0$ when $R=$ constant.
On the other hand, if $R$ does not depend on the space coordinate $\vec{r}$ but only on time $t$, we have $R=R(t)=T(t)$, where $T$ is the absolute temperature. Suppose $Q_m$ also does not depend on the space coordinate $\vec{r}$ but only on time $t$, i.e., $Q_m=Q_m(t)$; then, because $${{dQ_m}\over{dt}}={{\partial{Q}_m}\over{\partial{t}}}+\nabla{Q}_m\cdot{{d\vec{r}}\over{dt}}={{\partial{Q}_m}\over{\partial{t}}}$$ substituting it into Eq.(139) gives $$S(t)-S_0=\int{1\over{T}}\int{{dQ_m}\over{dt}}d^3\vec{r}dt=\int{1\over{T}}{{\partial}\over{\partial{t}}}\int{Q}_m{d}^3\vec{r}dt=\int{1\over{T}}{{\partial{Q}}\over{\partial{t}}}dt=\int{{dQ}\over{T}}$$ This is just the equilibrium entropy of equilibrium thermodynamics. Therefore, the temperature can be a function of time in such processes, as long as the system is uniform. These processes are just the so-called quasi-stationary processes, but this point has not been noted clearly in current thermodynamics. This property is useful for calculating the entropy function.
Now let us prove the principle of entropy increment for adiabatic processes, that is, prove the following relation for non-equilibrium isolated systems $${{dS}\over{dt}}=\int{1\over{R}}{{dQ_m}\over{dt}}d^3\vec{r}>{0}$$ In principle, if the distribution function is known, we can prove it directly by the methods of statistical mechanics, but this is impossible at present because of mathematical difficulties. So we prove it by connecting thermodynamics and statistical mechanics.
Firstly, because $Q_m(t)$ and $R(t)$ are continuous functions of time, $S(t)$ is also a continuous function of time. Next, suppose an isolated system is in an equilibrium state at time $t_0$. At time $t_0+dt$ a disturbing force acts on the system so that the system becomes non-equilibrium. The disturbing force is then removed immediately, and the system evolves in isolation and reaches another equilibrium state at time $t_n$. It is impossible to have another equilibrium state between these two equilibrium states: if a third equilibrium state existed, it would mean that the system could become non-equilibrium starting from equilibrium without any disturbing force from outside, and in this case the second law of thermodynamics would be violated. Thus, what we should prove is that the entropy never decreases during the non-equilibrium process between the two equilibrium states. By the method of reduction to absurdity, we prove below that $\Delta{S}$ is a monotonously increasing function during the whole time $t_0\rightarrow{t}_n$.
According to the theory of equilibrium thermodynamics, because the system is in equilibrium states at the initial and final times, we have $\Delta{S}=S(t_n)-S(t_0)>0$. If $\Delta{S}$ did not increase monotonously, then when $\Delta{S}$ changed from $dS>0$ to $dS<0$, or from $dS<0$ to $dS>0$, a state with $dS=0$ or $dS/{d}t=0$ would appear at a certain moment $t_i$ with $t_0<t_i{<}t_n$. According to the equilibrium theory, this would mean that the system is in an equilibrium state at the moment $t_i$. However, this is impossible, as mentioned above. On the other hand, because $\vec{F}_m=0$ for an isolated system, we have $\dot{Q}_m=\dot{E}_m$, so that according to Eq.(138) the condition $dS/dt=0$ would require $${{dS}\over{dt}}=\int{{\dot{Q}_m}\over{R}}{d}^3\vec{r}=\int{{\dot{E}_m}\over{R}}{d}^3\vec{r}=0$$ There are two ways to make Eq.(161) possible. The first is $\dot{E}_m=0$, or $E_m=$ constant. However, this is impossible, for $f\neq$ constant and so $E_m\neq$ constant in non-equilibrium processes. The second way is that $\dot{E}_m{/}R$ is not a single valued function, so that for any boundary condition $\vec{r}=\vec{r}_1$ and $\vec{r}=\vec{r}_2$ we always have $$\int^{\vec{r}_2}_{\vec{r}_1}{{\dot{E}_m}\over{R}}{d}^3\vec{r}=W(\vec{r}_2,t)-W(\vec{r}_1,t)=0$$ However, this condition cannot be satisfied in general, for the probability distribution function is in general a single valued function. So Eq.(161) and (162) cannot hold in non-equilibrium processes in general. Therefore, $dS/dt\neq{0}$ in non-equilibrium processes and the non-equilibrium entropy $S$ must be a monotone function of time. On the other hand, because $\Delta{S}=S(t_n)-S(t_0)>0$, during the process from time $t_0$ to $t_n$, $S$ must be a monotonously increasing function. Thus, the principle of non-equilibrium entropy increment is proved for the non-equilibrium processes.
Finally, we discuss the uniqueness of the definition of non-equilibrium entropy. Because the probability distribution function $f$ is unique, the form of the function $R$ is unique, so the form of the non-equilibrium entropy shown in Eq.(137) is also unique. On the other hand, if there exist non-equilibrium entropies $S'_m$ of other forms, we can always define them by $$R'dS'_m=dQ_m+dY_m$$ If $Y_m$ is an unknown function, the definition is meaningless, for there are three unknown functions and we cannot determine them by the method mentioned above. If $Y_m$ is a function of known quantities, for example the energy density, heat density, force density and so on, we can also determine the forms of $S'_m$ and $R'$ by the same method. If $dY_m=0$ and $R'=T$ in equilibrium states, the form of $S'_m$ coincides with the equilibrium entropy in equilibrium states. If $Y_m$ and $R'$ are also single valued functions and $\dot{Y}_m{/}R'\neq{0}$ in non-equilibrium states, we can also prove the principle of non-equilibrium entropy increment. If all these conditions are satisfied, we can also regard $S'_m$ as a non-equilibrium entropy density. On the other hand, from Eq.(137) and (163) we have $$RdS_m=R'd(S'_m-Y_m{/}R')$$ $$S_m(\vec{r},t)=\int{{R'}\over{R}}{{d(S'_m-Y_m{/}R')}\over{dt}}{d}t+S_{m0}=\int{{R'}\over{R}}(\dot{S}'_m-{{\dot{Y}_m}\over{R'}}+{{Y_m\dot{R}'}\over{R'^2}})dt+S_{m0}$$ In this way, both $S_m$ and $S'_m$ can be regarded as non-equilibrium entropy densities, connected by the relation above. So the definition of non-equilibrium entropy is not unique. This is just the reason why different non-equilibrium entropies can be defined in the current non-equilibrium thermodynamics. Because $R\neq{R}'$, $R$ cannot be regarded as a non-equilibrium temperature, for a non-equilibrium temperature, as a measurable physical quantity, should be unique if it exists. Because the non-equilibrium entropy is not directly measurable, it can be non-unique. In this way, the non-equilibrium entropy is not a state function and its increment can also be non-unique, although the equilibrium entropy is a state function and its increment is unique.
Now we have completed the reform of classical statistical mechanics. By considering the retarded electromagnetic interaction, we can also introduce the asymmetry of time reversal into quantum theory. This problem will be discussed later.\
\
\
\
References\
1. Miao Dongsheng, Liu Huajie, Great Ease On Chaos, People’s University Publishing House, 262 (1993).\
2. Wang Zhuxi, Introduction to Statistical Physics, People’s Education Publishing House, 34, 152 (1965).\
3. S. Chandrasekhar, Rev. Mod. Phys., 15, 84 (1943).\
4. Cao Changqi, Electrodynamics, People’s Education Publishing House, 240 (1979).\
5. Luo Liaofu, Theory of Non-equilibrium Statistics, Neimenggu University Publishing House, 355, 358 (1990).\
6. De Groot and Mazur, Non-equilibrium Thermodynamics (1962). P. Glansdorff and I. Prigogine, Thermodynamic Theory of Structure, Stability and Fluctuations (1971).\
7. S. Simons, J. Phys. A: Math., Nucl. Gen., 6, 1934 (1973). G. Lebon, et al., J. Phys. A, 13, 275 (1980). D. Jou, J. Casas-Vazquez, G. Lebon, Rep. Prog. Phys., 51, 1105 (1988).\
8. C. Truesdell, Rational Thermodynamics (1969). B. P. Coleman, J. Chem. Phys., 47, 597 (1967). W. Noll, Arch. Rational Mech. Anal., 17, 85 (1973).
|
---
abstract: 'A simple stochastic model which describes microtubule dynamics and explicitly takes into account the relevant biochemical processes is presented. The model incorporates binding and unbinding of monomers and random phosphate release inside the polymer. It is shown that this theoretical approach provides a microscopic picture of the dynamic instability phenomenon of microtubules. The cap size, the concentration dependence of the catastrophe times and the delay before observing catastrophes following a dilution can be quantitatively predicted by this approach in a direct and simple way. Furthermore, the model can be solved analytically to a large extent, thus offering a valuable starting point for more refined studies of microtubule dynamics.'
author:
- Ranjith Padinhateeri
- 'Anatoly B. Kolomeisky'
- David Lacoste
title: The random release of phosphate controls the dynamic instability of microtubules
---
Introduction {#introduction .unnumbered}
============
Microtubules (MT) are involved in key processes of cell function, such as mitosis, cell morphogenesis and motility. The building blocks of microtubules are $\alpha \beta$-tubulin heterodimers which can associate either laterally or longitudinally [@Desai_Mitchison_MT:97]. In biological systems, microtubules display unusual non-equilibrium dynamic behaviors, which are relevant for cell functioning. One such behavior, termed treadmilling, involves a flux of subunits from one polymer end to the other, and is created by a difference of critical concentrations of the two ends [@wilson:1998]. In another behavior, termed dynamic instability, microtubules undergo alternating phases of elongation and rapid shortening [@mitchison:1984]. The two behaviors, treadmilling and dynamic instability, result from an interplay between the polymerization and the GTP hydrolysis.
The cap model provides a simple explanation for the dynamic instability: a growing microtubule is stabilized by a cap of unhydrolyzed units at its extremity, and when this cap is lost, the microtubule undergoes a sudden change to the shrinkage state, a so-called catastrophe. The transitions between growth and shrinking can be described by a two-state model with prescribed stochastic transitions [@Bayley:89; @hill:84]. This model has led to a number of theoretical and experimental studies [@Verde:1992; @Leibler:93; @Leibler-cap:96], which have shown in particular the existence of a phase boundary between a bounded growth regime and an unbounded growth regime. Although many features of microtubule dynamics can be captured in this way, this model remains phenomenological, because of the unknown dependence of the transition rates as a function of external factors, such as tubulin concentration or temperature.
To go beyond phenomenological models, one needs to account for the main chemical reactions occurring at the level of a single monomer [@margolin:2006; @Wolynes:06]. These reactions can be assumed to occur between discrete states, and the corresponding transition rates can be observed experimentally. In this way, discrete models can be constructed, which capture remarkably well the main dynamical features of single actin or single microtubule filaments [@kolomeisky:06; @Antal-etal-PRE:07; @Ranjith2009; @Ranjith2010]. These discrete models have the additional advantage of being free from some of the limitations inherent to continuous models.
The question of the precise mechanism of hydrolysis in microtubules or actin has been controversial for many years despite decades of experimental work. In the vectorial model, hydrolysis occurs only at the unique interface between units bound to GTP/ATP and units bound to GDP/ADP, while in the random model, hydrolysis can occur on any unhydrolyzed unit of the filament leading to a multiplicity of interfaces at a given time. Between these two limits, models with an arbitrary level of cooperativity in the hydrolysis have been considered (see for instance [@wegner-1996; @kierfeld-2010] for actin and [@Leibler-cap:96] for microtubules). The idea that the filament dynamics depends on the mechanism of hydrolysis in its interior or more generally on the internal structure of the filament has been recently emphasized and it has been given the name of structural plasticity [@mitchison:2009]. As a practical recent illustration of that idea, the dynamical properties of microtubules can be tuned by incorporating in them GDP-tubulin in a controlled way [@valiron:2010].
In microtubules, many experimental facts point towards a mechanism of hydrolysis which is non-vectorial but random or cooperative. Studies of the statistics of catastrophes [@jason-dogterom:03; @Walker-1988; @voter:1991] already provided hints about this, but there is now more direct evidence. The observation of GTP-tubulin remnants inside a microtubule using a specific antibody [@perez:2008] is probably one of the most compelling pieces of evidence. With the development of microfluidic devices for biochemical applications, similar experiments probing the internal structure and the dynamics of single bio-filaments are becoming more and more accessible. Furthermore, it is now possible to record the dynamics of microtubule plus-ends at nanometer resolution [@schek:2007; @kerssemakers:2006], thus essentially allowing the detection of the addition and departure of single tubulin dimers from microtubule ends. In view of all these recent developments, there is a clear need to organize all this information on microtubule dynamics with a theoretical model. Here, we propose a simple one dimensional non-equilibrium model, accounting for the hydrolysis occurring within the filament. We show that this model successfully accounts for many known experimental observations on microtubules, such as the cap size, the dependence of the catastrophe time on the monomer concentration, and the delay before a catastrophe following a dilution [@jason-dogterom:03; @Walker-1988; @voter:1991]. Our interpretation of these data confirms and goes beyond results obtained in a recent numerical and theoretical study of the dynamic instability of MT [@Brun-2009]. In vivo, the dynamics of microtubules is controlled by a variety of binding proteins, which typically modify the polymerization process. Here we focus on the physical principles which control the dynamic instability of microtubules in vitro, in the absence of any microtubule associated proteins. Our model differs from previous attempts to address this problem in that it is sufficiently simple to be analytically solvable to a large extent, while still capturing the main features of MT dynamics.
Model {#model .unnumbered}
=====
GTP hydrolysis is a two-step process: the first step, the GTP cleavage, produces GDP-Pi and is rapid, while the second step, the release of the phosphate (Pi), leads to GDP-tubulin and is by comparison much slower. This suggests that many kinetic features of tubulin polymerization can be explained by a simplified model of hydrolysis, which takes into account only the second step of hydrolysis and treats tubulin subunits bound to GTP and tubulin subunits bound to GDP-Pi as a single species [@kolomeisky:06; @Ranjith2009; @Ranjith2010]. This is the assumption which we make here. Therefore, what we mean by random hydrolysis here is the random process of phosphate release, which, as we argue, controls the dynamic instability of microtubules.
Our second main assumption has to do with the neglect of the protofilament structure of microtubules. Protofilaments are likely to be strongly interacting and should experience mechanical stresses in the MT lattice. We agree that modeling these effects is important to provide a complete microscopic picture of the transition from the growing phase to the shrinking phase, since this transition should involve protofilament curling near the MT ends [@van-buren-2005; @kulic-2010]. Here, we do not account for such effects, because as in Ref. [@Leibler-cap:96], we are interested in constructing a minimal dynamic model for microtubules, which would describe in a coarse-grained way the main aspects of the dynamics of this polymer.
We also assume that the filament contains a single active end and is in contact with a reservoir of subunits bound to GTP. The parameters of the model are as in Refs. [@kolomeisky:06; @Ranjith2009; @Ranjith2010]: the rate of addition of subunits $U$, the rate of loss of subunits bound to GTP, $W_T$, the rate of loss of subunits bound to GDP, $W_D$, and finally the rate of GTP hydrolysis $r$, assumed to occur randomly on any unhydrolyzed subunit within the filament. In Fig. \[fig-sketch\], all these possible transitions have been depicted. We have assumed that all the rates are independent of the concentration of free GTP subunits $c$, except for the on-rate [@jason-dogterom:03], which is $U=k_0 c$. All the rates of this model have been determined precisely experimentally, except for $r$. The values of these rates are given in table \[table-rates\].
![(a): Representation of the various elementary transitions considered in the model with their corresponding rates, $U$ the on-rate of GTP-subunits, $W_T$ the off-rate of GTP-subunits, $W_D$ the off-rate of GDP-subunits and $r$ the hydrolysis rate for each unhydrolyzed unit within the filament. (b) Pattern for a catastrophe with $N$ terminal units in the GDP state. \[fig-sketch\] ](fig1-pnas)
| Parameter | Symbol (units) | Value | Reference |
|---|---|---|---|
| On-rate of T subunits at + end | $k_0$ ($\mu$M$^{-1}$s$^{-1}$) | 3.2 | [@howard-book] |
| Off-rate of T subunits from + end | $W_T$ (s$^{-1}$) | 24 | [@jason-dogterom:03] |
| Off-rate of D subunits from + end | $W_D$ (s$^{-1}$) | 290 | [@howard-book] |
| Hydrolysis rate (random model) | $r$ (s$^{-1}$) | 0.2 | |

: Rates used in this work. \[table-rates\]
As a result of the random hydrolysis, a typical filament configuration contains many islands of unhydrolyzed subunits within the filament. The last island containing the terminal unit is called the cap.
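To make the dynamics concrete, the following minimal Gillespie-type sketch (written in Python; the function and variable names are ours and purely illustrative, and this is not the code used for the figures below) implements the four transitions of Fig. \[fig-sketch\]a for a single filament in contact with a reservoir at concentration $c$:

```python
import random

# Rates of table [table-rates] (illustrative values)
K0, W_T, W_D, R_HYD = 3.2, 24.0, 290.0, 0.2   # muM^-1 s^-1, s^-1, s^-1, s^-1

def gillespie_filament(c, t_max, seed=0):
    """Simulate one filament; the state is a list of booleans (True = unhydrolyzed, GTP)."""
    rng = random.Random(seed)
    U = K0 * c                          # on-rate of GTP subunits
    state = [False] * 50                # start from a short, fully hydrolyzed filament
    t, trace = 0.0, []
    while t < t_max:
        n_T = sum(state)                # number of unhydrolyzed subunits
        w_off = W_T if (state and state[-1]) else W_D
        rates = [U, w_off if state else 0.0, R_HYD * n_T]
        total = sum(rates)
        t += rng.expovariate(total)     # waiting time to the next event
        x = rng.uniform(0.0, total)
        if x < rates[0]:                # addition of a GTP subunit at the tip
            state.append(True)
        elif x < rates[0] + rates[1]:   # loss of the terminal subunit (rate W_T or W_D)
            state.pop()
        else:                           # random hydrolysis of one unhydrolyzed subunit
            idx = rng.choice([i for i, s in enumerate(state) if s])
            state[idx] = False
        trace.append((t, len(state)))
    return trace

trace = gillespie_filament(c=10.0, t_max=60.0)   # one trajectory at 10 muM for 60 s
print("final length (subunits):", trace[-1][1])
```

Recording the length trace produced by such a simulation is enough to visualize qualitatively the alternation of growth and rapid shrinkage phases discussed below.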
Results and discussion {#results-and-discussion .unnumbered}
======================
In this section, we obtain the nucleotide content of the filament within a mean-field approximation (for earlier references on this model, see [@Ranjith2010; @kolomeisky:06; @wegner:86]). We denote by $i$ the position of a monomer within the filament, from the terminal unit at $i=1$. For a given configuration, we introduce for each subunit $i$ an occupation number $\tau_i$, such that $\tau_i=1$ if the subunit is bound to GTP and $\tau_i=0$ otherwise. In the reference frame associated with the end of the filament, the equations for the average occupation number are, for $i=1$, $$\frac{d \langle \tau_1 \rangle}{dt}=U (1- \langle \tau_1 \rangle ) - W_T \langle \tau_1 (1-\tau_2) \rangle + W_D \langle \tau_2 (1-\tau_1) \rangle - r \langle \tau_1 \rangle ,$$ \[recursion1\] and for $i >1$, $$\frac{d \langle \tau_i \rangle}{dt} = U \left( \langle \tau_{i-1} \rangle -\langle \tau_i \rangle \right) + W_T \langle \tau_1 (\tau_{i+1} - \tau_i ) \rangle + W_D \langle (1-\tau_1) ( \tau_{i+1} - \tau_i ) \rangle - r \langle \tau_i \rangle .$$ \[recursioni\] In a mean-field approach, correlations are neglected, which means that for any $i,j$, $\langle \tau_i \tau_j \rangle$ is replaced by $\langle \tau_i \rangle \langle \tau_j \rangle$. At steady state, the left-hand sides of Eqs. \[recursion1\]-\[recursioni\] are both zero, which leads to recursion relations for the $\langle \tau_i \rangle$. Let us denote $\langle \tau_1 \rangle=q$ as the probability that the terminal unit is bound to GTP. The recursion relations have a solution of the form, for $i\geq1$, $$\frac{\langle \tau_{i+1} \rangle}{\langle \tau_i \rangle}=b,$$ \[recursion\] where $b=(U-q(W_T+r))/(U-q W_T)$. Combining Eqs. \[recursion1\]-\[recursion\], one obtains $q$ explicitly as a function of all the rates as the solution of a cubic equation which is given in the appendix of Ref. [@Ranjith2010]. The mean filament velocity (namely the average rate of change of the total filament length) is given by $$v= \left( U - W_T\, q - W_D\, (1-q) \right) d,$$ \[velocity\] in terms of the monomer size $d$. At the critical concentration $c_c$, the mean velocity vanishes, which corresponds to the boundary between a phase of bounded growth for $c<c_c$ and a phase of unbounded growth for $c>c_c$ [@Ranjith2010]. The plot of this velocity versus concentration exhibits a kink near the critical concentration, which is not particularly sensitive to the mechanism of hydrolysis since it is present both in the vectorial and in the random model [@kolomeisky:06; @Ranjith2010]. This kink is well known from studies with actin [@hill:85] but has not been studied experimentally with microtubules except in Ref. [@carlier-hill-1984], in a specific medium containing glycerol.
The distribution of the nucleotide along the filament length has a well defined steady-state in the tip reference frame at any value of the monomer concentration $c$. Using Eq. \[recursion\], it follows that $\langle \tau_i \rangle= b^{i-1} q$, and therefore the steady-state probability that the cap has exactly length $l$, $P_l$, is $P_l=(\prod_{i=1}^l \langle \tau_i \rangle) (1-\langle \tau_{l+1} \rangle)$. This leads to the following expression: $$P_l=b^{l(l-1)/2}\, q^l \left( 1 - b^l q \right),$$ \[SS proba\] and the corresponding average cap size is: $$\langle l \rangle = \sum_{l \geq 1} l\, P_l = \sum_{l \geq 1} b^{l(l-1)/2}\, q^l.$$
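As an illustration, these mean-field quantities can be evaluated numerically. The sketch below (our own helper code, with illustrative names; it is not the procedure used in the original analysis) does not solve the cubic equation for $q$ explicitly but simply relaxes the mean-field equations \[recursion1\]-\[recursioni\] to their steady state on a truncated lattice, and then evaluates the velocity of Eq. \[velocity\] and the average cap size; the subunit size $d$ is set to an illustrative value.

```python
import numpy as np

K0, W_T, W_D, R_HYD = 3.2, 24.0, 290.0, 0.2     # rates of table [table-rates]

def mean_field_profile(c, n_sites=200, dt=1e-3, t_relax=50.0):
    """Relax the mean-field equations to steady state in the tip reference frame.
    Returns the profile <tau_i>, i = 1..n_sites."""
    U = K0 * c
    tau = np.zeros(n_sites)
    for _ in range(int(t_relax / dt)):
        q = tau[0]
        up = np.append(tau[1:], 0.0)             # <tau_{i+1}>, zero far from the tip
        down = np.insert(tau[:-1], 0, 1.0)       # <tau_{i-1}> (entry 0 is overwritten below)
        dtau = (U * (down - tau)
                + (W_T * q + W_D * (1.0 - q)) * (up - tau)
                - R_HYD * tau)
        # the terminal site i = 1 follows Eq. [recursion1] instead
        dtau[0] = (U * (1.0 - q) - W_T * q * (1.0 - tau[1])
                   + W_D * tau[1] * (1.0 - q) - R_HYD * q)
        tau += dt * dtau
    return tau

def cap_and_velocity(c, d=8.0 / 13.0):
    """Average cap size (subunits) and mean velocity; d is an illustrative subunit size in nm."""
    tau = mean_field_profile(c)
    q = tau[0]
    v = (K0 * c - W_T * q - W_D * (1.0 - q)) * d
    mean_cap = float(np.sum(np.cumprod(tau)))    # sum_l prod_{i<=l} <tau_i>
    return mean_cap, v

for c in (5.0, 10.0, 20.0):
    print(c, cap_and_velocity(c))
```

Scanning $c$ until the velocity changes sign locates the critical concentration $c_c$ mentioned above.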
In figure \[fig-cap\], we show how this average cap size varies as a function of the free tubulin concentration. The average cap becomes longer than approximately one subunit above the critical concentration $c_c$ defined above, which is about 7 $\mu$M for the parameters of table \[table-rates\] used here. At concentrations significantly larger than this value, the cap grows more slowly, as $\sqrt{\pi U/2r}$ when $U \rightarrow \infty$ [@Antal-etal-PRE:07; @Leibler-cap:96]. In the range of concentration \[0:100 $\mu$M\], the cap stays smaller than about 47 subunits, which represents only 3.6 layers (or 28 nm). This estimate indicates that the cap is below optical resolution in the range of tubulin concentration generally used, which could explain the difficulty of observing it experimentally.
![Average cap size in number of subunits as function of the free tubulin concentration $c$ in $\mu$M. The line is the mean-field analytical solution and the filled squares are simulation points. \[fig-cap\] ](fig2-pnas)
A long-standing view in the literature is that the cap could be as small as a single layer, as shown by experiments based on a chemical detection of the phosphate release [@wilson:2002]. This view has recently been challenged by two experiments in which the length fluctuations of microtubules were probed at the nanoscale [@schek:2007; @kerssemakers:2006]. The interpretation of these experiments still generates debate [@howard:2009; @odde:2008]. In any case, taken together, these two experimental studies reported a highly variable MT plus-end growth behavior, which suggests that the cap size is a fluctuating quantity, larger than one layer but smaller than about 5 layers. We note that such a range is compatible with our prediction and agrees with the estimation obtained from dilution experiments [@voter:1991]. Furthermore, our stochastic model naturally incorporates a fluctuating cap size. Even if the cap is indeed below optical resolution, we note that this does not rule out the possibility that it could be observed with the technique of Ref. [@perez:2008].
In figure \[fig-cap\], we also compare the predictions of the mean-field approximation with an exact simulation of the dynamics. We find that mean-field theory provides an excellent approximation of the exact solution when the free tubulin concentration is above the critical concentration, which corresponds to the conditions of most experiments [@jason-dogterom:03; @Walker-1988]. Deviations can be seen between the exact solution and its mean-field approximation in figure \[fig-cap\] but only below the critical concentration. Many other quantities of interest follow from the determination of the nucleotide content of a given subunit, namely $\langle \tau_i \rangle$, such as the length fluctuations of the filament [@Ranjith2010] or the islands distribution of hydrolyzed or non-hydrolyzed subunits [@Antal-etal-PRE:07; @kierfeld-2010]. These predictions should prove particularly useful in testing this model against experiments, since the island distribution of unhydrolyzed units or “remnants” will become accessible in future experiments similar to that of [@perez:2008] but carried out in in vitro conditions.
Frequency of catastrophes and rescues vs. concentration {#frequency-of-catastrophes-and-rescues-vs.-concentration .unnumbered}
-------------------------------------------------------
One difficulty in bridging the gap between a model of the dynamic instability and experiments, lies in a proper definition of the event which is called a catastrophe, since the number of reported catastrophes is affected by several factors depending on the experimental conditions, such as for instance the experimental resolution of the observation [@schek:2007].
Although a catastrophe manifests itself experimentally as an abrupt reduction of the total filament length, we choose to define it from the nucleotide content of the terminal region. Following closely Ref. [@Brun-2009], we define a shrinking configuration as one in which the last $N$ units of the filament are all in the GDP state (irrespective of the state of the other units), as shown in figure \[fig-sketch\]. The remaining configurations (with an unhydrolyzed cap of any size or when the number of hydrolyzed subunits at the end is less than $N$) are assumed to belong to the growing phase. In such a two-state description of the dynamics (with a growing and a shrinking phase), which is implicitly assumed in the analysis of most experiments, the catastrophe frequency $f_c(N)$ is the inverse of the average time spent in the growing phase, while the rescue frequency $f_r(N)$ is the inverse of the average time spent in the shrinking phase. It follows from this that the catastrophe frequency $f_c(N)$ can be obtained as the probability flux out of the growing state divided by the probability to be in the growing state. For instance, for $N=1$, this flux condition is \[catastrophe frequencyN=1\] $$f_c(1)\, q = (W_T+r)\, P_1 + r \sum_{j \geq 2} P_j,$$ where the terms on the right proportional to $P_1$ correspond to a transition of the terminal unit from the GTP to the GDP state, which can occur either through hydrolysis or depolymerization of that unit, while the last term corresponds to hydrolysis of the terminal unit from cap states of length larger than or equal to 2. We have derived the general expression of $f_c(N)$ in the case of an arbitrary $N$ as shown in Supporting Information (SI) Methods, and we have checked these results by comparing them with stochastic simulations using the Gillespie algorithm [@gillespie:77].
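Continuing the numerical sketch introduced above (it reuses mean_field_profile and the rate constants defined there), the flux condition for $N=1$ can be evaluated directly from the steady-state cap-size distribution; note that, since $\sum_{j\geq 1}P_j=q$, this condition also reduces to $f_c(1)=r+W_T P_1/q$.

```python
def catastrophe_frequency_N1(c):
    """Evaluate Eq. [catastrophe frequencyN=1] from the mean-field steady state."""
    tau = mean_field_profile(c)
    q = tau[0]
    caps = np.cumprod(tau)                        # prob(cap >= l), l = 1, 2, ...
    P = caps * (1.0 - np.append(tau[1:], 0.0))    # P_l = prob(cap = l)
    f_c = ((W_T + R_HYD) * P[0] + R_HYD * P[1:].sum()) / q
    return f_c, 1.0 / f_c                         # catastrophe frequency and time T_c(1)

print(catastrophe_frequency_N1(10.0))
```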
In the case of the vectorial model, the last term in Eq. \[catastrophe frequencyN=1\] is absent and the catastrophe frequency is non-zero only below the critical concentration. The fact that catastrophes are observed in [@jason-dogterom:03] significantly above the critical concentration indicates that this data is incompatible with a vectorial mechanism. For this reason, we only discuss here the predictions of the random model.
The catastrophe time $T_c(N)=1/f_c(N)$ is shown as a function of the growth velocity for $N=2$ in figure \[fig-N\]a, and as a function of the concentration of free subunits, $c$, for $N=1$ in figure \[fig-N\]b. The growth velocity is simply proportional to the concentration of free subunits. For both plots, one sees that below the critical concentration, which is in the range of 5-10 $\mu$M, the catastrophe time is zero, as expected, since there is no stable filament in that region of concentration. Note that $T_c(N)$ behaves linearly as a function of $c$ for $N=2$ but non-linearly for $N=1$. Since the experimental data of [@jason-dogterom:03] shows a linear dependence, this comparison indicates that the data can be explained with the model for $N=2$ but not for $N=1$. The same observation has been made in Ref. [@Brun-2009], where the same data has been analyzed. Note, however, the following differences between the present work and that reference: first, the model of Ref. [@Brun-2009] neglects rescues and assumes that the duration of a catastrophe, once started, is zero, while the present model includes rescues and takes into account the finite rate of loss of GDP units. Secondly, the results of Ref. [@Brun-2009] correspond to the regime of high concentration of free subunits, while the present model holds at any concentration, even near or below the critical concentration. Thirdly, the present approach leads to analytical results under the assumption that the filament has no protofilament structure, while the results of Ref. [@Brun-2009] are numerical but that model includes a protofilament structure. Our analytical derivation of the catastrophe time confirms that the case $N=1$ differs in an essential way from the $N \geq 2$ case at high concentration. Indeed, the catastrophe time reaches a plateau when the concentration goes to infinity for $N=1$, while it goes to infinity for $N \geq 2$. This trend is already apparent in figure \[fig-N\].
In figure \[fig-N\], we have used a value for the rate of hydrolysis $r=0.2$, which is higher than that estimated in Ref. [@Leibler-cap:96] (there the estimate was 0.002). The reason is that the hydrolysis rate is a global factor which controls the amplitude of the catastrophe time: basically, $T_c(N)$ scales for an arbitrary $N$ as $1/r^N$. The value $r=0.002$ leads to a reasonable estimate for $T_c(N)$ for $N=1$ (albeit with the wrong dependence on concentration), but if we take seriously, as we do here, the observation that only the definition with $N=2$ is compatible with the measured concentration dependence of the catastrophe time, then $r$ must have a significantly larger value than expected, and $0.2$ is the value that is needed for $T_c$ in order to match the experimental data. Finally, we also note that the scaling of $T_c(N)$ as a power law of $r$ means that large values of $N$ (such as $N>2$) can be excluded given the observed range of catastrophe times.
We also show the distribution of catastrophe times calculated with the parameters given in table \[table-rates\], for $N=1$ and $N=2$ in figure \[fig:distrib-catastrophe2\]. These distributions in both cases are essentially exponential (except at a very short time which is probably inaccessible in practice in the experiments), in agreement with the observations reported in Ref. [@jason-dogterom:03] with free filaments.
![(left) The distribution of catastrophe times ($N=1$) for different concentration values: $C=9\mu M$ (filled squares) and $C=12\mu M$ (open circles). (right) The distribution of catastrophe times ($N=2$) for different concentration values: $C=9\mu M$ (filled squares) and $C=12\mu M$ (open circles). The distributions are normalized. \[fig:distrib-catastrophe2\]](fig4-pnas)
One advantage of our microscopic model is that it can explain and predict different related aspects of the dynamic instability of microtubules. Specifically, it also allows one to predict the statistics of rescue events, when the polymer switches from the shrinking phase back into the growing phase. Assuming that the system has reached a steady-state behavior, the frequency of rescues $f_{r}(N)$ can be calculated using flux conditions similar to the ones used to obtain $f_c(N)$ (see SI Methods for more details). The corresponding expression is rather simple and can be written as \[fr\] $$f_{r}(N)=U+W_{D}\, b^{N} q.$$ We have carried out a complete numerical test of this frequency of rescues using stochastic simulations, which is shown in SI Fig. 1.
Our model predicts that rescue events should be observable under typical cellular conditions and in experiments. However, surprisingly, there is very limited experimental information on rescues. The analysis of Eq. (\[fr\]) might shed some light on this issue. At low concentrations of GTP monomers in the solution, when the rate $U$ is small, the average time before the rescue event, $T_{r} \simeq 1/U$, might be very large. As a result, it might not be observable in experiments, since a polymer with $L$ monomers could collapse ($T_{collapse} \simeq L/W_{D}$) before any rescue event takes place. At large $U$, rescues are more frequent given that the polymer is in the shrinking state. But the frequency of catastrophes is very small under these conditions, so the microtubule is almost always in the growing phase. Therefore, in these conditions, rescues are not observed [@jason-dogterom:03].
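As a rough illustration with the rates of table \[table-rates\] and an arbitrary illustrative filament length of $L\approx 500$ subunits: at $c=0.1\ \mu$M one gets $$T_{r}\simeq\frac{1}{k_0 c}\approx 3\ {\rm s}, \qquad T_{collapse}\simeq\frac{L}{W_{D}}\approx 1.7\ {\rm s},$$ so the filament typically disappears before a rescue can take place, whereas at $c=15\ \mu$M the same estimate gives $T_r\approx 0.02$ s, but there catastrophes themselves become rare.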
First passage time of the cap and dilution experiments {#first-passage-time-of-the-cap-and-dilution-experiments .unnumbered}
------------------------------------------------------
In dilution experiments, the concentration of free tubulin is abruptly reduced to a small value, resulting in catastrophes within seconds, independent of the initial concentration [@Walker-1991; @voter:1991]. This observation is evidence that the cap is short and independent of the initial concentration. The idea that the cap is short is also supported by the observation that cutting off the end of a microtubule, typically with a laser, results in a catastrophe. As we shall see below, all these well-known experimental facts about microtubules can be explained by the present model.
Here, we are interested in the time until the first catastrophe appears following the dilution. For simplicity, we take the definition of catastrophe introduced in the previous section for $N=1$, which means that a catastrophe starts as soon as the cap has disappeared (as shown in the previous section, one could extend this result to the more general case of an arbitrary $N$). Let us then introduce $F_k(t)$, the distribution of the first passage time $T_k$, for an initial condition corresponding to a cap of length $k$ and a filament in contact with a medium of arbitrary concentration. As explained in SI Methods, it is possible to calculate $F_k(t)$ analytically, by a method recently used in the context of polymer translocation [@PK-KironeJStatMech-2010]. After numerically inverting the Laplace transform of $F_k(t)$, one obtains the distribution $F_k(t)$, which is shown as solid lines in figure \[fig:DFPT\] for the particular case of $k=2$. As can be seen in this figure, the predicted distributions agree very well with the results obtained from the stochastic simulation in this case.
From the distribution $F_k(t)$ we obtain its first moment, the mean first passage time of the cap $\langle T(k) \rangle$. As shown in SI Methods, we find that $$\langle T(k) \rangle = \sum_{j=0}^{k-1} y^j\, \frac{J_{n+j+1}(\bar{y})}{\sqrt{U W_T}\, J_{n}(\bar{y}) - U\, J_{n+1}(\bar{y})},$$ \[full Tk\] where $y=\sqrt{W_T/U}$, $\bar{y}=2\sqrt{U W_T}/r$, $n=(U+W_T)/r$, and the functions $J_n(y)$ are Bessel functions. The dependence of $\langle T(k) \rangle$ as a function of the initial size of the cap $k$ is shown in figure \[fig:Tk\]: at small $k$, $\langle T(k) \rangle$ is essentially linear in $k$, as would be expected at all $k$ in the vectorial model of hydrolysis [@Ranjith2009], while here it saturates at large values of $k$ (the value of this plateau can be calculated analytically, but only for $U=0$; see SI Methods). To understand this saturation, consider a cap which is initially infinitely large; then, after a time of order $1/r$, the cap abruptly shrinks to a finite, much smaller size as a result of the hydrolysis of one unit at a random position within the filament. This will always happen irrespective of the monomer concentration, and indeed in figure \[fig:Tk\], $\langle T(k) \rangle$ has a plateau for $k \rightarrow \infty$ for all values of the monomer concentration. We note that such a behavior of $\langle T(k) \rangle$ as a function of $k$ has similarities with the case of non-compact exploration investigated in [@condamin:2007], while the vectorial model of hydrolysis would correspond, in the language of this reference, to the case of compact exploration.
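The mean first passage times can also be obtained without the Bessel-function representation, directly from the backward master equation given in SI Methods, by solving the corresponding linear system on a truncated range of cap sizes; the short Python sketch below (our own illustrative code) does exactly this and can be used as a cross-check of Eq. \[full Tk\].

```python
import numpy as np

K0, W_T, R_HYD = 3.2, 24.0, 0.2      # rates of table [table-rates]; W_D does not enter the cap problem

def mean_first_passage_times(c, k_max=200):
    """Mean times <T(k)> until the cap first vanishes, for initial cap sizes k = 1..k_max."""
    U = K0 * c
    A = np.zeros((k_max, k_max))
    rhs = np.ones(k_max)
    for k in range(1, k_max + 1):
        i = k - 1
        A[i, i] = U + W_T + k * R_HYD
        if k > 1:
            A[i, i - 1] -= W_T           # loss of the terminal GTP unit: k -> k-1
        if k < k_max:
            A[i, i + 1] -= U             # addition of a GTP unit: k -> k+1
        else:
            A[i, i] -= U                 # truncation: assume T(k_max+1) ~ T(k_max)
        for j in range(1, k):            # hydrolysis of the unit at depth j+1: k -> j
            A[i, j - 1] -= R_HYD
        # hydrolysis of the terminal unit sends k -> 0, for which T(0) = 0
    return np.linalg.solve(A, rhs)

T = mean_first_passage_times(c=0.0)      # post-dilution limit with no free tubulin
print(T[0], T[1], T[19])                 # <T(1)>, <T(2)>, <T(20)>
```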
Let us now turn to a practical use of this quantity for characterizing the dynamic instability. In the previous section, we calculated the catastrophe time $T_c$. We expect that this quantity is an average of $\langle T(k) \rangle$, and indeed we find for the case of $N=1$ that $T_c$ is bounded by $\langle T(1) \rangle$ and $\langle T(20) \rangle$ (the choice of $20$ is purely illustrative) as shown in figure \[fig-N\]. The characteristic time observed in dilution experiments is another average of $\langle T(k) \rangle$. More precisely, let us denote $\langle T(k) \rangle_{post}$ as the first passage time in post-dilution conditions given that the initial length of the cap is $k$. The dilution time $T_{dilution}$ is then the average of $\langle T(k) \rangle_{post}$ with respect to the steady-state probability distribution of the initial conditions before the dilution occurs. In other words, $$T_{dilution}=\sum_{k} \langle T(k) \rangle_{post}\, P_k(\text{pre-dilution}),$$ \[dilution\] where $P_k(\text{pre-dilution})$ is the stationary probability given in Eq. \[SS proba\] in pre-dilution conditions.
In the case that the final medium after dilution is very dilute, one can assume that the final free tubulin concentration is zero, which allows one to simplify the general expression given in Eq. \[full Tk\], as explained in SI Methods. Using Eq. \[dilution\], one obtains the dilution time for the parameters of table \[table-rates\], which is shown in figure \[fig-dilution\].
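The same ingredients give a numerical estimate of Eq. \[dilution\]: the sketch below (again our own illustrative code) reuses mean_field_profile and mean_first_passage_times from the previous sketches, weighting the post-dilution first-passage times by the pre-dilution steady-state cap-size distribution.

```python
def dilution_time(c_pre, c_post=0.0, k_max=200):
    """Average delay before the first catastrophe after an abrupt dilution (Eq. [dilution])."""
    tau = mean_field_profile(c_pre)[:k_max]
    caps = np.cumprod(tau)                              # prob(cap >= k)
    P = caps * (1.0 - np.append(tau[1:k_max], 0.0))     # P_k for k = 1..k_max
    T_post = mean_first_passage_times(c_post, k_max)
    return float(np.dot(P, T_post))

for c in (10.0, 20.0, 50.0):
    print(c, dilution_time(c))
```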
![Dilution time (s) as function of free tubulin concentration (in $\mu$M) before dilution in the case that the post-dilution tubulin concentration is zero. Solid line is the mean-field prediction based on Eq. \[dilution\] and the symbols are simulation points. As found experimentally, the dilution time is essentially independent of the concentration of tubulin in the pre-dilution state, and the time to observe the first catastrophe is of the order of seconds or less. \[fig-dilution\] ](fig7-pnas)
The figure confirms that the dilution time can be as short as a fraction of a second in this case. It is straightforward to extend this calculation to the case of an arbitrary concentration of the post-dilution medium ([*i.e.*]{} to a dilution of arbitrary strength) using the general expression derived in Eq. \[full Tk\]. As the amplitude of the dilution is reduced (by increasing the post-dilution concentration), the dilution time increases as well, but the general sigmoidal shape remains, with in particular a plateau at concentrations above the critical concentration. The presence of these plateaux means that the dilution time is essentially independent of the concentration of the monomers in pre-dilution conditions, as observed experimentally. Note that the height of these plateaux scales with the hydrolysis rate. For instance, to explain the dilution times reported in [@Walker-1991], one needs to use a smaller value of $r$ than given in the table, because of the use of the $N=1$ definition of catastrophe. Alternatively, just as in the calculation of the catastrophe frequencies, it is possible to keep the expected large value of $r$ provided the $N=2$ definition of catastrophe is chosen. Thus, complementary information can be obtained from the catastrophe frequencies and the dilution times.
Conclusion {#conclusion .unnumbered}
==========
In this work, we have explained several important features of microtubule dynamics using a model for the random release of phosphate within the filament. The results of our mean-field approach are analytical to a large extent. With this approach we could recover some well-known features of MT dynamics such as the mean catastrophe time and its distribution or the delays following a dilution, but we have also investigated much less studied aspects concerning the cap size, the role of the definition of catastrophes (via the parameter $N$) and the first passage time of the cap. The theoretical model and ideas presented in this paper for the case of microtubules could also apply to other biofilaments such as actin or Par-M, for which the random hydrolysis model may be relevant as well. Furthermore, although the model describes a priori only the dynamics of single free filaments, it is also potentially useful for understanding constrained filaments, in the broader context of force generation and force regulation by ensembles of biofilaments. For this reason, it would be interesting to study extensions of the model to account for the various effects of MAPs on microtubules, which should shed light on the behavior of microtubules in more realistic biological conditions. We hope that this theoretical work will stimulate further experimental and theoretical studies of these questions.
We thank F. Perez, F. Nedelec and M. F. Carlier for inspiring discussions. We also would like to thank K. Mallick for pointing to us Ref. [@PK-KironeJStatMech-2010], and M. Dogterom for providing us with the data of Ref. [@jason-dogterom:03]. RP acknowledges support through IYBA, from the Department of Biotechnology, India.
[100]{} Desai A, Mitchison TJ (1997) Microtubule polymerization dynamics. *Annual Review of Cell and Developmental Biology* 13:83–117.
Margolis RL, Wilson L (1998) Microtubule treadmilling: what goes around comes around. *BioEssays* 20:830.
Mitchison T, Kirschner M (1984) Dynamic instability of microtubule growth. *Nature* 312:237–242.
Bayley P, Schilstra M, Martin S (1989) [A simple formulation of microtubule dynamics: quantitative implications of the dynamic instability of microtubule populations in vivo and in vitro]{}. *J Cell Sci* 93:241–254.
Hill TL (1984) [Introductory analysis of the GTP-cap phase-change kinetics at the end of a microtubule]{}. *Proc. Natl. Acad. Sci. USA.* 81:6728–32.
Verde F, Dogterom M, Stelzer E, Karsenti E, Leibler S (1992) Control of microtubule dynamics and length by cyclin a- and cyclin b-dependent kinases in xenopus egg extracts. *The Journal of Cell Biology* 118:1097–1108.
Dogterom M, Leibler S (1993) Physical aspects of the growth and regulation of microtubule structures. *Phys. Rev. Lett.* 70:1347–1350.
Flyvbjerg H, Holy TE, Leibler S (1996) Microtubule dynamics: Caps, catastrophes, and coupled hydrolysis. *Phys. Rev. E* 54:5538–5560.
Margolin G, Gregoretti IV, Goodson HV, Alber MS (2006) Analysis of a mesoscopic stochastic model of microtubule dynamic instability. *Phys. Rev. E* 74:041920.
Zong C, Lu T, Shen T, Wolynes PG (2006) Nonequilibrium self-assembly of linear fibers: microscopic treatment of growth, decay, catastrophe and rescue. *Physical Biology* 3:83–92.
Stukalin EB, Kolomeisky AB (2006) [ATP Hydrolysis Stimulates Large Length Fluctuations in Single Actin Filaments]{}. *Biophys. J.* 90:2673–2685.
Antal T, Krapivsky PL, Redner S, Mailman M, Chakraborty B (2007) Dynamics of an idealized model of microtubule growth and catastrophe. *Phys. Rev. E.* 76:041907.
Ranjith P, Lacoste D, Mallick K, Joanny JF (2009) [Nonequilibrium Self-Assembly of a Filament Coupled to ATP/GTP Hydrolysis]{}. *Biophys. J.* 96:2146–2159.
Ranjith P, Mallick K, Joanny JF, Lacoste D (2010) [Role of ATP hydrolysis in the Dynamics of a single actin filament]{}. *Biophys. J.* 98:1418–1427.
Pieper U, Wegner A (1996) [The end of a polymerizing actin filament contains numerous ATP-subunit segments that are Disconnected by ADP-subunits resulting from ATP hydrolysis]{}. *Biochemistry* 35:4396.
Li X, Lipowsky R, Kierfeld J (2010) Coupling of actin hydrolysis and polymerization: Reduced description with two nucleotide states. *Europhys. Lett.* 89:38010.
Kueh HY, Mitchison TJ (2009) [Structural Plasticity in Actin and Tubulin Polymer Dynamics]{}. *Science* 325:960–963.
Valiron O, Arnal I, Caudron N, Job D (2010) [GDP]{}-[T]{}ubulin incorporation into growing microtubules modulates polymer stability. *J. Biol. Chem.* 285:17507.
Janson ME, de Dood ME, Dogterom M (2003) [Dynamic instability of microtubules is regulated by force]{}. *J. Cell Biol.* 161:1029–1034.
Walker RA, [et al.]{} (1988) [Dynamic instability of individual microtubules analyzed by video light microscopy: rate constants and transition frequencies.]{} *J Cell Biol* 107:1437–1448.
Voter W, O’Brien E, Erickson H (1991) Dilution-induced disassembly of microtubules: relation to dynamic instability and the [GTP]{} cap. *Cell Motil Cytoskeleton.* 18:55.
Dimitrov A, [et al.]{} (2008) [Detection of GTP-Tubulin Conformation in Vivo Reveals a Role for GTP Remnants in Microtubule Rescues]{}. *Science* 322:1353–1356.
Schek HT, Gardner MK, Cheng J, Odde DJ, Hunt AJ (2007) Microtubule assembly dynamics at the nanoscale. *Curr. Biol.* 17:1445.
Kerssemakers JWJ, [et al.]{} (2006) Assembly dynamics of microtubules at molecular resolution. *Nature* 442:709.
Brun L, Rupp B, Ward JJ, Nédélec F (2009) [A theory of microtubule catastrophes and their regulation]{}. *Proc Natl Acad Sci USA* 106:21173–21178.
Van Buren V, Cassimeris L, Odde D (2005) [Mechanochemical model of microtubule structure and self-assembly kinetics]{}. *Biophys J* 89:2911–2926.
Mohrbach H, Johner A, Kulić IM (2010) Tubulin bistability and polymorphic dynamics of microtubules. *Phys. Rev. Lett.* 105:268102.
Keiser T, Schiller A, Wegner A (1986) Nonlinear increase of elongation rate of actin filaments with actin monomer concentration. *Biochemistry* 25:4899–4906.
Pantaloni D, Hill TL, Carlier MF, Korn ED (1985) [A model for actin polymerization and the kinetic effects of ATP hydrolysis]{}. *Proc. Natl. Acad. Sci. USA.* 82:7207–7211.
Carlier MF, Hill TL, Chen Y (1984) Interference of [GTP]{} hydrolysis in the mechanism of microtubule assembly: An experimental study. *Proc. Natl. Acad. Sci. USA* 81:771.
Panda D, Miller HP, Wilson L (2002) Determination of the size and chemical nature of the stabilizing cap at microtubule ends using modulators of polymerization dynamics. *Biochemistry* 41:1609–1617 PMID: 11814355.
Howard J, Hyman AA (2009) Growth, fluctuation and switching at microtubule plus ends. *Nature reviews* 10:569.
Gardner MK, Hunt AJ, Goodson HV, Odde DJ (2008) Microtubule assembly dynamics: New insights at the nanoscale. *Curr. Opin. Cell. Biol.* 20:64.
Gillespie DT (1977) Exact stochastic simulation of coupled chemical reactions. *J. Phys. Chem.* 81:2340.
Walker RA, Pryer NK, Salmon ED (1991) [Dilution of individual microtubules observed in real time in vitro: evidence that cap size is small and independent of elongation rate.]{} *The Journal of Cell Biology* 114:73–81.
Krapivsky PL, Mallick K (2010) Fluctuations in polymer translocation. *Journal of Statistical Mechanics: Theory and Experiment* 2010:P07007.
Condamin S, Benichou O, Tejedor V, Voituriez R, Klafter J (2007) First-passage times in complex scale-invariant media. *Nature* 450:06201.
Howard J (2001) *Mechanics of Motor Proteins and the Cytoskeleton* (Sinauer Associates, Inc., Massachusetts).
\
[**[Supporting Information]{}**]{}
\[AFPT\] Distribution of first passage time of the cap in the random model
==========================================================================
Let us denote by $F_k(t)$ the probability distribution of the first passage time of the GTP-tip (also called cap in the main text), for a cap which is initially of length $k$. This quantity obeys the following backward master equation, for $k \geq 1$, $$\frac{\partial F_k}{\partial t} = U (F_{k+1}- F_k ) + W_T ( F_{k-1} - F_k ) + r \left( \sum_{j=0}^{k-1} F_j - k F_k \right).$$ These equations are supplemented by the boundary condition $F_0(t)=\delta(t)$. We will assume that the random walk followed by the cap is recurrent, which means here that the disappearance of the cap is certain, whatever the time it takes. That condition means that for all $k \geq 0$, $$\int_0^\infty F_k(t)\, dt=1.$$ We will make use of the Laplace transform of $F_k(t)$ defined by $${\tilde F}_k(s)= \int_0^\infty e^{-s t}\, F_k(t)\, dt.$$ With this definition, the equations above take the following form, again for $k \geq 1$, $$(s+W_T+kr+U)\, {\tilde F}_k = U\, {\tilde F}_{k+1} + W_T\, {\tilde F}_{k-1} + r \sum_{j=0}^{k-1} {\tilde F}_j,$$ \[general recursion\] with in addition the conditions ${\tilde F}_0(s)=1$ and, for all $k \geq 0$, ${\tilde F}_k(s=0)=1$, which follows from the normalization condition and the definition of the Laplace transform above.
For the applications of this first passage time distribution to dilution experiments, we are interested mainly in calculating it using post-dilution conditions. In the case of a dilution, the concentration of the free monomers following dilution is in general small. Let us discuss separately the particular case where the concentration of the medium after dilution is zero in which case $U=0$, and the general case of a dilution into a medium of prescribed concentration corresponding to $U \neq 0$.
Particular case of $U=0$
------------------------
In this particular case, the recursion equations given in Eq. \[general recursion\] are easy to solve. The solution is $${\tilde F}_k(s)=1- \frac{s}{s+W_T+r}\left( 1 + \sum_{m=1}^{k-1} \prod_{j=1}^m \frac{W_T}{s+W_T+(j+1)r} \right), \qquad k \geq 1,$$ with the convention that a sum over an index which ends at 0 is void. The mean first passage time $\langle T(k) \rangle$ is the first moment of $F_k(t)$ and thus satisfies $\langle T(k) \rangle=-d {\tilde F}_k/ds\,|_{s=0}$. It follows that for $k \geq 1$, $$\langle T(k) \rangle = \frac{1}{W_T+r}\left( 1 + \sum_{m=1}^{k-1} \prod_{j=1}^m \frac{W_T}{W_T+(j+1)r} \right).$$ \[Tk\]
In this particular case of $U=0$, it is possible to derive an asymptotic form of this mean first passage time for $k \rightarrow \infty$, namely $\langle T \rangle=\lim_{k \rightarrow \infty} \langle T(k) \rangle$. Indeed, in this case the sum can be written in terms of hypergeometric functions [@Antal-etal-PRE:07; @abramowitz], and it reads $$\langle T \rangle = \frac{1}{W_T+r}\; {}_1F_1\!\left(1;\,\frac{W_T}{r}+2;\,\frac{W_T}{r}\right).$$
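As a practical check of Eq. \[Tk\] and of this limit, both expressions can be evaluated numerically. The short Python sketch below uses the forms reconstructed above (the rate values are placeholders) and illustrates the convergence of $\langle T(k)\rangle$ towards $\langle T\rangle$ as $k$ grows.

```python
import numpy as np
from scipy.special import hyp1f1

W_T, r = 5.0, 0.2                        # placeholder rates (1/s)

def mean_fpt(k, W_T, r):
    """Mean first-passage time <T(k)> for U = 0, Eq. [Tk]."""
    total, prod = 1.0, 1.0
    for m in range(1, k):
        prod *= W_T / (W_T + (m + 1) * r)
        total += prod
    return total / (W_T + r)

for k in (1, 5, 20, 100):
    print(k, mean_fpt(k, W_T, r))

# k -> infinity limit expressed with Kummer's confluent hypergeometric function
a = W_T / r
print("limit:", hyp1f1(1.0, a + 2.0, a) / (W_T + r))
```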
The expression of the mean first passage time given in Eq. \[Tk\] can be used to obtain the delay before the appearance of the first catastrophe, as explained in the main text. In this case, we find $$T_{\rm dilution}=\sum_{k \geq 1} \alpha_k\, b^{k(k-1)/2}\, q^k, \qquad \text{where} \quad \alpha_k= \prod_{j=1}^{k-1} (\cdots).$$ This dilution time is shown in Fig. 7 of the main text.
General case of $U \neq 0$
--------------------------
The solution to this general case is more involved but it can be obtained using Bessel functions (for a solution of a similar recursion see [@PK-KironeJStatMech-2010]). In a first step, we transform the recursion of Eq. \[general recursion\] using the difference variable $K_k(s)={\tilde F}_k(s) - {\tilde F}_{k+1}(s)$, which leads to $$(s+W_T+(k+1)r+U)\, K_k(s) = U K_{k+1}(s) + W_T K_{k-1}(s).$$ \[difference eq\] Then, we introduce the change of variable $K_k(s)=y^k g_k(s)$ and we choose $y=\sqrt{W_T/U}$ in such a way that Eq. \[difference eq\] takes the simpler form: $$g_{k+1}(s)+g_{k-1}(s)= \frac{s+U+W_T+(k+1)r}{\sqrt{U W_T}}\, g_k(s).$$ The solution to this equation can be obtained by comparing with the well-known identity $$J_{\nu+1}(x)+J_{\nu-1}(x) = \frac{2\nu}{x}\, J_\nu(x),$$ for Bessel functions. Thus, the solution has the form $$g_k(s)=C\, J_{(s+U+W_T+(k+1)r)/r}(\bar{y}),$$ where $C$ is a constant and $\bar{y}=2 \sqrt{U W_T}/r$. The boundary condition given above for ${\tilde F}_0(s)$ leads to the following condition $$g_0(s)=\frac{s\, J_{(s+U+W_T+r)/r}(\bar{y})}{U\left[\, y\, J_{(s+U+W_T)/r}(\bar{y})-J_{(s+U+W_T+r)/r}(\bar{y})\right]},$$ which fixes the constant $C$. In the end, one obtains $$g_k(s)=\frac{s\, J_{(s+U+W_T+(k+1)r)/r}(\bar{y})}{U\left[\, y\, J_{(s+U+W_T)/r}(\bar{y})-J_{(s+U+W_T+r)/r}(\bar{y})\right]},$$ which satisfies in addition the required condition at $s=0$, namely that for all $k \ge 0$, $g_k(s=0)=0$. With this expression, one obtains the Laplace transform of the first passage distribution of the cap, ${\tilde F}_k(s)$, from $${\tilde F}_k(s)=1- \sum_{j=0}^{k-1} y^j g_j(s).$$ \[general Fk\] Although it is not immediately apparent, it can be checked that the particular case discussed above is indeed recovered by taking the limit $U \rightarrow 0$ of the general case. After using $\langle T(k) \rangle=-d {\tilde F}_k/ds\,|_{s=0}$ together with Eq. \[general Fk\], one obtains the general expression for the mean first passage time of the cap $\langle T(k) \rangle$ given in the main text, which reads $$\langle T(k) \rangle = \sum_{j=0}^{k-1} y^j\, \frac{J_{n+j+1}(\bar{y})}{U\left[\, y\, J_{n}(\bar{y})-J_{n+1}(\bar{y})\right]},$$ \[full Tk\] where $n=(U+W_T)/r$.
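Independently of the Bessel-function representation, $\langle T(k)\rangle$ for $U\neq 0$ can be obtained numerically by taking the first moment of the backward master equation, which yields the linear system $U T_{k+1}+W_T T_{k-1}+r\sum_{j=0}^{k-1}T_j-(U+W_T+kr)T_k=-1$ with $T_0=0$; truncating it at a large cap size and solving it provides a direct check of Eq. \[full Tk\]. A minimal Python sketch (rate values are placeholders) is:

```python
import numpy as np

def mean_fpt_general(kmax, U, W_T, r, K=2000):
    """Mean first-passage times <T(k)>, k = 1..kmax, for general U,
    from the truncated moment equations of the backward master equation
    (reflecting truncation at a large cap size K)."""
    A = np.zeros((K, K))              # unknowns T_1 ... T_K  (T_0 = 0)
    b = -np.ones(K)
    for i in range(K):                # row i corresponds to cap length k = i + 1
        k = i + 1
        A[i, i] = -(U + W_T + k * r)
        if i + 1 < K:
            A[i, i + 1] = U
        else:
            A[i, i] += U              # reflecting truncation: T_{K+1} ~ T_K
        if i - 1 >= 0:
            A[i, i - 1] = W_T
        for j in range(1, k):         # hydrolysis resets the cap to length j < k
            A[i, j - 1] += r
    return np.linalg.solve(A, b)[:kmax]

U, W_T, r = 2.0, 5.0, 0.2             # placeholder rates (1/s)
print(mean_fpt_general(5, U, W_T, r))
```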
\[catastrophe\] Catastrophes and rescues for arbitrary $N$
==========================================================
Catastrophes are associated with stochastic transitions between growing and shrinking dynamic phases. The microtubule is in the growing phase when it is found in one of the polymer configurations with an unhydrolyzed cap of any size, or when the number of already hydrolyzed monomers at the end is less than $N$. We define $R_{k,l}$ as the probability to be in the polymer configuration with $l$ T monomers at the end that are preceded by $k$ D monomers (irrespective of the state of the other subunits), $Q_{k,l}$ as the probability to be in the polymer configuration with $l$ D monomers at the end that are preceded by $k$ T monomers, and finally $S_{k,l}$ as the probability that the last $l$ monomers at the end are hydrolyzed except for the one subunit at position $k$ counting from the end of the polymer. Formally, these definitions can also be written as \[probabilities\] $$R_{k,l} \equiv {\rm Prob}(\ldots\underbrace{D\cdots D}_{k}\,\underbrace{T\cdots T}_{l}), \qquad Q_{k,l} \equiv {\rm Prob}(\ldots\underbrace{T\cdots T}_{k}\,\underbrace{D\cdots D}_{l}),$$ $$S_{k,l} \equiv {\rm Prob}(\ldots\underbrace{D\cdots D\,T\,D\cdots D}_{l}),$$ where in $S_{k,l}$ the single T monomer sits at position $k$ from the end. Note that the probability $P_{l}$ to have an unhydrolyzed cap of exactly $l$ monomers can be expressed as $P_{l}=R_{1,l}$, while the probability to be found in the growing phase is \[Pgr\] $$P_{gr}=\sum_{l=1}^{\infty} R_{1,l}+ \sum_{l=1}^{N-1} Q_{1,l}.$$
The simple mean-field theory assumes that the state of a monomer in the microtubule is independent of its neighbors, and it also estimates that the probability to find a T or a D monomer $k$ sites away from the polymer end is equal to $b^{k-1}q$ or $(1- b^{k-1}q)$, respectively, with the parameter $b$ given by $b=\cdots$. The probabilities defined in Eq. (\[probabilities\]) can be easily calculated, yielding \[RQS\] $$R_{k,l}= b^{l(l-1)/2}\, q^{l} \prod_{j=l}^{l+k-1} (1-b^{j}q), \qquad Q_{k,l}= b^{k(2l+k-1)/2}\, q^{k} \prod_{j=1}^{l} (1-b^{j-1}q),$$ $$S_{k,l}= b^{k-1}\, q \prod_{j=1}^{k-1} (1-b^{j-1}q) \prod_{j=k+1}^{l} (1-b^{j-1}q).$$ Then the probability to be found in the growing phase is \[Pgr1\] $$P_{gr}=q+\sum_{k=1}^{N-1} b^{k}q \prod_{j=1}^{k}(1-b^{j-1}q).$$
The frequency of catastrophes $f_{c}(N)$ in steady-state conditions can be found from the fact that the total flux out of the growing phase, $f_{c} P_{gr}$, must be equal to the flux to the shrinking phase, leading to the following equation, $$f_{c}(N)\, P_{gr}= W_{T}\, R_{N,1} + r \sum_{k=1}^{N} S_{k,N}.$$ Using Eqs. (\[RQS\]) and (\[Pgr1\]), it can be shown that \[fc\] $$f_{c}(N)=\cdots.$$ For $N=1$, we obtain a simple expression for the frequency of catastrophes, $$f_{c}(1)=W_{T}(1-b q) +r,$$ while for $N=2$ it gives $f_{c}(2)=\cdots$. A limiting behavior of the frequency of catastrophes for general $N$ can be analyzed. For low concentrations of free GTP monomers in the solution, corresponding to $U \rightarrow 0$, we have $q \rightarrow 0$ and $b \rightarrow 1+r/W_{T}$, producing $$f_{c}(N) \simeq r+ \frac{r}{(1+r/W_T)^{N}-1}.$$ For large $N$ and small hydrolysis rates ($r/W_{T} \ll 1$) the expression for the frequency of catastrophes is even simpler, $$f_{c}(N) \simeq r+ \frac{r}{e^{N r/W_T}-1}.$$ Another limit of interest corresponds to large concentrations ($U \gg 1$), where $q \rightarrow 1$ and $b \rightarrow 1$, leading to $f_{c}(N) \rightarrow 0$ for all values of $N \ge 2$, while for $N=1$ we have $f_{c}(1) \rightarrow r$.
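The mean-field catastrophe frequency can also be evaluated directly from this flux balance, without simplifying the ratio by hand. The short Python sketch below does so, treating $b$ and $q$ as given inputs (the numerical values here are placeholders), and reproduces the $N=1$ expression above.

```python
import numpy as np

def catastrophe_frequency(N, b, q, W_T, r):
    """Mean-field f_c(N): flux out of the growing phase divided by P_gr."""
    # R_{N,1}: one terminal T unit preceded by N D units
    R_N1 = q * np.prod([1.0 - b**j * q for j in range(1, N + 1)])
    # S_{k,N}: last N units hydrolyzed except the one at position k from the end
    def S(k):
        out = b**(k - 1) * q
        out *= np.prod([1.0 - b**(j - 1) * q for j in range(1, k)])
        out *= np.prod([1.0 - b**(j - 1) * q for j in range(k + 1, N + 1)])
        return out
    # probability of the growing phase, Eq. [Pgr1]
    P_gr = q + sum(b**k * q * np.prod([1.0 - b**(j - 1) * q for j in range(1, k + 1)])
                   for k in range(1, N))
    flux_out = W_T * R_N1 + r * sum(S(k) for k in range(1, N + 1))
    return flux_out / P_gr

b, q, W_T, r = 1.02, 0.3, 5.0, 0.2       # placeholder parameter values
print(catastrophe_frequency(1, b, q, W_T, r), W_T * (1 - b * q) + r)   # N = 1 check
print(catastrophe_frequency(2, b, q, W_T, r))
```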
This method of analyzing catastrophes can also be extended to calculating the frequency of rescue events $f_r(N)$. The probability to find the microtubule in the shrinking phase is equal to $$P_{sh}=1-P_{gr}= 1-q-\sum_{k=1}^{N-1} b^{k}q \prod_{j=1}^{k}(1-b^{j-1}q).$$ The total flux out of this state is given by $$f_{r}(N)\, P_{sh}= U P_{sh} +W_{D}\, Q_{1,N},$$ which leads to the following equation $$f_{r}(N)=U+\frac{W_{D}\, Q_{1,N}}{P_{sh}}.$$ This expression can be further simplified to obtain the final result, $$f_{r}(N)= U+ W_{D}\, b^{N} q.$$ \[rescue freq\] For all values of $N$, in the limit of $U \rightarrow 0$ it yields $f_{r} \simeq U$, while for large $U$ we have $f_{r} \simeq U+W_{D}$.
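The simplification leading to Eq. \[rescue freq\] rests on the identity $P_{sh}=\prod_{j=1}^{N}(1-b^{j-1}q)$, which is straightforward to verify numerically together with $f_r(N)$ itself; a minimal Python sketch (placeholder parameter values) is:

```python
import numpy as np

def P_sh(N, b, q):
    """Probability of the shrinking phase, P_sh = 1 - P_gr."""
    P_gr = q + sum(b**k * q * np.prod([1.0 - b**(j - 1) * q for j in range(1, k + 1)])
                   for k in range(1, N))
    return 1.0 - P_gr

def rescue_frequency(N, b, q, U, W_D):
    Q_1N = b**N * q * np.prod([1.0 - b**(j - 1) * q for j in range(1, N + 1)])
    return U + W_D * Q_1N / P_sh(N, b, q)

b, q, U, W_D = 1.02, 0.3, 2.0, 40.0      # placeholder parameter values
for N in (1, 2, 3, 5):
    product_form = np.prod([1.0 - b**(j - 1) * q for j in range(1, N + 1)])
    print(N, P_sh(N, b, q), product_form,
          rescue_frequency(N, b, q, U, W_D), U + W_D * b**N * q)
```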
In addition, the average time before the catastrophe or before the rescue can be easily obtained by inverting the corresponding expressions for frequencies, namely, $T_{c}(N) = 1/f_{c}(N)$ and $T_{r}(N) = 1/f_{r}(N)$.
Numerical test of the predictions for rescue frequency
-------------------------------------------------------
We show here a comparison between the theoretical mean-field prediction for the rescue frequency given by Eq. \[rescue freq\] and results from stochastic simulations in figure \[fig:rescue-time\]. In the conditions of this figure, the filaments are sufficiently long and thus they do not collapse before rescue events occur. The theoretical mean-field predictions agree well with the simulations at concentrations of free monomers larger than the critical concentration. Deviations are observed at low concentrations near the critical concentration, in a way which has similarities with the deviations observed in Fig. 2.
![Rescue time (N=1) as function of concentration. The solid line is obtained from the mean-field theory. The data points (filled squares) are obtained from the simulations. \[fig:rescue-time\]](fig-SI)
[1]{}
Antal T, Krapivsky PL, Redner S, Mailman M, Chakraborty B (2007) Dynamics of an idealized model of microtubule growth and catastrophe. *Phys. Rev. E.* 76:041907.
Abramowitz M, Stegun IA (1972) *Handbook of Mathematical Functions* (Dover, New York).
Krapivsky PL, Mallick K (2010) Fluctuations in polymer translocation. *Journal of Statistical Mechanics: Theory and Experiment* 2010:P07007.
|
Frequency generation in optical systems is the main underlying process in a series of key applications, including all-optical signal processing, optical amplification, and wavelength multiplexing. One of the most facile approaches to achieve this functionality is via optical-wave interaction in nonlinear media. In the case of media with cubic nonlinearity, the simplest such interaction is four-wave mixing (FWM), a nonlinear process in which two photons combine and generate a pair of photons with different frequencies. Due to its simplicity and effectiveness, FWM has been at the center of intense research, from the early days of nonlinear fiber optics [@sba74apl; @hjk78jap] to the recent studies of FWM in ultra-compact silicon (Si) devices [@crd03oe; @drc04oe; @fys05oe; @edo05oe; @fts06n; @lzf06oe; @pco06ol; @yft06ptl; @krs06oe; @fts07oe; @lov10np; @zpm10np; @klo11oe; @lkr12np; @dog12oe; @lpa07oe; @opd09aop]. Silicon photonic nanowire waveguides (Si-PNWs) are particularly suited to achieve highly efficient FWM, as Si has extremely large cubic nonlinearity over a broad frequency domain. Equally important in this context, due to the deep-subwavelength size of the cross-section of Si-PNWs, the parameters quantifying their optical properties depend strongly on wavelength and waveguide size [@lpa07oe; @opd09aop]. As a result, one can easily control the strength and phase-matching of the FWM. These ideas have inspired intense research in chip-scale devices based on FWM in Si waveguides, with optical parametric amplifiers [@fts06n; @lzf06oe; @lov10np], frequency converters [@yft06ptl; @krs06oe; @fts07oe; @zpm10np; @klo11oe; @lkr12np; @dog12oe], sources of quantum-correlated photon pairs [@la06ol], and optical signal regenerators [@sft08np] being demonstrated.
One of the main properties of Si-PNWs, which makes them particularly suitable to achieve efficient FWM, is that by properly designing the waveguide geometry one can easily engineer the dispersion to be either normal or anomalous within specific spectral domains. More specifically, Si-PNWs with relatively large cross-section have normal dispersion, which precludes phase matching of the FWM. This drawback can be circumvented by scaling down the waveguide size to a few hundred nanometers, as then the dispersion becomes anomalous. The price one pays for this small cross-section is that the device operates at reduced optical power. An alternative, promising approach to achieve phase-matched FWM in the normal dispersion regime is to employ quasi-phase-matching (QPM) techniques, i.e. to cancel the linear and nonlinear phase mismatch of the interacting waves by periodically varying the waveguide cross-section. This technique has been recently used for *cw* optical beams [@dog12oe], yet in many cases of practical importance it is desirable to achieve FWM in the pulsed regime. In addition, at large power *cw* beams are strongly depleted by optical losses, which results in the detuning of the FWM.
In this Letter we show that efficient QPM FWM of optical pulses can be achieved in Si-PNWs whose width varies periodically along the waveguide. In this work we focus on the QPM FWM of pulses that propagate in the normal dispersion regime, as in this case one cannot apply alternative phase shifting methods based on nonlinearly induced phase-shifts. Our analysis of the FWM in long-period Bragg Si-PNWs is based on a theoretical model introduced in [@cpo06jqe], which fully describes optical pulse propagation and the influence of free-carriers (FCs) on the optical field dynamics (see also [@dog12oe; @plo09ol; @ldj13ol]):
\[tm\] $$\begin{aligned}
\label{uzt}i\frac{\displaystyle \partial u}{\displaystyle \partial z}&+\sum\limits_{n= 1}^{n=4}
\frac{\displaystyle i^{n}\beta_{n}(z)}{\displaystyle n!}\frac{\displaystyle \partial^{n}
u}{\displaystyle \partial t^{n}}= -i\left[\frac{\displaystyle c\kappa(z)}{\displaystyle 2nv_{g}(z)}\alpha_{\mathrm{fc}}(z)+\alpha\right]u \nonumber \\
-&\frac{\displaystyle \omega \kappa(z)}{\displaystyle n v_{g}(z)}\delta n_{\mathrm{fc}}(z)u
-\gamma(z)\left[1+i\tau(z)\frac{\partial}{\partial t}\right]\vert u\vert^{2} u, \\
\label{dens}\frac{\displaystyle \partial N}{\displaystyle \partial t} &= -\frac{\displaystyle
N}{\displaystyle t_{c}} + \frac{\displaystyle 3 \Gamma^{\prime\prime}(z)}{\displaystyle
4\epsilon_{0}\hbar A^{2}(z)v_{g}^{2}(z)} |u|^{4},\end{aligned}$$
where $u(z,t)$ and $N(z,t)$ are the pulse envelope and FC density, respectively, $t$ is the time, $z$ is the distance along the waveguide, $\beta_{n}=d^{n}\beta/d\omega^{n}$ is the $n$th order dispersion coefficient, $\kappa(z)$ is the overlap between the optical mode and the (Si) active area of the waveguide, $v_{g}(z)$ is the group-velocity, $\delta n_{\mathrm{fc}}(z)$ \[$\alpha_{\mathrm{fc}}(z)$\] are $N$-dependent FC-induced index change (losses) [@sb87jqe], and $\alpha$ is the waveguide loss ($\alpha=0$ unless otherwise stated). The nonlinear coefficient, $\gamma$, is given by $\gamma(z)=3\omega \Gamma(z)/4\epsilon_{0}A(z)v_{g}^{2}(z)$, and the shock time scale is $\tau(z)=\partial\ln \gamma(z)/\partial \omega$, where $A(z)$ and $\Gamma(z)$ are the cross-sectional area and the effective third-order susceptibility, respectively. The system is integrated numerically by using a split-step Fourier method [@opd09aop]. Also, in this study we set $t_{c}=$ .
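For orientation, the split-step Fourier integration mentioned above can be sketched in a few lines of Python. The fragment below is a deliberately simplified version that keeps only the dispersion terms and the Kerr nonlinearity of the propagation equation, omitting the loss, free-carrier and shock terms of the full model; all numerical values are placeholders rather than the parameters used in this work.

```python
import numpy as np
from math import factorial

def split_step(u0, dt, dz, nsteps, betas, gamma):
    """Symmetrized split-step Fourier propagation of the envelope u(t).

    Only dispersion (betas = {order n: beta_n}) and the Kerr term are kept;
    linear loss, free-carrier effects and the shock term are omitted here."""
    u = u0.astype(complex)
    omega = 2.0 * np.pi * np.fft.fftfreq(u0.size, d=dt)
    # dispersion phase per unit length in the frequency domain (d/dt -> i*omega)
    phase = sum(((-1) ** n) * b * omega ** n / factorial(n) for n, b in betas.items())
    half = np.exp(0.5j * dz * phase)                 # half dispersion step
    for _ in range(nsteps):
        u = np.fft.ifft(half * np.fft.fft(u))
        u = u * np.exp(1j * gamma * np.abs(u) ** 2 * dz)   # full Kerr step
        u = np.fft.ifft(half * np.fft.fft(u))
    return u

# illustrative placeholder numbers: 200 fs Gaussian pulse, normal GVD
T0 = 200e-15
t = np.linspace(-20e-12, 20e-12, 2 ** 12)
u0 = np.sqrt(0.1) * np.exp(-t ** 2 / (2 * T0 ** 2))
out = split_step(u0, dt=t[1] - t[0], dz=1e-4, nsteps=50,
                 betas={2: 1.0e-24, 4: -1.0e-52}, gamma=150.0)
```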
![(a) Schematics showing a periodically width-modulated Si-PNW and the configuration of a pulsed seeded degenerate FWM set-up. Dispersion maps of dispersion coefficients: (b) $\beta_1$, (c) $\beta_2$, and (d) $\beta_4$.[]{data-label="geom_prop"}](Figure_1.eps){width="8cm"}
The optical waveguide considered here consists of a Si core with constant height, $h=$ , and periodically modulated width, $w(z)$, buried in $\mathrm{SiO_{2}}$. We assume a sinusoidal dependence, $w(z)=w_0+\Delta w \sin(2\pi z/\Lambda)$, where $w_{0}$, $\Delta w$, and $\Lambda$ are the average width, amplitude of the width modulation, and its period, respectively, but more intricate profiles $w(z)$ can be readily investigated by our method. As illustrated in Fig. \[geom\_prop\](a), we consider the case of degenerate FWM, in which two photons at the pump frequency, $\omega_{p}$, interact with the nonlinear medium and generate a pair of photons at signal ($\omega_{s}$) and idler ($\omega_{i}$) frequencies. This FWM process is most effective when $$\left| 2(\beta_p-\gamma^{\prime} P_p)-\beta_s-\beta_i\right|=K_{g}, \label{Dbeta}$$ where $K_{g}=2\pi/\Lambda$ is the Bragg wave vector, $P_{p}$ is the pump peak power, and $\beta_{p,s,i}(\omega)$ are the mode propagation constants evaluated at the frequencies of the co-propagating pulses. Note that in Eq. all width-dependent quantities are evaluated at $w=w_{0}$. If $\Delta\omega\equiv\omega_{s}-\omega_{p}=\omega_{p}-\omega_{i}\ll\omega_{p}$, Eq. can be cast to a form that makes it more suitable to find the wavelengths of the quasi-phase-matched pulses by expanding in Taylor series the functions $\beta_{p,s,i}(\omega)$, around $\omega_{p}$. Keeping the terms up to the fourth-order, Eq. becomes: $$\label{DbetaTaylor}
\left\vert2\gamma^{\prime}
P_p+\beta_{2,p}\Delta\omega^2+\beta_{4,p}\Delta\omega^4/12\right\vert=K_{g}.$$
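In practice, this quasi-phase-matching condition is a polynomial equation in $\Delta\omega$, so the phase-matched detunings for a given pump can be read off from its positive real roots. A minimal Python sketch (the dispersion values below are placeholders, not those of the waveguides studied here) is:

```python
import numpy as np

def qpm_detunings(beta2, beta4, gamma_p, P_p, Lambda):
    """Positive real detunings dw (rad/s) satisfying
    |2*gamma'*P_p + beta2*dw**2 + (beta4/12)*dw**4| = 2*pi/Lambda."""
    Kg = 2.0 * np.pi / Lambda
    out = []
    for rhs in (+Kg, -Kg):                            # the two branches of |.|
        coeffs = [beta4 / 12.0, 0.0, beta2, 0.0, 2.0 * gamma_p * P_p - rhs]
        for x in np.roots(coeffs):
            if abs(x.imag) < 1e-9 * (abs(x) + 1.0) and x.real > 0.0:
                out.append(x.real)
    return sorted(out)

# placeholder numbers: normal GVD (beta2 > 0), beta4 < 0, 1 mm grating period
print(qpm_detunings(beta2=2.0e-24, beta4=-5.0e-52, gamma_p=300.0, P_p=0.5, Lambda=1.0e-3))
```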
![(a), (b) Wavelength diagrams defined by the phase-matching conditions and , respectively. Solid (dashed) lines correspond to the signal (idler) and green (blue) lines to $\Lambda=$ ($\Lambda=$ ). Dash-dot lines correspond to $\lambda_{p}=\lambda_{s}=\lambda_{i}$ and vertical dotted lines mark $\beta_{2}(\lambda)=0$. $z$-dependence of $\Delta n_{\mathrm{eff}}$, (c), and $\beta_2$, and $\gamma^{\prime}$, (d), shown for one period, $\Lambda$. In (c) and (d) the lines correspond to $\Delta w=$ (-$\cdot$-), $\Delta w=$ (- - -), and $\Delta w=$ (—). In all panels $w_0=$ .[]{data-label="Pha_match"}](Figure_2.eps){width="8cm"}
The dispersive properties of the Si-PNW, summarized in Fig. \[geom\_prop\], define the spectral domain, in which efficient FWM can be achieved. The width dependence of the dispersion coefficients and other relevant waveguide parameters, i.e. $\gamma$, $\kappa$, and $\tau$, was obtained by using a method described in detail in [@dog12oe; @ldj13ol]. Importantly, with a proper choice of the operating wavelength or waveguide width, the photonic wire can have both normal and anomalous GVD. The wavelengths, for which the FWM is quasi-phase-matched and determined from Eqs. and , are plotted in Figs. \[Pha\_match\](a) and \[Pha\_match\](b), respectively. These results show that, as expected, for relatively small $\Delta\omega$, Eqs. and lead to similar predictions, whereas they disagree for large $\Delta\omega$. Interestingly enough, Fig. \[Pha\_match\](a) shows that for certain $\lambda_{p}$’s FWM can be achieved at more than one pair of wavelengths, $(\lambda_{s},\lambda_{i})$, meaning that optical bistability could readily be observed in this system. The corresponding $z$-dependence over one period of the variation of the effective modal refractive index, $\Delta n_{\mathrm{eff}}$, $\beta_{2}$, and $\gamma^{\prime}$, is presented in Figs. \[Pha\_match\](c) and \[Pha\_match\](d). The wavelength conversion efficiency and parametric amplification gain are determined from the pulse spectrum. Thus, we launch into the waveguide pulses whose temporal profile, $u(0,t)=\sqrt{P_p}[\exp(-t^2/2T_0^2)+\sqrt{\xi}\exp(-t^2/2T_0^2-i\Delta\omega t)]$, is the superposition of a pump pulse and a weak signal, whose frequency is shifted by $\Delta\omega$. The ratio $\xi=P_{s}/P_{p}$ is set to and in the cases of wavelength conversion and parametric amplification, respectively, so that in the latter case the signal is too weak to affect the pump. We also assume that the signal and pump have the same temporal width, $T_{0}$, and, unless otherwise stated, the same group-velocity, $v_{g}$.
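The input field defined above, a pump plus a weak frequency-shifted signal, and its spectrum can be generated with a few lines of Python; this is also how the pump, signal and idler components are separated in the spectral analysis below. The values used in this sketch are placeholders.

```python
import numpy as np

T0, P_p, xi, dw = 200e-15, 0.5, 1e-4, 2 * np.pi * 5e12    # placeholder values
t = np.linspace(-30 * T0, 30 * T0, 2 ** 13)
# pump plus frequency-shifted weak signal, as in the text
u0 = np.sqrt(P_p) * (np.exp(-t**2 / (2 * T0**2))
                     + np.sqrt(xi) * np.exp(-t**2 / (2 * T0**2) - 1j * dw * t))
freq = np.fft.fftshift(np.fft.fftfreq(t.size, d=t[1] - t[0]))
spectrum = np.abs(np.fft.fftshift(np.fft.fft(u0)))**2
# the exp(-i*dw*t) component appears at -dw/(2*pi) with numpy's FFT sign convention
ratio = (spectrum[np.argmin(np.abs(freq + dw / (2 * np.pi)))] /
         spectrum[np.argmin(np.abs(freq))])
print("signal/pump spectral peak ratio (should be close to xi):", ratio)
```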
A generic example of pulse evolution in a uniform and Bragg Si-PNW, where the latter is designed such that condition holds, is presented in Fig. \[const\_modul\]. We considered a pulse with $T_{0}=$ , $P_p=$ , $P_s=$ , $\lambda_p=$ , and $\lambda_s=$ , so that one expects an idler pulse to form at $\lambda_i=$ . The waveguide parameters are $w_0=$ , $\Delta w=$ , $\Lambda=$ , $\beta_{2,p}=$ , $\beta_{4,p}=$ , and $\gamma^{\prime}_{p}=$ . The evolution of the temporal pulse profile, shown in Figs. \[const\_modul\](a) and \[const\_modul\](b), suggests that the pulse propagates with a group-velocity, $v_{g}$, slightly larger than $v_{g}(\omega_{p})$. Indeed, the pulse propagates in the normal dispersion regime and its average frequency is smaller than $\omega_{p}$, which means that $v_{g}>v_{g}(\omega_{p})$. In the case of the Bragg waveguide, additional temporal oscillations of the pulse are observed. This effect is traced to the periodic variation $v_{g}(z)$, which is due to the implicit dependence of $v_{g}$ on a periodically varying width $w(z)$.
![Left (right) panels show the evolution of an optical pulse in a uniform (quasi-phase-matched Bragg) waveguide (see the text for the values of the pulse and waveguide parameters). Top, second, and third row panels show the $z$-dependence of the temporal pulse profile, its spectrum, and FC density, respectively. (g) Input (green) and output pulse spectra corresponding to the uniform (blue) and Bragg (red) waveguides. In inset, the signal and pump regions of the spectra. (h) Variation $N(z)$, for uniform (—) and Bragg ($\cdots$) waveguides. In inset, dependence $\Delta n_{\mathrm{fc}}(z)$, for $\Delta w=$ (brown), $\Delta w=$ (blue), and $\Delta
w=$ (red).[]{data-label="const_modul"}](Figure_3.eps){width="8cm"}
Due to its specific nature, it is more suitable to study the FWM in the frequency domain. In particular, the differences between the evolution of the pulse spectra in uniform and Bragg waveguides, illustrated by Figs. \[const\_modul\](c) and \[const\_modul\](d), respectively, underline the main physics of pulsed FWM in Si-PNWs. Specifically, it can be seen that, in the Bragg waveguide, the idler energy builds up at a much higher rate as compared to the case of the uniform Si-PNW, an indication of a much more efficient FWM interaction \[see also Fig. \[const\_modul\](g)\]. In both cases, however, we observe a gradual decrease of the pulse peak power, induced by the linear and nonlinear losses associated with the generated FCs. Note that the dispersion length $L_{d}=T_{0}^{2}/\vert\beta_{2}\vert\approx$ so that the dispersion-induced pulse broadening is negligible.
For the Bragg waveguide one can also observe a series of oscillations of the FC density with respect to $z$, which are due to the periodic variation with $z$ of $\gamma^{\prime\prime}$. Specifically, the oscillatory $z$-variation of $N(z)$ results in a quasi-periodic variation of the effective modal index, $\Delta n_{\mathrm{fc}}(z)$, which adds to the periodic variation of $n_{\mathrm{eff}}$ due to the waveguide-width modulation. Note, however, that for the power values used in this analysis the former effect is an order of magnitude weaker than the latter one \[compare Fig. \[Pha\_match\](c) with the inset in Fig. \[const\_modul\](h)\].
![(a), (b) CE $\eta(z)$, calculated for different $\alpha$ and $\Delta w$, respectively. $\alpha=0$ in (b). (c) CE calculated for different $T_{0}$, for Bragg (—) and uniform ($\cdots$) waveguides. Pulse and waveguide parameters in (a)–(c) are the same as in Fig. \[const\_modul\]. (d) FWM gain vs. $z$ (the values of pulse and waveguide parameters are given in the text).[]{data-label="CE_Gain"}](Figure_4.eps){width="8cm"}
A comparative study of the conversion efficiency (CE), $\eta(z)=10\log [E_{i}(z)/E_{s}(0)]$, and FWM gain, $G(z)=E_{s}(z)/E_{s}(0)$, in a Bragg vs. a uniform Si-PNWs is summarized in Fig. \[CE\_Gain\]. The energies of the idler, $E_{i}$, and signal, $E_{s}$, were calculated by integrating the power spectrum over a frequency domain containing the corresponding pulse. These results clearly show that the Bragg grating induces a dramatic increase of the CE. Although the CE decreases with the waveguide loss, the CE enhancement between the uniform and Bragg waveguides only slightly varies with $\alpha$. Importantly, the power decay leads to the detuning of the FWM and, after a certain distance, to the degradation of its efficiency. As expected, the CE enhancement increases with $\Delta w$, reaching for $\Delta
w=$ . The CE also depends on $T_{0}$, as per Fig. \[CE\_Gain\](c). Indeed, one expects that the CE increases with $T_{0}$ since the Bragg waveguide is designed to phase-match the carrier frequencies of the pulses, so that spectrally narrower pulses are better phase-matched.
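Concretely, the energies $E_i$ and $E_s$ entering the CE and the gain are obtained by integrating the power spectrum over a frequency window containing the corresponding pulse; a minimal Python helper illustrating this bookkeeping (the window edges are user-chosen inputs) is:

```python
import numpy as np

def band_energy(u, dt, f_lo, f_hi):
    """Energy carried by the spectral components of u(t) between f_lo and f_hi (Hz)."""
    freq = np.fft.fftfreq(u.size, d=dt)
    spec = np.abs(np.fft.fft(u))**2
    mask = (freq >= f_lo) & (freq <= f_hi)
    # Parseval: sum|U_k|^2 * dt / N equals the time-integrated power in that band
    return spec[mask].sum() * dt / u.size

def conversion_efficiency_dB(u_out, u_in, dt, idler_band, signal_band):
    """CE = 10*log10[E_i(z)/E_s(0)] from the output and input envelopes."""
    E_i = band_energy(u_out, dt, *idler_band)
    E_s0 = band_energy(u_in, dt, *signal_band)
    return 10.0 * np.log10(E_i / E_s0)
```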
The dependence of the FWM gain on the amplitude of the width modulation is shown in Fig. \[CE\_Gain\](d). To avoid large losses due to two-photon absorption, the device is operated at mid-IR frequencies. Thus, the pulse has $T_{0}=$ , $P_p=$ , $P_s=$ , $\lambda_p=$ , and $\lambda_s=$ , meaning that the idler is formed at $\lambda_i=$ . The waveguide parameters were $w_0=$ , $\Lambda=$ , $\beta_{2,p}=$ , $\beta_{4,p}=$ , and $\gamma^{\prime}_{p}=$ . The increased FWM efficiency in Bragg Si-PNWs is clearly demonstrated by these numerical experiments namely, a transition from negative to positive net gain is observed when $\Delta w$ increases from zero to . When $\Delta w$ further increases beyond a certain value, $\Delta w\approx$ , the variation over one period of $\beta$ becomes large enough to greatly degrade the phase matching of the interacting pulses, resulting in a steep decrease of the FWM gain.
In our analysis so far we have designed the waveguide so that the pump and signal have the same group-velocity, meaning that optimum FWM is then achieved. In Fig. \[walkoff\], which also considers mid-IR pulses, we present the CE determined in two cases when this condition does not hold, i.e. when the walk-off $\delta=\vert 1/v_{g,p}-1/v_{g,s}\vert\neq0$, and for two different values of the pump-signal time delay, $T_{d}$. The main conclusion that can be drawn from these results is that when $\delta\neq0$, FWM occurs only over a certain distance, which is related to the time necessary for the pump and signal pulses to pass through each other. In Fig. \[walkoff\](a) this propagation section corresponds to the region where one can observe a series of intensity fringes, which are due to the frequency beating between the two pulses. Also, the CE increases rapidly as $T_{d}$ decreases because for large $T_{d}$ the pump decays more before it begins to interact with the signal, i.e. the FWM becomes more detuned. This suggests that the CE should increase with $\delta$ as well, in agreement with the results plotted in Fig. \[walkoff\](b) for $T_{d}=4T_{0}$.
![(a) Pulse evolution for $\lambda_s=$ and $\lambda_i=$ . (b) CE dependence on $z$. Green and blue lines correspond to the pulse in (a) and $\lambda_s=$ and $\lambda_i=$ , respectively. The other parameters in (a) and (b) are $w_0=$ , $\Lambda=$ , $\lambda_p=$ , $T_{0}=$ , $P_p=$ , and $P_s=$ .[]{data-label="walkoff"}](Figure_5.eps){width="8cm"}
In conclusion, we showed that efficient pulsed FWM can be achieved in long-period Bragg silicon waveguides, which can be used for pulse amplification and to enhance the wavelength-conversion efficiency, as compared to uniform waveguides. These new ideas can be applied to a multitude of photonic devices, including photonic crystal fibers and sub-micrometer optical waveguides whose modal frequency dispersion is primarily determined by the waveguide dispersion. Equally important, by using more complex grating profiles, e.g. multi-period [@dog12oe] or chirped gratings [@ncs08prl; @yf10pra], one can design photonic devices with enhanced functionality, including ultra-broadband sources of entangled photons and highly efficient autoresonant optical parametric amplifiers.
The work of S. L. was supported through a UCL Impact Award. R. R. G. acknowledges support from the Columbia Optics and Quantum Electronics IGERT.
[99]{}
R. H. Stolen, J. E. Bjorkholm, and A. Ashkin, **24**, 308 (1974).
K. O. Hill, D. C. Johnson, B. S. Kawasaki, and R. I. MacDonald, **49**, 5098 (1978).
R. Claps, V. Raghunathan, D. Dimitropoulos, and B. Jalali, **11**, 2862 (2003).
D. Dimitropoulos, V. Raghunathan, R. Claps, and B. Jalali, **12**, 149 (2004).
H. Fukuda, K. Yamada, T. Shoji, M. Takahashi, T. Tsuchizawa, T. Watanabe, J. Takahashi, and S. Itabashi, **13**, 4629 (2005).
R. Espinola, J. Dadap, R. M. Osgood, Jr., S. McNab, and Y. Vlasov, **13**, 4341 (2005).
M. A. Foster, A. C. Turner, J. E. Sharping, B. S. Schmidt, M. Lipson, and A. L. Gaeta, **441**, 960 (2006).
Q. Lin, J. Zhang, P. M. Fauchet, and G. P. Agrawal, **14**, 4786 (2006).
N. C. Panoiu, X. Chen, and R. M. Osgood, **31**, 3609 (2006).
K. Yamada, H. Fukuda, T. Tsuchizawa, T. Watanabe, T. Shoji, and S. Itabashi, **18**, 1046 (2006).
Y.-H. Kuo, H. Rong, V. Sih, S. Xu, M. Paniccia, and O. Cohen, **14**, 11721 (2006).
M. A. Foster, A. C. Turner, R. Salem, M. Lipson, and A. L. Gaeta, **15**, 12949 (2007).
X. Liu, R. M. Osgood, Y. A. Vlasov, and W. M. J. Green, **4**, 557 (2010).
S. Zlatanovic, J. S. Park, S. Moro, J. M. C. Boggio, I. B. Divliansky, N. Alic, S. Mookherjea, and S. Radic, **4**, 561 (2010).
B. Kuyken, X. Liu, R. M. Osgood, R. Baets, G. Roelkens, and W. Green, Opt. Exp. **19**, 20172 (2011).
X. Liu, B. Kuyken, G. Roelkens, R. Baets, R. M. Osgood, and W. M. J. Green, **6**, 667 (2012).
J. B. Driscoll, N. Ophir, R. R. Grote, J. I. Dadap, N. C. Panoiu, K. Bergman, and R. M. Osgood, **20**, 9227 (2012).
Q. Lin, O. J. Painter, and G. P. Agrawal, **15**, 16604 (2007).
R. M. Osgood, N. C. Panoiu, J. I. Dadap, X. Liu, X. Chen, I-W. Hsieh, E. Dulkeith, W. M. J. Green, and Y. A. Vlassov, **1**, 162 (2009).
Q. Lin, and G. P. Agrawal, **31**, 3140 (2006).
R. Salem, M. A. Foster, A. C. Turner, D. F. Geraghty, M. Lipson, and A. L. Gaeta, **2**, 35 (2008).
X. Chen, N. C. Panoiu, and R. M. Osgood, **42**, 160 (2006).
N. C. Panoiu, X. Liu, and R. M. Osgood, **34**, 947 (2009).
S. Lavdas, J. B. Driscoll, H. Jiang, R. R. Grote, R. M. Osgood, and N. C. Panoiu, **38**, 3953 (2013).
R. A. Soref and B. R. Bennett, **23**, 123 (1987).
M. B. Nasr, S. Carrasco, B. E. A. Saleh, A. V. Sergienko, M. C. Teich, J. P. Torres, L. Torner, D. S. Hum, and M. M. Fejer, **100**, 183601 (2008).
O. Yaakobi and L. Friedland, **82**, 023820 (2010).
|
---
abstract: |
A $t$-nearly platonic graph is a finite, connected, regular, simple and planar graph in which all but exactly $t$ of its faces have the same length. It has been proved that there is no 2-connected $1$-nearly platonic graph. In this paper, we prove that there is no $1$-nearly platonic graph.\
**Keywords:** planar graph, regular graph, nearly platonic graph.\
**2010 Mathematics Subject Classification:** 05C07, 05C10.
author:
- |
[ Mahdi Reza Khorsandi and Seyed Reza Musawi]{}\
[ Faculty of Mathematical Sciences, Shahrood University of Technology,]{}\
[P.O. Box 36199-95161, Shahrood, Iran.]{}\
[khorsandi@shahroodut.ac.ir and r\_ musawi@shahroodut.ac.ir ]{}
title: 'Absence of $1$-Nearly Platonic Graphs'
---
Introduction
============
Throughout this paper, all graphs we consider are finite, simple, connected, planar, undirected and non-trivial. Suppose that $G=(V,E)$ is a graph with the vertex set $V$ and the edge set $E$. We recall some of the essential concepts; for more details and other terminology see [@Diestel; @Bondy-Murty; @West-2001].
A graph is said to be planar, or embeddable in the plane, if it can be drawn in the plane such that each common point of two edges is a vertex. This drawing of a planar graph $G$ is called a planar embedding of $G$ and can itself be regarded as a graph isomorphic to $G$. Sometimes, we refer to a planar embedding of a graph as a *plane graph*. By this definition, it is clear that we need some notions from the topology of the plane. After deleting the points of a plane graph from the plane, we are left with some maximal open sets (regions) of points in the plane, which are called the faces of the plane graph. There exists exactly one unbounded region, which we call the *outerface* of the plane graph; the other faces (the bounded regions, if any) are called *internal faces*. The frontier of each region is called the boundary of the corresponding face. The boundary of a face is the set of points corresponding to some vertices and some edges. In the graph-theoretic language, the boundary of a face is a closed walk. A face is said to be incident with the vertices and edges in its boundary, and two faces are adjacent if their boundaries have an edge in common. We denote the boundary of a face $F$ by $\partial(F)$. An *outerplanar graph* is a planar graph whose outerface is incident with all vertices.
\[rem\] [@West-2001 Proposition 6.1.20] Every simple outerplanar graph with at least four vertices has at least two nonadjacent vertices of degree at most 2.
[@Bondy-Murty Proposition 10.5] Let $G$ be a planar graph, and let $f$ be a face in some planar embedding of $G$. Then $G$ admits a planar embedding whose outerface has the same boundary as $f$ .
A graph $G$ is called $k$-regular when the degrees of all vertices are equal to $k$. A regular graph is one that is $k$-regular for some $k$. Let $G=(V,E)$ be a graph with the vertex set $V$ and the edge set $E$. We denote the number of vertices of $G$ by $n=|V|$, the number of edges of $G$ by $m=|E|$ and the number of faces of $G$ by $f$. *Euler’s formula* states that if $G$ is a connected planar graph, then: $$m-n=f-2$$
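As a quick sanity check, the vertex, edge and face counts of the five platonic solids satisfy Euler’s formula; in Python:

```python
# (name, n, m, f) for the five platonic solids
solids = [("tetrahedron", 4, 6, 4), ("cube", 8, 12, 6), ("octahedron", 6, 12, 8),
          ("dodecahedron", 20, 30, 12), ("icosahedron", 12, 30, 20)]
for name, n, m, f in solids:
    assert m - n == f - 2, name       # Euler's formula for connected planar graphs
print("Euler's formula holds for all five platonic solids")
```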
Let $G$ be a graph and $S\subseteq V(G)$. Then $\langle S\rangle$, the induced subgraph by $S$, denotes the graph on $S$ whose edges are precisely the edges of $G$ with both ends in $S$. Also, $G-S$ is obtained from $G$ by deleting all the vertices in $S$ and their incident edges. If $S=\{x\}$ is a singleton, then we write $G-x$ rather than $G-\{x\}$.
The *length* of a face in a plane graph $G$ is the total length of the closed walk(s) in $G$ bounding the face. A cut-edge belongs to the boundary of only one face, and it contributes twice to its length (see [@West-2001 Example 6.1.12]). $G$ is called $k$-connected (for $k\in \mathbb{N}$) if $|V(G)|>k$ and $G-X$ is connected for every set $X\subseteq V(G)$ with $|X|<k$. If $|V(G)|>1$ and $G-F$ is connected for every set $F\subseteq E(G)$ of fewer than $\ell$ edges, then $G$ is called $\ell$-edge-connected.
\[2e\] [@West-2001 Proposition 6.1.13] If $\ell(F_i)$ denotes the length of the face $F_i$ in a plane graph $G$, then $2m =\underset{i}{\sum} \ell(F_i)$ .
In a 2-edge-connected plane graph, all facial boundaries are cycles and each edge lies in the boundary of two faces.
Platonic solids are a well-known five-membered family of 3-dimensional polyhedra. There is no reliable information about their date of birth, and different opinions have been expressed [@Atiyah-Sutcliffe; @Lloyd]. However, they are attractive for mathematicians and others because of the symmetries they possess. In the last two centuries, many authors have studied polyhedra and extended them to convex and concave polytopes in various dimensions [@Grunbaum]. Early scientists such as Kepler and Plato described properties of the Platonic solids that we now know to be incorrect, but nowadays, with new advances in a variety of sciences such as physics, chemistry and biology, we observe applications of polyhedra and especially of Platonic solids (see [@Atiyah-Sutcliffe] and [@Weyl-1952 Figure \[radiolarians\]]).
What matters from the combinatorial point of view is that a convex polyhedron can be embedded on a sphere, and we can then map it onto a plane so that the images of lines on the sphere do not cross each other in the plane. In this way, a polyhedron on the sphere corresponds to a planar graph in the plane. Steinitz’s theorem (see [@Grunbaum p. 235]) states that a graph $G$ with at least four vertices is the network of vertices and edges of a convex polyhedron if and only if $G$ is planar and $3$-connected. In 1967, Grunbaum considered 3-regular connected planar graphs and obtained some results. For example, for a $3$-regular connected planar graph and $k\in\{2,3,4,5\}$, it is proved that if the lengths of all faces but $t$ faces are divisible by $k$, then $t\ge2$, and if $t=2$ then the two exceptional faces do not have a common vertex [@Grunbaum]. In 1968, in his Ph.D. thesis, Malkevitch proved the same results for 4- and 5-regular 3-connected planar graphs [@Malkevitch-2]. Several papers are devoted to the study of this topic, but all of them consider planar graphs such that the lengths of all faces but some exceptional faces are a multiple of $k$ with $k\in \{2,3,4,5\}$ (see [@Crowe; @Jendrol-Jucovic; @Jendrol; @Hornak-Jendrol]).
Recently, Keith et al. [@Keith-Froncek-Kreher-1] defined a $t$-nearly platonic graph to be a finite $k$-regular simple planar graph in which all faces, with the exception of $t$ of them, have the same length. They proved that there is no 1-nearly platonic graph. However, their proof is only valid for 2-connected graphs (see [@Keith-Froncek-Kreher-2]). In this paper, we prove that there is no 1-nearly platonic graph. This strengthens Theorem 1 in [@Keith-Froncek-Kreher-2] and completes the proof of Theorem 6 in [@Keith-Froncek-Kreher-1] about 1-nearly platonic graphs.
Absence of 1-nearly platonic graphs
===================================
A $k$-regular simple connected planar graph is a $(k;d_1^{f_1}d_2^{f_2}\cdots d_{\ell}^{f_{\ell}})$-graph if it has $f_i$ faces of degree $d_i$, $i=1,2,\cdots,\ell$, where $f=f_1+f_2+\cdots+f_{\ell}$.
A $t$-nearly platonic graph is a finite, connected, regular, simple and planar graph in which all but exactly $t$ of its faces have the same length. For simplicity, we denote a $t$-nearly platonic graph by $t$-NPG, or by $k$-regular $t$-NPG to emphasize the valency of the graph.
If the graph $G$ is a $k$-regular $t$-NPG, then by planarity of $G$, it is obvious that $k\in\{1,2,3,4,5\}$. For $k=1$ we have $G=K_2$ with one face, and for $k=2$ we have $G=C_n$ with two faces of the same length, so we have nothing to say. Hence, from now on we assume that $k\in\{3,4,5\}$ and so $n\ge4$, $m\ge6$ and $f\ge4$. A $k$-regular $1$-NPG can be written as a $(k;d_1^{f-1}d_2^{1})$-graph with $d_1,d_2\ge3$ and $d_1\ne d_2$; the unique face of length $d_2$ is called the exceptional face of the $1$-NPG. Keith et al. [@Keith-Froncek-Kreher-1] proved that there is no 2-connected $(k;d_1^{f-1}d_2^{1})$-graph. In other words:
\[th.2.3\][@Keith-Froncek-Kreher-2 Theorem 1] There is no finite, planar, 2-connected regular graph that has all but one face of one degree (length) and a single face of a different degree (length).
In a planar graph, we call a vertex or an edge an e-vertex or an e-edge, respectively, if it lies on the boundary of the outerface; otherwise, we call it an i-vertex or an i-edge.
\[rem111\]
- In each planar graph, all edges passing through an i-vertex are i-edges.
- In a 2-connected planar graph, precisely two edges passing through an e-vertex are e-edges and others are i-edges.
In [@Keith-Froncek-Kreher-1 Figure 1], a special configuration in planar graphs is called an inflorescence. We say that a vertex $x$ roots an inflorescence if it must be adjacent to a vertex $y$ lying inside a face but not on its boundary, since the boundary vertices already have all their neighbours. The vertex $y$ must also be adjacent to some other vertices within this face. But this makes the edge $xy$ a cut-edge, and makes the vertices $x$ and $y$ cut-vertices. This is illustrated in Figure \[inflorescence\]. We will use this observation many times when drawing the planar graphs. Indeed, if a vertex roots an inflorescence, then the graph has a cut-vertex and so it is not 2-connected.
\[i-vertex\] Suppose that $G$ is a $2$-connected planar graph such that all vertices are of degree $3$ except only one vertex on the boundary of its outerface that has the degree $2$ and all internal faces have the same length. Then we have:
- The length of each internal face is 3, 4 or 5.
- The boundary of each face of $G$ is an induced cycle.
- Each e-vertex has exactly two neighbours on the outerface.
- If two internal faces have a common vertex, then they have exactly one edge and two adjacent vertices in common.
**(i)** Let $n$ be the number of vertices, $m$ the number of edges, $f$ the number of faces, $l$ the length of the outerface and $d$ the length of each internal face of $G$. Counting the edges of $G$ in two ways, we have $2m=3(n-1)+2=3n-1$ and $2m=l+(f-1)d$, and by Euler’s formula, $f-1=m-n+1=(n+1)/2$. Therefore, we have $3n-1=l+d(n+1)/2$ and so $d=6-(2l+8)/(n+1)<6$, as desired.\
**(ii)** First, we consider the internal faces of $G$. By part (i), the length of each internal face is 3, 4 or 5.\
**Case 1:** The lengths of internal faces are 3. Obviously, the boundary of each internal face is an induced cycle $C_3$.\
**Case 2:** The lengths of the internal faces are 4. The boundary of each internal face is a cycle $C_4$. Let $yuvwy$ be the boundary of an internal face. If $yuvwy$ is not an induced cycle, then there exists exactly one chord, $yv$ or $uw$. By symmetry, we consider the chord $yv$. Now, either the vertex $u$ belongs to the interior of the triangle $ywvy$ or the vertex $w$ belongs to the interior of the triangle $yuvy$. Again by symmetry, suppose that $u$ belongs to the interior of the triangle $ywvy$. Since $u$ is an i-vertex, $u$ roots an inflorescence, a contradiction to the 2-connectivity of the graph. Therefore, $yuvwy$ is an induced cycle, as desired.\
**Case 3:** The lengths of the internal faces are 5. The boundary of each internal face is a cycle $C_5$. Let $xyuvwx$ be the boundary of an internal face. If $xyuvwx$ is not an induced cycle, then it has a chord. By symmetry, assume that $yv$ is a chord. Now, we consider two subcases.\
**Subcase 1.** The vertex $u$ belongs to the interior of the square $xyvwx$. In this subcase, $u$ roots an inflorescence, a contradiction.\
**Subcase 2.** The vertices $x$ and $w$ belong to the interior of the triangle $yuvy$. Since $y$ and $v$ are already of degree 3, the path $xyvw$ is part of the boundary of a pentagonal face, and so the vertices $x$ and $w$ have a common neighbour, say $z$. But the vertex $z$ roots an inflorescence in the triangle $xwzx$, a contradiction. Therefore, $xyuvwx$ is an induced cycle. In these three cases we showed that the boundaries of all internal faces are induced cycles, as desired.
Now, we consider the boundary of the outerface. Assume that the cycle $x_1x_2x_3\cdots x_lx_1$ is the boundary of the outerface such that $\deg(x_l)=2$ and the degrees of all other vertices of $G$ are $3$. To the contrary, suppose that $\langle\{x_1,x_2,\cdots ,x_l\}\rangle\ne C_l$. Hence there exists a chord $x_ix_j$ such that $1\le i< j\le l$ and $x_ix_j\notin\{x_ix_{i+1}:i=1,2,\cdots,l-1\}\cup\{x_1x_l\}$. Note that $j\ne l$. Otherwise, $x_i\in N(x_l)$ and $x_ix_l\notin\{x_{l-1}x_{l},x_1x_l\}$ and so $i\notin\{1,l-1\}$. This implies that $\deg(x_l)\ge3$, a contradiction. Also, we have $j\ne i+2$. Otherwise, we consider the cycle $x_ix_{i+1}x_{i+2}x_i$ and we see that the vertex $x_{i+1}$ roots an inflorescence, a contradiction. Therefore, we have $1\le i\le j-3\le l-4$.
Let $S$ be the set of all vertices lying on the cycle $x_ix_{i+1}x_{i+2}\cdots x_jx_i$ and its interior. The subgraph $H=\langle S \rangle$ is a $2$-connected planar graph such that $\deg(x_i)=\deg(x_j)=2$, the degrees of all other vertices of $H$ are $3$, the length of the outerface is at least $4$ and the internal faces have the same length.
We consider two copies of $H$ and construct a new graph $H'$ by joining each vertex of degree $2$ in one copy to the corresponding vertex in the other copy of $H$. The graph $H'$ is a $3$-regular $2$-connected planar graph in which all internal faces have the same length, equal to 3, 4 or 5, while the length of the outerface of $H'$ is at least $6$; therefore, $H'$ is a $2$-connected $1$-NPG, which contradicts Theorem \[th.2.3\]. Hence, the boundary of the outerface is an induced cycle.\
**(iii)** Since the outerface is a cycle, each e-vertex has at least two neighbours on this cycle. If the third neighbour of an e-vertex lies on the outerface, then the cycle has a chord and so it is not an induced cycle, a contradiction with part (ii).\
**(iv)** By part (i), the length of each internal face is 3, 4 or 5.\
**Case 1:** The lengths of internal faces are 3. Assume that two triangles $uu_1u_2u$ and $uv_1v_2u$ are two different internal faces and $u$ lies on both of them. If $\{u_1,u_2\}\cap\{v_1,v_2\}=\emptyset$, then $\deg(u)\ge 4$, a contradiction, and so $\{u_1,u_2\}\cap\{v_1,v_2\}\ne \emptyset$. We set $u_1=v_1$, that is, two triangles have an edge and two adjacent vertices in common. We have $u_2\ne v_2$, otherwise, two triangles are the same, a contradiction.\
**Case 2:** The lengths of the internal faces are 4. Assume that two squares $uu_1u_2u_3u$ and $uv_1v_2v_3u$ are the boundaries of two internal faces and $u$ lies on both of them. If $\{u_1,u_3\}\cap\{v_1,v_3\}=\emptyset$, then $\deg(u)\ge 4$, a contradiction. We set $u_1=v_1$, that is, the two squares have an edge and two adjacent vertices in common. We show that $\{u_2,u_3\}\cap \{v_2,v_3\}=\emptyset$. Since $u_2\notin N(u)=\{u_1,u_3,v_3\}$, we have $u_2\ne v_3$. If $u_2=v_2$, then the vertex $u_1$ lies inside the square $uu_3u_2v_3u$. It is an i-vertex with $\deg(u_1)=3$. In this case, either the graph has a chord $u_1u_3$ or $u_1v_3$, or $u_1$ roots an inflorescence, a contradiction. Similarly $u_3\notin\{v_2,v_3\}$.\
**Case 3:** The lengths of the internal faces are 5. Assume that two pentagons $uu_1u_2u_3u_4u$ and $uv_1v_2v_3v_4u$ are the boundaries of two internal faces and $u$ lies on both of them. If $\{u_1,u_4\}\cap\{v_1,v_4\}=\emptyset$, then $\deg(u)\ge 4$, a contradiction. We set $u_1=v_1$, that is, the two pentagons have an edge and two adjacent vertices in common. We show that $\{u_2,u_3,u_4\}\cap\{v_2,v_3,v_4\}=\emptyset$. Since $u_2\in N(u_1)=\{u,u_2,v_2\}$, we have $u_2\notin\{v_3,v_4\}$. If $u_2=v_2$, then the vertex $u_1$ lies inside the hexagon $uu_4u_3u_2v_3v_4u$. It is an i-vertex with $\deg(u_1)=3$. In this case, either the graph has a chord or $u_1$ roots an inflorescence, a contradiction. Similarly $u_4\notin\{v_2,v_3,v_4\}$ and $\{u_2,u_3,u_4\}\cap\{v_2,v_4\}=\emptyset$. Finally, if $u_3=v_3$, then $\deg(u_3)=4$, a contradiction.
Parts (ii) and (iii) of Theorem \[i-vertex\] play an important role in the construction of the graphs in the following theorem.
\[th3.8\] There is no $2$-connected planar graph such that:
- All vertices are of degree $k$, where $3\le k\le5$, except only one vertex on the outerface that has degree $k_0$ with $2\le k_0\le k-1$.
- All internal faces have the same length.
Assume to the contrary that there exists a $2$-connected planar graph, $G$, such that the length of all internal faces of $G$ is $d$ and the length of the outerface is $\ell$. The graph $G$ has a vertex $x$ on its outerface, with $\deg(x)=k_0$ and the degree of all other vertices is $k$, where $2\le k_0\le k-1$. Since $G$ is a $2$-connected graph, the boundary of each face is a cycle, thus the number of vertices and the number of edges lying on the outerface is equal to $\ell$ and also, $d,\ell\ge3$. By Lemma \[rem\], $G$ is not an outerplanar graph and so $\ell\le n-1$.\
We have some relations between the parameters of $G$: $$\begin{aligned}
\label{f1}
2m&=(n-1)k+k_0 , \\ \label{f2}
2m&=(f-1)d+\ell\end{aligned}$$ Now, by Euler’s formula we have: $$\begin{aligned}
\label{f3}
f-1=m-n+1=\frac{1}{2}[(n-1)(k-2)+k_0]\end{aligned}$$ Hence, by (\[f1\]), (\[f2\]), and (\[f3\]) we conclude that: $$\begin{aligned}
\label{f4}
(n-1)(2k+2d-dk)=k_0(d-2)+2\ell\end{aligned}$$ Since $k_0(d-2)+2\ell\ge8$, by equality (\[f4\]), we see $2k+2d-dk>0$, that is, $\frac{2}{d}+\frac{2}{k}>1$ which implies that $k=3, d\in\{3,4,5\}$ or $k\in\{4,5\}, d=3$. In each case, we have $k\ge 3$ and so $n\ge4$. Thus, we have $8$ cases to check:\
**Case 1.** If $k=3, d=3, k_0=2$, then by (\[f4\]), we have $3n-5=2\ell\le 2(n-1)$ and so $n\le3$, a contradiction.
**Case 2.** If $k=3, d=4, k_0=2$, then $x$ has two neighbours. Let $N(x)=\{x_1,y_1\}$. By Theorem \[i-vertex\](iii), the edges $xx_1$ and $xy_1$ are e-edges. They lie on an internal face, namely, the square $xx_1z_1y_1x$. If the outerface is incident to $z_1$, then $G=C_4$, a contradiction. Hence, the vertex $z_1$ is an i-vertex and the edges $x_1z_1$ and $y_1z_1$ are i-edges. $x_1z_1$ is belonging to another squares, say $x_1z_1z_2x_2x_1$. The vertex $z_1$ has all 3 neighbours and so, the path $y_1z_1z_2$ is a part of the new square $y_1z_1z_2y_2y_1$. $x_2$ is the third neighbour of $x_1$ and $y_2$ is the third neighbour of $y_1$ and so they are e-vertices. We claim that $z_2$ does not lie on the outerface. If $z_2$ lies on the outerface, then the graph has no other vertex, but it has three vertices of degree $2$, a contradiction. Since $z_2$ is an i-vertex with degree $3$, the path $x_2z_2y_2$ is a part of a square, say $x_2z_2y_2yx_2$. Now, $y$ is the third neighbour of $x_2$ and $y_2$. $y$ necessarily lies on the outerface and the graph has no other vertex, but it has two vertices of degree $2$, a contradiction.
**Case 3.** If $k=3, d=5, k_0=2$, then $x$ has two neighbours. Let $N(x)=\{x_1,y_1\}$. By Theorem \[i-vertex\](iii), the edges $xx_1$ and $xy_1$ are e-edges. They lie on an internal face, the pentagon $y_1xx_1z_1z_2y_1$. If $z_1$ be an e-vertex, then $\deg(x_1)=2$, a contradiction. Hence, $z_1$ is an i-vertex and $x_1z_1$ is an i-edge. Similarly, $z_2$ is an i-vertex and $y_1z_2$ is an i-edge. Also, $z_1z_2$ is an i-edge and so it lies on the second pentagonal face $z_1z_2z_4z_5z_3z_1$. By Theorem \[i-vertex\](iv), we have $\{z_3,z_4,z_5\}\cap \{x,x_1,y_1\}=\emptyset$. $x_1z_1$ lies on the boundary of another pentagon, say $x_1z_1z_3z_6x_2x_1$. By Theorem \[i-vertex\](iv), we have $\{x_2,z_3,z_6\}\cap \{x,y_1,z_2,z_4,z_5\}=\emptyset$. The i-edge $y_1z_2$ lies on the second pentagon, $y_1z_2z_4z_7y_2y_1$ and $\{y_2,z_7\}\cap \{x,x_1,z_1,z_3,z_5\}=\emptyset$. Since $x_2$ and $y_2$ are the third neighbours of $x_2$ and $y_2$, respectively, the edges $x_1x_2$ and $y_1y_2$ are e-vertices and so $x_2$ and $y_2$ are e-vertices. The vertices $z_6$ and $z_7$ are i-vertices. Otherwise, $\deg(x_2)=2$ or $\deg(y_2)=2$, a contradiction. By Theorem \[i-vertex\](iv), $z_6=z_7$ if and only if $x_2=y_2$. If $z_6=z_7$, then By Theorem \[i-vertex\](iv), $x_2=y_2$ and so the cycle $xx_1x_2y_1x$ is the boundary of outerface and graph is completed but, $z_5$ root an inflorescence, a contradiction. Consequently, $z_6\ne z_7$ and $x_2\ne y_2$. Also, $z_6\ne y_2$, otherwise, we have $\deg(y_2)=4$, a contradiction. Similarly, $z_7\ne x_2$. The i-edge $z_3z_6$ lies on the second pentagon $z_3z_6z_9z_8z_5z_3$. By Theorem \[i-vertex\](iv), $\{x_2\}\cap\{z_8,z_9\}=\emptyset$. By Theorem \[i-vertex\](iv), $z_7=z_8$ if and only if $z_9=y_2$. In this case, the graph has a triangular internal face $z_4z_5z_7z_4$, a contradiction and Again, by Theorem \[i-vertex\](iv), $z_8=y_2$ if and only if $z_7=z_9$. This case contradicts the planarity of the graph. Therefore, we have $\{y_2,z_7\}\cap\{z_8,z_9\}=\emptyset$. The i-edge $z_4z_7$ lies on the second pentagon $z_4z_5z_8z_{10}z_7z_4$. By Theorem \[i-vertex\](iv), $z_{10}\notin\{y_2,z_9\}$ and $x_2\ne z_{10}$, otherwise, $\deg(x_2)=4$, a contradiction. The i-edge $x_2z_6$ lies on the second pentagon $x_2z_6z_9z_{11}x_3x_2$. If $z_{11}=z_{10}$ we have a triangular internal face $z_8z_9z_{10}$, a contradiction. If $z_{11}=y_2$ then we have $\deg(y_2)=4$, a contradiction. If $x_3=z_{10}$ or $x_3=y_2$, then we have $\deg(z_{10})=4$ or $\deg(y_2)=4$, respectively, a contradiction. Thus, $\{x_3,z_{11}\}\cap \{y_2,z_{10}\}=\emptyset$. Furthermore, $x_3$ is the third neighbour of $x_2$ and so it is an e-vertex. $z_{11}$ is an i-vertex and $x_3z_{11}$ is an i-edge. Otherwise, $\deg(x_3)=2$, a contradiction. Since $z_9z_{11}$ is an i-edge, it lies on the second pentagon $z_9z_{11}z_{12}z_{10}z_8z_9$. If $z_{12}=x_3$, then $z_{11}$ root an inflorescence and if $z_{12}=y_2$, then $\deg(y_2)=4$, a contradiction. Thus, $\{z_{12}\}\cap \{x_3,y_2\}=\emptyset$. The edge $y_2z_7$ is an i-edge and so it lies on the second pentagon $y_2z_7z_{10}z_{12}y_3y_2$. If $x_3=y_3$, then $\deg(x_3)=4$, a contradiction. Furthermore, since $y_3$ is the third neighbour of $y_2$, it is an e-vertex. The i-edge $x_3z_{11}$ has to lie on the second pentagon $x_3z_{11}z_{12}y_3yx_3$. The vertex $y$ is the third neighbour of $y_3$ and so it is an e-vertex and the cycle $xx_1x_2x_3yy_3y_2y_1x$ is the boundary of outerface of the graph $G$. 
The graph $G$ is now complete, yet $\deg(y)=2$ or $y$ roots an inflorescence, a contradiction.
**Case 4.** If $k=4, d=3, k_0=2$, then $x$ has two neighbours. Let $N(x)=\{x_1,y_1\}$. By Theorem \[i-vertex\](iii), the edges $xx_1$ and $xy_1$ are e-edges. The second face consisting the edge $xx_1$ is the triangle $xx_1y_1x$. The edge $x_1y_1$ is an i-edges. Otherwise, $G=C_3$, that has three vertices of degree $2$, a contradiction. The second face consisting the edge $x_1y_1$ is the triangle $x_1y_1zx_1$. If $z$ be an e-vertex, then $G$ has no other vertex and so $G$ has two vertices of degree $2$, a contradiction. Therefore, $z$ is an i-vertex and consequently the edges $x_1z$ and $y_1z$ are i-edges. The second face consisting the edge $x_1z$ is the triangle $x_1zx_2x_1$ and the second face consisting the edge $y_1z$ is the triangle $y_1zy_2y_1$. If $x_2=y_2$, then $\deg(z)=3\ne k$ or $z$ root an inflorescence, a contradiction. Therefore, $x_2$ and $y_2$ are distinct vertices. Since $x_2$ and $y_2$ are the fourth neighbours of $x_1$ and $y_1$, respectively, they are e-vertices. Now, $\deg(z)=4$ and so the second face consisting the i-edge $x_2z$ is the triangle $x_2zy_2x_2$. The edge $x_2y_2$ is an i-edge, otherwise, $\deg(x_2)=\deg(y_2)=3<k$, a contradiction. Let $x_2y_2yx_2$ be the second triangle incident to $x_2y_2$. Now, the edges $x_2y$ and $y_2y$ are the fourth edges passing through $x_2$ and $y_2$, respectively, and so they are the second e-edges passing through $x_2$ and $y_2$. That is, $y$ is an e-vertex and the graph $G$ is completed while $\deg(y)=2$ or $y$ root an inflorescence, a contradiction.
**Case 5.** If $k=4, d=3, k_0=3$, then $x$ is the only vertex of odd degree while all other vertices have even degree, which is impossible since the number of odd-degree vertices in a graph must be even.
**Case 6.** If $k=5, d=3, k_0=2$, then $x$ has two neighbours. Let $N(x)=\{x_1,y_1\}$. By Theorem \[i-vertex\](iii), the edges $xx_1$ and $xy_1$ are e-edges. The second face consisting the edge $xx_1$ is the triangle $xx_1y_1x$. The edge $x_1y_1$ is an i-edges. Otherwise, $G=C_3$, that has three vertices of degree $2$, a contradiction. The second face consisting the edge $x_1y_1$ is the triangle $x_1y_1z_1x_1$. If $z_1$ be an e-vertices, then $G$ has no other vertices and so $G$ has two vertices of degree $2$, a contradiction. Therefore, $z_1$ is an i-vertex and consequently the edges $x_1z_1$ and $y_1z_1$ are i-edges. The second face consisting the edge $x_1z_1$ is the triangle $x_1z_1z_2x_1$ and the second face consisting the edge $y_1z_1$ is the triangle $y_1z_1z_3y_1$. Note that if $z_2=z_3$, then $\deg(z_1)=3$ which is a contradiction. Hence, $z_2\ne z_3$. We call the fifth neighbour of $z_1$ as $z_4$ and so the i-edge $z_1z_4$ has to lie on two triangle $z_1z_2z_4z_1$ and $z_1z_3z_4z_1$. The vertices $z_2$ and $z_3$ are $i$-vertices, otherwise, we have $\deg(x_1)=4<k$ or $\deg(y_1)=4<k$, respectively, a contradiction. The i-edge $z_2z_4$ lies on the second triangle $z_2z_4z_5z_2$. If $z_5=x_1$, then $\deg(z_2)=3$ and if $z_5=z_3$, then $\deg(z_4)=3$ and if $z_5=y_1$, then $\deg(y_1)=6$, a contradiction. Hence, we have $z_5\notin\{x_1,y_1,z_3\}$. Similarly, the i-edge $z_3z_4$ lies on the second triangle $z_3z_4z_6z_3$ and $z_5\ne z_6$. Otherwise, we have $\deg(z_4)=4$, a contradiction. The second face incident with the $i$-edge $x_1z_2$ is the triangle $x_1z_2x_2x_1$ and the second face incident with the $i$-edge $y_1z_3$ is the triangle $y_1z_3y_2y_1$. The vertices $x_2$ and $y_2$ are the fifth neighbours of $x_1$ and $y_1$, respectively, and so they are e-vertices and the edges $x_1x_2$ and $y_1y_2$ are e-edges. The vertices $z_2$ and $z_3$ are i-vertices and $\deg(z_2)=\deg(z_3)=5$, therefore, the i-edges $z_2x_2$ and $z_3y_2$ lie on the triangles $z_2x_2z_5z_2$ and $z_3y_2z_6z_3$, respectively. Note that $x_2\ne y_2$, otherwise, we have $\deg(x_2)=6$, a contradiction. The vertex $z_5$ is an i-vertex, otherwise, $\deg(x_2)=3<k$, a contradiction. Hence, The edge $z_4z_5$ is an i-edge and so it lies on the second triangle $z_4z_5z_6z_4$. Now, the i-edge $z_5z_6$ lies on the second triangle $z_5z_6z_7z_5$. We have $z_7\notin\{x_2,y_2\}$, otherwise, if $z_7=x_2$ or $z_7=y_2$ then $\deg(z_5)=4$ or $\deg(z_6)=4$, respectively, a contradiction. Since $\deg(z_5)=\deg(z_6)=5$, the i-edges $x_2z_5$ and $y_2z_6$ lie on the triangles $x_2z_5z_7x_2$ and $y_2z_6z_7y_2$, respectively. Note that $z_7$ is an i-vertex. Otherwise, $\deg(x_2)=4$, a contradiction. We call the fifth neighbour of $z_7$ as $y$ and so the i-edges $x_2z_7$ and $y_2z_7$ lie on two triangles $x_2z_7yx_2$ and $y_2z_7yy_2$. Since $y$ is the fifth neighbour of $x_2$ and $y_2$, the edges $y_2y$ and $x_2y$ are e-edges and so the graph is completed, but it has a vertex of degree 3, a contradiction.
**Case 7.** If $k=5, d=3, k_0=3$, then $x$ has three neighbours. Let $N(x)=\{x_1,z_1,y_1\}$. By Theorem \[i-vertex\](iii), we choose the vertex $z_1$ as an i-vertex and other two neighbours of $x$ as e-vertices. The i-edge $xz_1$ lies on the two triangles $xz_1x_1x$ and $xz_1y_1x$. The edge $x_1z_1$ is an i-edges. The second face consisting the edge $x_1z_1$ is the triangle $x_1z_1z_2x_1$. Note that $z_2\ne y_1$, otherwise, $\deg(z_1)=3$, a contradiction. Similarly, the edge $z_1y_1$ is an i-edges and the second triangle consisting the edge $z_1y_1$ is the triangle $z_1y_1z_3z_1$. The vertices $z_2$ and $z_3$ are i-vertices. Otherwise, $\deg(x_1)=3<k$ or $\deg(y_1)=3<k$, a contradiction. Also, we have $z_2\ne z_3$, otherwise, $\deg(z_1)=4$, a contradiction. The second triangle incident with the i-edge $z_1z_2$ is the triangle $z_1z_2z_3z_1$ and the second triangle incident with the i-edge $z_2z_3$ is the triangle $z_2z_3z_4z_2$. Note that $z_4\notin\{x_1,y_1\}$, otherwise, $\deg(z_2)=3$ or $\deg(z_3)=3$, a contradiction. The i-edge $x_1z_2$ lies on the second triangle $x_1z_2z_5x_1$. If $z_5=z_4$, then $\deg(z_2)=4$, a contradiction, and so we have $z_5\ne z_4$ and the i-edge $z_2z_5$ lies on the second triangle $z_2z_5z_4z_2$. Note that the vertex $z_5$ is the fifth neighbour of $z_2$ and $z_5\ne y_1$, otherwise, $\deg(y_1)=6$, a contradiction. Similarly, the i-edge $y_1z_3$ lies on the second triangle $y_1z_3z_6y_1$ and the i-edge $z_3z_6$ lies on the second triangle $z_3z_4z_6z_3$. We know that $z_5$ and $z_6$ are i-vertices, otherwise, $\deg(x_1)=4$ or $\deg(y_1)=4$, a contradiction. The i-edge $z_4z_5$ lies on the second triangle $z_4z_5z_7z_4$. Note that $z_6\ne z_7$, otherwise, $\deg(z_4)=4$, a contradiction. Now, the i-edge $z_4z_6$ lies on the second triangle $z_4z_6z_7z_4$. The vertices $z_5$ and $z_6$ are i-vertices. Otherwise, $\deg(x_1)=4$ or $\deg(y_1)=4$, a contradiction. The i-edge $x_1z_5$ lies on the second triangle $x_1z_5x_2x_1$ and the second triangle consisting the i-edge $x_2z_5$ is the triangle $x_2z_5z_7x_2$. Similarly, the i-edge $y_1z_6$ lies on the second triangle $y_1z_6y_2y_1$ and the second triangle consisting the i-edge $y_2z_6$ is the triangle $y_2z_6z_7y_2$. If $x_2=y_2$, then $\deg(z_7)=4$, a contradiction, and so $x_2\ne y_2$. Since the vertices $x_2$ and $y_2$ are the fifth neighbours of $x_1$ and $y_1$, respectively, they are e-vertices and $x_1x_2$ and $y_1y_2$ are e-edges. The edge $x_2z_7$ is an i-edge, otherwise, $\deg(x_2)=3<k$, a contradiction. Since the vertex $z_7$ is already of degree 5, the second triangle consisting $x_2z_7$ is $x_2z_7y_2x_2$. Now, the edge $x_2y_2$ is an i-edge, otherwise, $\deg(x_2)=4<k$, a contradiction. The second triangle consisting $x_2y_2$ is $x_2y_2yx_2$. The vertex $y$ is the fifth neighbour of $x_2$ and $y_2$ and so it is an e-vertex and the edges $x_2y$ and $y_2y$ are e-edges. The graph is completed but it has a vertex of degree 2, a contradiction.
**Case 8.** If $k=5, d=3, k_0=4$, then $x$ has four neighbours. Let $N(x)=\{x_1,z_1,z_2,y_1\}$. By Theorem \[i-vertex\](iii), we choose the vertices $z_1$ and $z_2$ as i-vertices and the other two neighbours of $x$ as e-vertices. The e-edge $xx_1$ lies on a triangular face $xx_1wx$. If $w=y_1$, then we have $\deg(x)=2$, a contradiction. Hence, $w\in\{z_1,z_2\}$. We may assume that $xz_1x_1x$ is the triangular face incident to $xx_1$. Similarly, the e-edge $xy_1$ lies on a triangular face $xy_1wx$. If $w=z_1$, then we have $\deg(x)=3$, a contradiction. Hence, $w=z_2$ and $xz_2y_1x$ is the triangular face incident to $xy_1$. The triangle $xz_1z_2x$ is the second triangle incident to the i-edge $xz_1$. The edge $z_1z_2$ is an i-edge and lies on the second triangle $z_1z_2z_3z_1$. We have $z_3\notin\{x_1,y_1\}$; for example, if $z_3=x_1$, then $\deg(z_1)=3<k$, a contradiction. The i-edge $x_1z_1$ lies on the second triangle $x_1z_1z_4x_1$ and $z_4\ne z_3$; otherwise, $\deg(z_1)=4$, a contradiction. Since $z_4$ is the fifth neighbour of $z_1$, the triangle $z_1z_3z_4z_1$ is the second triangle incident to the i-edge $z_1z_3$. Similarly, the i-edge $y_1z_2$ lies on the second triangle $y_1z_2z_5y_1$ and $z_5\ne z_3$. Also, $z_2z_3z_5z_2$ is the second triangle incident with the i-edge $z_2z_3$. If $z_4=z_5$, then $\deg(z_3)=3$, a contradiction, and if $z_4=y_1$ or $z_5=x_1$, then $\deg(y_1)=6$ or $\deg(x_1)=6$, respectively, a contradiction. The vertices $z_4$ and $z_5$ are i-vertices; otherwise, $\deg(x_1)=3$ or $\deg(y_1)=3$, a contradiction. The i-edge $z_4z_3$ lies on the second triangle $z_3z_4z_6z_3$. We have $z_6\ne z_5$; otherwise, $\deg(z_3)=4$, a contradiction. The vertex $z_6$ is the fifth neighbour of $z_3$, and so the i-edge $z_3z_5$ lies on the second triangle $z_3z_5z_6z_3$. Now we notice that $z_6\notin\{x_1,y_1\}$; indeed, if $z_6=x_1$, then $\deg(z_4)=3$, a contradiction, and if $z_6=y_1$, then $\deg(z_5)=3$, a contradiction. The i-edge $x_1z_4$ lies on the second triangle $x_1z_4z_7x_1$ and $z_7\ne z_6$; otherwise, $\deg(z_4)=4$, a contradiction. Now, the triangle $z_4z_6z_7z_4$ is the second triangle incident to the i-edge $z_4z_7$. The vertex $z_7$ is an i-vertex; otherwise, we have $\deg(x_1)=4$, a contradiction. Similarly, the i-edge $y_1z_5$ lies on the second triangle $y_1z_5z_8y_1$ and the triangle $z_5z_6z_8z_5$ is the second triangle incident to the i-edge $z_5z_8$. Also, the vertex $z_8$ is an i-vertex. Furthermore, we have $z_7\ne z_8$; otherwise, $\deg(z_6)=4$, a contradiction. Since we know all five neighbours of $z_6$, the i-edge $z_6z_7$ lies on the second triangle $z_6z_7z_8z_6$. Now, the i-edge $z_7z_8$ lies on the second triangle $z_7z_8yz_7$. We have $y\notin\{x_1,y_1\}$; otherwise, $\deg(z_7)=4$ or $\deg(z_8)=4$, a contradiction. Finally, the second triangles incident to the i-edges $x_1z_7$ and $y_1z_8$ are $x_1z_7yx_1$ and $y_1z_8yy_1$, respectively. The vertex $y$ is the fifth neighbour of both e-vertices $x_1$ and $y_1$; hence, $y$ is an e-vertex and $x_1y$ and $y_1y$ are e-edges. Now, the graph is completed but it has two vertices of degree $4$, a contradiction.
\[d1<6\] If a $(k;d_1^{f-1}d_2^{1})$-graph is a $k$-regular $1$-NPG, then $3\le d_1\le5$ for $k=3$ and $d_1=3$ for $k=4,5$.
The graph is $k$-regular, so $kn=2m$. By Proposition \[2e\] we have $2m=d_1(f-1)+d_2$, and by Euler’s formula $f-1=m-n+1=\frac{1}{2}(k-2)n+1$. Therefore, $kn=\frac{1}{2}(k-2)nd_1+d_1+d_2$, that is, $d_1=\dfrac{2kn}{(k-2)n+2}-\dfrac{2d_2}{(k-2)n+2}$, which implies that $d_1<\frac{2k}{k-2}$. Since $G$ is simple, every face has length at least $3$, so $d_1\ge3$. Now, if $k=3$, then $3\le d_1\le5$, and if $k\in\{4,5\}$, then $d_1=3$.
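As a quick numerical check of this bound (an illustration of ours, not part of the proof), the following Python fragment lists the face lengths $d_1\ge 3$ compatible with $d_1<\frac{2k}{k-2}$ for $k\in\{3,4,5\}$.

```python
# Sketch: enumerate the face lengths d1 >= 3 allowed by d1 < 2k/(k-2)
# for a k-regular 1-NPG, as derived in the lemma above.
from fractions import Fraction

for k in (3, 4, 5):
    bound = Fraction(2 * k, k - 2)          # strict upper bound on d1
    allowed = [d1 for d1 in range(3, 10) if d1 < bound]
    print(f"k = {k}: d1 < {bound} -> allowed d1 = {allowed}")
# k = 3 allows d1 in {3, 4, 5}; k = 4 and k = 5 allow only d1 = 3.
```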
In [@Diestel], it is proved that:
[@Diestel Lemma 4.2.1] Let $G$ be a plane graph, $F$ a face, and $H$ a subgraph of $G$.
- $H$ has a face $F'$ containing $F$.
- If the frontier of $F$ lies in $H$, then $F'=F$.
\[outerface\] If $G$ is a planar graph with outerface $F_G$ and $H$ is a subgraph of $G$ with outerface $F_H$, then $F_G\subseteq F_H$. Furthermore, each vertex $u\in V(H)\cap\partial(F_G)$ belongs to $\partial(F_H)$.
\[2.13\] Let $G$ be a connected planar graph, $x$ a cut-vertex in $G$, and $H$ a component of $G-x$. If $G_1=\langle\{x\}\cup V(H)\rangle$ and $G_2=\langle V(G)\setminus V(H)\rangle$, then:
- $G_1$ has a face containing $G_2\setminus\{x\}$.
- $G_2$ has a face containing $G_1\setminus\{x\}$.
- $G$ has a face incident to $x$ at least twice.
\(i) The two subgraphs $G_1$ and $G_2$ are connected and they have only one vertex, $x$, in common. For a vertex $x_1(\ne x)$ in $G_1$ there exists a face $F_2$ of $G_2$ such that $x_1\in F_2$. Now consider another vertex $x'_1(\ne x)$ in $G_1$. There exists a path between $x_1$ and $x'_1$ in $G_1$ that avoids $x$. This path lies in $G$ and does not meet the graph $G_2$, and so $x'_1\in F_2$. Hence, all vertices of $G_1$ lie in the face $F_2$, except the vertex $x$, which lies on the boundary of $F_2$ with the boundary walk $xy_1y_2\cdots y_{s_2}x$.\
(ii) Similarly, for a vertex $x_2\ne x$ in $G_2$ there exists a face $F_1$ of $G_1$ such that $x_2\in F_1$. Now consider another vertex $x'_2\ne x$ in $G_2$. There exists a path between $x_2$ and $x'_2$ in $G_2$ that avoids $x$. This path lies in $G$ and does not meet the graph $G_1$, and so $x'_2\in F_1$. Hence, all vertices of $G_2$ lie in the face $F_1$, except the vertex $x$, which lies on the boundary of $F_1$ with the boundary walk $xz_1z_2\cdots z_{s_1}x$.\
(iii) Now, $F=F_1\cap F_2$ is a region in the plane and $F\cap G=\emptyset$. Therefore, $F$ is a face of $G$ with the boundary walk $xy_1y_2\cdots y_{s_2}xz_1z_2\cdots z_{s_1}x$ such that $y_i\ne z_j$ for all $i$ and $j$.
\[cor345\] Let $G$ be a connected planar graph and $x$ be a cut-vertex of $G$.
- $x$ lies on a face of length at least $4$.
- If the length of each face incident to $x$ is less than $6$, then $x$ has a neighbour with degree $1$.
By Lemma \[2.13\], $G$ has a face $F$ with the boundary walk $xy_1y_2\cdots y_{s_2}xz_1z_2\cdots z_{s_1}x$ such that $y_i\ne z_j$ for all $i$ and $j$. Thus, the length of this walk is equal to $s_1+s_2+2$. We know that $s_1,s_2\ge1$, which implies (i). If each face incident to $x$ has length less than $6$, then $s_1+s_2+2\le 5$, and so we have three cases: $s_1=s_2=1$; $s_1=1,s_2=2$; and $s_1=2,s_2=1$. In the first case, we deduce that $G=P_3$ with its two endvertices adjacent to $x$. In the second and third cases, $z_1$ or $y_1$, respectively, is an endvertex adjacent to $x$.
Let $G$ be a $1$-NPG. Then each cut-vertex in $G$ lies on the exceptional face of $G$.
Since $G$ is a $1$-NPG, $G$ has no endvertex, and so by Corollary \[cor345\], each cut-vertex $x$ lies on a face of length at least $6$; by Lemma \[d1<6\], $x$ therefore lies on the exceptional face of $G$.
\[cor-1npg\] Let $G$ be a $1$-NPG and $x$ a cut-vertex of $G$. Then the length of the unique exceptional face is at least $6$ and $x$ lies on it.
[@West-2001 Definition 4.1.16] A block of a graph $G$ is a maximal connected subgraph of $G$ that has no cut-vertex. If $G$ itself is connected and has no cut-vertex, then $G$ is a block.
[@West-2001 Pages 155,156]\[rem0\] The blocks of a graph are its isolated vertices, its cut-edges, and its maximal 2-connected subgraphs. Two blocks in a graph share at most one vertex; such a shared vertex must be a cut-vertex, and every cut-vertex belongs to at least two blocks. A graph that is not a single block has at least two blocks (leaf blocks) that each contain exactly one cut-vertex of the graph.
There is no finite, planar, regular, connected graph that has all but one face of the same degree and a single face of a different degree.
Assume to the contrary that $G$ is a $1$-NPG with at least one cut-vertex. By [@Keith-Froncek-Kreher-1 Lemma 4], we have $3\le \deg(w)\le 5$ for all $w\in V(G)$. We consider an embedding of $G$ such that the exceptional face of $G$ is the outerface of $G$. Since $G$ is not a single block, by Remark \[rem0\], $G$ has two leaf blocks that each contain exactly one cut-vertex of $G$. We consider a leaf block $B$ containing the vertex $x$ as a cut-vertex of $G$. Since $G$ is connected, $B$ is not an isolated vertex and $x$ has at least one neighbour $y$ in $B$ with $\deg_B(y)=\deg_G(y)\ge 3$, so $B$ has at least four vertices and $B$ is not a cut-edge of $G$. Therefore, $B$ is a maximal 2-connected subgraph of $G$. Immediately, we have $\deg_B(x)\ge 2$, and since $x$ has at least one neighbour in another block, we see that $\deg_B(x)\le k-1$. On the other hand, for each vertex $z\in V(B)\setminus\{x\}$ we have $\deg_B(z)=\deg_G(z)=k$, while $2\le\deg_B(x)\le k-1$. By Corollary \[cor-1npg\], $x$ lies on the boundary of the unique exceptional face (the outerface) of $G$, whose length is at least $6$. By Lemma \[2.13\], this face is incident to $x$ at least twice. Let $xy_1y_2\cdots y_{s_2}xz_1z_2\cdots z_{s_1}x$ be the boundary walk of the outerface of $G$, where $y_i\ne z_j$ for all $i$ and $j$.
Here, the block $B$ plays the role of $G_1$ in Lemma \[2.13\], and so a part of the boundary walk $xy_1y_2\cdots y_{s_2}xz_1z_2\cdots z_{s_1}x$, say $xy_1y_2\cdots y_{s_2}x$, is the boundary of a face of $B$. Now, by Observation \[outerface\], $xy_1y_2\cdots y_{s_2}x$ is the boundary walk of the outerface of $B$. Indeed, by the 2-connectivity of $B$, $xy_1y_2\cdots y_{s_2}x$ is a cycle.
Since the internal faces of the block $B$ are internal faces of $G$, they have the same lengths. Therefore, $B$ is a connected planar graph such that all its internal faces have the same length, all its vertices, except $x$ lying on the outerface, have the same degree $k$, and $2\le \deg_B(x)\le k-1$. This contradicts Theorem \[th3.8\]. Consequently, there is no $1$-NPG.
M. Atiyah and P. Sutcliffe, *Polyhedra in physics, chemistry and geometry*, Milan J. Math., **71** (2003), 33-58.
J. A. Bondy, U. S. R. Murty, *Graph theory*, Graduate Texts in Mathematics, Vol. 244, Springer, New York, 2008.
D. W. Crowe, *Nearly regular polyhedra with two exceptional faces*, The Many Facets of Graph Theory (Proc. Conf., Western Mich. Univ., Kalamazoo, Mich., 1968), 1969, pp. 63-76.
R. Diestel, *Graph theory*, Fifth edition, Graduate Texts in Mathematics, Vol. 173, Springer, Berlin, 2017.
B. Grünbaum, *Convex polytopes*, With the cooperation of Victor Klee, M. A. Perles and G. C. Shephard. Pure and Applied Mathematics, Vol. 16, Interscience Publishers John Wiley & Sons, Inc., New York, 1967.
M. Horňák, S. Jendrol, *On a conjecture by Plummer and Toft*, J. Graph Theory **30** (1999), no. 3, 177-189.
S. Jendrol, *On the non-existence of certain nearly regular planar maps with two exceptional faces*, Mat. Časopis Sloven. Akad. Vied **25** (1975), no. 2, 159-164.
S. Jendrol, E. Jucovič, *On a conjecture by B. Grünbaum*, Discrete Math. **2** (1972), 35-49.
W. J. Keith, D. Froncek, D. L. Kreher, *Corrigendum to: a note on nearly platonic graphs*, Australas. J. Combin. **72** (2018), 163.
W. J. Keith, D. Froncek, D. L. Kreher, *A note on nearly platonic graphs*, Australas. J. Combin. **70** (2018), 86-103.
D. R. Lloyd, *How old are the Platonic solids?*, BSHM Bull. **27** (2002), no. 3, 131-140.
J. Malkevitch, *Properties of planar graphs with uniform vertex and face structure*, Memoirs of the American Mathematical Society, No. 99, American Mathematical Society, Providence, R.I., 1970.
D. B. West, *Introduction to graph theory*, Second edition, Pearson Education, Inc., 2001.
H. Weyl, *Symmetry*, Princeton University Press, Princeton, N. J., 1952.
---
abstract: 'We report on a measurement of CP–violating asymmetries ($\Acp$) in the Cabibbo-suppressed $D^0\to\pi^+\pi^-$ and $D^0\to K^+K^-$ decays reconstructed in a data sample corresponding to $5.9$ fb$^{-1}$ of integrated luminosity collected by the upgraded Collider Detector at Fermilab. We use the strong decay $D^{*+}\to D^0\pi^+$ to identify the flavor of the charmed meson at production and exploit CP–conserving strong $c\bar{c}$ pair-production in $p\bar{p}$ collisions. High-statistics samples of Cabibbo-favored $D^0\to K^-\pi^+$ decays with and without a $D^{*\pm}$ tag are used to correct for instrumental effects and significantly reduce systematic uncertainties. We measure $\Acp(D^0\to\pi^+\pi^-) = \bigl(+0.22\pm0.24\stat\pm0.11\syst\bigr)\%$ and , in agreement with CP conservation. These are the most precise determinations from a single experiment to date. Under the assumption of negligible direct CP violation in $D^0\to\pi^+\pi^-$ and $D^0\to K^+K^-$ decays, the results provide an upper limit to the CP–violating asymmetry in $D^0$ mixing, $|\Acp^{\rm{ind}}(D^0)|< 0.13\%$ at the 90% confidence level.'
author:
- 'T. Aaltonen'
- 'B. Álvarez González$^z$'
- 'S. Amerio'
- 'D. Amidei'
- 'A. Anastassov$^x$'
- 'A. Annovi'
- 'J. Antos'
- 'G. Apollinari'
- 'J.A. Appel'
- 'T. Arisawa'
- 'A. Artikov'
- 'J. Asaadi'
- 'W. Ashmanskas'
- 'B. Auerbach'
- 'A. Aurisano'
- 'F. Azfar'
- 'W. Badgett'
- 'T. Bae'
- 'A. Barbaro-Galtieri'
- 'V.E. Barnes'
- 'B.A. Barnett'
- 'P. Barria$^{hh}$'
- 'P. Bartos'
- 'M. Bauce$^{ff}$'
- 'F. Bedeschi'
- 'S. Behari'
- 'G. Bellettini$^{gg}$'
- 'J. Bellinger'
- 'D. Benjamin'
- 'A. Beretvas'
- 'A. Bhatti'
- 'D. Bisello$^{ff}$'
- 'I. Bizjak'
- 'K.R. Bland'
- 'B. Blumenfeld'
- 'A. Bocci'
- 'A. Bodek'
- 'D. Bortoletto'
- 'J. Boudreau'
- 'A. Boveia'
- 'L. Brigliadori$^{ee}$'
- 'C. Bromberg'
- 'E. Brucken'
- 'J. Budagov'
- 'H.S. Budd'
- 'K. Burkett'
- 'G. Busetto$^{ff}$'
- 'P. Bussey'
- 'A. Buzatu'
- 'A. Calamba'
- 'C. Calancha'
- 'S. Camarda'
- 'M. Campanelli'
- 'M. Campbell'
- 'F. Canelli$^{11}$'
- 'B. Carls'
- 'D. Carlsmith'
- 'R. Carosi'
- 'S. Carrillo$^m$'
- 'S. Carron'
- 'B. Casal$^k$'
- 'M. Casarsa'
- 'A. Castro$^{ee}$'
- 'P. Catastini'
- 'D. Cauz'
- 'V. Cavaliere'
- 'M. Cavalli-Sforza'
- 'A. Cerri$^f$'
- 'L. Cerrito$^s$'
- 'Y.C. Chen'
- 'M. Chertok'
- 'G. Chiarelli'
- 'G. Chlachidze'
- 'F. Chlebana'
- 'K. Cho'
- 'D. Chokheli'
- 'W.H. Chung'
- 'Y.S. Chung'
- 'M.A. Ciocci$^{hh}$'
- 'A. Clark'
- 'C. Clarke'
- 'G. Compostella$^{ff}$'
- 'M.E. Convery'
- 'J. Conway'
- 'M.Corbo'
- 'M. Cordelli'
- 'C.A. Cox'
- 'D.J. Cox'
- 'F. Crescioli$^{gg}$'
- 'J. Cuevas$^z$'
- 'R. Culbertson'
- 'D. Dagenhart'
- 'N. d’Ascenzo$^w$'
- 'M. Datta'
- 'P. de Barbaro'
- 'M. Dell’Orso$^{gg}$'
- 'L. Demortier'
- 'M. Deninno'
- 'F. Devoto'
- 'M. d’Errico$^{ff}$'
- 'A. Di Canto$^{gg}$'
- 'B. Di Ruzza'
- 'J.R. Dittmann'
- 'M. D’Onofrio'
- 'S. Donati$^{gg}$'
- 'P. Dong'
- 'M. Dorigo'
- 'T. Dorigo'
- 'K. Ebina'
- 'A. Elagin'
- 'A. Eppig'
- 'R. Erbacher'
- 'S. Errede'
- 'N. Ershaidat$^{dd}$'
- 'R. Eusebi'
- 'S. Farrington'
- 'M. Feindt'
- 'J.P. Fernandez'
- 'R. Field'
- 'G. Flanagan$^u$'
- 'R. Forrest'
- 'M.J. Frank'
- 'M. Franklin'
- 'J.C. Freeman'
- 'Y. Funakoshi'
- 'I. Furic'
- 'M. Gallinaro'
- 'J.E. Garcia'
- 'A.F. Garfinkel'
- 'P. Garosi$^{hh}$'
- 'H. Gerberich'
- 'E. Gerchtein'
- 'S. Giagu'
- 'V. Giakoumopoulou'
- 'P. Giannetti'
- 'K. Gibson'
- 'C.M. Ginsburg'
- 'N. Giokaris'
- 'P. Giromini'
- 'G. Giurgiu'
- 'V. Glagolev'
- 'D. Glenzinski'
- 'M. Gold'
- 'D. Goldin'
- 'N. Goldschmidt'
- 'A. Golossanov'
- 'G. Gomez'
- 'G. Gomez-Ceballos'
- 'M. Goncharov'
- 'O. González'
- 'I. Gorelov'
- 'A.T. Goshaw'
- 'K. Goulianos'
- 'S. Grinstein'
- 'C. Grosso-Pilcher'
- 'R.C. Group$^{53}$'
- 'J. Guimaraes da Costa'
- 'S.R. Hahn'
- 'E. Halkiadakis'
- 'A. Hamaguchi'
- 'J.Y. Han'
- 'F. Happacher'
- 'K. Hara'
- 'D. Hare'
- 'M. Hare'
- 'R.F. Harr'
- 'K. Hatakeyama'
- 'C. Hays'
- 'M. Heck'
- 'J. Heinrich'
- 'M. Herndon'
- 'S. Hewamanage'
- 'A. Hocker'
- 'W. Hopkins$^g$'
- 'D. Horn'
- 'S. Hou'
- 'R.E. Hughes'
- 'M. Hurwitz'
- 'U. Husemann'
- 'N. Hussain'
- 'M. Hussein'
- 'J. Huston'
- 'G. Introzzi'
- 'M. Iori$^{jj}$'
- 'A. Ivanov$^p$'
- 'E. James'
- 'D. Jang'
- 'B. Jayatilaka'
- 'E.J. Jeon'
- 'S. Jindariani'
- 'M. Jones'
- 'K.K. Joo'
- 'S.Y. Jun'
- 'T.R. Junk'
- 'T. Kamon$^{25}$'
- 'P.E. Karchin'
- 'A. Kasmi'
- 'Y. Kato$^o$'
- 'W. Ketchum'
- 'J. Keung'
- 'V. Khotilovich'
- 'B. Kilminster'
- 'D.H. Kim'
- 'H.S. Kim'
- 'J.E. Kim'
- 'M.J. Kim'
- 'S.B. Kim'
- 'S.H. Kim'
- 'Y.K. Kim'
- 'Y.J. Kim'
- 'N. Kimura'
- 'M. Kirby'
- 'S. Klimenko'
- 'K. Knoepfel'
- 'K. Kondo[^1]'
- 'D.J. Kong'
- 'J. Konigsberg'
- 'A.V. Kotwal'
- 'M. Kreps'
- 'J. Kroll'
- 'D. Krop'
- 'M. Kruse'
- 'V. Krutelyov$^c$'
- 'T. Kuhr'
- 'M. Kurata'
- 'S. Kwang'
- 'A.T. Laasanen'
- 'S. Lami'
- 'S. Lammel'
- 'M. Lancaster'
- 'R.L. Lander'
- 'K. Lannon$^y$'
- 'A. Lath'
- 'G. Latino$^{hh}$'
- 'T. LeCompte'
- 'E. Lee'
- 'H.S. Lee$^q$'
- 'J.S. Lee'
- 'S.W. Lee$^{bb}$'
- 'S. Leo$^{gg}$'
- 'S. Leone'
- 'J.D. Lewis'
- 'A. Limosani$^t$'
- 'C.-J. Lin'
- 'M. Lindgren'
- 'E. Lipeles'
- 'A. Lister'
- 'D.O. Litvintsev'
- 'C. Liu'
- 'H. Liu'
- 'Q. Liu'
- 'T. Liu'
- 'S. Lockwitz'
- 'A. Loginov'
- 'D. Lucchesi$^{ff}$'
- 'J. Lueck'
- 'P. Lujan'
- 'P. Lukens'
- 'G. Lungu'
- 'J. Lys'
- 'R. Lysak$^e$'
- 'R. Madrak'
- 'K. Maeshima'
- 'P. Maestro$^{hh}$'
- 'S. Malik'
- 'G. Manca$^a$'
- 'A. Manousakis-Katsikakis'
- 'F. Margaroli'
- 'C. Marino'
- 'M. Martínez'
- 'P. Mastrandrea'
- 'K. Matera'
- 'M.E. Mattson'
- 'A. Mazzacane'
- 'P. Mazzanti'
- 'K.S. McFarland'
- 'P. McIntyre'
- 'R. McNulty$^j$'
- 'A. Mehta'
- 'P. Mehtala'
- 'C. Mesropian'
- 'T. Miao'
- 'D. Mietlicki'
- 'A. Mitra'
- 'H. Miyake'
- 'S. Moed'
- 'N. Moggi'
- 'M.N. Mondragon$^m$'
- 'C.S. Moon'
- 'R. Moore'
- 'M.J. Morello$^{ii}$'
- 'J. Morlock'
- 'P. Movilla Fernandez'
- 'A. Mukherjee'
- 'Th. Muller'
- 'P. Murat'
- 'M. Mussini$^{ee}$'
- 'J. Nachtman$^n$'
- 'Y. Nagai'
- 'J. Naganoma'
- 'I. Nakano'
- 'A. Napier'
- 'J. Nett'
- 'C. Neu'
- 'M.S. Neubauer'
- 'J. Nielsen$^d$'
- 'L. Nodulman'
- 'S.Y. Noh'
- 'O. Norniella'
- 'L. Oakes'
- 'S.H. Oh'
- 'Y.D. Oh'
- 'I. Oksuzian'
- 'T. Okusawa'
- 'R. Orava'
- 'L. Ortolan'
- 'S. Pagan Griso$^{ff}$'
- 'C. Pagliarone'
- 'E. Palencia$^f$'
- 'V. Papadimitriou'
- 'A.A. Paramonov'
- 'J. Patrick'
- 'G. Pauletta$^{kk}$'
- 'M. Paulini'
- 'C. Paus'
- 'D.E. Pellett'
- 'A. Penzo'
- 'T.J. Phillips'
- 'G. Piacentino'
- 'E. Pianori'
- 'J. Pilot'
- 'K. Pitts'
- 'C. Plager'
- 'L. Pondrom'
- 'S. Poprocki$^g$'
- 'K. Potamianos'
- 'F. Prokoshin$^{cc}$'
- 'A. Pranko'
- 'F. Ptohos$^h$'
- 'G. Punzi$^{gg}$'
- 'A. Rahaman'
- 'V. Ramakrishnan'
- 'N. Ranjan'
- 'I. Redondo'
- 'P. Renton'
- 'M. Rescigno'
- 'T. Riddick'
- 'F. Rimondi$^{ee}$'
- 'L. Ristori$^{42}$'
- 'A. Robson'
- 'T. Rodrigo'
- 'T. Rodriguez'
- 'E. Rogers'
- 'S. Rolli$^i$'
- 'R. Roser'
- 'F. Ruffini$^{hh}$'
- 'A. Ruiz'
- 'J. Russ'
- 'V. Rusu'
- 'A. Safonov'
- 'W.K. Sakumoto'
- 'Y. Sakurai'
- 'L. Santi$^{kk}$'
- 'K. Sato'
- 'V. Saveliev$^w$'
- 'A. Savoy-Navarro$^{aa}$'
- 'P. Schlabach'
- 'A. Schmidt'
- 'E.E. Schmidt'
- 'T. Schwarz'
- 'L. Scodellaro'
- 'A. Scribano$^{hh}$'
- 'F. Scuri'
- 'S. Seidel'
- 'Y. Seiya'
- 'A. Semenov'
- 'F. Sforza$^{hh}$'
- 'S.Z. Shalhout'
- 'T. Shears'
- 'P.F. Shepard'
- 'M. Shimojima$^v$'
- 'M. Shochet'
- 'I. Shreyber-Tecker'
- 'A. Simonenko'
- 'P. Sinervo'
- 'K. Sliwa'
- 'J.R. Smith'
- 'F.D. Snider'
- 'A. Soha'
- 'V. Sorin'
- 'H. Song'
- 'P. Squillacioti$^{hh}$'
- 'M. Stancari'
- 'R. St. Denis'
- 'B. Stelzer'
- 'O. Stelzer-Chilton'
- 'D. Stentz$^x$'
- 'J. Strologas'
- 'G.L. Strycker'
- 'Y. Sudo'
- 'A. Sukhanov'
- 'I. Suslov'
- 'K. Takemasa'
- 'Y. Takeuchi'
- 'J. Tang'
- 'M. Tecchio'
- 'P.K. Teng'
- 'J. Thom$^g$'
- 'J. Thome'
- 'G.A. Thompson'
- 'E. Thomson'
- 'D. Toback'
- 'S. Tokar'
- 'K. Tollefson'
- 'T. Tomura'
- 'D. Tonelli'
- 'S. Torre'
- 'D. Torretta'
- 'P. Totaro'
- 'M. Trovato$^{ii}$'
- 'F. Ukegawa'
- 'S. Uozumi'
- 'A. Varganov'
- 'F. Vázquez$^m$'
- 'G. Velev'
- 'C. Vellidis'
- 'M. Vidal'
- 'I. Vila'
- 'R. Vilar'
- 'J. Vizán'
- 'M. Vogel'
- 'G. Volpi'
- 'P. Wagner'
- 'R.L. Wagner'
- 'T. Wakisaka'
- 'R. Wallny'
- 'S.M. Wang'
- 'A. Warburton'
- 'D. Waters'
- 'W.C. Wester III'
- 'D. Whiteson$^b$'
- 'A.B. Wicklund'
- 'E. Wicklund'
- 'S. Wilbur'
- 'F. Wick'
- 'H.H. Williams'
- 'J.S. Wilson'
- 'P. Wilson'
- 'B.L. Winer'
- 'P. Wittich$^g$'
- 'S. Wolbers'
- 'H. Wolfe'
- 'T. Wright'
- 'X. Wu'
- 'Z. Wu'
- 'K. Yamamoto'
- 'D. Yamato'
- 'T. Yang'
- 'U.K. Yang$^r$'
- 'Y.C. Yang'
- 'W.-M. Yao'
- 'G.P. Yeh'
- 'K. Yi$^n$'
- 'J. Yoh'
- 'K. Yorita'
- 'T. Yoshida$^l$'
- 'G.B. Yu'
- 'I. Yu'
- 'S.S. Yu'
- 'J.C. Yun'
- 'A. Zanetti'
- 'Y. Zeng'
- 'C. Zhou'
- 'S. Zucchelli$^{ee}$'
title: 'Measurement of CP–violating asymmetries in $D^0\to\pi^+\pi^-$ and $D^0\to K^+K^-$ decays at CDF'
---
Introduction\[sec:intro\]
=========================
The rich phenomenology of neutral flavored mesons provides many experimentally accessible observables sensitive to virtual contributions of non-standard model (SM) particles or couplings. The presence of non-SM physics may alter the expected decay or flavor-mixing rates, or introduce additional sources of CP violation besides the Cabibbo-Kobayashi-Maskawa (CKM) phase. The physics of neutral kaons and bottom mesons has been mostly explored in dedicated experiments using kaon beams and $e^+e^-$ collisions [@Antonelli:2009ws]. The physics of bottom-strange mesons is currently being studied in detail in hadron collisions [@Antonelli:2009ws]. In spite of the success of several dedicated experiments in the 1980s and 1990s, experimental sensitivities to parameters related to mixing and CP violation in the charm sector were still orders of magnitude away from most SM and non-SM expectations [@Bianco:2003vb]. Improvements from early measurements at dedicated $e^+e^-$ colliders at the $\Upsilon(4S)$ resonance ($B$-factories) and the Tevatron were still insufficient for discriminating among SM and non-SM scenarios [@pdg; @hfag; @Artuso:2008vf; @Shipsey:2006zz; @Burdman:2003rs]. Since charm transitions are described by physics of the first two quark generations, CP–violating effects are expected to be smaller than $\mathcal{O}(10^{-2})$. Thus, relevant measurements require large event samples and careful control of systematic uncertainties to reach the needed sensitivity. Also, CP–violating effects for charm have significantly more uncertain predictions compared to the bottom and strange sectors because of the intermediate value of the charm quark mass (too light for factorization of hadronic amplitudes and too heavy for applying chiral symmetry). Taken together, these factors have slowed progress in the charm sector.
Studies of CP violation in charm decays provide a unique probe for new physics. The neutral $D$ system is the only one where up-sector quarks are involved in the initial state. Thus it probes scenarios where up-type quarks play a special role, such as supersymmetric models where the down quark and the squark mass matrices are aligned [@Nir:1993mx; @Ciuchini:2007cw] and, more generally, models in which CKM mixing is generated in the up-quark sector. The interest in charm dynamics has increased recently with the observation of charm oscillations [@Aubert:2007wf; @Staric:2007dt; @:2007uc]. The current measurements [@hfag] indicate $\mathcal{O}(10^{-2})$ magnitudes for the parameters governing their phenomenology. Such values are on the upper end of most theory predictions [@Petrov:2006nc]. Charm oscillations could be enhanced by a broad class of non-SM physics processes [@Golowich:2007ka]. Any generic non-SM contribution to the mixing would naturally carry additional CP–violating phases, which could enhance the observed CP–violating asymmetries relative to SM predictions. Time-integrated CP–violating asymmetries of singly-Cabibbo-suppressed decays into CP eigenstates such as $D^0\to\pi^+\pi^-$ and $D^0\to K^+ K^-$ are powerful probes of non-SM physics contributions in the “mixing" transition amplitudes. They also probe the magnitude of “penguin" contributions, which are negligible in the SM, but could be greatly enhanced by the exchange of additional non-SM particles. Both phenomena would, in general, increase the size of the observed CP violation with respect to the SM expectation. Any significant CP–violating asymmetry above the $10^{-2}$ level expected in the CKM hierarchy would indicate non-SM physics. The current experimental status is summarized in Table \[tab:today\]. No CP violation has been found within the precision of about 0.5% attained by the Belle and BaBar experiments. The previous CDF result dates from 2005 and was obtained using data from only 123 pb$^{-1}$ of integrated luminosity. Currently, CDF has the world’s largest samples of exclusive charm meson decays in charged final states, with competitive signal purities, owing to the good performance of the trigger for displaced tracks. With the current sample CDF can achieve a sensitivity that allows probing more extensive portions of the space of non-SM physics parameters.
We present measurements of time-integrated CP–violating asymmetries in the Cabibbo-suppressed $D^0\to\pi^+\pi^-$ and $D^0\to K^+K^-$ decays (collectively referred to as $D^0\to h^+h^-$ in this article) using 1.96 TeV proton-antiproton collision data collected by the upgraded Collider Detector at Fermilab (CDF II) and corresponding to 5.9 fb$^{-1}$ of integrated luminosity. Because the final states are common to charm and anti-charm meson decays, the time-dependent asymmetry between decays of states identified as $D^0$ and $\Dbar^0$ at the time of production ($t=0$), defined as $$\label{eq:acp}
\Acp(h^+h^-, t) = \frac{N(D^0\to h^+h^-;t)-N(\Dbar^0\to h^+h^-;t)}{N(D^0\to h^+h^-;t)+N(\Dbar^0\to h^+h^-;t)}, \nonumber
$$ receives contributions from any difference in decay widths between $D^0$ and $\Dbar^0$ mesons in the chosen final state (direct CP violation), any difference in mixing probabilities between $D^0$ and $\Dbar^0$ mesons, and the interference between direct decays and decays preceded by flavor oscillations (both indirect CP violation). Due to the slow mixing rate of charm mesons, the time-dependent asymmetry is approximated at first order as the sum of two terms, $$\label{eq:acp2}
\Acp(h^+h^-;t) \approx \Acp^{\rm{dir}}(h^+h^-)+\frac{t}{\tau}\ \Acp^{\rm{ind}}(h^+h^-),$$ where $t/\tau$ is the proper decay time in units of $D^0$ lifetime ($\tau \approx 0.4$ ps), and the asymmetries are related to the decay amplitude $\mathcal{A}$ and the usual parameters used to describe flavored-meson mixing $x,\ y,\ p$, and $q$ [@pdg] by
$$\begin{aligned}
\Acp^{\rm{dir}}(h^+h^-) &\equiv\Acp(t=0)=\frac{\left|\mathcal{A}(D^0\to h^+h^-)\right|^2-\left|\mathcal{A}(\Dbar^0\to h^+h^-)\right|^2}{\left|\mathcal{A}(D^0\to h^+h^-)\right|^2+\left|\mathcal{A}(\Dbar^0\to h^+h^-)\right|^2},\\*
\Acp^{\rm{ind}}(h^+h^-) &= \frac{\eta_{\CP}}{2}\left[y \left(\left|\frac{q}{p}\right|-\left|\frac{p}{q}\right|\right)\cos\varphi-x \left(\left|\frac{q}{p}\right|+\left|\frac{p}{q}\right|\right)\sin\varphi\right],\end{aligned}$$
where $\eta_{\CP} = +1$ is the CP-parity of the decay final state and $\varphi$ is the CP–violating phase. The time-integrated asymmetry is then the time integral of Eq. (\[eq:acp2\]) over the observed distribution of proper decay time ($D(t)$), $$\begin{aligned}
\label{eq:acp3}
\Acp(h^+h^-) &= \Acp^{\rm{dir}}(h^+h^-)+\Acp^{\rm{ind}}(h^+h^-)\int_0^\infty \frac{t}{\tau}\ D(t)dt \nonumber \\
&= \Acp^{\rm{dir}}(h^+h^-) + \frac{\langle t \rangle}{\tau}\ \Acp^{\rm{ind}}(h^+h^-).\end{aligned}$$ The first term arises from direct and the second from indirect CP violation. Since the value of $\langle t \rangle$ depends on $D(t)$, different values of time-integrated asymmetry could be observed in different experiments, depending on the detector acceptances as a function of decay time. Thus, each experiment may provide different sensitivity to $\Acp^{\rm{dir}}$ and $\Acp^{\rm{ind}}$. Since the data used in this analysis were collected with an online event selection (trigger) that imposes requirements on the displacement of the $D^0$-meson decay point from the production point, our sample is enriched in candidates with larger decay times with respect to experiments at the $B$-factories. This makes the present measurement more sensitive to mixing-induced CP violation. In addition, the combination of our results with those from Belle and BaBar provides some discrimination between the two contributions to the asymmetry.
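As a numerical illustration of Eq. (\[eq:acp3\]) (a sketch of ours, not part of the analysis), the Python fragment below evaluates the factor $\langle t\rangle/\tau$ for a toy sample with and without a displacement requirement; the lifetime value, the cut, and the asymmetry inputs are assumed placeholders chosen only to mimic a trigger that favors long-lived candidates.

```python
# Sketch: effect of the decay-time acceptance D(t) on the weight <t>/tau
# multiplying the indirect CP asymmetry in the time-integrated asymmetry.
import numpy as np

TAU = 0.41e-12                                 # D0 lifetime in seconds (~0.4 ps, as quoted in the text)
rng = np.random.default_rng(1)

t = rng.exponential(TAU, size=1_000_000)       # unbiased proper decay times
accepted = t > 0.5 * TAU                       # hypothetical displacement requirement

mean_t_unbiased = t.mean() / TAU               # -> ~1 for a pure exponential
mean_t_biased = t[accepted].mean() / TAU       # -> >1 for a displaced-decay selection

def time_integrated_acp(a_dir, a_ind, mean_t_over_tau):
    """A_CP = A_dir + (<t>/tau) * A_ind."""
    return a_dir + mean_t_over_tau * a_ind

# Example: pure indirect CP violation of 0.1% (placeholder value)
print(time_integrated_acp(0.0, 0.001, mean_t_unbiased))
print(time_integrated_acp(0.0, 0.001, mean_t_biased))
```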
Experiment $\Acp(\pi^+\pi^-)\ (\%)$ $\Acp(K^+K^-)\ (\%)$
--------------------------- -------------------------- -------------------------
BaBar 2008 [@Aubert:2007if]   $-0.24\pm0.52 \pm0.22$   $+0.00\pm0.34 \pm0.13 $
Belle 2008 [@:2008rx] $-0.43\pm0.52 \pm0.12$ $-0.43\pm0.30 \pm0.11 $
CDF 2005 [@Acosta:2004ts] $+1.0\pm1.3 \pm0.6 $ $+2.0\pm1.2 \pm0.6$
: Summary of recent experimental measurements of CP-violating asymmetries. The first quoted uncertainty is statistical, the second is systematic.[]{data-label="tab:today"}
Overview\[sec:overview\]
========================
In the present work we measure the CP–violating asymmetry in decays of $D^0$ and $\overline{D}^0$ mesons into $\pi^+\pi^-$ and $K^+K^-$ final states. Because the final states are charge-symmetric, to know whether they originate from a $D^0$ or a $\Dbar^0$ decay, we need the neutral charm candidate to be produced in the decay of an identified $D^{* +}$ or $D^{* -}$ meson. Flavor conservation in the strong-interaction decay of the $D^{*\pm}$ meson allows identification of the initial charm flavor through the sign of the charge of the $\pi$ meson: $D^{* +} \to D^0~\pi^+$ and $ D^{* -} \to \Dbar^0~\pi^-.$ We refer to $D$ mesons coming from identified $D^{*\pm}$ decays as the [*tagged*]{} sample and to the tagging pion as the [*soft*]{} pion, $\pi_s$.
In the data collected by CDF between February 2002 and January 2010, corresponding to an integrated luminosity of about 5.9 fb$^{-1}$, we reconstruct approximately 215 000 $D^*$–tagged $D^0\to\pi^+\pi^-$ decays and 476 000 $D^*$–tagged $D^0\to K^+K^-$ decays. To measure the asymmetry, we determine the number of detected decays of opposite flavor and use the fact that primary charm and anti-charm mesons are produced in equal numbers by the CP–conserving strong interaction. The observed asymmetry is the combination of the contributions from CP violation and from charge asymmetries in the detection efficiency between positive and negative soft pions from the $D^{*\pm}$ decay. To correct for such instrumental asymmetries, expected to be of the order of a few $10^{-2}$, we use two additional event samples: 5 million tagged and 29 million untagged Cabibbo–favored $D^0\to K^-\pi^+$ decays. We achieve cancellation of instrumental asymmetries with high accuracy and measure the CP–violating asymmetries of $D^0\to\pi^+\pi^-$ and $D^0\to K^+K^-$ with a systematic uncertainty of about $10^{-3}$.
The paper is structured as follows. In Sec. \[sec:detector\] we briefly describe the components of the CDF detector relevant for this analysis. In Sec. \[sec:trigger\] we summarize how the CDF trigger system was used to collect the event sample. We describe the strategy of the analysis and how we correct for detector-induced asymmetries in Sec. \[sec:method\]. The event selection and the kinematic requirements applied to isolate the event samples are presented in Sec. \[sec:sel\]; the reweighting of kinematic distributions is discussed in Sec. \[sec:kin\]. The determination of observed asymmetries from data is described in Sec. \[sec:fit\]. In Sec. \[sec:syst\] we discuss possible sources of systematic uncertainties and finally, in Sec. \[sec:final\], we present the results and compare with measurements performed by other experiments. We also show that by combining the present measurement with results from other experiments, we can partially disentangle the contributions of direct and indirect CP violation. A brief summary is presented in Sec. \[sec:theend\]. A mathematical derivation of the method employed to correct for instrumental asymmetries is discussed in Appendix \[sec:method\_math\] and its validation on simulated samples is summarized in Appendix \[sec:mcvalidation\].
The CDF II detector\[sec:detector\]
===================================
The CDF II detector has a cylindrical geometry with forward-backward symmetry and a tracking system in a 1.4 T magnetic field, coaxial with the beam. The tracking system is surrounded by calorimeters [@calorimetro] and muon-detection chambers [@muoni]. A cylindrical coordinate system, $(r,\phi,z)$, is used with origin at the geometric center of the detector, where $r$ is the perpendicular distance from the beam, $\phi$ is the azimuthal angle, and the $\hat{z}$ vector is in the direction of the proton beam. The polar angle $\theta$ with respect to the proton beam defines the pseudorapidity $\eta = -\ln\tan(\theta/2)$.
The CDF II detector tracking system determines the trajectories of charged particles (tracks) and consists of an open cell argon-ethane gas drift chamber called the central outer tracker (COT) [@COT] and a silicon vertex microstrip detector (SVX II) [@SVX]. The COT active volume covers $|z|<155$ cm from a radius of 40 to 140 cm and consists of 96 sense wire layers grouped into eight alternating axial and 2$^{\circ}$ stereo superlayers. To improve the resolution on their parameters, tracks found in the COT are extrapolated inward and matched to hits in the silicon detector. The SVX II has five layers of silicon strips at radial distances ranging from 2.5 cm to 10.6 cm from the beamline. Three of the five layers are double-sided planes with $r-z$ strips oriented at 90$^\circ$ relative to $r-\phi$ strips, and the remaining two layers are double-sided planes with strips oriented at $\pm1.2^\circ$ angles relative to the $r-\phi$ strips. The SVX II detector consists of three longitudinal barrels, each 29 cm in length, and covers approximately 90% of the $p\overline{p}$ interaction region. The SVX II provides precise information on the trajectories of long-lived particles (decay length), which is used for the identification of displaced, secondary track vertices of $B$ and $D$ hadron decays. An innermost single-sided silicon layer (L00), installed at 1.5 cm from the beam, further improves the resolution for vertex reconstruction [@L00]. Outside of the SVX II, two additional layers of silicon assist pattern recognition and extend the sensitive region of the tracking detector to $|\eta|\approx 2$ [@ISL]. These intermediate silicon layers (ISL) are located between the SVX II and the COT and consist of one layer at a radius of 23 cm in the central region, $|\eta|\leq 1$, and two layers in the forward region $1\leq|\eta|\leq 2$, at radii of 20 and 29 cm. The component of a charged particle’s momentum transverse to the beam ($p_T$) is determined with a resolution of for tracks with $p_T>2$ GeV/$c$. The excellent momentum resolution yields precise mass resolution for fully reconstructed $B$ and $D$ decays, which provides good signal-to-background. The typical resolution on the reconstructed position of decay vertices is approximately 30 $\mu$m in the transverse direction, effective to identify vertices from charmed meson decays, which are typically displaced by 250 $\mu$m from the beam. In the longitudinal direction, the resolution is approximately 70 $\mu$m, allowing suppression of backgrounds from charged particles originating from decays of distinct heavy hadrons in the event.
Online sample selection\[sec:trigger\]
======================================
The CDF II trigger system plays an important role in this measurement. Identification of hadronic decays of heavy-flavored mesons is challenging in the Tevatron collider environment due to the large inelastic $p\overline{p}$ cross section and high particle multiplicities at 1.96 TeV. In order to collect these events, the trigger system must reject more than 99.99% of the collisions while retaining good efficiency for signal. In this Section, we describe the CDF II trigger system and the algorithms used in collecting the samples of hadronic $D$ decays in this analysis.
The CDF II trigger system has a three-level architecture: the first two levels, level 1 (L1) and level 2 (L2), are implemented in hardware and the third, level 3 (L3), is implemented in software on a cluster of computers using reconstruction algorithms that are similar to those used off line.
Using information from the COT, at L1, the extremely fast tracker (XFT) [@XFT] reconstructs trajectories of charged particles in the $r-\phi$ plane for each proton-antiproton bunch crossing. Events are selected for further processing when two tracks that satisfy trigger criteria on basic variables are found. The variables include the product of any combination of two particles’ charges (opposite or same sign), the opening angle of the two tracks in the transverse plane ($\Delta\phi$), the two particles’ transverse momenta, and their scalar sum.
At L2 the silicon vertex trigger (SVT) [@SVT] incorporates information from the SVX II detector into the trigger track reconstruction. The SVT identifies tracks displaced from the $p\bar{p}$ interaction point, such as those that arise from weak decays of heavy hadrons and have sufficient transverse momentum. Displaced tracks are those that have a distance of closest approach to the beamline (impact parameter $d_0$) inconsistent with having originated from the $p\bar{p}$ interaction point (primary vertex). The impact parameter resolution of the SVT is approximately 50 $\mu$m, which includes a contribution of 35 $\mu$m from the width of the $p\overline{p}$ interaction region. The trigger selections used in this analysis require two tracks, each with impact parameter typically greater than 120 $\mu$m and smaller than 1 mm. In addition, the L2 trigger requires the transverse decay length ($L_{xy}$) to exceed $200~\mu$m, where $L_{xy}$ is calculated as the projection of the vector from the primary vertex to the two-track vertex in the transverse plane along the vectorial sum of the transverse momenta of the tracks. The trigger based on the SVT collects large quantities of long-lived $D$ hadrons, rejecting most of the prompt background. However, through its impact-parameter-based selection, the SVT trigger also biases the observed proper decay time distribution. This has important consequences for the results of this analysis, which will be discussed in Sec. \[sec:final\].
The L3 trigger uses a full reconstruction of the event with all detector information, but uses a simpler tracking algorithm and preliminary calibrations relative to the ones used off line. The L3 trigger retests the criteria imposed by the L2 trigger. In addition, the difference in $z$ of the two tracks at the point of minimum distance from the primary vertex, $\Delta z_0$, is required not to exceed 5 cm, removing events where the pair of tracks originate from different collisions within the same crossing of $p$ and $\bar{p}$ bunches.
Level-1 Level-2 Level-3
-------------------------- ----------------------------------- -----------------------------------
$p_T > 2.5$ GeV/$c$ $p_T > 2.5$ GeV/$c$ $p_T> 2.5$ GeV/$c$
$\sum p_T > 6.5$ GeV/$c$ $\sum p_T > 6.5$ GeV/$c$ $\sum p_T > 6.5$ GeV/$c$
Opposite charge Opposite charge Opposite charge
$\Delta\phi < 90^\circ$ $2^\circ < \Delta\phi < 90^\circ$ $2^\circ < \Delta\phi < 90^\circ$
$0.12 < d_0 <1.0$ mm $0.1 < d_0 <1.0$ mm
$L_{xy} > 200~\mu$m $L_{xy} >200~\mu$m
$|\Delta z_0|<5$ cm
$|\eta|<1.2$
$p_T > 2$ GeV/$c$ $p_T > 2$ GeV/$c$ $p_T>2$ GeV/$c$
$\sum p_T > 5.5$ GeV/$c$ $\sum p_T > 5.5$ GeV/$c$ $\sum p_T > 5.5$ GeV/$c$
Opposite charge Opposite charge Opposite charge
$\Delta\phi < 90^\circ$ $2^\circ < \Delta\phi < 90^\circ$ $2^\circ < \Delta\phi < 90^\circ$
$0.12 < d_0 <1.0$ mm $0.1 < d_0 <1.0$ mm
$L_{xy} > 200~\mu$m $L_{xy} >200~\mu$m
$|\Delta z_0|<5$ cm
$|\eta|<1.2$
$p_T > 2$ GeV/$c$ $p_T > 2$ GeV/$c$ $p_T>2$ GeV/$c$
$\sum p_T > 4$ GeV/$c$ $\sum p_T > 4$ GeV/$c$ $\sum p_T > 4$ GeV/$c$
$\Delta\phi < 90^\circ$ $2^\circ < \Delta\phi < 90^\circ$ $2^\circ < \Delta\phi < 90^\circ$
$0.1 < d_0 <1.0$ mm $0.1 < d_0 <1.0$ mm
$L_{xy} > 200~\mu$m $L_{xy} >200~\mu$m
$|\Delta z_0|<5$ cm
$|\eta|<1.2$
: Typical selection criteria for the three versions of the displaced-tracks trigger used in this analysis. The criteria refer to track pairs. The $p_T$, $d_0$, and $\eta$ requirements are applied to both tracks. The $\sum p_T$ refers to the scalar sum of the $p_T$ of the two tracks. The $\sum p_T$ threshold in each of the three vertical portions of the table identifies the high-$p_T$ (top), medium-$p_T$ (middle), and low-$p_T$ (bottom) trigger selections.[]{data-label="tab:mulbodtrig"}
Over the course of a single continuous period of Tevatron collisions (a store), the available trigger bandwidth varies because trigger rates fall as instantaneous luminosity falls. Higher trigger rates at high luminosity arise from both a larger rate for real physics processes as well as multiplicity-dependent backgrounds in multiple $p\overline{p}$ interactions. To fully exploit the available trigger bandwidth, we employ three main variants of the displaced-tracks trigger. The three selections are summarized in Table \[tab:mulbodtrig\] and are referred to as the low-$p_T$, medium-$p_T$, and high-$p_T$ selections according to their requirements on minimum transverse momentum. At high luminosity, the higher purity but less efficient high-$p_T$ selection is employed. As the luminosity decreases over the course of a store, trigger bandwidth becomes available and the other selections are utilized to fill the available trigger bandwidth and maximize the charm yield. The rates are controlled by the application of a prescale, which rejects a predefined fraction of events accepted by each trigger selection, depending on the instantaneous luminosity.
Suppressing detector-induced charge asymmetries\[sec:method\]
=============================================================
The procedure used to cancel detector-induced asymmetries is briefly outlined here, while a detailed mathematical treatment is given in Appendix \[sec:method\_math\].
We directly measure the observed “raw” asymmetry: $$A(D^0) = \frac{N_{\text{obs}}(D^0)-N_{\text{obs}}(\Dbar^0)}{N_{\text{obs}}(D^0)+N_{\text{obs}}(\Dbar^0)}, \nonumber$$ that is, the number of observed $D^0$ decays into the selected final state ($\pi^+\pi^-$ or $K^+K^-$) minus the number of $\Dbar^0$ decays, divided by the sum.
![Observed asymmetry between the number of reconstructed $D^{*+}$ and $D^{*-}$ mesons as a function of the soft pion’s transverse momentum for pure samples of $D^{*+}\to D^0(\to\pi^+\pi^-)\pi_s^+$ and $D^{*-}\to \overline{D}^0(\to\pi^+\pi^-)\pi_s^-$ decays. The soft pion transverse momentum spectrum is also shown.[]{data-label="fig:soft"}](fig1){width="8.6cm"}
The main experimental difficulty of this measurement comes from the small differences in the detection efficiencies of tracks of opposite charge which may lead, if not properly taken into account, to spuriously-measured charge asymmetries. Relevant instrumental effects include differences in interaction cross sections with matter between positive and negative low-momentum hadrons and the geometry of the main tracking system. The drift chamber layout is intrinsically charge asymmetric because of a $\approx 35^\circ$ tilt angle between the cell orientation and the radial direction, designed to partially correct for the Lorentz angle in the charge drift direction caused by crossed electric and magnetic fields. In the COT, different detection efficiencies are expected for positive and negative low-momentum tracks (especially, in our case, for soft pions), which induce an instrumental asymmetry in the number of reconstructed $D^{*}$–tagged $D^0$ and $\Dbar^0$ mesons. Other possible asymmetries may originate in slightly different performance between positive and negative tracks in pattern-reconstruction and track-fitting algorithms. The combined effect of these is a net asymmetry in the range of a few percent, as shown in Fig. \[fig:soft\]. This must be corrected to better than one per mil to match the expected statistical precision of the present measurement. In order to cancel detector effects, we extract the value of $\Acp(D^0\to h^+h^-)$ using a fully data-driven method, based on an appropriate combination of charge-asymmetries observed in three different event samples: $D^*$-tagged $D^0\to h^+h^-$ decays (or simply $hh^*$), $D^*$-tagged $D^0\to K^-\pi^+$ decays ($K\pi^*$), and untagged $D^0\to K^-\pi^+$ decays ($K\pi$). We assume the involved physical and instrumental asymmetries to be small, as indicated by previous measurements. Neglecting terms of order $\Acp\delta$ and $\delta^2$, the observed asymmetries in the three samples are $$\label{eq:acpraw}
\begin{aligned}
A(hh^*) &= \Acp(hh) + \delta(\pi_s)^{hh^*},\\
A(K\pi^*) &= \Acp(K\pi) + \delta(\pi_s)^{K\pi^*} + \delta(K\pi)^{K\pi^*},\\
A(K\pi) &= \Acp(K\pi) + \delta(K\pi)^{K\pi},
\end{aligned}$$ where $\delta(\pi_s)^{hh^*}$ is the instrumental asymmetry for reconstructing a positive or negative soft pion associated with a $h^+h^-$ charm decay, induced by the charge-asymmetric interaction cross section and reconstruction efficiency for low-transverse-momentum pions; $\delta(\pi_s)^{K\pi^*}$ is the same as above for tagged $K^+\pi^-$ and $K^-\pi^+$ decays; and $\delta(K\pi)^{K\pi}$ and $\delta(K\pi)^{K\pi^*}$ are the instrumental asymmetries for reconstructing a $K^+\pi^-$ or a $K^-\pi^+$ decay for the untagged and the tagged case, respectively. All the above effects can vary as functions of a number of kinematic variables or environmental conditions in the detector. If the kinematic distributions of soft pions are consistent in $K\pi^*$ and $hh^*$ samples, and if the distributions of $D^0$ decay products are consistent in $K\pi^*$ and $K\pi$ samples, then $\delta(\pi_s)^{hh^*} \approx \delta(\pi_s)^{K\pi^*}$ and $\delta(K\pi)^{K\pi^*}\approx \delta(K\pi)^{K\pi}$. The CP–violating asymmetries then become accessible as $$\label{eq:formula}
\Acp(hh) = A(hh^*) - A(K\pi^*) + A(K\pi).$$ This formula relies on cancellations based on two assumptions. At the Tevatron, charm and anticharm mesons are expected to be created in almost equal numbers. Since the overwhelming majority of them are produced by CP–conserving strong interactions, and the $p\bar{p}$ initial state is symmetric, any small difference between the abundance of charm and anti-charm flavor is constrained to be antisymmetric in pseudorapidity. As a consequence, we assume that the net effect of any possible charge asymmetry in the production cancels out, as long as the distribution of the decays in the sample used for this analysis is symmetric in pseudorapidity. An upper limit to any possible residual effect is evaluated as part of the study of systematic uncertainties (Sec. \[sec:syst\]). The second assumption is that the detection efficiency for the $D^*$ can be expressed as the product of the efficiency for the soft pion and the efficiency for the $D^0$ final state. This assumption has been tested (Sec. \[sec:syst\]), and any residual effect included in the systematic uncertainties.
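The following Python sketch (with invented placeholder yields, not the measured ones) illustrates how Eq. (\[eq:formula\]) is applied: raw asymmetries are formed from the signal yields of the three samples and combined so that the soft-pion and $K\pi$ instrumental terms cancel, with statistical uncertainties added in quadrature.

```python
# Sketch of the correction A_CP(hh) = A(hh*) - A(Kpi*) + A(Kpi).
# The yields below are hypothetical placeholders, not the measured ones.
import math

def raw_asymmetry(n_d0, n_d0bar):
    """Raw asymmetry and its statistical uncertainty from two yields."""
    n = n_d0 + n_d0bar
    a = (n_d0 - n_d0bar) / n
    sigma = math.sqrt((1.0 - a * a) / n)          # binomial error on an asymmetry
    return a, sigma

a_hh,  s_hh  = raw_asymmetry(108_000, 107_000)         # D*-tagged D0 -> h+h-
a_kps, s_kps = raw_asymmetry(2_510_000, 2_490_000)     # D*-tagged D0 -> K-pi+
a_kp,  s_kp  = raw_asymmetry(14_530_000, 14_470_000)   # untagged D0 -> K-pi+

a_cp = a_hh - a_kps + a_kp
s_cp = math.sqrt(s_hh**2 + s_kps**2 + s_kp**2)    # independent-sample combination
print(f"A_CP = {a_cp:.4f} +/- {s_cp:.4f}")
```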
Before applying this technique to data, we show that our approach achieves the goal of suppressing detector induced asymmetries down to the per mil level using the full Monte Carlo simulation (Appendix \[sec:mcvalidation\]). The simulation contains only charmed signal decays. The effects of the underlying event and multiple interactions are not simulated. We apply the method to samples simulated with a wide range of physical and detector asymmetries to verify that the cancellation works. The simulation is used here only to test the validity of the technique; all final results are derived from data only, with no direct input from simulation.
Analysis event selection\[sec:sel\]
===================================
The offline selection is designed to retain the maximum number of decays with accurately measured momenta and decay vertices. Any requirements that may induce asymmetries between the number of selected $D^0$ and $\Dbar^0$ mesons are avoided. The reconstruction is based solely on tracking, disregarding any information on particle identification. Candidate decays are reconstructed using only track pairs compatible with having fired the trigger. Standard quality criteria on the minimum number of associated silicon-detector and drift-chamber hits are applied to each track to ensure precisely measured momenta and decay vertices in three-dimensions [@tesi-angelo]. Each final-state particle is required to have $p_T>2.2$ GeV/$c$, $|\eta|<1$, and impact parameter between 0.1 and 1 mm. The reconstruction of $D^0$ candidates considers all pairs of oppositely-charged particles in an event, which are arbitrarily assigned the charged pion mass. The two tracks are constrained to originate from a common vertex by a kinematic fit subject to standard quality requirements. The $\pi^+\pi^-$ mass of candidates is required to be in the range 1.8 to 2.4 GeV$/c^2$, to retain all signals of interest and sideband regions sufficiently wide to study backgrounds. The two tracks are required to have an azimuthal separation $2^{\circ} < \Delta\phi < 90^{\circ}$, and correspond to a scalar sum of the two particles’ transverse momenta greater than 4.5 GeV/$c$. We require $L_{xy}$ to exceed 200 $\mu$m to reduce background from decays of hadrons that don’t contain heavy quarks. We also require the impact parameter of the $D^0$ candidate with respect to the beam, $d_0(D^0)$, to be smaller than $100\ \mu$m to reduce the contribution from charmed mesons produced in long-lived $B$ decays (secondary charm). In the rare (0.04%) occurrence that multiple decays sharing the same tracks are reconstructed in the event, we retain the one having the best vertex fit quality.
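For illustration only, the requirements listed above can be collected into a single candidate filter. The sketch below assumes a hypothetical `candidate` record carrying the quoted track and vertex quantities; it is not the analysis code.

```python
# Sketch of the offline D0 -> h+h- candidate selection described in the text.
# `cand` is a hypothetical object; attribute names and units are assumptions.
def passes_offline_selection(cand):
    tracks_ok = all(
        trk.pt > 2.2                      # GeV/c
        and abs(trk.eta) < 1.0
        and 0.01 < abs(trk.d0_cm) < 0.1   # impact parameter between 0.1 and 1 mm
        for trk in (cand.track_plus, cand.track_minus)
    )
    return (
        tracks_ok
        and 1.8 < cand.mass_pipi < 2.4    # GeV/c^2, pion-mass hypothesis for both tracks
        and 2.0 < cand.delta_phi_deg < 90.0
        and cand.sum_pt > 4.5             # GeV/c, scalar sum of track pT
        and cand.lxy_um > 200.0           # transverse decay length
        and abs(cand.d0_D0_um) < 100.0    # suppress secondary charm from B decays
    )
```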
Figure \[fig:mass\_1d\] shows the $K^-\pi^+$ mass distribution for the resulting sample, which is referred to as “untagged" in the following since no $D^*$ decay reconstruction has been imposed at this stage. The distribution of a sample of simulated inclusive charmed decays is also shown for comparison. Only a single charmed-meson decay per event is simulated, without the underlying event. In both distributions the kaon (pion) mass is arbitrarily assigned to the negative (positive) particle. The prominent narrow signal is dominated by $D^0\to K^-\pi^+$ decays. A broader structure, also centered on the known $D^0$ mass, is due to $\Dbar^0\to K^+\pi^-$ candidates reconstructed with swapped $K$ and $\pi$ mass assignments to the decay products. Approximately 29 million $D^0$ and $\Dbar^0$ mesons decaying into $K^{\pm}\pi^{\mp}$ final states are reconstructed. The two smaller enhancements at lower and higher masses than the $D^0$ signal are due to mis-reconstructed $D^0\to K^+K^-$ and $D^0\to\pi^+\pi^-$ decays, respectively. Two sources of background contribute. A component of random track pairs that accidentally meet the selection requirements (combinatorial background) is most visible at masses higher than 2 GeV/$c^2$, but populates almost uniformly the whole mass range. A large shoulder due to mis-reconstructed multi-body charm decays peaks at a mass of approximately 1.6 GeV/$c^2$.
In the “tagged"-samples reconstruction, we form $D^{*+}\to D^0\pi_s^{+}$ candidates by associating with each $D^0$ candidate all tracks present in the same event. The additional particle is required to satisfy basic quality requirements for the numbers of associated silicon and drift chamber hits, to be central ($|\eta|<1$), and to have transverse momentum greater that 400 MeV/$c$. We assume this particle to be a pion (“soft pion") and we match its trajectory to the $D^0$ vertex with simple requirements on relative separation: impact parameter smaller than 600 $\mu$m and longitudinal distance from the primary vertex smaller than 1.5 cm. Since the impact parameter of the low-energy pion has degraded resolution with respect to those of the $D^0$ tracks, no real benefit is provided by a full three–track vertex fit for the $D^*$ candidate. We retain $D^*$ candidates with $D^0\pi_s$ mass smaller than 2.02 GeV/$c^2$. In the 2% of cases in which multiple $D^*$ candidates are associated with a single $D^0$ candidate, we randomly choose only one $D^*$ candidate for further analysis.
The $D^0\pi_s$ mass is calculated using the vector sum of the momenta of the three particles as $D^*$ momentum, and the known $D^0$ mass in the determination of the $D^*$ energy. This quantity has the same resolution advantages as the more customary $M(h^+ h^{(')-} \pi_s)- M(h^+ h^{(')-})$ mass difference, and has the additional advantage that it is independent of the mass assigned to the $D^0$ decay products. Therefore all $D^{*+}\to D^0(\to h^+ h^{(')-})\pi_s^{+}$ modes have the same $D^0\pi_s$ mass distribution, which is not true for the mass difference distribution. In each tagged sample ($D^0\to\pi^+\pi^-$, $D^0\to K^+K^-$, and $D^0\to K^-\pi^+$) we require the corresponding two-body mass to lie within 24 MeV/$c^2$ of the known $D^0$ mass [@pdg], as shown in Figs. \[fig:mass\_distr\] (a)–(c).
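The following Python sketch (our illustration, with assumed four-momentum inputs and approximate mass values) spells out this definition: the $D^*$ momentum is the vector sum of the three particle momenta, while the $D^*$ energy is computed from the known $D^0$ mass rather than from the invariant mass of the two daughter tracks.

```python
# Sketch: tagged-candidate mass M(D0 pi_s) with the known-D0-mass constraint.
import math

M_D0 = 1.86484   # GeV/c^2, approximate world-average D0 mass
M_PI = 0.13957   # GeV/c^2, charged-pion mass

def mass_d0_pis(p_trk1, p_trk2, p_pis):
    """p_* are (px, py, pz) momenta in GeV/c of the two D0 daughters and the soft pion."""
    px, py, pz = (sum(c) for c in zip(p_trk1, p_trk2, p_pis))   # D* momentum: vector sum
    p_d0 = [a + b for a, b in zip(p_trk1, p_trk2)]
    e_d0 = math.sqrt(M_D0**2 + sum(c * c for c in p_d0))        # D0 energy from the known mass
    e_pis = math.sqrt(M_PI**2 + sum(c * c for c in p_pis))
    e_dstar = e_d0 + e_pis
    return math.sqrt(e_dstar**2 - (px**2 + py**2 + pz**2))
```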
Figures \[fig:mass\_distr\] (d)–(f) show the resulting $D^0\pi_s$ mass distribution. A clean $D^*$ signal is visible superimposed on background components that are different in each $D^0$ channel. As will be shown in Sec. \[sec:fit\], the backgrounds in the $D^0 \pi_s$ distributions for $D^0 \to \pi^+ \pi^-$ and $D^0 \to K^+ K^-$ decays are mainly due to associations of random pions with real $D^0$ candidates. In the $D^0\to K^+K^-$ case, there is also a substantial contribution from mis-reconstructed multi-body charged and neutral charmed decays (mainly $D^{*+}\to D^0(\to K^-\pi^+\pi^0)\pi_s^+$ where the neutral pion is not reconstructed) that yield a broader enhancement underneath the signal peak. We reconstruct approximately 215 000 $D^*$–tagged $D^0\to\pi^+\pi^-$ decays, 476 000 $D^*$–tagged $D^0\to K^+K^-$ decays, and 5 million $D^*$–tagged $D^0\to\pi^+K^-$ decays.
Kinematic distributions equalization {#sec:kin}
====================================
Because detector–induced asymmetries depend on kinematic properties, the asymmetry cancellation is realized accurately only if the kinematic distributions across the three samples are the same. Although the samples have been selected using the same requirements, small kinematic differences between decay channels may persist due to the different masses involved. We extensively search for any such residual effect across several kinematic distributions and reweight the tagged $D^0\to h^+h^-$ and untagged $D^0\to K^-\pi^+$ distributions to reproduce the tagged $D^0\to K^-\pi^+$ distributions when necessary. For each channel, identical reweighting functions are used for charm and anti-charm decays.
We define appropriate sideband regions according to the specific features of each tagged sample (Fig. \[fig:mass\_distr\] (a)–(c)). Then we compare background-subtracted distributions for tagged $h^+h^{(')-}$ decays, studying a large set of $\pi_s$ kinematic variables ($p_T$, $\eta$, $\phi$, $d_0$, and $z_0$) [@tesi-angelo]. We observe small discrepancies only in the transverse momentum and pseudorapidity distributions, as shown in Fig. \[fig:rew\] (a)–(d). The ratio between the two distributions is used to extract a smooth curve that serves as a candidate-specific weight. A similar study of the $D^0$ distributions for tagged and untagged decays shows discrepancies only in the distributions of transverse momentum and pseudorapidity (Fig. \[fig:rew\]), which are reweighted accordingly.
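A minimal sketch of this reweighting step is given below (assumed histogram inputs; not the analysis code): per-candidate weights are taken from a smoothed ratio of the reference distribution to the distribution being corrected.

```python
# Sketch: derive candidate-specific weights from the ratio of two binned
# kinematic distributions (e.g. soft-pion pT), smoothed with a polynomial.
import numpy as np

def ratio_weights(values_target, values_ref, bins, smooth_degree=3):
    """Weights that reshape `values_target` to match `values_ref` (both 1D arrays)."""
    h_ref, edges = np.histogram(values_ref, bins=bins, density=True)
    h_tgt, _ = np.histogram(values_target, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    ok = h_tgt > 0
    ratio = np.where(ok, h_ref / np.where(ok, h_tgt, 1.0), 1.0)
    coeffs = np.polyfit(centers[ok], ratio[ok], smooth_degree)   # smooth curve, as in the text
    smooth = np.poly1d(coeffs)
    return np.clip(smooth(values_target), 0.0, None)             # one non-negative weight per candidate
```

The same function would be applied with identical parameters to charm and anti-charm candidates, mirroring the statement above that the reweighting functions are common to both flavors.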
[Figure \[fig:mass\_2d\]: two-dimensional $K^-\pi^+$ versus $K^+\pi^-$ mass distribution, with regions labeled $K^-\pi^+$, $K^+\pi^-$, $\pi^+\pi^-$, $K^+K^-$, multi-body decays, and combinatorics.]
Background is not subtracted from the distributions of the untagged sample. We simply select decays with $K^{+}\pi^{-}$ or $K^{-}\pi^{+}$–mass within 24 MeV/$c^2$ of the known $D^0$ mass, corresponding approximately to a cross-shaped $\pm 3\sigma$ range in the two-dimensional distribution (Fig. \[fig:mass\_2d\]). The background contamination in this region is about 6%. This contamination has a small effect on the final result. The observed asymmetries show a small dependence on the $D^0$ momentum, because detector-induced charge asymmetries are tiny at transverse momenta greater than 2.2 GeV/$c$, as required for the $D^0$ decay products. Therefore any small imperfection in the reweighting of momentum spectra between the tagged and untagged samples has a limited impact, if any. However, a systematic uncertainty is assessed for the possible effects of non-subtracted backgrounds (see Sec. \[sec:syst\]). All entries in distributions shown in the remainder of this paper are reweighted according to the transverse momentum and pseudorapidity of the corresponding candidates unless otherwise stated.
Determination of observed asymmetries\[sec:fit\]
================================================
The asymmetries between observed numbers of $D^0$ and $\Dbar^0$ signal candidates are determined with fits of the $D^*$ (tagged samples) and $D^0$ (untagged sample) mass distributions. The mass resolution of the CDF tracker is sufficient to separate the different decay modes of interest. Backgrounds are modeled and included in the fits. In all cases we use a joint binned fit that minimizes a combined $\chi^2$ quantity, defined as $\chi^2_{\rm tot} = \chi^{2}_{+} + \chi^{2}_{-},$ where $ \chi^{2}_{+}$ and $\chi^{2}_{-}$ are the individual $\chi^2$ for the $D^0$ and $\Dbar^0$ distributions. Because we use copious samples, an unbinned likelihood fit would imply a substantially larger computational load without a significant improvement in statistical resolution. The functional form that describes the mass shape is assumed to be the same for charm and anti-charm, although a few parameters are determined by the fit independently in the two samples. The functional form of the mass shape for all signals is extracted from simulation and the values of its parameters adjusted for the data. The effect of this adjustment is discussed in Sec. \[sec:syst\] where a systematic uncertainty is also assessed.
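To illustrate the structure of such a joint fit (a schematic toy only, not the analysis code, with a deliberately simplified Gaussian-plus-constant model and invented numbers), the sketch below minimizes $\chi^2_{\rm tot}=\chi^{2}_{+}+\chi^{2}_{-}$ with shape parameters shared between the two samples and independent signal yields:

``` python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
edges = np.linspace(1.95, 2.10, 76)
centers = 0.5 * (edges[:-1] + edges[1:])

def model(shape, n_sig):
    """Expected counts per bin: Gaussian signal plus flat background."""
    mu, sigma, n_bkg = shape
    width = edges[1] - edges[0]
    sig = n_sig * width * np.exp(-0.5 * ((centers - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return sig + n_bkg * width / (edges[-1] - edges[0])

def chi2(hist, pred):
    err2 = np.maximum(hist, 1.0)   # Poisson errors, floored at 1
    return np.sum((hist - pred) ** 2 / err2)

def toy(n_sig):
    data = np.concatenate([rng.normal(2.01, 0.006, n_sig), rng.uniform(1.95, 2.10, 5000)])
    return np.histogram(data, bins=edges)[0]

# Toy "charm" and "anti-charm" histograms with a 2% yield asymmetry.
h_plus, h_minus = toy(10200), toy(9800)

def chi2_tot(theta):
    mu, sigma, n_bkg, n_plus, n_minus = theta
    shared = (mu, sigma, n_bkg)
    return chi2(h_plus, model(shared, n_plus)) + chi2(h_minus, model(shared, n_minus))

res = minimize(chi2_tot, x0=[2.01, 0.006, 5000, 10000, 10000], method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-6})
mu, sigma, n_bkg, n_plus, n_minus = res.x
print("asymmetry =", (n_plus - n_minus) / (n_plus + n_minus))  # ~0.02 for this toy
```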
Fit of tagged samples
---------------------
We extract the asymmetry of tagged samples by fitting the numbers of reconstructed $D^{*\pm}$ events in the $D^0\pi_s^+$ and $\overline{D}^0\pi_s^-$ mass distributions. Because all modes have the same $D^0\pi_s^+$ mass distribution, we use a single shape to fit all tagged signals. We also assume that the shape of the background from random pions associated with a real neutral charm particle is the same in all modes. Systematic uncertainties due to variations in the shapes are discussed in Sec. \[sec:syst\].
The general features of the signal distribution are extracted from simulated samples. The model is adjusted and finalized in a fit of the $D^0\pi_s$ mass of copious and pure tagged $K^-\pi^+$ decays. We fit the average histogram of the charm and anti-charm samples, $m = (m_{+}+m_{-})/2$, where $m_{+}$ is the $D^{*+}$ mass distribution and $m_{-}$ the $D^{*-}$ one. The resulting signal shape is then used in the joint fit to measure the asymmetry between charm and anti-charm signal yields. The signal is described by a Johnson function [@johnson] (all functions properly normalized in the appropriate fit range), $$J(x|\mu,\sigma,\delta,\gamma) = \frac{e^{-\frac{1}{2}\left[\gamma~+~\delta~\text{sinh}^{-1}\left(\frac{x-\mu}{\sigma}\right)\right]^2}}{\sqrt{1+\left(\frac{x-\mu}{\sigma}\right)^2}}, \nonumber$$ that accounts for the asymmetric tail of the distribution, plus two Gaussians, $\gauss(x|\mu,\sigma)$, for the central bulk: $$\begin{aligned}
\pdf_{\text{sig}}(m|\vec{\theta}_{sig}) =& f_J J(m|m_{D^*}+\mu_J,\sigma_J,\delta_J,\gamma_J) +(1-f_J) \nonumber \\*
& \times [ f_{G1}\gauss(m|m_{D^*}+\mu_{G1},\sigma_{G1}) \nonumber\\*
& +(1-f_{G1})\gauss(m|m_{D^*}+\mu_{G2},\sigma_{G2}) ]. \nonumber\end{aligned}$$ The signal parameters $\vec{\theta}_{sig}$ include the relative fractions between the Johnson and the Gaussian components; the shift from the nominal $D^{*\pm}$ mass of the Johnson distribution’s core, $\mu_J$, and the two Gaussians, $\mu_{G1(2)}$; the widths of the Johnson distribution’s core, $\sigma_J$, and the two Gaussians, $\sigma_{G1(2)}$; and the parameters $\delta_J$ and $\gamma_J$, which determine the asymmetry in the Johnson distribution’s tails. For the random pion background we use an empirical shape form, $$\pdf_{\text{bkg}}(m|\vec{\theta}_\text{bkg}) = \mathscr{B}(m|m_{D^0}+m_\pi,b_\text{bkg},c_\text{bkg}), \nonumber$$ with $\mathscr{B}(x|a,b,c) = (x-a)^b e^{-c(x-a)}$ extracted from data by forming an artificial random combination made of a well-reconstructed $D^0$ meson from each event combined with pions from all other events. The total function used in this initial fit is $$N_{\text{sig}}\pdf_{\text{sig}}(m|\vec{\theta}_\text{sig}) + N_{\text{bkg}}\pdf_{\text{bkg}}(m|\vec{\theta}_\text{bkg}).\nonumber$$ Each fit function is defined only above the threshold value of $m_{D^0}+m_{\pi}$.
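For concreteness, a minimal numerical sketch of these shapes (illustrative parameter values only, with the normalization done numerically over an assumed fit range) could look as follows:

``` python
import numpy as np

M_DSTAR, M_D0, M_PI = 2.01026, 1.86484, 0.13957   # GeV/c^2, approximate
LO, HI = M_D0 + M_PI, 2.02                         # assumed fit range
x = np.linspace(LO, HI, 2001)
dx = x[1] - x[0]

def normalized(f):
    """Normalize a sampled shape to unit integral over the fit range."""
    return f / (f.sum() * dx)

def johnson(x, mu, sigma, delta, gamma):
    z = (x - mu) / sigma
    return np.exp(-0.5 * (gamma + delta * np.arcsinh(z)) ** 2) / np.sqrt(1.0 + z ** 2)

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def bkg(x, a, b, c):
    return np.where(x > a, (x - a) ** b * np.exp(-c * (x - a)), 0.0)

# Illustrative parameter values: Johnson core plus two Gaussians for the signal,
# threshold function for the random-pion background.
f_J, f_G1 = 0.7, 0.8
sig = normalized(f_J * normalized(johnson(x, M_DSTAR, 4e-4, 1.5, 0.1))
                 + (1 - f_J) * (f_G1 * normalized(gauss(x, M_DSTAR, 3e-4))
                                + (1 - f_G1) * normalized(gauss(x, M_DSTAR, 8e-4))))
background = normalized(bkg(x, M_D0 + M_PI, 0.5, 20.0))
total = 0.8 * sig + 0.2 * background   # toy signal/background mixture
print((total * dx).sum())              # ~1 by construction
```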
![Distribution of $D^0\pi_s$ mass of tagged $D^0\to K^-\pi^+$ decays with fit results overlaid. The total fit projection (blue) is shown along with the double Gaussian bulk (dotted line), the Johnson tail (dashed line) and the background (full hatching).[]{data-label="fig:preliminary-kpi*"}](fig6){width="8.6cm"}
Figure \[fig:preliminary-kpi\*\] shows the resulting fit which is used to determine the shape parameters for subsequent asymmetry fits. All parameters are free to float in the fit.
[fig7a]{} (a) [fig7b]{} (b) [fig7c]{} (c)
We then fix the signal parametrization and simultaneously fit the $D^0\pi_s$ mass distributions of $D^{*+}$ and $D^{*-}$ candidates with independent normalizations to extract the asymmetry. The parameter $\delta_J$ varies independently for charm and anti-charm decays. The background shape parameters are common in the two samples and are determined by the fit. Figures \[fig:acp-kpi\*\] (a) and (b) show the projections of this simultaneous fit on the $D^{0}\pi_s$ mass distribution for the tagged $D^0\to K^-\pi^+$ sample. Figure \[fig:acp-kpi\*\] (c) shows the projection on the asymmetry distribution as a function of the $D^0\pi_s$ mass. The asymmetry distribution is constructed by evaluating bin-by-bin the difference and sum of the distributions in mass for charm ($m_+$) and anti-charm ($m_-$) decays to obtain $A = (m_{+}-m_{-})/(m_{+}+m_{-})$. The variation of the asymmetry as a function of mass indicates whether backgrounds with asymmetries different from the signal are present. As shown by the difference plots at the bottom of Fig. \[fig:acp-kpi\*\], the fits correctly describe the asymmetry across the whole mass range.
We allowed independent $\delta_J$ parameters in the charm and anti-charm samples because the $D^0\pi_s$ mass distribution for $D^{*+}$ candidates has slightly higher tails and a different width than the corresponding distribution for $D^{*-}$ candidates. The relative difference between the resulting $\delta_J$ values does not exceed $0.5\%$. However, by allowing the parameter $\delta_J$ to vary independently the $\chi^2/$ndf value improves from $414/306$ to $385/304$. We do not expect the source of this difference to be asymmetric background because the difference is maximally visible in the signal region, where the kinematic correlation between $D^0\pi_s$ mass and $\pi_s$ transverse momentum is stronger. Indeed, small differences between $D^{*+}$ and $D^{*-}$ shapes may be expected because the drift chamber has different resolutions for positive and negative low momentum particles. Independent $\delta_J$ parameters provide a significantly improved description of the asymmetry as a function of $D^0\pi_s$ mass in the signal region (Fig. \[fig:acp-kpi\*\] (c)). In Sec. \[sec:syst:rawasy\] we report a systematic uncertainty associated with this assumption. No significant improvement in fit quality is observed when leaving other signal shape parameters free to vary independently for $D^{*+}$ and $D^{*-}$ candidates.
[fig8a]{} (a) [fig8b]{} (b) [fig8c]{} (c) [fig8d]{} (d) [fig8e]{} (e) [fig8f]{} (f)
The plots in Fig. \[fig:fits-hh\] show the fit results for tagged $D^0\to\pi^+\pi^-$ and $D^0\to K^-K^+$ samples. In the $D^0\to K^+K^-$ fit we include an additional component from mis-reconstructed multibody decays. Because signal plus random pion shapes are fixed to those obtained by fitting the tagged $K\pi$ sample (Fig. \[fig:acp-kpi\*\]), the shape of this additional multibody component is conveniently extracted from the combined fit to data and is described by $$\begin{aligned}
\pdf_{\text{mbd}}(m|\vec{\theta}_\text{mbd}) =& f_\text{mbd} J(m|m_{D^*}+\mu_\text{mbd},\sigma_\text{mbd},\delta_\text{mbd},\gamma_\text{mbd})\nonumber \\*
+&(1-f_\text{mbd}) \mathscr{B}(m|m_{D^0}+m_{\pi},b_\text{mbd},c_\text{mbd}).\nonumber\end{aligned}$$ The total function used to fit the $KK^*$ sample is then $$N_{\text{sig}}\pdf_{\text{sig}}(m|\vec{\theta}_\text{sig}) + N_{\text{bkg}}\pdf_{\text{bkg}}(m|\vec{\theta}_\text{bkg})
+N_\text{mbd}\pdf_\text{mbd}(m|\vec{\theta}_\text{mbd}). \nonumber$$
We observe the following asymmetries in the three tagged samples: $$\begin{aligned}
\label{eq:tagged-results}
A(\pi\pi^*) &= (-1.86\pm0.23)\%, \nonumber \\*
A(KK^*) &= (-2.32\pm0.21)\%, \\*%\quad\text{and}\\
A(K\pi^*) &= (-2.910\pm0.049)\% \nonumber.\end{aligned}$$
Fit of the untagged sample\[sec:fit\_untag\_dkpi\]
--------------------------------------------------
In untagged $K\pi$ decays no soft pion is associated with the neutral charm meson to form a $D^*$ candidate so there is no identification of its charm or anti-charm content. We infer the flavor of the neutral charm meson on a statistical basis using the mass resolution of the tracker and the quasi–flavor-specific nature of neutral charm decays into $K\pi$ final states. The role of mass resolution is evident in Fig. \[fig:mass\_2d\], which shows the distribution of $K^-\pi^+$ mass as a function of $K^+\pi^-$ mass for the sample of untagged decays. The cross-shaped structure at the center of the plot is dominated by $K\pi$ decays. In each mass projection the narrow component of the structure is due to decays where the chosen $K\pi$ assignment is correct. The broader component is due to decays where the $K\pi$ assignment is swapped. In the momentum range of interest, the observed widths of these two components differ by roughly an order of magnitude. Because of the CKM hierarchy of couplings, approximately 99.6% of neutral charm decays into a $K^-\pi^+$ final state are from Cabibbo-favored decays of $D^0$ mesons, with only 0.4% from the doubly-suppressed decays of $\overline{D}^0$ mesons, and vice versa for $K^+\pi^-$ decays. Therefore, the narrow (broad) component in the $K^-\pi^+$ projection is dominated by $D^0$ ($\overline{D}^0$) decays. Similarly, the narrow (broad) component in the $K^+\pi^-$ projection is dominated by $\overline{D}^0$ ($D^0$) decays.
We extract the asymmetry between charm and anti-charm decays in the untagged sample from a simultaneous binned fit of the $K^+\pi^-$ and $K^-\pi^+$ mass distributions in two independent subsamples. We randomly divide the untagged sample into two independent subsamples, equal in size, whose events were collected in the same data-taking period (“odd” and “even” sample). We arbitrarily choose to reconstruct the $K^-\pi^+$ mass for candidates of the odd sample and the $K^+\pi^-$ mass for candidates of the even sample. In the odd sample the $D^0\to K^-\pi^+$ decay is considered “right sign” (RS) because it is reconstructed with proper mass assignment. In the even sample it is considered a “wrong sign” (WS) decay, since it is reconstructed with swapped mass assignment. The opposite holds for the $\overline{D}^0\to K^+\pi^-$ decay. The shapes used in the fit are the same for odd and even samples. The fit determines the number of $D^0\to K^-\pi^+$ (RS) decays from the odd sample and the number of $\overline{D}^0\to K^+\pi^-$ (RS) decays from the even sample, thus determining the asymmetry. We split the total untagged sample in half to avoid the need to account for correlations. The reduction in statistical power has little practical effect since half of the untagged $K\pi$ decays are still 30 (67) times more abundant than the tagged $K^+K^-$ ($\pi^+\pi^-$) decays, and the corresponding statistical uncertainty gives a negligible contribution to the uncertainty of the final result.
The mass shapes used in the combined fit of the untagged sample are extracted from simulated events and adjusted by fitting the $K\pi$ mass distribution in data. All functions described in the following are properly normalized when used in fits. The mass line shape of right-sign decays is parametrized using the following analytical expression: $$\begin{aligned}
%\label{eq:RS_param}
\pdf_{\rm RS}(m|\vec{\theta}_{\rm RS}) =& f_{{\rm bulk}} [f_1 \gauss(m | m_{D^{0}}+\delta_{1},\sigma_1) \nonumber \\*
&\quad + (1- f_1) \gauss(m|m_{D^{0}}+\delta_{2},\sigma_2) ] \nonumber \\*
&+ (1-f_{{\rm bulk}}) \tail(m | b,c,m_{D^{0}}+\delta_{1}), \nonumber\end{aligned}$$ where $$\tail(m|b,c,\mu) = e^{b(m-\mu)} {\rm Erfc}(c(m-\mu)), \nonumber$$ with ${\rm Erfc}(x) = (2/\sqrt{\pi})\int^{+\infty}_{x} e^{-t^{2}}dt$. We use the sum of two Gaussians to parametrize the bulk of the distribution. The function $\tail(m;b,c,\mu)$ describes the lower-mass tail due to the soft photon emission. The parameter $f_{{\rm bulk}}$ is the relative contribution of the double Gaussian. The parameter $f_{1}$ is the fraction of dominant Gaussian, relative to the sum of the two Gaussians. The parameters $\delta_{1(2)}$ are possible shifts in mass from the known $D^0$ mass [@pdg]. Because the soft photon emission makes the mass distribution asymmetric, the means of the Gaussians cannot be assumed to be the same. Therefore $m_{D^0}$ is fixed in the parametrization while $\delta_{1(2)}$ are determined by the fit. The mass distribution of wrong-sign decays, $\pdf_{\rm WS}(m;\vec{\theta}_{\rm WS})$, is parametrized using the same functional form used to model RS decays. The mass distribution of $D^0 \to \pi^+\pi^-$ decays is modeled using the following functional form: $$\begin{aligned}
%\label{eq:D0pipi_param}
\pdf_{\pi\pi}(m|\vec{\theta}_{\pi\pi}) =& f_{{\rm bulk}} [ f_1 \gauss(m|m_{0}+\delta_{1},\sigma_1) + \nonumber \\*
&\qquad (1- f_1) \gauss(m|m_{0}+\delta_{2},\sigma_2) ] \nonumber \\*
&+ f_{t1} \tail(m|b_1,c_1,m_{1}) \nonumber \\*
&+ (1-f_{{\rm bulk}}-f_{t1}) \tail(m|b_2,c_2,m_{2}).\nonumber\end{aligned}$$ The bulk of the distribution is described by two Gaussians. Two tail functions $\tail(m;b,c,\mu)$ are added for the low- and high-mass tails due to soft photon emission and incorrect mass assignment, respectively. The shifts in mass, $\delta_{1(2)}$, from the empirical value of the mass of $\pi\pi$ decays assigned the $K\pi$ mass, $m_{0}=1.96736~\massgev$, are free to vary. The mass distributions of the partially reconstructed multibody charm decays and combinatorial background are modeled using decreasing exponential functions with coefficients $b_{\rm mbd}$ and $b_{\rm comb}$, respectively.
The function used in the fit is then $$\begin{aligned}
&N_{\rm RS} \pdf_{\rm RS}(m|\vec{\theta}_{\rm RS}) + N_{\rm WS}\pdf_{\rm WS}(m|\vec{\theta}_{\rm WS}) \nonumber \\*
&+ N_{\pi\pi} \pdf_{\pi\pi}(m|\vec{\theta}_{\pi\pi}) + N_{\rm mbd}\pdf_{{\rm mbd}}(m|b_{\rm mbd}) \nonumber \\*
&+ N_{\rm comb}\pdf_{{\rm comb}}(m|b_{\rm comb}),\nonumber\end{aligned}$$ where $N_{\rm RS}$, $N_{\rm WS}$, $N_{\pi\pi}$, $N_{\rm mbd}$, $N_{\rm comb}$ are the event yields for right-sign decays, wrong-sign decays, $D^0 \to \pi^+\pi^-$ decays, partially reconstructed decays, and combinatorial background, respectively.
![Average ($m$) of the distribution of $K^+\pi^-$ mass in the even sample and $K^-\pi^+$ mass in the odd sample with fit projections overlaid.[]{data-label="fig:fit_mean"}](fig9){width="8.6cm"}
The mass is fit in the range $1.8 < m < 2.4~\massgev$ to avoid the need for modeling most of the partially reconstructed charm meson decays. The ratio $N_{\rm RS}/N_{\rm mbd}$ and the parameter $b_{\rm mbd}$ are fixed from simulated inclusive $D^{0}$ and $D^{+}$ decays. The contamination from partially reconstructed $D^{+}_{s}$ decays is negligible for masses greater than $1.8~\massgev$. The result of the fit to the distribution averaged between odd and even samples is shown in Fig. \[fig:fit\_mean\]. In this preliminary fit we allow the number of events in each of the various components, the parameters of the two Gaussians describing the bulk of the $D^0\to h^+h'^-$ distributions, and the slope of the combinatorial background $b_{\rm comb}$ to vary. We assume that the small tails are described accurately enough by the simulation. This preliminary fit is used to extract all shape parameters that will be fixed in the subsequent combined fit for the asymmetry.
Odd and even samples are fitted simultaneously using the same shapes for each component to determine the asymmetry of RS decays. Because no asymmetry in $D^0 \to \pi^+\pi^-$ decays and combinatorial background is expected by construction, we include the following constraints: $N^{+}_{\pi\pi}=N^{-}_{\pi\pi}$ and $N^{+}_{\rm comb}=N^{-}_{\rm comb}$. The parameters $N^{+}_{\rm RS}$, $N^{-}_{\rm RS}$, $N^{+}_{\rm WS}$, $N^{-}_{\rm WS}$, $N^{+}_{\rm mbd}$ and $N^{-}_{\rm mbd}$ are determined by the fit independently in the even and odd samples.
[fig10a]{} (a) [fig10b]{} (b) [fig10c]{} (c)
Figures \[fig:central\_fit\_proj\] (a) and (b) show the fit projections for odd and even samples. Figure \[fig:central\_fit\_proj\] (c) shows the projection of the simultaneous fit on the asymmetry as a function of the $K\pi$ mass. The observed asymmetry for the $D^0 \to K^-\pi^+$ RS decays is $$\label{eq:untagged-results}
A(K\pi)=(-0.832 \pm 0.033)\%.$$
Systematic uncertainties\[sec:syst\]
====================================
The measurement strategy is designed to suppress systematic uncertainties. However, we consider a few residual sources that can impact the results: approximations in the suppression of detector-induced asymmetries; production asymmetries; contamination from secondary $D$ mesons; assumptions and approximations in fits, which include specific choice of analytic shapes, differences between distributions associated with charm and anti-charm decays, and contamination from unaccounted backgrounds; and, finally, assumptions and limitations of kinematic reweighting.
Most of the systematic uncertainties are evaluated by modifying the fit functions to include systematic variations and repeating the fits to data. The differences between the results of the modified fits and the central fit are used as systematic uncertainties. This procedure tends to overestimate the systematic uncertainties, because the observed differences also include a statistical component; this additional component is, however, negligible given the size of the event samples involved. Sources of systematic uncertainty are detailed below. A summary of the most significant uncertainties is given in Table \[tab:syst\].
Approximations in the suppression of detector-induced effects {#sec:sys-approx}
-------------------------------------------------------------
We check the reliability of the cancellation of all detector-induced asymmetries on simulated samples as described in Appendix \[sec:mcvalidation\]. The analysis is repeated on several statistical ensembles in which we introduce known CP–violating asymmetries in the $D^0\to h^+h^{(')-}$ decays and instrumental effects (asymmetric reconstruction efficiency for positive and negative soft pions and kaons) dependent on a number of kinematic variables (e.g., transverse momentum). These studies constrain the size of residual instrumental effects that might not be fully cancelled by our method of linear subtraction of asymmetries. They also assess the impact of possible correlations between reconstruction efficiencies of $D^0$ decay-products and the soft pion, which are assumed negligible in the analysis. We further check this assumption on data by searching for any variation of the observed asymmetry as a function of the proximity between the soft pion and the charm meson trajectories. No variation is found.
Using the results obtained with realistic values for the simulated effects, we assess a $\Delta\Acp(hh)=0.009\%$ uncertainty. This corresponds to the maximum shift, increased by one standard deviation, observed in the results for input CP–violating asymmetries ranging from $-5\%$ to $+5\%$.
Production asymmetries
----------------------
Charm production in high-energy $p\bar{p}$ collisions is dominated by CP–conserving $c\bar{c}$ production through the strong interaction. No production asymmetries are expected by integrating over the whole phase space. However, the CDF acceptance covers a limited region of the phase space, where CP conservation may not be exactly realized. Correlations with the $p\overline{p}$ initial state may induce pseudorapidity–dependent asymmetries between the number of produced charm and anti-charm (or positive– and negative–charged) mesons. These asymmetries are constrained by CP conservation to change sign for opposite values of $\eta$. The net effect is expected to vanish if the pseudorapidity distribution of the sample is symmetric.
To set an upper limit to the possible effect of small residual $\eta$ asymmetries of the samples used in this analysis, we repeat the fits enforcing a perfect $\eta$ symmetry by reweighting. We observe variations of $\Delta\Acp(KK)=0.03\%$ and $\Delta\Acp(\pi\pi)=0.04\%$ between the fit results obtained with and without re-weighting. We take these small differences as an estimate of the size of possible residual effects. The cancellation of production asymmetries achieved in $p\bar{p}$ collisions (an initial CP–symmetric state) recorded with a polar-symmetric detector provides a significant advantage in high-precision CP-violation measurements over experiments conducted in $pp$ collisions.
Contamination of $D$ mesons from $B$ decays\[sec:dzero\_da\_B\]
---------------------------------------------------------------
A contamination of charm mesons produced in $b$–hadron decays could bias the results. Violation of CP symmetry in $b$–hadron decays may result in asymmetric production of charm and anti-charm mesons. This may be large for a single exclusive mode, but the effect is expected to vanish for inclusive $B \to D^0 X$ decays [@gronau]. However, we use the impact parameter distribution of $D^0$ mesons to statistically separate primary and secondary mesons and assign a systematic uncertainty. Here, by “secondary” we mean any $D^0$ originating from the decay of any $b$ hadron regardless of the particular decay chain involved. In particular we do not distinguish whether the $D^0$ meson is coming from a $D^{*\pm}$ or not.
If $f_{B}$ is the fraction of secondary $D^0$ mesons in a given sample, the corresponding observed asymmetry $A$ can be written as a linear combination of the asymmetries for primary and secondary $D^0$ mesons: $$\label{eq:acpB1}
A = f_B A(D^0\ \text{secondary}) + (1-f_B) A(D^0\ \text{primary}).$$ The asymmetry observed for secondary $D^0$ mesons can be expressed, to first order, as the sum of the asymmetry one would observe for a primary $D^0$ sample, plus a possible CP–violating asymmetry in inclusive $B\to D^0X$ decays, $$\label{eq:acpB2}
A(D^0\ \text{sec.}) = \Acp(B\to D^0 X) + A(D^0\ \text{prim.}).$$ Hence, combining Eq. (\[eq:acpB1\]) and Eq. (\[eq:acpB2\]), the asymmetry observed in each sample is given by $$\label{eq:acpB3}
A = f_B\Acp(B\to D^0 X) + A(D^0\ \text{primary}).$$ Because the fraction of secondary $D^0$ mesons is independent of their decay mode, we assume $f_B(\pi\pi^*)=f_B(KK^*)=f_B(K\pi^*)$. The contribution of CP violation in $b$–hadron decays to the final asymmetries is written as $$A(hh) = f_B(K\pi) \Acp(B\to D^0X) + \Acp(D^0\to hh),
\label{eq:central_asymmetry}$$ where $f_B$ is estimated in the untagged $K^-\pi^+$ sample because the two terms arising from the tagged components cancel in the subtraction provided by Eq. (\[eq:formula\]).
![Impact parameter distribution of $D^{0}$ candidates in the signal region. Top plot with data and fit projections overlaid uses a logarithmic scale vertically. Bottom plot shows fractional difference between data and the fit on a linear scale.[]{data-label="fig:contamination_daB"}](fig11){width="8.6cm"}
In this analysis, the contamination from secondary $D^0$ decays is reduced by requiring the impact parameter of the $D^0$ candidate, $d_0(D^0)$, not to exceed $100~\mum$. The fraction $f_B$ of residual $D^0$ mesons originating from $B$ decays has been determined by fitting the distribution of the impact parameter of untagged $D^0\to K^-\pi^+$ decays selected within $\pm 24$ MeV/$c^2$ of the known $D^0$ mass [@pdg]. We use two Gaussian distributions to model the narrow peak from primary $D^0$ mesons and a binned histogram, extracted from a simulated sample of inclusive $B\to D^0X$ decays, to model the secondary component. Figure \[fig:contamination\_daB\] shows the data with the fit projection overlaid. A residual contamination of 16.6% of $B \to D^0X$ decays with impact parameter lower than $100~\mum$ is estimated. To constrain the size of this effect we repeat the analysis inverting the impact parameter selection, namely requiring $d_0(D^0) >100~\mum$. This selects an almost pure sample of $D^0$ mesons from $B$ decays ($f_B = 1$). We reconstruct about 900 000 such decays with an asymmetry, $A(K\pi)= (-0.647 \pm 0.172)\%$, consistent with $(-0.832 \pm 0.033)\%$, the value used in our measurement. Using Eq. (\[eq:acpB2\]) we write the difference between the above asymmetry and the asymmetry observed in the central analysis (Eq. (\[eq:central\_asymmetry\])), $A(d_{0}>100~\mum) - A(d_{0}<100~\mum)$, as $$\label{eq:acp_diff}
%A(d_{0}>100~\mum) - A(d_{0}<100~\mum) = (1-f_B) A_{\rm CP}(B\to D^0X) = (-0.18 \pm 0.17)\%.
(1-f_B) A_{\rm CP}(B\to D^0X) = (-0.18 \pm 0.17)\%.$$ Using $f_B=16.6\%$ we obtain $\Acp(B\to D^0X) = (-0.21 \pm 0.20)\%$, showing that no evidence for a bias induced by secondary $D^0$ mesons is present. Based on Eq. (\[eq:central\_asymmetry\]), we assign a conservative systematic uncertainty evaluated as $f_B A_{\rm CP}(B\to D^0 X)= f_B/(1-f_B) \Delta= 0.034\%$, where $f_B$ equals 16.6% and $\Delta$ corresponds to the $0.17\%$ standard deviation of the difference in Eq. (\[eq:acp\_diff\]).
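As a quick numerical cross-check of the propagation above (a sketch only; the input numbers are those quoted in the text), the uncertainty on the difference and the assigned systematic uncertainty follow from simple quadrature and scaling:

``` python
import math

f_B = 0.166                                    # secondary fraction for d0(D0) < 100 um
sig_inverted, sig_central = 0.172, 0.033       # statistical uncertainties (%) on the two A(Kpi) values

delta = math.hypot(sig_inverted, sig_central)  # ~0.175%, uncertainty on the difference
acp_b_err = 0.17 / (1.0 - f_B)                 # ~0.20%, uncertainty on Acp(B -> D0 X)
syst = f_B * 0.17 / (1.0 - f_B)                # ~0.034%, the assigned systematic uncertainty
print(round(delta, 3), round(acp_b_err, 2), round(syst, 3))
```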
Assumptions in the fits of tagged samples\[sec:syst:rawasy\]
------------------------------------------------------------
### Shapes of fit functions\[sec:sys-tagged-shapes\]
The mass shape extracted from simulation has been adjusted using data for a more accurate description of the observed signal shape. A systematic uncertainty is associated with the finite accuracy of this tuning and covers the effect of possible mis-modeling of the shapes of the fit components.
![Shape of $D^0\pi_s$ mass as extracted from simulation without tuning, with data tuning and with anti-data tuning.[]{data-label="fig:syst_shape*"}](fig12){width="8.6cm"}
Figure \[fig:syst\_shape\*\] shows a comparison between the shape extracted from the simulation and the templates used in the fit after the tuning. It also shows an additional template, named “anti-tuned”, where the corrections that adjust the simulation to data have been inverted. If $f(m)$ is the template tuned on data, and $g(m)$ is the template extracted from the simulation, the anti-tuned template is constructed as $h(m) = 2f(m)-g(m)$. We repeat the measurement using the templates extracted from the simulation without any tuning, and those corresponding to the anti-tuning. The maximum variations from the central fit results, $\Delta\Acp(\pipi)=0.009\%$ and $\Delta\Acp(\KK)=0.058\%$, are assigned as systematic uncertainties. The larger effect observed in the $D^0\to K^+K^-$ case comes from the additional degrees of freedom introduced in the fit by the multibody-decays component.
In addition, we perform a cross-check of the shape used for the background of real $D^0$ mesons associated with random tracks. In the analysis, the shape parameters of $D^0\to h^+h^-$ fits are constrained to the values obtained in the higher-statistics tagged $D^0\to K^-\pi^+$ sample. If the parameters are left floating in the fit, only a negligible variation on the final result ($<0.003\%$) is observed.
### Charge-dependent mass distributions {#sec:sys-tagged-charges}
We observe small differences between distributions of $D^0\pi_s$ mass for positive and negative $D^{*}$ candidates. These are ascribed to possible differences in tracking resolutions between low-momentum positive and negative particles. Such differences may impact our results at first order and would not be corrected by our subtraction method. To determine a systematic uncertainty, we repeat the fit in several configurations where various combinations of signal and background parameters are independently determined for positive and negative $D^{*}$ candidates. The largest effects are observed by leaving the background shapes to vary independently and constraining the parameter $\delta_J$ of the Johnson function to be the same [@tesi-angelo]. The values of the shape parameters in $D^0\to h^+h^-$ fits are always fixed to the ones obtained from the $D^0\to K^-\pi^+$ sample. The maximum variations with respect to the central fits, $\Delta\Acp(\pipi)=0.088\%$ and $\Delta\Acp(\KK)=0.027\%$, are used as systematic uncertainties.
### Asymmetries from residual backgrounds {#sec:sys-tagged-bckg}
A further source of systematic uncertainty is the approximations used in the subtraction of physics backgrounds. In the $K^+K^-$ sample we fit any residual background contribution, hence this uncertainty is absorbed in the statistical one. However, in the $\pi^+\pi^-$ and $K^-\pi^+$ cases we assume the residual backgrounds to be negligible. Using simulation we estimate that a $0.22\%$ and $0.77\%$ contamination from physics backgrounds enters the $\pm 24$ MeV/$c^2$ $\pi^+\pi^-$ and $K^-\pi^+$ signal range, respectively. The contamination in the $\pi^+\pi^-$ sample is dominated by the high mass tail of the $D^0\to K^-\pi^+$ signal. The asymmetry of this contamination is determined from a fit of the tagged $K^-\pi^+$ sample. The contamination of the $K^-\pi^+$ sample is dominated by the tail from partially reconstructed $D^0$ decays. The fit of the tagged $K^+K^-$ sample provides an estimate of the asymmetry of this contamination. In both cases we assign a systematic uncertainty that is the product of the contaminating fraction times the additional asymmetry of the contaminant. This yields a maximum effect of $0.005\%$ on the measured asymmetries for both $D^0\to\pi^+\pi^-$ and $D^0\to K^+K^-$ cases.
Assumptions in the fits of untagged samples
-------------------------------------------
### Shapes of fit functions\[sec:mass\_shape\]
We follow the same strategy used for the tagged case to assign the systematic uncertainty associated with possible mis-modeling of the shapes in fits of the untagged sample.
[fig13a]{} (a) [fig13b]{} (b) [fig13c]{} (c)
The figure compares templates extracted from the simulation without any tuning, those tuned to data (and used in the central fit), and the anti-tuned ones. We repeat the fit using the templates from simulation and the anti-tuned ones. The maximum variation from the central fit, $\Delta A(K\pi)=0.005\%$, is used as the systematic uncertainty.
### Charge-dependent mass distributions\[sec:syst\_pos\_neg\_template\]
In the untagged case we expect the mass shapes of all components to be the same for charm and anti-charm samples. However, we repeat the simultaneous fit under different assumptions to assign the systematic uncertainty associated with possible residual differences. The parameters of the Gaussian distributions used to model the bulk of the mass distributions are left free to vary independently for the charm and anti-charm samples, and separately for the right-sign, wrong-sign, and $D \to \pi^+\pi^-$ components. We assume no difference between mass distributions of combinatorial background and partially reconstructed decays. The differences between estimated shape parameters in charm and anti-charm samples do not exceed $3\sigma$, showing compatibility between the shapes. A systematic uncertainty of $0.044\%$ is obtained by summing in quadrature the shifts from the central values of the estimated asymmetries in the three different cases.
### Asymmetries from residual physics backgrounds
In the measurement of the asymmetry of Cabibbo-favored decays, we neglect the contribution from the small, but irreducible, component of doubly-Cabibbo-suppressed (DCS) $D^0\to K^+\pi^-$ decays. Large CP violation in DCS decays may bias the charge asymmetry we attribute to $D^0\to K^-\pi^+$ decays. We assign a systematic uncertainty corresponding to $f_{DCS} A_{\it CP}(D^0\to K^+\pi^-) = f_{DCS} \Delta= 0.013\%$, where $f_{DCS}=0.39\%$ is the known [@pdg] fraction of DCS decays with respect to Cabibbo-favored decays and $\Delta=2.2\%$ corresponds to one standard deviation of the current measured limit on the CP–violating asymmetry $\Acp(D^0\to K^+\pi^-)$.
In the central fit for the untagged $D^0\to K^-\pi^+$ sample, no asymmetry in $D^0\to\pi^+\pi^-$ decays or combinatorial background is included, as expected from the way the untagged sample is defined. We confirm the validity of this choice by fitting the asymmetry with independent parameters for these two shapes in the charm and anti-charm samples. The result corresponds to a $\Delta A(K\pi)=0.011\%$ variation from the central fit.
Limitations of kinematic reweighting
------------------------------------
The tagged event samples are reweighted after subtracting the background, sampled in signal mass sidebands. We constrain the size of possible residual systematic uncertainties by repeating the fit of tagged $D^0\to h^+h^-$ after a reweighting without any sideband subtraction. The variation in observed asymmetries is found to be negligible with respect to other systematic uncertainties.
In reweighting the untagged sample we do not subtract the background. The signal distributions are extracted by selecting a mass region corresponding approximately to a cross-shaped window of $\pm 3\sigma$ in the two-dimensional space ($M(K^+\pi^-), M(K^-\pi^+)$). To assign a systematic uncertainty we extract the signal distributions and reweight the data using a smaller cross-shaped region of $\pm 2\sigma$ (i.e., within 16 MeV/$c^2$ of the nominal $D^0$ mass). The background contamination decreases from $6\%$ to $4\%$. We repeat the analysis and find $A(K\pi)= (-0.831 \pm 0.033)\%$, corresponding to a variation from the central fit of $<0.001\%$, thus negligible with respect to other systematic uncertainties.
Total systematic uncertainty
----------------------------
Table \[tab:syst\] summarizes the most significant systematic uncertainties considered in the measurement. Assuming these to be independent and summing them in quadrature, we obtain a total systematic uncertainty of $0.11\%$ on the observed CP–violating asymmetry of $D^0\to\pi^+\pi^-$ decays and $0.09\%$ on that of $D^0\to K^+K^-$ decays. Their sizes are approximately half of the statistical uncertainties.
Source $\Acp(\pi^+\pi^-)$ \[%\] $\Acp(K^+K^-)$ \[%\]
--------------------------------------------------------------- -------------------------- -- ----------------------
Approximations in the suppression of detector-induced effects $0.009$ $0.009$
Production asymmetries $0.040$ $0.030$
Contamination of secondary $D$ mesons $0.034$ $0.034$
Shapes assumed in fits $0.010$ $0.058$
Charge-dependent mass distributions $0.098$ $0.052$
Asymmetries from residual backgrounds $0.014$ $0.014$
Limitations of sample reweighting $<0.001$ $<0.001$
Total $0.113$ $0.092$
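The totals in Table \[tab:syst\] follow from the quadrature sum of the individual contributions; a short numerical check (values copied from the table, neglecting the reweighting row, which is below 0.001%) is sketched below.

``` python
import math

pipi = [0.009, 0.040, 0.034, 0.010, 0.098, 0.014]   # rows of Table [tab:syst], in %
kk   = [0.009, 0.030, 0.034, 0.058, 0.052, 0.014]

total = lambda terms: math.sqrt(sum(t * t for t in terms))
print(round(total(pipi), 3), round(total(kk), 3))    # ~0.113 and ~0.092
```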
Final result\[sec:final\]
=========================
Using the observed asymmetries from Eqs. (\[eq:tagged-results\]) and (\[eq:untagged-results\]) in the relationships of Eq. (\[eq:acpraw\]), we determine the time-integrated CP–violating asymmetries in $D^0\to\pi^+\pi^-$ and $D^0\to K^+K^-$ decays to be $$\begin{aligned}
\Acp(\pi^+\pi^-) &= \bigl(+0.22\pm0.24\stat\pm0.11\syst\bigr)\% \nonumber \\*
\Acp(K^+K^-) &= \bigl(-0.24\pm0.22\stat\pm0.09\syst\bigr)\%,\nonumber\end{aligned}$$ consistent with CP conservation in the time evolution of these decays. These are the most precise determinations of these quantities to date, and significantly improve the world’s average values. The results are also in agreement with theory predictions [@Bigi:1986dp; @Golden:1989qx; @Buccella:1994nf; @Xing:1996pn; @Du:2006jc; @Grossman:2006jg].
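As a numerical cross-check (a sketch only, neglecting correlations among the three samples and using the combination $\Acp(hh) = A(hh^*) - A(K\pi^*) + A(K\pi)$ derived in Appendix \[sec:method\_math\]), the central values and statistical uncertainties above follow directly from the observed asymmetries:

``` python
import math

# Observed asymmetries (value, statistical uncertainty), in %, from the tagged and untagged fits.
a_pipi_star = (-1.86, 0.23)
a_kk_star   = (-2.32, 0.21)
a_kpi_star  = (-2.910, 0.049)
a_kpi       = (-0.832, 0.033)

def combine(a_hh_star):
    value = a_hh_star[0] - a_kpi_star[0] + a_kpi[0]
    stat = math.sqrt(a_hh_star[1] ** 2 + a_kpi_star[1] ** 2 + a_kpi[1] ** 2)
    return value, stat

print(combine(a_pipi_star))  # ~(+0.22, 0.24)
print(combine(a_kk_star))    # ~(-0.24, 0.22)
```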
A useful comparison with results from other experiments is achieved by expressing the observed asymmetry as a linear combination (Eq. (\[eq:acp3\])) of a direct component, $\Acp^{\rm{dir}}$, and an indirect component, $\Acp^{\rm{ind}}$, through a coefficient that is the mean proper decay time of charm mesons in the data sample. The direct component corresponds to a difference in width between charm and anti-charm decays into the same final state. The indirect component is due to the probability for a charm meson to oscillate into an anti-charm meson being different from the probability for an anti-charm meson to oscillate into a charm meson.
[fig14a]{} (a) [fig14b]{} (b)
The decay time of each $D^0$ meson, $t$, is determined as $$\label{eq:t_from_Lxy}
t = \frac{L_{xy}}{c \left( \beta \gamma\right)_T}
= L_{xy} \ \frac{m_{D^0}}{c\ p_T},\nonumber$$ where $(\beta \gamma )_T = p_T/m_{D^0}$ is the transverse Lorentz factor. This is an unbiased estimate of the actual decay time only for primary charmed mesons. For secondary charm, the decay time of the parent $B$ meson should be subtracted. The mean decay times of our signals are determined from a fit to the proper decay time distribution of sideband-subtracted tagged decays (Fig. \[fig:propertime\]). The fit includes components for primary and secondary $D$ mesons, whose shapes are modeled from simulation. The simulation is used to extract the information on the mean decay time of secondary charmed decays, using the known true decay time. The proportions between primary and secondary are also determined from this fit and are consistent with results of the fit to the $D^0$ impact parameter in data (Sec. \[sec:dzero\_da\_B\]). We determine a mean decay time of $2.40\pm0.03$ and $2.65\pm0.03$, in units of $D^0$ lifetime, for $D^0\to\pi^+\pi^-$ and $D^0\to K^+K^-$ decays, respectively. The uncertainty is the sum in quadrature of statistical and systematic contributions. The small difference in the two samples is caused by the slightly different kinematic distributions of the two decays, which impact their trigger acceptance.
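The per-candidate decay time defined above is a simple kinematic computation; a minimal sketch (illustrative numerical values, with the $D^0$ mass and lifetime taken as approximate constants) is shown below.

``` python
M_D0 = 1.86484        # GeV/c^2, approximate
TAU_D0 = 0.4101e-12   # s, approximate D0 lifetime
C = 2.99792458e10     # cm/s

def decay_time(L_xy_cm, pt_gev):
    """Proper decay time t = L_xy * m_D0 / (c * pT), in units of the D0 lifetime."""
    return L_xy_cm * M_D0 / (C * pt_gev) / TAU_D0

# Illustrative candidate: 800 um transverse decay length, pT = 6 GeV/c.
print(decay_time(0.08, 6.0))   # ~2 D0 lifetimes
```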
Each of our measurements defines a band in the $(\Acp^{\rm{ind}},\Acp^{\rm{dir}})$ plane with slope $-\left<t\right>/\tau$ (Eq. (\[eq:acp3\])). The same holds for the BaBar and Belle measurements, with slope $-1$ [@Aubert:2007wf; @Staric:2007dt], due to their unbiased acceptance in decay time. The results of this measurement and the most recent $B$-factories’ results are shown in Fig. \[fig:combination\], which displays their relationship. The bands represent $\pm 1\sigma$ uncertainties and show that all measurements are compatible with CP conservation (the origin in the two-dimensional plane). The results of the three experiments can be combined assuming Gaussian uncertainties. We construct combined confidence regions in the $(\Acp^{\rm{ind}},\Acp^{\rm{dir}})$ plane, denoted with $68\%$ and $95\%$ confidence level ellipses. The corresponding values for the asymmetries are $\Acp^{\rm{dir}}(D^0\to\pi^+\pi^-) = (0.04 \pm 0.69)\%$, $\Acp^{\rm{ind}}(D^0\to\pi^+\pi^-) = (0.08 \pm 0.34)\%$, $\Acp^{\rm{dir}}(D^0\to K^+K^-) = (-0.24 \pm 0.41)\%$, and $\Acp^{\rm{ind}}(D^0\to K^+K^-) = (0.00\pm 0.20)\%$, in which the uncertainties represent one-dimensional 68% confidence level intervals.
[fig15a]{} (a) [fig15b]{} (b)
CP violation from mixing only
-----------------------------
Assuming negligible direct CP violation in both decay modes, the observed asymmetry is due only to mixing, $\Acp(h^+h^-) \approx \Acp^{\rm{ind}}\ \langle t \rangle / \tau$, yielding $$\begin{aligned}
\Acp^{\rm{ind}}(\pi^+\pi^-) &= \bigl(+0.09\pm0.10\stat\pm0.05\syst\bigr)\% \nonumber \\*
\Acp^{\rm{ind}}(K^+K^-) &= \bigl(-0.09\pm0.08\stat\pm0.03\syst\bigr)\%. \nonumber\end{aligned}$$ Assuming that no large weak phases from non-SM contributions appear in the decay amplitudes, $\Acp^{\rm{ind}}$ is independent of the final state. Therefore the two measurements can be averaged, assuming correlated systematic uncertainties, to obtain a precise determination of CP violation in charm mixing: $$\Acp^{\rm{ind}}(D^0) = \bigl(-0.01\pm0.06\stat\pm0.04\syst\bigr)\%. \nonumber$$ This corresponds to the following upper limits on CP violation in charm mixing: $$|\Acp^{\rm{ind}}(D^0)| < 0.13~(0.16)\% \mbox{ at the 90 (95)\% C.L}.\nonumber$$
[fig16a]{} (a) [fig16b]{} (b) [fig16c]{} (c) [fig16d]{} (d)
The bias of the CDF sample toward longer-lived decays offers a significant advantage over the $B$ factories in sensitivity to the time-dependent component, as shown in Figs. \[fig:direct\_and\_indirect\] (a) and (c).
Direct CP violation only
------------------------
Assuming that CP symmetry is conserved in charm mixing, our results are readily comparable to measurements obtained at the $B$ factories; $\Acp(\pi^+\pi^-)= (0.43\pm0.52\stat\pm0.12\syst)\%$ and $\Acp(K^+K^-)= (-0.43\pm0.30\stat\pm0.11\syst)\%$ from Belle, and $\Acp(\pi^+\pi^-)= (-0.24\pm0.52\stat\pm0.22\syst)\%$ and $\Acp(K^+K^-)= (0.00\pm0.34\stat\pm0.13\syst)\%$ from BaBar (Figs. \[fig:direct\_and\_indirect\] (b)-(d)). The CDF result is the world’s most precise.
Difference of asymmetries
-------------------------
A useful comparison with theory predictions is achieved by calculating the difference between the asymmetries observed in the $D^0 \to K^+ K^-$ and $D^0 \to \pi^+ \pi^-$ decays ($\Delta\Acp$). Since the difference in decay-time acceptance is small, $\Delta\langle t \rangle/\tau = 0.26 \pm 0.01$, most of the indirect CP-violating asymmetry cancels in the subtraction, assuming that no large CP-violating phases from non-SM contributions enter the decay amplitudes. Hence $\Delta\Acp$ approximates the difference in direct CP-violating asymmetries of the two decays. Using the observed asymmetries from Eq. (\[eq:tagged-results\]), we determine $$\begin{aligned}
\Delta\Acp =& \Acp(K^+K^-) - \Acp(\pi^+\pi^-) \nonumber \\*
=& \Delta\Acp^{\rm{dir}} + \Acp^{\rm{ind}}\Delta\langle t \rangle/\tau \nonumber \\*
=& A(KK^*) - A(\pi\pi^*) \nonumber \\*
=& \bigl(-0.46 \pm 0.31\stat \pm 0.12 \syst \bigr)\%. \nonumber\end{aligned}$$ The systematic uncertainty is dominated by the 0.12% uncertainty from the shapes assumed in the mass fits, and their possible dependence on the charge of the $D^*$ meson. This is determined by combining the difference of shifts observed in Secs. \[sec:sys-tagged-shapes\] and \[sec:sys-tagged-charges\] including correlations: $(0.058 - 0.009)\% = 0.049\%$ and $(-0.027 - 0.088)\% = 0.115\%$. Smaller contributions include a 0.009% from the finite precision associated with the suppression of detector-induced effects (Sec. \[sec:sys-approx\]), and a 0.005% due to the 0.22% background we ignore under the $D^0\to \pi^+\pi^-$ signal (Sec. \[sec:sys-tagged-bckg\]). The effects of production asymmetries and contamination from secondary charm decays cancel in the difference.
We see no evidence of a difference in CP violation between $D^0\to K^+K^-$ and $D^0 \to \pi^+\pi^-$ decays. Figure \[fig:difference\] shows the difference in direct asymmetry ($\Delta\Acp^{\rm{dir}}$) as a function of the indirect asymmetry compared with experimental results from BaBar and Belle [@Aubert:2007wf; @Staric:2007dt]. The bands represent $\pm 1\sigma$ uncertainties. The measurements, combined assuming Gaussian uncertainties, provide $68\%$ and $95\%$ confidence level regions in the $(\Delta\Acp^{\rm{dir}}, \Acp^{\rm{ind}})$ plane, denoted with ellipses. The corresponding values for the asymmetries are $\Delta\Acp^{\rm{dir}} = (-0.37 \pm 0.45)\%$ and $\Acp^{\rm{ind}} = (-0.35 \pm 2.15)\%$.
![Difference between direct CP–violating asymmetries in the $K^+K^-$ and $\pi^+\pi^-$ final states as a function of the indirect asymmetry. Belle and BaBar measurements are also reported for comparison. The point with error bars denotes the central value of the combination of the three measurements with one-dimensional 68% confidence level uncertainties.[]{data-label="fig:difference"}](fig17){width="8.6cm"}
Summary\[sec:theend\]
=====================
In summary, we report the results of the most sensitive search for CP violation in singly-Cabibbo–suppressed $D^0\to\pi^+\pi^-$ and $D^0\to K^+K^-$ decays. We reconstruct signals of $\mathcal{O}(10^5)$ $D^*$–tagged decays in an event sample of $p\bar{p}$ collision data corresponding to approximately 5.9 fb$^{-1}$ of integrated luminosity collected by a trigger on displaced tracks. A fully data-driven method to cancel instrumental effects provides effective suppression of systematic uncertainties to the 0.1% level, approximately half the magnitude of the statistical uncertainties.
We find no evidence of CP violation and measure $\Acp(D^0\to\pi^+\pi^-) = \bigl(+0.22\pm0.24\stat\pm0.11\syst\bigr)\%$ and $\Acp(D^0\to K^+K^-) = \bigl(-0.24\pm0.22\stat\pm0.09\syst\bigr)\%$. These are the most precise determinations from a single experiment to date, and supersede the corresponding results of Ref. [@Acosta:2004ts]. The average decay times of the charmed mesons used in these measurements are $2.40 \pm 0.03$ units of $D^0$ lifetime in the $D^0\to \pi^+\pi^-$ sample and $2.65 \pm 0.03$ units of $D^0$ lifetime in the $D^0\to K^+K^-$ sample. Assuming negligible CP violation in $D^0\to\pi^+\pi^-$ and $D^0\to K^+K^-$ decay widths (direct CP violation), the above results, combined with the large average proper decay time of the charmed mesons in our sample, provide a stringent general constraint on CP violation in $D^0$ mixing, $|\Acp^{\rm{ind}}(D^0)|< 0.13\%$ at the 90% confidence level. The results probe significant regions of the parameter space of charm phenomenology where discrimination between SM and non-SM dynamics becomes possible [@Bigi:2011re; @Bigi:2011em].
We thank Y. Grossmann, A. Kagan, A. Petrov, and especially I. I. Bigi and A. Paul for useful discussions. We thank the Fermilab staff and the technical staffs of the participating institutions for their vital contributions. This work was supported by the U.S. Department of Energy and National Science Foundation; the Italian Istituto Nazionale di Fisica Nucleare; the Ministry of Education, Culture, Sports, Science and Technology of Japan; the Natural Sciences and Engineering Research Council of Canada; the National Science Council of the Republic of China; the Swiss National Science Foundation; the A.P. Sloan Foundation; the Bundesministerium für Bildung und Forschung, Germany; the Korean World Class University Program, the National Research Foundation of Korea; the Science and Technology Facilities Council and the Royal Society, UK; the Russian Foundation for Basic Research; the Ministerio de Ciencia e Innovación, and Programa Consolider-Ingenio 2010, Spain; the Slovak R&D Agency; and the Academy of Finland.
Method to suppress detector asymmetries\[sec:method\_math\]
===========================================================
A mathematical derivation of the concepts described in Sec. \[sec:method\] follows. We measure the CP–violating asymmetry by determining the asymmetry between the numbers of detected particles of opposite charm content, $A = (N_+-N_-)/(N_++N_-)$, where $N_+$ and $N_-$ are the number of $D^0$ and $ \Dbar^0$ decays found in three different data samples: $D^*$-tagged $D^0\to h^+h^-$ decays (or simply $hh^*$), $D^*$-tagged $D^0\to K^-\pi^+$ decays ($K\pi^*$) and untagged $D^0\to K^-\pi^+$ decays ($K\pi$). We show that the combination of asymmetries measured in these three samples yields an unbiased estimate of the physical value of $\Acp$ with a high degree of suppression of systematic uncertainties coming from detector asymmetries. In the discussion we always refer to the *true* values of kinematic variables of particles. The *measured* quantities, affected by experimental uncertainties, play no role here since we are only interested in counting particles and all detection efficiencies are assumed to be dependent on true quantities only.
$D^*$–tagged $D^0\to h^+h^-$
----------------------------
Assuming factorization of efficiencies for reconstructing the neutral charmed meson and the soft pion, we write $$\begin{aligned}
N_\pm = & \frac{N_*}{2} B_{D\pi}^* \int\!\!dp_* dp_s dp_{h^+} dp_{h^-} \rho_{*\pm}(p_* ) B_{hh}^\pm \nonumber\\
&\times \rho_{hh^*}( p_{h^+}, p_{h^-},p_s \: \vert \: p_*) \varepsilon_{hh} (p_{h^+},p_{h^-} ) \varepsilon_{s\pm} (p_{s} ) , \nonumber
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%N_\pm = & \frac{N_*}{2} B_{D\pi}^* \int\!\!dp_* dp_s dp_{h^+} dp_{h^-} \nonumber\\
% &\times \rho_{*\pm}(p_* ) B_{hh}^\pm \rho_{hh^*}( p_{h^+}, p_{h^-},p_s \: \vert \: p_*) \nonumber \\
% &\times \varepsilon_{hh} (p_{h^+},p_{h^-} ) \varepsilon_{s\pm} (p_{s} ), \nonumber \end{aligned}$$ where $N^*$ is the total number of $D^{*+}$ and $D^{*-}$ mesons; $p_*, p_s, p_{h^+}, p_{h^-}$ are the three-momenta of the $D^*$, soft $\pi$, $h^+$, and $h^-$, respectively; $\rho_{*+}$ and $\rho_{*-}$ are the densities in phase space of $D^{*+}$ and $D^{*-}$ mesons (function of the production cross sections and experimental acceptances and efficiencies); $\rho_{hh^*}$ is the density in phase space of the soft pion and $h^+h^-$ pair from $D^0$ decay; $B_{hh}^+$ and $B_{hh}^-$ are the branching fractions of $D^0\to h^+h^-$ and $\Dbar^0\to h^+h^-$; $B_{D\pi}^*$ is the branching fraction of $D^{*+} \rightarrow D^0\pi^+$ and $D^{*-} \rightarrow \Dbar^0\pi^-$, assumed to be charge–symmetric; $\varepsilon_{hh}$ is the detection efficiency of the $h^+h^-$ pair from the $D^0$ decay; and $\varepsilon_{s+}$ and $\varepsilon_{s-}$ are the detection efficiencies of the positive and negative soft pion, respectively. Conservation of four-momenta is implicitly assumed in all densities. Densities are normalized as $\int dp_* \rho_{*\pm} (p_*) = 1 = \int dp_s dp_{h^+} dp_{h^-} \rho_{hh^*}( p_{h^+}, p_{h^-},p_s \: \vert \: p_*)$ for each $p_*$. The difference between event yields is therefore $$\begin{aligned}
N_+ - N_- = & \frac{N_*}{2} B_{D\pi}^* \int\!\! dp_* dp_s dp_{h^+} dp_{h^-} \nonumber \\
& \times \rho_{hh^*}( p_{h^+}, p_{h^-},p_s \: \vert \: p_*) \varepsilon_{hh} (p_{h^+},p_{h^-} ) \nonumber\\
& \times \{ \rho_{*+}(p_* ) B_{hh}^+ \varepsilon_{s+} (p_{s} ) - \rho_{*-}(p_* ) B_{hh}^- \varepsilon_{s-} (p_{s} ) \}\nonumber\\
\nonumber \\*
= & \frac{N_*}{2} B_{D\pi}^* \int\!\! dp_* dp_s dp_{h^+} dp_{h^-} \varepsilon_{hh} (p_{h^+},p_{h^-} ) \nonumber \\
& \times \rho_{hh^*}( p_{h^+}, p_{h^-},p_s \: \vert \: p_*) \rho_{*}(p_* ) B_{hh} \varepsilon_{s} (p_{s} ) \nonumber\\
&\times [ (1+\delta \rho_{*}(p_* )) \left( 1+A_{\CP} \right) (1+\delta\varepsilon_{s} (p_{s} ) ) \nonumber\\
& \quad - \ (1-\delta \rho_{*}(p_* )) \left( 1-A_{\CP} \right) (1-\delta\varepsilon_{s} (p_{s} ) ) ], \nonumber\end{aligned}$$ where we have defined the following additional quantities: $\rho_* = (1/2)\left(\rho_{*+} + \rho_{*-}\right)$, $\delta \rho_* = (\rho_{*+} - \rho_{*-})/(\rho_{*+} + \rho_{*-})$, $B_{hh} = (1/2) (B_{hh}^+ + B_{hh}^-)$, $A_{\CP} \equiv A_{\CP}(hh)= (B_{hh}^+ - B_{hh}^-)/(B_{hh}^+ + B_{hh}^-)$, $\varepsilon_s = (1/2)( \varepsilon_{s+} + \varepsilon_{s-})$, and $\delta \varepsilon_s = (\varepsilon_{s+} - \varepsilon_{s-})/(\varepsilon_{s+} + \varepsilon_{s-})$. Expanding the products we obtain $$\begin{aligned}
N_+ - N_- = & N_* B_{D\pi}^* B_{hh} \int\!\! dp_* dp_s dp_{h^+} dp_{h^-} \rho_{*}(p_* ) \varepsilon_{s} (p_{s} ) \nonumber\\
& \times \rho_{hh^*}( p_{h^+}, p_{h^-},p_s \: \vert \: p_*) \varepsilon_{hh} (p_{h^+},p_{h^-} ) \nonumber\\
& \times [A_{\CP} + \delta \rho_{*}(p_* ) + \delta\varepsilon_{s} (p_{s} ) \nonumber \\
& \quad + A_{\CP} \delta \rho_{*}(p_* ) \delta\varepsilon_{s} (p_{s} ) ]. \nonumber\end{aligned}$$ Since the symmetry of the $p\bar{p}$ initial state ensures that $\delta\rho_*(p_*) = - \delta\rho_*(-p_*)$, the second and fourth term in brackets vanish when integrated over a $p_*$ domain symmetric in $\eta$. In a similar way we obtain $$\begin{aligned}
N_+ + N_- = & N_* B_{D\pi}^* B_{hh} \int\!\! dp_* dp_s dp_{h^+} dp_{h^-} \rho_{*}(p_* ) \varepsilon_{s} (p_{s} )\nonumber\\
& \times \rho_{hh^*}( p_{h^+}, p_{h^-},p_s \: \vert \: p_*) \varepsilon_{hh} (p_{h^+},p_{h^-} ) \nonumber\\
& \times [ 1 + A_{\CP} \delta\varepsilon_{s} (p_{s} ) + A_{\CP} \delta \rho_{*}(p_* )\nonumber \\
& \quad + \delta\varepsilon_{s} (p_{s}) \delta \rho_{*}(p_* ) ]. \nonumber\end{aligned}$$ The second term in brackets is small with respect to $A_{\CP} $ and can be neglected, while the third and fourth terms vanish once integrated over a $p_*$ domain symmetric in $\eta$. Hence the observed asymmetry is written as
$$\begin{aligned}
A(hh^*) & = \left( \frac{N_+ - N_- }{N_+ + N_- }\right)^{hh^*} = A_{\CP}(h^+h^-) + \int dp_s h^{hh^*}_s(p_s) \delta\varepsilon_s(p_s), \text{where} \\
h^{hh^*}_s(p_s) & = \frac{ \int\!\! dp_* dp_{h^+} dp_{h^-} \rho_{*}(p_* ) \rho_{hh^*}( p_{h^+}, p_{h^-},p_s \: \vert \: p_*) \varepsilon_{hh} (p_{h^+},p_{h^-} ) \varepsilon_{s} (p_{s} )}
{ \int\!\! dp_* dp_{h^+} dp_{h^-} dp_s \rho_{*}(p_* ) \rho_{hh^*}( p_{h^+}, p_{h^-},p_s \: \vert \: p_*) \varepsilon_{hh} (p_{h^+},p_{h^-} ) \varepsilon_{s} (p_{s} )} \label{eq:normalized-densities}
\end{aligned}$$
is the normalized density in phase space of the soft pion for the events included in our sample.
$D^*$-tagged $D^0\to K^-\pi^+$
------------------------------
Assuming factorization of efficiencies for reconstructing the neutral charmed meson and the soft pion, we write $$\begin{aligned}
%N_\pm = & \frac{N_*}{2} B_{D\pi}^* \int\!\! dp_* dp_s dp_{\pi} dp_{K} \nonumber\\
% & \times \rho_{*\pm}(p_* ) B_{K\pi}^\pm \rho_{K\pi^*}( p_{K}, p_{\pi},p_s \: \vert \: p_*) \nonumber \\
% & \times \varepsilon_{K\mp\pi\pm} (p_{K} ,p_{\pi} ) \varepsilon_{s\pm} (p_{s} ), \nonumber
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
N_\pm = & \frac{N_*}{2} B_{D\pi}^* \int\!\! dp_* dp_s dp_{\pi} dp_{K} \rho_{*\pm}(p_* ) B_{K\pi}^\pm \nonumber\\
& \times \rho_{K\pi^*}( p_{K}, p_{\pi},p_s \: \vert \: p_*) \varepsilon_{K\mp\pi\pm} (p_{K} ,p_{\pi} ) \varepsilon_{s\pm} (p_{s} ), \nonumber
\end{aligned}$$ where $p_\pi$ and $p_K$ are the three-momenta of the pion and kaon, $\rho_{K\pi}^*$ is the density in phase space of the soft pion and $K\pi$ pair from the $D^0$ decay, $B_{K\pi}^+$ and $B_{K\pi}^-$ are the branching fractions of $D^0\to K^-\pi^+$ and $\Dbar^0\to K^+ \pi^-$, and $\varepsilon_{K-\pi+}$ and $\varepsilon_{K+\pi-}$ are the detection efficiencies of the $K^-\pi^+$ and $K^+\pi^-$ pairs from $D^0$ and $\Dbar^0$ decay. The difference between charm and anti-charm event yields is written as $$\begin{aligned}
N_+ - N_- = & \frac{N_*}{2} B_{D\pi}^* \int\!\! dp_* dp_s dp_{\pi} dp_{K} \rho_{K\pi^*}( p_{K}, p_{\pi},p_s \: \vert \: p_*) \nonumber \\
&\times [ \rho_{*+}(p_* ) B_{K\pi}^+ \varepsilon_{K-\pi+} (p_{K} ,p_{\pi} ) \varepsilon_{s+} (p_{s} ) \nonumber\\
& \quad - \rho_{*-}(p_* ) B_{K\pi}^- \varepsilon_{K+\pi-} (p_{K} ,p_{\pi} ) \varepsilon_{s-} (p_{s} ) ] \nonumber\\
\nonumber \\*
= & \frac{N_*}{2} B_{D\pi}^* B_{K\pi} \int\!\! dp_* dp_s dp_{\pi} dp_{K} \rho_{*}(p_* ) \varepsilon_{s} (p_{s} ) \nonumber \\*
&\times \rho_{K\pi^*}( p_{K}, p_{\pi},p_s \: \vert \: p_*) \varepsilon_{K\pi}(p_K,p_{\pi} )\nonumber\\
&\times \{ (1+\delta \rho_{*}(p_* ) ) (1+A_{\CP}) \nonumber \\
& \times (1+\delta \varepsilon_{K\pi} (p_K,p_{\pi} ) ) (1+\delta \varepsilon_{s} (p_{s} ) ) \nonumber\\
& \qquad - \: (1-\delta \rho_{*}(p_* ) ) (1-A_{\CP}) \nonumber \\*
& \times (1-\delta \varepsilon_{K\pi} (p_K,p_{\pi} ) ) (1-\delta \varepsilon_{s} (p_{s} ) ) \},\nonumber\end{aligned}$$ where we have defined the following additional quantities: $B_{K\pi} = (1/2)(B_{K\pi}^+ + B_{K\pi}^-)$, $A_{\CP} \equiv A_{\CP}(K\pi) = (B_{K\pi}^+ - B_{K\pi}^-)/(B_{K\pi}^+ + B_{K\pi}^-)$, $\varepsilon_{K\pi} = (1/2)( \varepsilon_{K-\pi+} + \varepsilon_{K+\pi-})$, and $\delta \varepsilon_{K\pi} = (\varepsilon_{K-\pi+} - \varepsilon_{K+\pi-})/(\varepsilon_{K-\pi+} + \varepsilon_{K+\pi-})$. Expanding the products and observing that all terms in $\delta \rho_{*}(p_* )$ vanish upon integration over a symmetric $p_*$ domain, we obtain $$\begin{aligned}
N_+ - N_- = & N_* B_{D\pi}^* B_{K\pi} \int\!\! dp_* dp_s dp_{\pi} dp_{K}\rho_{*}(p_* ) \varepsilon_{s} (p_{s} ) \nonumber\\
&\times \rho_{K\pi^*}( p_{K}, p_{\pi},p_s \: \vert \: p_*) \varepsilon_{K\pi} (p_{K},p_{\pi} ) \nonumber\\
& \times \{ A_{\CP} +\delta \varepsilon_{K\pi} (p_K,p_{\pi}) +\delta \varepsilon_{s} (p_{s} ) + \ldots \}, \nonumber\end{aligned}$$ where we have neglected one term of order $A_{\CP} \delta^2$. Similarly, $$\begin{aligned}
N_+ + N_- = & N_* B_{D\pi}^* B_{K\pi} \int\!\! dp_* dp_s dp_{\pi} dp_{K} \rho_{*}(p_* ) \varepsilon_{s} (p_{s} ) \nonumber\\
& \times \rho_{K\pi^*}( p_{K}, p_{\pi},p_s \: \vert \: p_*) \varepsilon_{K\pi} (p_{K},p_{\pi} ) \nonumber\\
& \times [ 1 + A_{\CP} \delta \varepsilon_{K\pi} (p_K,p_{\pi}) +A_{\CP} \delta \varepsilon_{s} (p_{s} ) \nonumber \\
& \quad + \delta \varepsilon_{K\pi} (p_K,p_{\pi})\delta \varepsilon_{s}(p_{s} ) ]. \nonumber\end{aligned}$$ If we neglect all terms of order $A_{\CP} \delta$ and $\delta^2$, we finally obtain
$$\begin{aligned}
A(K\pi^*) = \left( \frac{N_+ - N_- }{N_+ + N_- }\right)^{K\pi^*} =& A_{\CP}(K^-\pi^+)
+ \int dp_\pi h^{K\pi^*}_{K\pi}(p_K,p_\pi) \delta\varepsilon_{K\pi}(p_K,p_\pi)
+ \int dp_s h^{K\pi^*}_s(p_s) \delta\varepsilon_s(p_s), \\
\text{where} \qquad h^{K\pi^*}_{K\pi}(p_K,p_\pi) =& \frac{ \int\!\! dp_* dp_s \rho_{*}(p_* ) \rho_{K\pi^*}( p_{K}, p_{\pi},p_s \: \vert \: p_*)
\varepsilon_{K\pi} (p_{K},p_{\pi} ) \varepsilon_{s} (p_{s} ) }
{ \int dp_* dp_{\pi} dp_{K} dp_s \rho_{*}(p_* ) \rho_{K\pi^*}( p_{K}, p_{\pi},p_s \: \vert \: p_*)
\varepsilon_{K\pi} (p_{K},p_{\pi} ) \varepsilon_{s} (p_{s} ) },
\end{aligned}$$
and $h^{K\pi^*}_s(p_s)$ (the $K\pi^*$ analog of $h^{hh^*}_s(p_s)$ in Eq. \[eq:normalized-densities\]) are the normalized densities in phase space of the $K\pi$ pair and of the soft pion, respectively, for the events included in our sample.
Untagged $D^0\to K^-\pi^+$
--------------------------
In this case $$\begin{aligned}
N_\pm = & \frac{N_0}{2} \int dp_0 dp_\pi dp_K \rho_{0\pm}(p_0) B^\pm_{K\pi} \nonumber \\
&\times \rho^0_{K\pi}(p_K,p_\pi \:\vert\: p_0) \varepsilon_{K\mp\pi\pm}(p_K,p_\pi) \nonumber\end{aligned}$$ $$\begin{aligned}
N_+ - N_- = & \frac{N_0}{2} B_{K\pi} \int dp_0 dp_\pi dp_K \nonumber \\
& \times \rho_{0}(p_0) \rho^0_{K\pi}(p_K,p_\pi \:\vert\: p_0) \varepsilon_{K\pi}(p_K,p_\pi) \nonumber\\
& \times \{ (1+\delta\rho_0(p_0)) (1+A_{\CP}) (1+\delta\varepsilon_{K\pi}(p_K,p_\pi)) \nonumber\\
& \quad - (1-\delta\rho_0(p_0)) (1-A_{\CP}) (1-\delta\varepsilon_{K\pi}(p_K,p_\pi))\} \nonumber\end{aligned}$$ where we have defined the following quantities $\rho_0 = (1/2)\left(\rho_{0+} + \rho_{0-}\right)$ and $\delta \rho_0 = (\rho_{0+} - \rho_{0-})/(\rho_{0+} + \rho_{0-})$. Assuming $\eta$ symmetry of the $p_0$ integration region, $$\begin{aligned}
N_+ - N_- = & N_0 B_{K\pi} \int dp_0 dp_\pi dp_K \rho_{0}(p_0) \rho^0_{K\pi}(p_K,p_\pi \:\vert\: p_0) \nonumber \\
& \times \varepsilon_{K\pi}(p_K,p_\pi) [ A_{\CP} - \delta\varepsilon_{K\pi}(p_K,p_\pi)]. \nonumber\end{aligned}$$ Similarly we obtain $$\begin{aligned}
N_+ +N_- = & N_0 B_{K\pi} \int dp_0 dp_\pi dp_K \rho_{0}(p_0) \rho^0_{K\pi}(p_K,p_\pi \:\vert\: p_0) \nonumber \\
& \times \varepsilon_{K\pi}(p_K,p_\pi) [ 1+ A_{\CP} \delta\varepsilon_{K\pi}(p_K,p_\pi)], \nonumber\end{aligned}$$ and neglecting the second term in brackets,
$$\begin{aligned}
A(K\pi) & = \left( \frac{N_+ - N_- }{N_+ + N_- }\right)^{K\pi} = A_{\CP}(K^-\pi^+)
+ \int dp_\pi dp_K h^{K\pi}_{K\pi}(p_K, p_\pi) \delta\varepsilon_{K\pi}(p_K,p_\pi), \text{where} \nonumber \\
h^{K\pi}_{K\pi}(p_K,p_\pi) & = \frac{ \int dp_0 \rho_{0}(p_0) \rho_{K\pi}^0( p_{K}, p_{\pi}\: \vert \: p_0)
\varepsilon_{K\pi} (p_K, p_{\pi} ) }
{ \int dp_0 dp_{\pi} dp_{K} \rho_{0}(p_0) \rho_{K\pi}^0( p_{K}, p_{\pi} \: \vert \: p_0) \varepsilon_{K\pi} (p_K, p_{\pi} ) }\end{aligned}$$
is the normalized density in phase space of the $K\pi$ system in the events included in our sample.
Combining the asymmetries\[sec:AllTogether\]
--------------------------------------------
By combining the asymmetries measured in the three event samples we obtain
$$\begin{aligned}
A(hh^*)-&A(K\pi^*)+A(K\pi) = A_{\CP}(h^+h^-) + \int dp_s h^{hh^*}_s(p_s) \delta\varepsilon_s(p_s) \nonumber \\
&- A_{\CP}(K^-\pi^+) -\int dp_K dp_\pi h^{K\pi^*}_{K\pi}(p_K,p_\pi) \delta\varepsilon_{K\pi}(p_K,p_\pi)-\int dp_s h^{K\pi^*}_s(p_s) \delta\varepsilon_s(p_s) \nonumber \\
&+ A_{\CP}(K^-\pi^+)
+\int dp_K dp_\pi h^{K\pi}_{K\pi}(p_K, p_\pi) \delta\varepsilon_{K\pi}(p_K,p_\pi) = A_{\CP}(h^+h^-),\end{aligned}$$
where we assumed $h^{K\pi^*}_s(p_s) = h^{hh^*}_s(p_s)$ and $h^{K\pi^*}_{K\pi}(p_K,p_\pi) = h^{K\pi}_{K\pi}(p_K, p_\pi)$. These two equalities are enforced by an appropriate kinematic reweighting of the event samples. The distributions need to be equalized with respect to the true momenta, whereas only the distributions with respect to the measured momenta are accessible; we therefore assume that event samples having the same distribution in the measured quantities also have the same distribution in the true quantities.
The mathematical derivation shows that for small enough physics and detector-induced asymmetries, the linear combination of the observed asymmetries used in this measurement achieves an accurate cancellation of the instrumental effects with minimal impact on systematic uncertainties.
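To make the cancellation explicit at the level of simple numbers, the following toy check (a hypothetical sketch, not part of the analysis code; all input values are illustrative) builds the three observed asymmetries from assumed physical and instrumental inputs, to first order in the small quantities and with identical instrumental terms in the samples being combined, and verifies that the combination returns the input $A_{\CP}(h^+h^-)$.
```python
# Toy numerical check of A(hh*) - A(Kpi*) + A(Kpi) = A_CP(hh).
# All input values are illustrative assumptions, not measured quantities.
acp_hh = 0.006       # assumed CP asymmetry in D0 -> h+h-
acp_kpi = 0.001      # assumed CP asymmetry in D0 -> K-pi+
d_eps_s = 0.010      # instrumental soft-pion efficiency asymmetry (integrated)
d_eps_kpi = -0.015   # instrumental K pi efficiency asymmetry (integrated)

# Observed asymmetries to first order in the small quantities; identical
# instrumental terms in the combined samples are what the reweighting enforces.
a_hh_star = acp_hh + d_eps_s
a_kpi_star = acp_kpi + d_eps_kpi + d_eps_s
a_kpi = acp_kpi + d_eps_kpi

print(round(a_hh_star - a_kpi_star + a_kpi, 6), acp_hh)   # both give 0.006
```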
Monte Carlo test of the analysis technique\[sec:mcvalidation\]
==============================================================
We tested the suppression of instrumental effects by repeating the analysis in simulated samples in which known instrumental and physics asymmetries were introduced. Many different configurations for the input asymmetries were tested, covering a rather extended range, to ensure the reliability of the method independently of their actual size in our data. For each configuration, $\mathcal{O}(10^6)$ decays were simulated to reach the desired 0.1% sensitivity. Only the $D^0\to \pi^+\pi^-$ sample was tested although the results are valid for the $D^0\to K^+K^-$ case as well.
![Curves corresponding to simulated ratios of efficiencies for reconstructing positive versus negative pions as a function of transverse momentum.[]{data-label="fig:curves"}](fig18){width="50.00000%"}
We test the cancellation of instrumental effects arising from different reconstruction efficiencies between positive and negative particles, which in general depend on the particle species and momentum. Furthermore, the reliability of the suppression should not depend on the actual size of $CP$ violation in $D^0\to K^-\pi^+$ and $D^0\to \pi^+\pi^-$ decays.
We repeated the measurement on statistical ensembles where the above effects are known and arbitrarily varied using a combination of event-specific weights applied to the true values of simulated quantities. Each ensemble consists of approximately one thousand trials. We compare the resulting observed asymmetry $\Acp^\text{obs}(\pi\pi)$ to the one given in input, $\Acp^\text{true}(\pipi)$, by inspecting the distribution of the residual, $\Delta\Acp(\pi\pi) = \Acp^\text{obs}(\pipi)-\Acp^\text{true}(\pipi).$
We first investigate the individual impact of each effect. We scan the value of a single input parameter across a range that covers larger variations than expected in data, assuming all other effects are zero. First, a $p_T$-dependent function that represents the dependence observed in data (see Fig. \[fig:soft\]) is used to parametrize the soft pion reconstruction efficiency ratio as $\epsilon(\pi^+)/\epsilon(\pi^-)= \text{Erf}\left(1.5\cdot p_T + A\right)$, where $p_T$ is in GeV/$c$ and various values of the constant $A$ have been tested so that the efficiency ratio at $p_T=0.4$ GeV/$c$ spans the range 0.6–1, as shown in Fig. \[fig:curves\]. Then, the kaon reconstruction efficiency ratio $\epsilon(K^-)/\epsilon(K^+)$ is varied similarly in the 0.6–1 range. Finally, a range $-10\%< \Acp< 10\%$ is tested for the physical $CP$-violating asymmetry in $D^0\to K^-\pi^+$ and $D^0\to \pi^+\pi^-$ decays.
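As an illustration of how such a charge-dependent efficiency can be injected into the simulated ensembles, the sketch below is hypothetical code (the helper names and the per-event weighting scheme are our own assumptions; only the Erf parametrization is taken from the text): each positive soft pion is weighted by the efficiency ratio at its transverse momentum, and the offset $A$ is chosen to give a target ratio at $p_T=0.4$ GeV/$c$.
```python
# Sketch of injecting a charge-dependent soft-pion efficiency into an ensemble.
# eps(pi+)/eps(pi-) = Erf(1.5*pT + A), pT in GeV/c; names are illustrative.
import math

def efficiency_ratio(pt, A):
    """Ratio eps(pi+)/eps(pi-) as a function of pT (GeV/c)."""
    return math.erf(1.5 * pt + A)

def soft_pion_weight(charge, pt, A):
    """Per-event weight: downweight positive soft pions by the ratio;
    negative ones are left untouched (only the ratio matters for asymmetries)."""
    return efficiency_ratio(pt, A) if charge > 0 else 1.0

# Choose A such that the ratio at pT = 0.4 GeV/c equals, e.g., 0.8 (coarse scan).
target, pt0 = 0.8, 0.4
A = min((a / 1000.0 for a in range(-2000, 2001)),
        key=lambda a: abs(efficiency_ratio(pt0, a) - target))
print(round(A, 3), round(efficiency_ratio(pt0, A), 3))   # ~0.306, ~0.8
```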
![image](fig19a){width="8.6cm"} ![image](fig19b){width="8.6cm"}\
![image](fig19c){width="8.6cm"} ![image](fig19d){width="8.6cm"}
The results are shown in Fig. \[fig:c0b\] (empty dots). The cancellation of instrumental asymmetries is realized at the sub-per-mille level even with input effects of size much larger than expected in data.
Figure \[fig:c0b\] (filled dots) shows the results of a more complete test in which other effects are simulated, in addition to the quantities varied in the single input parameter scan: a $p_T$-dependent relative efficiency $\epsilon(\pi^+)/\epsilon(\pi^-)$, corresponding to 0.8 at $0.4$ GeV/$c$, $\epsilon(K^-)/\epsilon(K^+)=98\%$, $\Acp(K\pi) = 0.8\%$ and $\Acp(\pi\pi) = 1.1\%$. Larger variations of the residual are observed with respect to the previous case. This is expected because mixed higher-order terms corresponding to the product of different effects are not canceled and become relevant.
![Asymmetry residual as a function of the physical $CP$-violating asymmetry in $D^0\to \pi^+\pi^-$ decays. Realistic effects other than the one being scanned are also simulated. The line represents the value averaged over the $-5\% < \Acp(\pi\pi)< 5\%$ range.[]{data-label="fig:realt"}](fig20){width="8.6cm"}
Finally, we tested one case with more realistic values for the input effects. The $p_T$ dependence of $\epsilon(\pi^+)/\epsilon(\pi^-)$ is extracted from a fit to data (Fig. \[fig:soft\]) and parametrized as $\text{Erf}\left(2.49\, p_T \right)$, with $p_T$ in GeV/$c$. We used , in which the approximation holds assuming equal efficiency for reconstructing positive and negative pions at $p_T>2$ GeV/$c$ [@bhh-paper]. We assume $\Acp(K\pi) = 0.1\%$, ten times larger than the current experimental sensitivity. A $-5\% < \Acp(\pi\pi) < 5\%$ range is tested in steps of $0.5\%$ for the physical asymmetry to be measured. The results are shown in Fig. \[fig:realt\]. The maximum observed bias is of the order of $0.02\%$, one order of magnitude smaller than the statistical resolution of the present measurement. The observed bias is $(0.0077\pm0.0008)\%$ averaged over the $\Acp(\pi\pi)$ range probed. These results, which extend to the $K^+K^-$ case, demonstrate the reliability of our method in extracting a precise and unbiased measurement of $CP$ violation in $D^0$ meson decays into $K^+K^-$ and $\pi^+\pi^-$ final states, even in the presence of sizable instrumental asymmetries.
The results discussed in this appendix are used in Sec. \[sec:syst\] to estimate a systematic uncertainty on the final results due to neglecting higher order terms in Eq. (\[eq:formula\]), including possible non-factorization of $h^+h^{'-}$ and $\pi_s$ reconstruction efficiencies.
[99]{}
M. Antonelli [*et al.*]{}, Phys. Rept. [**494**]{}, 197 (2010).
S. Bianco, F. L. Fabbri, D. Benson, and I. I. Bigi, Riv. Nuovo Cim. [**26N7**]{}, 1 (2003).
K. Nakamura [*et al.*]{} (Particle Data Group), J. Phys. G [**37**]{}, 075021 (2010) and 2011 partial update for the 2012 edition.
D. Asner [*et al.*]{}, arXiv:1010.1589 and online update at http://www.slac.stanford.edu/xorg/hfag.
M. Artuso, B. Meadows, and A. A. Petrov, Ann. Rev. Nucl. Part. Sci. [**58**]{}, 249 (2008).
I. Shipsey, Int. J. Mod. Phys. A [**21**]{}, 5381 (2006).
G. Burdman and I. Shipsey, Ann. Rev. Nucl. Part. Sci. [**53**]{}, 431 (2003).
Y. Nir and N. Seiberg, Phys. Lett. B [**309**]{}, 337 (1993).
M. Ciuchini [*et al.*]{}, Phys. Lett. B [**655**]{}, 162 (2007).
B. Aubert [*et al.*]{} (BaBar Collaboration), Phys. Rev. Lett. [**98**]{}, 211802 (2007).
M. Staric [*et al.*]{} (Belle Collaboration), Phys. Rev. Lett. [**98**]{}, 211803 (2007).
T. Aaltonen [*et al.*]{} (CDF Collaboration), Phys. Rev. Lett. [**100**]{}, 121802 (2008).
A. A. Petrov, Int. J. Mod. Phys. A [**21**]{}, 5686 (2006).
E. Golowich, J. Hewett, S. Pakvasa, and A. A. Petrov, Phys. Rev. D [**76**]{}, 095009 (2007).
B. Aubert [*et al.*]{} (BaBar Collaboration), Phys. Rev. Lett. [**100**]{}, 061803 (2008).
M. Staric [*et al.*]{} (Belle Collaboration), Phys. Lett. B [**670**]{}, 190 (2008).
D. E. Acosta [*et al.*]{} (CDF Collaboration), Phys. Rev. Lett. [**94**]{}, 122001 (2005).
L. Balka [*et al.*]{}, Nucl. Instrum. Methods A [**267**]{}, 272 (1988); S. Bertolucci [*et al.*]{}, Nucl. Instrum. Methods A [**267**]{}, 301 (1988); M. Albrow [*et al.*]{}, Nucl. Instrum. Methods A [**480**]{}, 524 (2002); and G. Apollinari [*et al.*]{}, Nucl. Instrum. Methods A [**412**]{}, 515 (1998).
G. Ascoli [*et al.*]{}, Nucl. Instrum. Methods A [**268**]{}, 33 (1988).
T. Affolder [*et al.*]{}, Nucl. Instrum. Methods A [**526**]{}, 249 (2004).
A. Sill [*et al.*]{}, Nucl. Instrum. Methods A [**447**]{}, 1 (2000).
C. S. Hill [*et al.*]{}, Nucl. Instrum. Methods A [**530**]{}, 1 (2004).
A. Affolder [*et al.*]{}, Nucl. Instrum. Methods A [**453**]{}, 84 (2000).
E. J. Thomson [*et al.*]{}, IEEE Trans. Nucl. Sci. [**49**]{}, 1063 (2002); R. Downing [*et al.*]{}, Nucl. Instrum. Methods, A [**570**]{}, 36 (2007).
L. Ristori and G. Punzi, Annu. Rev. Nucl. Part. Sci. [**60**]{}, 595 (2010); W. Ashmanskas [*et al.*]{}, Nucl. Instrum. Methods, A [**518**]{}, 532 (2004).
A. Di Canto, Ph.D. Thesis, University of Pisa, Fermilab Report No. FERMILAB-THESIS-2011-29 (2011).
N. L. Johnson, Biometrika [**36**]{}, 149 (1949).
S. Bar-Shalom, G. Eilam, M. Gronau, and J. L. Rosner, Phys. Lett. B [**694**]{}, 374 (2011).
I. I. Bigi and A. I. Sanda, Phys. Lett. B [**171**]{}, 320 (1986).
M. Golden and B. Grinstein, Phys. Lett. B [**222**]{}, 501 (1989).
F. Buccella [*et al.*]{}, Phys. Rev. D [**51**]{}, 3478 (1995).
Z.-Z. Xing, Phys. Rev. D [**55**]{}, 196 (1997).
D.-S. Du, Eur. Phys. J. [**50**]{}, 579 (2007).
Y. Grossman, A. L. Kagan, and Y. Nir, Phys. Rev. D [**75**]{}, 036008 (2007).
I. I. Bigi, A. Paul, and S. Recksiegel, J. High Energy Phys. 06 (2011) 089.
I. I. Bigi and A. Paul, arXiv:1110.2862.
T. Aaltonen [*et al.*]{} (CDF Collaboration), Phys. Rev. Lett. [**106**]{}, 181802 (2011).
[^1]: Deceased
---
abstract: 'Community structure is one of the most important features of real networks and reveals the internal organization of the nodes. Many algorithms have been proposed but the crucial issue of testing, i.e. the question of how good an algorithm is, with respect to others, is still open. Standard tests include the analysis of simple artificial graphs with a built-in community structure, that the algorithm has to recover. However, the special graphs adopted in actual tests have a structure that does not reflect the real properties of nodes and communities found in real networks. Here we introduce a new class of benchmark graphs, that account for the heterogeneity in the distributions of node degrees and of community sizes. We use this new benchmark to test two popular methods of community detection, modularity optimization and Potts model clustering. The results show that the new benchmark poses a much more severe test to algorithms than standard benchmarks, revealing limits that may not be apparent at a first analysis.'
author:
- Andrea Lancichinetti
- Santo Fortunato
- Filippo Radicchi
title: Benchmark graphs for testing community detection algorithms
---
Introduction {#sec1}
============
Many complex systems in nature, society and technology display a modular structure, i.e. they appear as a combination of compartments that are fairly independent of each other. In the graph representation of complex systems [@Newman:2003; @vitorep], where the elementary units of a system are described as nodes and their mutual interactions as links, such modular structure is revealed by the existence of groups of nodes, called [*communities*]{} or [*modules*]{}, with many links connecting nodes of the same group and comparatively few links joining nodes of different groups [@Girvan:2002; @miareview]. Communities reveal a non-trivial internal organization of the network, and allow one to infer special relationships between the nodes that may not be easily accessible from direct empirical tests. Communities may be groups of related individuals in social networks [@Girvan:2002; @Lusseau:2005], sets of Web pages dealing with the same topic [@Flake:2002], biochemical pathways in metabolic networks [@Guimera:2005; @palla], etc.
Detecting communities in networks is a big challenge. Many methods have been devised over the last few years, within different scientific disciplines such as physics, biology, computer and social sciences. This race towards the ideal method aims at two main goals, i.e. improving the accuracy in the determination of meaningful modules and reducing the computational complexity of the algorithm. The latter is a well defined objective: in many cases it is possible to compute analytically the complexity of an algorithm, in others one can derive it from simulations of the algorithm on systems of different sizes. The main problem is then to estimate the accuracy of a method and to compare it with other methods. This issue of testing is in our opinion as crucial as devising new powerful algorithms, but till now it has not received the attention it deserves.
Testing an algorithm essentially means analyzing a network with a well defined community structure and recovering its communities. Ideally, one would like to have many instances of real networks whose modules are precisely known, but this is unfortunately not the case. Therefore, the most extensive tests are performed on computer generated networks, with a built-in community structure. The most famous benchmark for community detection is a class of networks introduced by Girvan and Newman (GN) [@Girvan:2002]. Each network has $128$ nodes, divided into four groups with $32$ nodes each. The average degree of the network is $16$ and the nodes have approximately the same degree, as in a random graph. At variance with a random graph, nodes tend to be connected preferentially to nodes of their group: a parameter $k_{out}$ indicates what is the expected number of links joining each node to nodes of different groups (external degree). When $k_{out}<8$ each node shares more links with the other nodes of its group than with the rest of the network. In this case, the four groups are well defined communities and a good algorithm should be able to identify them.
This benchmark is regularly used to test algorithms. However, there are several caveats that one has to consider:
- [all nodes of the network have essentially the same degree;]{}
- [the communities are all of the same size;]{}
- [the network is small.]{}
The first two remarks indicate that the GN benchmark cannot be considered a proxy of a real network with community structure.
![image](fig1){width="\textwidth"}
Real networks are characterized by heterogeneous distributions of node degree, whose tails often decay as power laws. Such heterogeneity is responsible for a number of remarkable features of real networks, such as resilience to random failures/attacks [@albert00], and the absence of a threshold for percolation [@cohen00] and epidemic spreading [@pastor01]. Therefore, a good benchmark should have a skewed degree distribution, like real networks. Likewise, it is not correct to assume that all communities have the same size: the distribution of community sizes of real networks is also broad, with a tail that can be fairly well approximated by a power law [@palla; @guimera03; @arenasrev; @clausetfast]. A reliable benchmark should include communities of very different sizes. A variant of the GN benchmark with communities of different size was introduced in [@danon06]. Finally, the GN benchmark was a network of a reasonable size for most existing algorithms at the time when it was introduced. Nowadays, there are methods able to analyze graphs with millions of nodes [@clausetfast; @blondel08; @lancichinetti08] and it is not appropriate to compare their performances on small graphs. In general, an algorithm should be tested on benchmarks of variable size and average degree, as these parameters may seriously affect the outcome of the method, and reveal its limits, as we shall see.
In this paper we propose a realistic benchmark for community detection, that accounts for the heterogeneity of both degree and community size. Detecting communities on this class of graphs is a challenging task, as shown by applying well known community detection algorithms.
The benchmark {#sec2}
=============
We assume that both the degree and the community size distributions are power laws, with exponents $\gamma$ and $\beta$, respectively. The number of nodes is $N$, the average degree is $\langle k\rangle$.
In the GN benchmark a node may happen to have more links outside than inside its community even when $k_{out}<8$, due to random fluctuations, which raises a conceptual problem concerning the natural classification of the node. The construction of a realization of our benchmark proceeds through the following steps (a simplified code sketch is given after the list):
1. [Each node is given a degree taken from a power law distribution with exponent $\gamma$. The extremes of the distribution $k_{min}$ and $k_{max}$ are chosen such that the average degree is $\langle k\rangle$. The configuration model [@molloy] is used to connect the nodes so as to keep their degree sequence.]{}
2. [Each node shares a fraction $1-\mu$ of its links with the other nodes of its community and a fraction $\mu$ with the other nodes of the network; $\mu$ is the [*mixing parameter*]{}.]{}
3. [The sizes of the communities are taken from a power law distribution with exponent $\beta$, such that the sum of all sizes equals the number $N$ of nodes of the graph. The minimal and maximal community sizes $s_{min}$ and $s_{max}$ are chosen so as to respect the constraints imposed by our definition of community: $s_{min} > k_{min}$ and $s_{max}>k_{max}$. This ensures that a node of any degree can be included in at least one community.]{}
4. [At the beginning, all nodes are homeless, i.e. they are not assigned to any community. In the first iteration, a node is assigned to a randomly chosen community; if the community size exceeds the internal degree of the node (i.e. the number of its neighbors inside the community), the node enters the community, otherwise it remains homeless. In successive iterations we place a homeless node to a randomly chosen community: if the latter is complete, we kick out a randomly selected node of the community, which becomes homeless. The procedure stops when there are no more homeless nodes.]{}
5. [To enforce the condition on the fraction of internal neighbors expressed by the mixing parameter $\mu$, several rewiring steps are performed, such that the degrees of all nodes stay the same and only the split between internal and external degree is affected, when needed. In this way the ratio between external and internal degree of each node in its community can be set to the desired share $\mu$ with good approximation.]{}
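The following is a simplified sketch of steps 1–4 above (the rewiring of step 5, which enforces the mixing parameter $\mu$, is omitted). It is not the reference implementation distributed with the paper: the use of the networkx configuration model, the helper functions, and the shortcut of forcing one community of size $s_{max}$ so that every node can be hosted are assumptions made purely for illustration.
```python
# Simplified sketch of the benchmark construction (steps 1-4); illustrative only.
import random
import networkx as nx

def powerlaw_int(exponent, xmin, xmax):
    """Integer from p(x) ~ x^(-exponent) on [xmin, xmax]
    (inverse-transform sampling, rejecting draws above xmax)."""
    while True:
        x = xmin * (1.0 - random.random()) ** (-1.0 / (exponent - 1.0))
        if x <= xmax:
            return int(round(x))

def benchmark(N=1000, gamma=2.5, beta=1.5, k_min=5, k_max=50,
              s_min=10, s_max=100, mu=0.3, seed=0):
    random.seed(seed)
    # Step 1: power-law degree sequence, wired with the configuration model.
    degrees = [powerlaw_int(gamma, k_min, k_max) for _ in range(N)]
    if sum(degrees) % 2:
        degrees[0] += 1                       # configuration model needs an even sum
    G = nx.Graph(nx.configuration_model(degrees, seed=seed))
    G.remove_edges_from(nx.selfloop_edges(G))
    # Step 2: each node keeps a fraction (1 - mu) of its links inside its community.
    internal = [int(round((1.0 - mu) * k)) for k in degrees]
    # Step 3: power-law community sizes summing to N (one community of size s_max
    # is forced so that any node can be hosted somewhere).
    sizes = [s_max]
    while sum(sizes) < N - s_max:
        sizes.append(powerlaw_int(beta, s_min, s_max))
    sizes.append(N - sum(sizes))              # remainder (may fall below s_min here)
    # Step 4: assign nodes, kicking out random members of complete communities.
    members = [set() for _ in sizes]
    homeless = list(range(N))
    random.shuffle(homeless)
    while homeless:
        v = homeless.pop()
        c = random.choice([i for i, s in enumerate(sizes) if s > internal[v]])
        if len(members[c]) == sizes[c]:       # community complete: evict someone
            out = random.choice(sorted(members[c]))
            members[c].remove(out)
            homeless.append(out)
        members[c].add(v)
    # Step 5 (rewiring so that internal/external degrees match mu) is omitted here.
    return G, members
```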
The prescription we have given leads to fast convergence. In Fig. \[fig1b\] we show how the time to completion scales with the number of links of the graphs. The latter is expressed by the average degree, as the number of nodes of the graphs is kept fixed. The curves clearly show a linear relation between the computer time and the number of links of the graph. Therefore our procedure allows us to build fairly large networks (up to $10^5-10^6$ nodes) in a reasonable time.
![\[fig1b\] Study of the complexity of our algorithm. The plots show the scaling of the computer time (in seconds) with the average degree of the graph. The curves correspond to different choices for the exponents $\gamma$ and $\beta$ and the value of $\mu$. The two panels reproduce graphs with $1000$ (a) and $10000$ nodes (b). The calculations were performed on Opteron processors.](comptime.eps "fig:"){width="\columnwidth"} ![\[fig1b\] Study of the complexity of our algorithm. The plots show the scaling of the computer time (in seconds) with the average degree of the graph. The curves correspond to different choices for the exponents $\gamma$ and $\beta$ and the value of $\mu$. The two panels reproduce graphs with $1000$ (a) and $10000$ nodes (b). The calculations were performed on Opteron processors.](comptime_a.eps "fig:"){width="\columnwidth"}
Due to the strong constraints we impose on the system, in some instances convergence may not be reached. However, this is very unlikely for the range of parameters we have used. For the exponents we have taken typical values of real networks: $2\leq \gamma\leq 3$, $1\leq\beta\leq 2$.
Our algorithm tries to set the $\mu$-value of each node to the predefined input value, but of course this does not work in general, especially for nodes of small degree, where the possible values of $\mu$ are just a few and clearly separated. So, the distribution of $\mu$-values for a given benchmark graph cannot be a $\delta$-function, but is rather a bell-shaped curve with a pronounced peak (Fig. \[fig1c\]).
![\[fig1c\] Distribution of the $\mu$-values for benchmark graphs obtained with our algorithm for different choices of the exponents and system size. ](mudistr.eps){width="\columnwidth"}
Tests {#sec3}
=====
We have used our benchmark to test the performance of two methods to detect communities in networks, i.e. modularity optimization [@Newman:2004c; @Duch:2005; @Guimera:2005], probably the most popular method of all, and the algorithm based on the Potts model introduced by Reichardt and Bornholdt [@reichardt04].
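For reference, the quantity being maximized is Newman's modularity; the following minimal sketch (assuming a networkx-style undirected graph without self-loops and a node-to-community dictionary; it reproduces only the objective function, not the simulated-annealing optimizer used in our tests) spells out its computation.
```python
# Minimal sketch of the modularity Q of a partition (the objective that the
# simulated-annealing procedure maximizes); illustrative only.
from collections import Counter

def modularity(G, community):
    """Q = sum_c [ l_c / m - (d_c / 2m)^2 ], with l_c the number of edges inside
    community c, d_c the total degree of its nodes, and m the total edge count."""
    m = G.number_of_edges()
    inside, degree = Counter(), Counter()
    for u, v in G.edges():
        if community[u] == community[v]:
            inside[community[u]] += 1
        degree[community[u]] += 1
        degree[community[v]] += 1
    return sum(inside[c] / m - (degree[c] / (2.0 * m)) ** 2 for c in degree)
```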
For modularity, the optimization was carried out through simulated annealing, as in [@Guimera:2005], which is not a fast technique but yields good estimates of modularity maxima. In Fig. \[fig2\] we plot the performance of the method as a function of the external degree of the nodes for the GN benchmark.
![\[fig2\] Test of modularity optimization on the benchmark of Girvan and Newman.](fig2){width="\columnwidth"}
To compare the built-in modular structure with the one delivered by the algorithm we adopt the [*normalized mutual information*]{}, a measure of similarity of partitions borrowed from information theory, which has proved to be reliable [@Danon:2005]. As we can see from the figure, the natural partition is always found up to $k_{out}=6$; beyond that the method starts to fail, although it still finds good partitions even when communities are fuzzy ($k_{out}\geq 8$). However, many algorithms achieve comparable performances, so the benchmark can hardly discriminate between different methods: for $k_{out}< 8$ we are already close to the top performance and there seems to be little room for improvement.
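A compact sketch of the normalized mutual information is given below, following the standard information-theoretic definition with partitions encoded as node-to-community dictionaries; it is an illustrative re-implementation, not the code used to produce the figures.
```python
# Sketch of the normalized mutual information between two partitions.
import math
from collections import Counter

def nmi(part_a, part_b):
    """part_a, part_b: dicts mapping each node to its community label.
    (Undefined when both partitions consist of a single community.)"""
    n = len(part_a)
    joint = Counter((part_a[v], part_b[v]) for v in part_a)
    ca, cb = Counter(part_a.values()), Counter(part_b.values())
    num = sum(nab * math.log(nab * n / (ca[a] * cb[b]))
              for (a, b), nab in joint.items())
    den = (sum(x * math.log(x / n) for x in ca.values())
           + sum(x * math.log(x / n) for x in cb.values()))
    return -2.0 * num / den   # 1 for identical partitions, ~0 for independent ones

p = {0: "a", 1: "a", 2: "b", 3: "b"}
print(nmi(p, p))              # identical partitions give 1.0
```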
In Fig. \[fig3\] we show what happens if one optimizes modularity on the new benchmark, for $N=1000$. The four panels correspond to four pairs for the exponents $(\gamma, \beta)=(2,1), (2,2), (3,1), (3,2)$. We have chosen combinations of the extremes of the exponents’ ranges in order to explore the widest spectrum of graph structures. For each pair of exponents, we have used three values for the average degree $\langle k\rangle=15, 20, 25$. Each curve shows the variation of the normalized mutual information with the mixing parameter $\mu$.
![\[fig3\] Test of modularity optimization on the new benchmark. The number of nodes $N=1000$. The results clearly depend on all parameters of the benchmark, from the exponents $\gamma$ and $\beta$ to the average degree $\langle k \rangle$. The threshold $\mu_c=0.5$ (dashed vertical line in the plots) marks the border beyond which communities are no longer defined in the strong sense, i.e. such that each node has more neighbors in its own community than in the others [@radicchi]. Each point corresponds to an average over $100$ graph realizations.](fig3){width="\columnwidth"}
![\[fig4\] Test of modularity optimization on the new benchmark. The number of nodes is now $N=5000$, the other parameters are the same as in Fig. \[fig3\]. Each point corresponds to an average over $25$ graph realizations.](fig4){width="\columnwidth"}
In general, from Fig. \[fig3\] we can infer that the method gives good results. However, we find that it begins to fail even when communities are only loosely connected to each other (small $\mu$). This is due to the fact that modularity optimization has an intrinsic resolution limit that makes small communities hard to detect [@FB]. Our benchmark is able to disclose this limit. We have explicitly verified that the modularity of the natural partition of the graph is lower than the maximum obtained from the optimization, and that the partition found by the algorithm has systematically a smaller number of clusters, due to the merging of small communities into larger groups.
We also see that the performance of the method improves as the average degree $\langle k\rangle$ increases, whereas it gets worse when the communities are more similar to each other in size (larger $\beta$).
![\[fig5\] Test of Potts model clustering on the new benchmark. The number of nodes $N=1000$. The results clearly depend on all parameters of the benchmark, from the exponents $\gamma$ and $\beta$ to the average degree $\langle k \rangle$. Each point corresponds to an average over $100$ graph realizations.](fig5){width="\columnwidth"}
![\[fig6\] Test of Potts model clustering on the new benchmark. The number of nodes $N=5000$, the other parameters are the same as in Fig. \[fig5\]. Each point corresponds to an average over $10$ graph realizations.](fig6){width="\columnwidth"}
To check how the performance is affected by the network size, we have tested the method on a set of larger graphs (Fig. \[fig4\]). Now $N=5000$, whereas the other parameters are the same as before. Curves corresponding to the same parameters are similar, but shifted towards the bottom for the larger systems. We conclude that the performance of the method worsens if the size of the graph increases. If we consider that networks with $5000$ nodes are much smaller than many graphs one would like to analyze, modularity optimization may give inaccurate results in practical cases, something which could not be inferred from tests on existing benchmarks.
We have repeated the same analysis for the Potts model algorithm. We closely followed the implementation suggested by the authors of [@reichardt04]: we set the number of spin states equal to the number of nodes of the network, the ferromagnetic coupling $J$ was set to $1$, whereas the antiferromagnetic coupling $\gamma$ equals the density of links of the network. The results are shown in Figs. \[fig5\] and \[fig6\]. The performance of the method is fair, and it worsens for larger system sizes, like for modularity optimization, which proves superior.
Summary {#sec4}
=======
We have introduced a new class of graphs to test algorithms identifying communities in networks. These new graphs extend the GN benchmark by introducing features of real networks, i.e. the heterogeneity in the distributions of node degree and community size. We found that these elements pose a harder test to existing methods. We have tested modularity optimization and a clustering technique based on the Potts model against the new benchmark. From the results the resolution limit of modularity emerges immediately. Furthermore, we have seen that the size of the graph and the density of its links have a sizeable effect on the performance of the algorithm, so it is very important to study this dependence when testing a new algorithm. The new benchmark is suitable for this type of analysis, as the graphs can be constructed very quickly, and one can span several orders of magnitude in network size. A software package to generate the benchmark graphs can be downloaded from [*http://santo.fortunato.googlepages.com/benchmark.tgz*]{}.
[20]{}
M. E. J. Newman, SIAM Review [**45**]{}, 167 (2003).
S. Boccaletti, V. Latora, Y. Moreno, M. Chavez and D.-U. Hwang, Phys. Rep. [**424**]{}, 175 (2006).
M. Girvan and M. E. J. Newman, Proc. Natl. Acad. Sci. [**99**]{}, 7821 (2002).
S. Fortunato and C. Castellano, in [*Encyclopedia of Complexity and System Science*]{}, ed. B. Meyers (Springer, Heidelberg, 2009), arXiv:0712.2716 at www.arXiv.org.
D. Lusseau and M. E. J. Newman, Proc. R. Soc. London B [**271**]{}, S477 (2004).
G. W. Flake, S. Lawrence, C. Lee Giles and F. M. Coetzee, IEEE Computer **35**(3), 66 (2002).
R. Guimerà and L. A. N Amaral, Nature [**433**]{}, 895 (2005).
G. Palla, I. Derényi, I. Farkas and T. Vicsek, Nature [**435**]{}, 814 (2005).
R. Albert, H. Jeong and A.-L. Barabási, Nature [**406**]{}, 378 (2000).
R. Cohen, K. Erez, D. ben-Avraham and S. Havlin, Phys. Rev. Lett. [**85**]{}, 4626 (2000).
R. Pastor-Satorras and A. Vespignani, Phys. Rev. Lett. [**86**]{}, 3200 (2001).
R. Guimerà, L. Danon, A. Díaz-Guilera, F. Giralt and A. Arenas, Phys. Rev. E [**68**]{}, 065103 (R) (2003).
L. Danon, J. Duch, A. Arenas and A. Díaz-Guilera, in [*Large Scale Structure and Dynamics of Complex Networks: From Information Technology to Finance and Natural Science*]{}, eds. G. Caldarelli and A. Vespignani (World Scientific, Singapore, 2007), pp 93–114.
A. Clauset, M. E. J. Newman and C. Moore, Phys. Rev. E [**70**]{}, 066111 (2004).
L. Danon, A. Díaz-Guilera and A. Arenas, J. Stat. Mech. P11010 (2006).
V. D. Blondel, J.-L. Guillaume, R. Lambiotte and E. Lefebvre, arXiv:0803.0476 at www.arXiv.org.
A. Lancichinetti, S. Fortunato and J. Kertész, arXiv:0802.1218 at www.arXiv.org.
M. Molloy and B. Reed, Comb. Prob. Comput. [**6**]{}, 161 (1995).
M. E. J. Newman, Phys. Rev. E [**69**]{}, 066133 (2004).
J. Duch and A. Arenas, Phys. Rev. E [**72**]{}, 027104 (2005).
J. Reichardt, S. Bornholdt, Phys. Rev. Lett. [**93**]{}, 218701 (2004).
L. Danon, A. Díaz-Guilera, J. Duch and A. Arenas, J. Stat. Mech. P09008 (2005).
F. Radicchi, C. Castellano, F. Cecconi, V. Loreto and D. Parisi, Proc. Natl. Acad. Sci. USA [**101**]{}, 2658–2663 (2004).
S. Fortunato and M. Barthélemy, Proc. Natl. Acad. Sci. USA [**104**]{}, 36 (2007).
---
abstract: 'We study a model for the transverse thermoelectric response due to quantum superconducting fluctuations in a two-leg Josephson ladder, subject to a perpendicular magnetic field $B$ and a transverse temperature gradient. Assuming a weak Josephson coupling on the rungs, the off-diagonal Peltier coefficient ($\alpha_{xy}$) and the Nernst effect are evaluated as functions of $B$ and the temperature $T$. In this regime, the Nernst effect is found to exhibit a prominent peak close to the superconductor–insulator transition (SIT), which becomes progressively enhanced at low $T$. In addition, we derive a relation to diamagnetic response: $\alpha_{xy}= -M/T_0$, where $M$ is the equilibrium magnetization and $T_0$ a plasma energy in the superconducting legs.'
author:
- 'Yeshayahu Atzmon$^{1}$ and Efrat Shimshoni$^{1}$'
title: 'Nernst Effect as a Signature of Quantum Fluctuations in Quasi-1D Superconductors'
---
Introduction {#sec:intro}
============
In low-dimensional superconducting (SC) systems (ultra-thin films, wires and Josephson arrays), fluctuations of the SC order parameter field lead to broadening of the transition to the SC state, and give rise to anomalous transport properties in the adjacent normal phase [@SCfluc]. While close to or above the critical temperature $T_c$ thermally excited fluctuations dominate these conduction anomalies, quantum fluctuations are expected to dominate at low temperatures $T\ll T_c$, where superconductivity is weakened due to, e.g., the effect of a magnetic field, disorder or repulsive Coulomb interactions. Their most dramatic manifestation is the onset of a superconductor–insulator transition (SIT) when an external parameter such as magnetic field or thickness is tuned beyond a critical point [@SITrev; @SIT1D].
A striking signature of the fluctuations regime, which has attracted much attention in recent years, is the anomalous enhancement of transverse thermoelectric effects in the presence of a perpendicular magnetic field $B$. In particular, a substantial Nernst effect has been measured far above $T_c$, e.g., in the underdoped regime of high-$T_c$ superconductors [@ong1; @ong2] and in disordered thin films [@behnia; @Pourret]. As the Nernst signal (a voltage developing in response to a temperature gradient in the perpendicular direction) is typically small in ordinary metals, its magnification in such systems has been attributed to the dynamics of thermally excited Gaussian SC fluctuations [@UD; @USH; @MF; @SSVG], or to mobile vortices above a Kosterlitz-Thouless [@KT] transition [@PRV]. Theoretical studies were also extended to the quantum critical regime of SC fluctuations [@BGS].
Conceptually, the above mentioned theoretical models share a common intuitive idea: in the phase-disordered, vortex liquid state (which is qualitatively equivalent to a regime dominated by dynamical Gaussian fluctuations), vortex flow generated parallel to a thermal gradient ($\nabla_y T$) naturally induces an electric field ($E_x$) in the perpendicular direction. Consequently, the general expression for the Nernst coefficient $$\nu\equiv\frac{E_x}{(\nabla_y T) B}=\frac{\rho_{xx}\alpha_{xy}-\rho_{xy}\alpha_{yy}}{B}
\label{Nernst_def}$$ is overwhelmingly dominated by the first term in the numerator, dictated by the off-diagonal Peltier coefficient $\alpha_{xy}$: SC fluctuations typically do not contribute to the second term due to particle-hole symmetry. This is in sharp contrast with ordinary metals, where the two terms almost cancel. Measurement of the Nernst signal is therefore often regarded as a direct probe of $\alpha_{xy}$, which is an interesting quantity: while being a transport coefficient, it is intimately related to thermodynamic properties. In particular, it was found (both experimentally and theoretically) to be proportional to the diamagnetic response [@ong2; @USH; @PRV]: $\alpha_{xy}\sim -M/T$. In the clean limit (i.e. for Galilean invariant systems), it was shown to encode the entropy per carrier [@CHR; @BO; @SRM].
![(color online) A scheme for measurement of the Nernst effect in a Josephson ladder subject to a magnetic field $B$ perpendicular to the plane, and a temperature difference between the top ($T_1$) and bottom ($T_2$) SC wires. Dashed lines represent Josephson coupling. \[fig1\] ](nernst_fig.pdf){width="0.9\linewidth"}
Note, however, that even in the case where Eq. (\[Nernst\_def\]) is dominated by the first term, the overall contribution to $\nu$ is not determined by $\alpha_{xy}$ alone, but rather by its product with the electric resistivity $\rho_{xx}$. Observation of a large Nernst signal therefore necessitates a reasonably resistive normal state. This indicates that a large Nernst signal is a subtle effect: on one hand it requires the presence of superconducting fluctuations, and on the other hand requires the superconducting fluctuations to be [*dynamic*]{} in order to produce a sizable voltage drop. A conjunction of these competing tendencies occurs in the fluctuations dominated regime. Moreover, the Nernst effect is expected to serve as a sensitive probe of a SIT. It should be pointed out, however, that when the normal state adjacent to the SC transition is an insulator, $\alpha_{xy}$ can [*not*]{} be directly deduced from $\nu$: unlike most cases studied in the literature thus far, $\rho_{xx}$ can not be assumed constant. Rather, it possesses a significant dependence of its own on the deviation from the critical point, and on $T$ (in particular, a [*divergence*]{} in the $T\rightarrow 0$ limit). As a result, although $\alpha_{xy}$ is bound to vanish for $T\rightarrow 0$ by the third law of thermodynamics, $\nu$ may in principle approach a finite value in this limit.
Motivated by the above general observations, in the present paper we develop a theory for the transverse thermoelectric coefficients and their relation to diamagnetism in the quasi one-dimensional (1D) superconducting device depicted in Fig. \[fig1\], in which the geometry dictates an appreciable Nernst effect in the fluctuations-dominated regime. The device considered is a two-leg Josephson ladder subject to a perpendicular magnetic field $B$, where a small temperature difference between the legs induces voltage along the ladder due to transport of vortices across the junction [@glazman]. At low $T$, one expects vortex transport to be dominated by quantum tunneling. This system serves as a minimal setup for observing transverse thermoelectric effects [@Jnernst]. The relative simplicity of the model describing the quantum dynamics of SC fluctuations allows an explicit evaluation of $\alpha_{xy}$, $\nu$ and the magnetization density $M$ in a wide range of parameters. In particular, we investigate their behavior when the wires parameters are tuned through a SIT, and find a prominent peak in $\nu$ close to the transition, which becomes progressively enhanced [@3rd_law] at low $T$. We further confirm the proportionality relation between $\alpha_{xy}$ and $-M$, however the prefactor is $1/T_0$, with $T_0$ the plasma energy scale, rather than $1/T$ as found in the 2D case [@USH; @PRV].
The rest of the paper is organized as follows: in Sec. \[sec:model\] we set up the model. In Sec. \[sec:thermoelectric\] we detail our derivation of the transverse thermoelectric coefficients $\alpha_{xy}$ (subsection \[sec:alpha\_xy\]) and $\nu$ (subsection \[sec:nu\]). In Sec. \[sec:M\] we derive a relation between $\alpha_{xy}$ and the diamagnetic magnetization ($-M$). Finally, in Sec. \[sec:discussion\] we summarize our main results and conclusions.
The Model {#sec:model}
=========
We consider the system indicated in Fig. \[fig1\], which consists of two SC wires of length $L$ parallel to the $x$ direction separated by a thin insulating layer of width $W$, which allows a weak Josephson coupling $J$ per unit length. In each of the separate wires ($n=1,2$), the 1D quantum dynamics of fluctuations in the phase of the SC condensate is governed by a Hamiltonian of the form (in units where $\hbar =1$) $$H_n = \frac{v}{{2\pi }}\int_{ - \frac{L}{2}}^{\frac{L}{2}} d x\left[ {K{{({\partial _x}{\theta _n})}^2 + }\frac{1}{K}{{({\partial _x}{\phi _n})}^2}} \right]\; ;
\label{Hn}$$ here $\phi _{n}(x)$ is the collective phase field, and $\theta
_{n}(x)$ its conjugate field (obeying $[\phi _{n}(x),\partial
_{x}\theta _{n}(x^{\prime })]=i\pi \delta (x^{\prime }-x)$) which denotes Cooper pair number fluctuations. This model can be viewed as describing, e.g., the continuum limit of a Josephson chain [@SIT1D], where the Josephson coupling ($E_J$) and charging energy ($E_c$) between adjacent SC grains are related to the parameters of $H_n$ by $K=\sqrt{E_c/E_J}$ and $v=\sqrt{E_cE_J}\pi
a$, with $a$ the grain size. Note that Eq. \[Hn\] is a low-energy approximation of the quantum fluctuations. As will be shown later, to get non-trivial transverse thermoelectric effects we will need to keep corrections to $H_n$ involving, e.g., higher derivatives of the fields. Such corrections take into account, for example, fluctuations in the current density within the finite width of the wires ($\sim a$), and coupling of the collective modes to microscopic degrees of freedom.
The Josephson coupling between the wires is given by $$H_J=-J\int_{ - \frac{L}{2}}^{\frac{L}{2}} d x\, \cos \{{\phi_1} -
{\phi_2} - qx\}
\label{HJ}$$ where $q$ is the deviation of the vortex density in the junction area from a commensurate value: $$q=2\pi\left(\frac{WB}{\Phi_0}\;{\rm mod}\;\frac{1}{a}\right)
\label{q_def}$$ in which (for $\hbar=c=1$) $\Phi_0=\pi/e$ is the flux quantum. Assuming the hierarchy of scales $Ja\ll T\ll v/a$ (with $T$ an average temperature of the system), $H_J$ \[Eq. (\[HJ\])\] can be treated perturbatively. Note that the first inequality justifies this approximation for an arbitrary value of the Luttinger parameter $K$ in Eq. (\[Hn\]): for $K<2$ and sufficiently small $q$, the Josephson term becomes relevant [@Giamarchi], and induces a SC phase where fluctuations in the relative phase between the wires are gapped [@OG; @AS] in the $T\rightarrow 0$ limit. In turn, additional perturbations such as inter-wire charging energy [@OG; @AS] and disorder [@GS] generate a transition to an insulating $T\rightarrow 0$ phase for sufficiently large $K$. Since, as shown below, in both extreme phases the Nernst effect is strongly suppressed, we focus our attention on the [*intermediate*]{} $T$ regime, where temperature exceeds the energy scale associated with all these perturbations.
In addition to $H_J$, we introduce a weak backscattering term due to disorder of the form $$\begin{aligned}
\label{HD} H_{D}&=&\sum_{n=1,2}\int dx\zeta_n (x)\cos \{2\theta
_{n}(x)\}\; ,\\ \overline{\zeta_n (x)} &=& 0\; ,\quad
\overline{\zeta_n (x)\zeta_{n^{\prime}} (x^{\prime })} =D\delta
(x-x^{\prime })\delta_{n,n^{\prime}} \nonumber\end{aligned}$$ where overline denotes disorder averaging. This term generates the leading contribution to the resistivity $\rho_{xx}$, and thus to $\nu$ via Eq. (\[Nernst\_def\]).
The Transverse Thermoelectric Coefficients {#sec:thermoelectric}
==========================================
We next consider a temperature difference $\Delta T=T_1-T_2$ between the top and bottom wires (see Fig. \[fig1\]), each assumed to be at equilibrium with a separate heat reservoir. As a result of the transverse Peltier effect, a current is induced along the legs of the ladder. Alternatively, if one maintains open boundary conditions, a voltage develops along the wires due to the Nernst effect. Below we derive the corresponding coefficients.
The Off Diagonal Thermoelectric Coefficient $\alpha_{xy}$ {#sec:alpha_xy}
---------------------------------------------------------
We first evaluate the electric current $I_x$ induced along the ladder for $\Delta T \ll T \equiv \frac{{{T_1} + {T_2}}}{2}$, which yields the transverse Peltier coefficient $$\alpha_{xy}=\frac{I_x}{\Delta T}\; .
\label{alpha_xy_def}$$ Alternatively, implementing the Onsager relations [@USH], one can deduce it from the coefficient $\tilde\alpha_{xy}=T\alpha_{xy}$, which dictates the heat current $I^{(h)}_x$ generated along the device in response to a voltage difference $V_y$ between the two wires. We show explicitly below that the result of both calculations is indeed the same.
The electric current is given by the expectation value $$I_x = 2e\pi \left\langle \dot {\theta }_1 +\dot {\theta
}_2\right\rangle
\label{Ix_def}$$ where the current operators $\dot{ {\theta }}_n
(x,t)=i[H,{\theta_n }]$ ($H=H_0+H_J$ where $H_0\equiv H_1+H_2$) are evaluated perturbatively in $H_J$ \[Eq. (\[HJ\])\] using the interaction representation. The leading contribution to $I_x$ arises from the second order: $$\begin{aligned}
& &\dot{ {\theta }}_n (x,t)= U(t){{\dot {\theta}_n^{(0)}
}(x,t)}{U^\dag }(t)\quad{\rm where} \nonumber \\ \label{I_n}&
&\dot {\theta}_n^{(0)} (x,t) = \frac{v}{K}{\partial_x}{\phi
_n}(x,t)\; ,\\
& &U(t)= 1 + {\mathop {i\smallint }\limits_{ - \infty }
^t}d{t_1}{H_J}({t_1}) - {\mathop \smallint \limits_{ - \infty }
^t}d{t_1}{\mathop \smallint \limits_{ - \infty }
^{{t_1}}}d{t_2}{H_J}({t_1}){H_J}({t_2})\; .
\nonumber\end{aligned}$$ Employing Eq. (\[HJ\]) and inserting the resulting expressions for $\dot{ {\theta }}_n$ in Eq. (\[Ix\_def\]), we obtain
$$\begin{array}{*{20}{l}}
{{I_x} = \frac{{{\pi evJ^2}}}{2}\int\limits_{ - \infty }^t {d{t_1}\int\limits_{ - \infty }^{t_1} {d{t_2}} } \int\limits_{ - \frac{L}{2}}^{\frac{L}{2}} {d{x_1}\int\limits_{ - \frac{L}{2}}^{\frac{L}{2}} {d{x_2}} } \sin \left[ {q({x_1} - {x_2})} \right]
i\Im m\left[ {{{ {{e^{ - \frac{K}{2}{F_{T_1}}\left( {{x_1} - {x_2};{t_1} - {t_2}} \right)}}} }}{{ {{e^{ - \frac{K}{2}{F_{T_2}}\left( {{x_1} - {x_2};{t_1} - {t_2}} \right)}}} }}} \right]}\\
{\left[ {\left\{ {{{\left. {{\partial _x}{F_{T_1}}(x - {x_1};t - {t_1}) + {\partial _x}{F_{T_1}}({x_2} - x;{t_2} - t)} \right\}}} - \left\{ {{{\left. {{\partial _x}{F_{T_2}}(x - {x_1};t - {t_1}) + {\partial _x}{F_{T_2}}({x_2} - x;{t_2} - t)} \right\}}}} \right.} \right.} \right.}\\
- \left. {\left\{ {{{\left. {{\partial _x}{F_{T_1}}(x - {x_2};t - {t_1}) + \partial_x{F_{T_1}}({x_1} - x;{t_2} - t)} \right\}}} + } \right.\left\{ {{{\left. {{\partial _x}{F_{T_2}}(x - {x_2};t - {t_1}) + {\partial _x}{F_{T_2}}({x_1} - x;{t_2} - t)} \right\}}}} \right.} \right]\; ,
\end{array}
\label{Ix_int}$$
where we use the Boson correlation function $F_T(x;t)\equiv \frac{1}{K}\langle[\phi_n(x,t)-\phi_n(0,0)]^2\rangle$ at fixed temperature $T$: [@Giamarchi] $${F_T}(x;t) = \frac{1}{2}\log \left[ {\frac{{{v^2}}}{{{\pi ^2}{a ^2}{T^2}}}\left\{ {\left. {\sinh \left[ {\pi T\left( {\frac{x}{v} - t + i\epsilon\, {\rm sign}(t)} \right)} \right]\sinh \left[ {\pi T\left( {\frac{x}{v} + t - i\epsilon \,{\rm sign}(t)} \right)} \right]} \right\}} \right.} \right]
\label{FT_def}$$
in which $\epsilon\sim a/v$ is associated with the short-distance cutoff.
Performing the integral first over the center-of-mass coordinate $x_c=\frac{x_1+x_2}{2}$ in Eq. (\[Ix\_int\]), and taking the limit $\epsilon\rightarrow 0$ in Eq. (\[FT\_def\]), it is easy to see that, since $\Im m\{F\}=\pi$ is independent of $T$, the resulting expression actually vanishes. This follows from the Lorentz invariance of the model for phase-fluctuations, Eq. (\[Hn\]). As we discuss further in Sec. \[sec:discussion\], this behavior is in fact quite characteristic: deviations from a linear energy-momentum dispersion are required to get a finite $\alpha_{xy}$. In the present case, a non-vanishing result (of order $\epsilon$) would emerge if $\epsilon$ in Eq. (\[FT\_def\]) is kept finite. This signifies that the leading contribution to $\alpha_{xy}$ arises from physics on scales of the short-distance cutoff, which depends on microscopic details. We therefore need to include such corrections to Eq. (\[Hn\]), namely terms which violate Lorentz invariance: as will be elaborated in Sec. \[sec:discussion\], such terms are indeed necessary to provide the Josephson vortices in this system with entropy.
Tracing back to the underlying microscopic theory of SC devices, Eq. (\[Hn\]) is an effective Hamiltonian for the collective fields $\phi_n$, $\theta_n$ arising to leading order in a gradient expansion. A variety of higher energy corrections, allowed by the symmetries of the problem, are present in any physical system. In particular, as a concrete example, corrections to the Josephson Hamiltonian which hybridize phase and charge fluctuations have been derived in earlier literature [@ESA; @SGF] for a single junction. When incorporated in the continuum limit of a model for Josephson array, these yield higher order derivatives, e.g. a term of the form [@SGF] $$\label{HJcorr}
H_n^{corr}=\frac{\mathcal{C}va}{2\pi}\int_{ - \frac{L}{2}}^{\frac{L}{2}}dx\,[(i{\partial _x}{\phi_n})({\partial_x^2}{\theta_n})+h.c.]$$ where $\mathcal{C}$ is a dimensionless constant. This adds a correction to the current operator in the wire $n$ \[Eq. (\[I\_n\])\] of the form $$\label{I_ncorr}
\delta\dot {\theta}_n^{(0)}=i\mathcal{C}va({\partial_x^2}{\theta_n})\approx \frac{i\mathcal{C}a}{K}\partial_t{\partial _x}{\phi_n}$$ where in the last approximation we have used the leading term in the equation of motion for $\phi_n$. Inserting these corrections to $\dot{\theta}_1$, $\dot{\theta}_2$ into Eqs. (\[Ix\_def\]), (\[I\_n\]) we obtain
$$\begin{array}{*{20}{l}}
{{I_x} \approx -\frac{{{\pi e\mathcal{C}aJ^2}}}{2}\int\limits_{ 0 }^{\infty} {d{t}} } \int\limits_{ - \frac{L}{2}}^{\frac{L}{2}} {d{x_c}\int\limits_{ - L}^{L} {d{x}} } \sin \left[ {qx} \right]
\Im m\left[ {{{ {{e^{ - \frac{K}{2}{F_{T_1}}\left( {x;t} \right)}}} }}{{ {{e^{ - \frac{K}{2}{F_{T_2}}\left( {x;t} \right)}}} }}} \right]\\
{\left[ {\left\{ {{{\left. {{\partial _x}{F_{T_1}}\left(-x_c - \frac{x}{2};0\right) + {\partial _x}{F_{T_1}}\left(x_c - \frac{x}{2};-t\right)
} \right\}}} - \left\{ {{{\left. {{\partial _x}{F_{T_2}}\left(-x_c - \frac{x}{2};0\right)
+ {\partial _x}{F_{T_2}}\left(x_c - \frac{x}{2};-t\right)
} \right\}}}} \right.} \right.} \right.}\\
- \left. {\left\{ {{{\left. {{\partial _x}{F_{T_1}}\left(-x_c + \frac{x}{2};0\right)
+ \partial_x{F_{T_1}}\left(x_c + \frac{x}{2};-t\right)
} \right\}}} + } \right.\left\{ {{{\left. {{\partial _x}{F_{T_2}}\left(-x_c + \frac{x}{2};0\right)
+ {\partial _x}{F_{T_2}}\left(x_c + \frac{x}{2};-t\right)
} \right\}}}} \right.} \right]\; ,
\end{array}\nonumber$$
$$\begin{aligned}
\label{Ix_int_corr}
&\approx &-(\Delta T)2\pi^2e\mathcal{C}aJ^2\int\limits_{ 0 }^{\infty} {d{t}}\int\limits_{-\infty}^{\infty} {d{x}}\,x\sin \left[ {qx} \right]
\Im m \left\{\chi(x,t)\right\}\; ,\\
& & \chi(x,t)=\frac{(\pi aT/v)^K}{\left(\sinh \left[ {\pi T\left( {\frac{x}{v} - t + i\epsilon} \right)} \right]\sinh \left[ {\pi T\left( {\frac{x}{v} + t - i\epsilon } \right)} \right] \right)^{K/2}} \nonumber\end{aligned}$$
where in the last step we have assumed further $\Delta T\ll T$ and $L\rightarrow\infty$, and inserted the explicit expression for ${F_T}$ Eq. (\[FT\_def\]). Employing the definition Eq. (\[alpha\_xy\_def\]), we evaluate the remaining integrals and finally get
$$\label{alpha_xy_general}
\alpha_{xy} = - \frac{e{(\pi J)^2}a^3\mathcal{C}}{4v^2}
\sin \left( {\frac{{\pi K}}{2}} \right){\left( {\frac{{2\pi a T}}{v}} \right)^{K-2}}{\partial _q}\left\{
\left|B\left( {i\frac{{vq}}{{4\pi T}} + \frac{K}{4},1 - \frac{K}{2}} \right)\right|^2\right\}
$$
where $B(z,w)$ is the Beta function [@GRbook]. Note that the resulting $\alpha_{xy}(T)$ exhibits a power-law $T$-dependence, which indicates an apparent divergence in the $T\rightarrow 0$ limit for sufficiently small $K$. Since $\alpha_{xy}$ is known to be proportional to the entropy of carriers, such behavior would violate the third law of thermodynamics. We emphasize, however, that this is an artefact of the approximations leading to Eq. (\[alpha\_xy\_general\]), which assume a finite temperature and in particular $J\ll T$. While the true $T\rightarrow 0$ limit is beyond the scope of the present theory, we speculate that due to the opening of a gap (either superconducting or insulating, depending on the value of $K$), the coefficient $\alpha_{xy}$ is suppressed as expected.
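For concreteness, Eq. (\[alpha\_xy\_general\]) can be evaluated numerically; the sketch below is illustrative only (units with $e=a=v=1$, arbitrary parameter values chosen consistent with $Ja\ll T$, and a simple central difference for the $q$-derivative), using the Gamma function with complex argument to construct $|B(z,w)|^2$.
```python
# Numerical sketch of Eq. (alpha_xy_general); parameter values are illustrative
# and units are chosen so that e = a = v = 1.
import numpy as np
from scipy.special import gamma

def beta_abs2(z, w):
    """|B(z, w)|^2 with a complex first argument, via Gamma functions."""
    return abs(gamma(z) * gamma(w) / gamma(z + w)) ** 2

def alpha_xy(q, T, K, J=0.01, C=1.0, dq=1e-6):
    pref = -(np.pi * J) ** 2 * C / 4.0 * np.sin(np.pi * K / 2.0) \
           * (2.0 * np.pi * T) ** (K - 2.0)
    f = lambda x: beta_abs2(1j * x / (4.0 * np.pi * T) + K / 4.0, 1.0 - K / 2.0)
    return pref * (f(q + dq) - f(q - dq)) / (2.0 * dq)   # d/dq by central difference

print(alpha_xy(q=0.05, T=0.1, K=1.5))
```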
We now consider the alternative setup where an electric voltage $V_y$ is imposed between the top and bottom wires (at uniform $T$), and evaluate the heat current $I^{(h)}_x$ induced along the ladder. For $J=0$, but accounting for the corrections $H_n^{corr}$ \[Eq. (\[HJcorr\])\], the local heat current operator is given by [@RA] $$J_h^{(0)}=v^2\sum_{n=1,2}\partial_x\theta_n
\left({\partial_x}{\phi_n}+i\mathcal{C}\frac{a}{v}\partial_t{\partial_x}{\phi_n}\right)\; .
\label{Jh_def}$$ The voltage bias corresponds to a difference in chemical potentials in the two legs, $\mu_{1,2}=\pm eV_y$, which introduce constant shifts of $\partial_x\theta_{1,2}$ by $\pm \pi eV_y/vK$. The heat current $I^{(h)}_x=\left\langle U(t)J_h^{(0)}{U^\dag }(t)\right\rangle$ \[with $U(t)$ expanded to second order in $H_J$ as in Eq. (\[I\_n\])\], is hence given by $$I^{(h)}_x=e\pi V_y\left\langle U(t)(\dot {\theta}_1^{(0)}-\dot {\theta}_2^{(0)}){U^\dag }(t)\right\rangle\; .
\label{Ih}$$ The resulting expression coincides with $(V_yT/\Delta T)I_x$. We thus confirm that $\tilde\alpha_{xy}=I^{(h)}_x/V_y=T\alpha_{xy}$, as required by Onsager’s relation.
The Nernst Coefficient $\nu$ {#sec:nu}
----------------------------
To derive the Nernst coefficient, we next employ Eq. (\[Nernst\_def\]) noting that within our level of approximations, $\alpha_{yy}$ and $\rho_{xy}$ vanish due to particle-hole symmetry. The Nernst signal in the setup depicted in Fig. \[fig1\], defined as $\nu=|V/\Delta TB|$, is hence determined by the product of $\alpha_{xy}$ \[Eq. (\[alpha\_xy\_general\])\] and the longitudinal resistance of the ladder $R_{xx}$. To leading order in $H_D$ \[Eq. (\[HD\])\] [@Giamarchi; @SIT1D; @GS], $${R_{xx}} = \frac{{\pi^3 LDa^2}}{{2{e^2}v^2}}\cos \left( {\frac{\pi }{K}} \right)B\left( {\frac{1}{K},1 - \frac{2}{K}} \right){\left( {\frac{{2\pi a T}}{v}} \right)^{\frac{2}{K}-2}}\; .
\label{rho_xx}$$ At low magnetic field such that $q=2\pi WB/\Phi_0\ll T/v$, this yields an expression for $\nu\approx\alpha_{xy}R_{xx}/B$ of the form $$\nu\approx\nu_0{\mathcal F}(K)\left( {\frac{{2\pi a T}}{v}} \right)^{K+\frac{2}{K}-6}
\label{nu}$$ where the constant prefactor $\nu_0\propto LDJ^2$ and $${\mathcal F}(K)\equiv\frac{\Gamma^2\left(\frac{K}{4}\right)\Gamma\left(1-\frac{K}{2}\right)\Gamma\left(\frac{1}{K}\right)
\left\{\psi^\prime\left(\frac{K}{4}\right)-\psi^\prime\left(1-\frac{K}{4}\right)\right\}}
{2^{2/K}\Gamma^2\left(1-\frac{K}{4}\right)\Gamma\left(\frac{K}{2}\right)\Gamma\left(\frac{1}{K}+\frac{1}{2}\right)}
\label{F(K)}$$ \[$\Gamma(z)$, $\psi^\prime(z)$ are the Gamma and Trigamma functions, respectively [@GRbook]\].
The resulting dependence of $\nu$ on $K$, the parameter which tunes the SIT in the SC wires, is depicted in Fig. \[fig2\] for different values of $T\ll v/a$, and for low magnetic fields where $vq\ll T$. In this regime, $\nu$ exhibits a pronounced maximum at $K^\ast(T)$, slightly below the transition from SC to insulator [@SIT1D; @zaikin] ($K_c=2$). As $T$ is lowered, the peak becomes progressively enhanced and $K^\ast\sim\sqrt{2}$ as dictated by the rightmost exponential factor in Eq. (\[nu\]). This non-monotonic behavior can be traced back to the competition between the electric resistance (which signifies the rate of phase-slips) and $\alpha_{xy}$ (which signifies the strength of the diamagnetic response).
![(color online) Isotherms of $\nu$ as a function of $K$ for $v/2\pi a=1$K, $T=0.06,0.07,0.08,0.09,0.1$K. \[fig2\] ](nu_vs_K.pdf){width="1.0\linewidth"}
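Up to the overall prefactor $\nu_0$, the isotherms of Fig. \[fig2\] follow directly from Eqs. (\[nu\]) and (\[F(K)\]); the short numerical sketch below (illustrative only, with $\nu_0=1$ and $v/2\pi a=1$ K as in the figure) locates the peak position $K^\ast(T)$.
```python
# Numerical sketch of nu(K, T) from Eqs. (nu) and (F(K)), with nu_0 set to 1.
import numpy as np
from scipy.special import gamma as G, polygamma

def F(K):
    """F(K) of Eq. (F(K)); polygamma(1, x) is the trigamma function psi'(x)."""
    num = (G(K / 4) ** 2 * G(1 - K / 2) * G(1 / K)
           * (polygamma(1, K / 4) - polygamma(1, 1 - K / 4)))
    den = 2 ** (2 / K) * G(1 - K / 4) ** 2 * G(K / 2) * G(1 / K + 0.5)
    return num / den

def nu(K, T, T0=1.0):
    """nu(K, T) in units of nu_0, with T0 = v / (2 pi a) = 1 K as in Fig. 2."""
    return F(K) * (T / T0) ** (K + 2 / K - 6)

Ks = np.linspace(1.05, 1.95, 181)
for T in (0.06, 0.08, 0.10):
    curve = nu(Ks, T)
    i = int(np.argmax(curve))
    # The peak height grows and K* drifts toward sqrt(2) as T is lowered.
    print(T, round(float(Ks[i]), 2), float(curve[i]))
```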
The Magnetization $M$ {#sec:M}
=====================
We next investigate the relation of $\alpha_{xy}$ to the diamagnetic magnetization density $M = - (1/LW)(\partial F /{\partial B})$, assuming that the $B$-dependence of the free energy $F$ is primarily restricted to the flux in the junction area, i.e. the parameter $q$ in Eq. (\[HJ\]). To leading order in $H_J$ and at low magnetic field $B=(\Phi_0/2\pi W)q$, $$\begin{aligned}
\label{FJ}
M &\cong &\frac{2\pi}{\Phi_0}\frac{T}{{2L }}\partial_q{\left\langle {{ {S_J}^2}} \right\rangle _0}\; , \\ \nonumber
S_J &=&- J\int\limits_{ - \frac{L}{2}}^{\frac{L}{2}} dx\int\limits_0^{\frac{1}{T} } {d\tau }\cos \left\{ {{\phi _1}({x},{\tau}) - {\phi _2}({x},{\tau }) - q{x}} \right\}\; .\end{aligned}$$ The resulting expression for $M$ is identical to Eq. (\[alpha\_xy\_general\]) up to a constant prefactor. We thus obtain the relation $$\alpha_{xy}=-\frac{M}{T_0}\; ,\quad T_0\equiv \frac{v}{\pi a\mathcal{C}}\sim\sqrt{E_cE_J}
\label{alpha_xy_M}$$ where the last proportionality relation associates the energy scale $T_0$ with the plasma energy in the Josephson chain forming the legs of the ladder. This resembles the linkage pointed out in earlier literature, except that the thermal energy scale $T$ in the prefactor (as obtained for full-fledged 2D systems) is replaced here by $T_0$, the characteristic scale of quantum dynamical phase-slips.
Discussion {#sec:discussion}
==========
To summarize, we studied the transverse thermoelectric coefficients due to quantum SC fluctuations in a Josephson two–leg ladder, and their relation to diamagnetism. Most importantly, we predict a large Nernst signal, particularly at moderately low temperatures ($Ja\ll T\ll T_0$), where it exhibits a pronounced peak close to the SIT. This behavior reflects a subtle interplay between diamagnetism (favored in the SC phase) and dynamical phase-fluctuations (which proliferate in the insulator).
A crucial step in the derivation of the leading contribution to the transverse Peltier coefficient $\alpha_{xy}$ relies on a correction to the Hamiltonian which violates the Lorentz invariance of the model for quantum phase fluctuations in the SC wires. Such terms are indeed necessary to couple the charge current to the thermal current: the correlation between them is at the heart of the Peltier effect. This point can also be understood from a different angle, using the language of vortex physics: it is in fact possible to view $\alpha_{xy}$ as a dual of the ordinary (longitudinal) thermopower $Q=\Delta V/\Delta T$, which is known to encode the specific heat (or entropy) of charge carriers. In a dual representation, electric current in the transverse direction plays the role of a “voltage” applying a [*longitudinal*]{} (Magnus) force on the vortices, which balances the force imposed by the temperature gradient. The latter is proportional to the entropy of the vortices. Hence, $\alpha_{xy}$ vanishes in the case where vortices carry no entropy (see also Ref. [@Jnernst]).
We finally note that the remarkably simple relation to the entropy per carrier $\alpha_{xy}\sim -(s/B)$, derived for clean (Galilean invariant) systems [@CHR; @BO], does not hold here. Indeed, this relation can be recovered, e.g., employing the Boltzmann equation for an ordinary conductor in the limit $\omega_c\tau\rightarrow\infty$ (with $\omega_c$ the cyclotron frequency and $\tau$ a mean free time [@BoltzmannNotes]). In contrast, for $\omega_c\tau\ll 1$ the same calculation yields $\alpha_{xy}\sim B\tau^2$. In our case, the latter limit is appropriate: while translational invariance holds in the $x$-direction, charge conductance along the $y$-direction is governed by weak tunneling between two discrete points, $\sim J$. We hence expect $\alpha_{xy}\sim BJ^2$, in accord with Eq. (\[alpha\_xy\_general\]).
As a concluding remark, we note that a qualitatively similar behavior of the Nernst effect is expected to hold in 2D SC films, or in an infinite stack of such ladders (which is essentially equivalent to an anisotropic ultrathin SC film). Possibly, it can also explain some properties of the existing data: see, for example, Fig. 3 in the paper by Pourret [*et al.*]{} [@Pourret], where the Nernst signal measured in NbSi films exhibits an increase (and sharpening) of the peak for $T<T_c$. This suggests that more elaborate Josephson-array models can serve as a useful arena for studying transverse thermoelectric effects in disordered SC films.
We thank A. Auerbach, D. Arovas, K. Behnia, A. Frydman, A. Goldman, K. Michaeli and D. Podolsky for useful discussions. E. S. is grateful to the hospitality of the Aspen Center for Physics (NSF Grant No. 1066293). This work was supported by the US-Israel Binational Science Foundation (BSF) grant 2008256, and the Israel Science Foundation (ISF) grant 599/10.
[99]{}
For a review see, e.g., A. I. Larkin and A. A. Varlamov in [*The Physics of Superconductors*]{}, Vol. I, eds. K.-H. Bennemann and J. B. Ketterson (Springer, Berlin, 2003).
For a review and extensive references, see Y. Liu and A. M. Goldman, Mod. Phys. Lett. B **8**, 277 (1994); S. L. Sondhi, S. M. Girvin, J. P. Carini and D. Shahar, Rev. Mod. Phys. **69**, 315 (1997); A. M. Goldman and N. Markovic, Physics Today **51**, 39 (1998); N. Nagaosa, *Quantum Field Theory in Condensed Matter Physics*, Sec. 5.3 (Springer, 1999).
K. Yu. Arutyunov, D. S. Golubev and A. D. Zaikin, Physics Reports [**464**]{}, 1 (2008), and refs. therein.
Z. A. Xu, N. P. Ong, Y. Wang, T. Kakeshita and S. Uchida, Nature [**406**]{}, 486 (2000); Y. Wang, Z. A. Xu, T. Kakeshita, S. Uchida, S. Ono, Y. Ando and N. P. Ong, Phys. Rev. B [**64**]{}, 224519 (2001).
Y. Wang, L. Li and N. P. Ong, Phys. Rev. B [**73**]{}, 024510 (2006).
P. Spathis, H. Aubin, A. Pourret and K. Behnia, Europhys. Lett. [**83**]{}, 57005 (2008).
A. Pourret, P. Spathis, H. Aubin and K. Behnia, New J. Phys. [**11**]{}, 055071 (2009).
S. Ullah and A. T. Dorsey, Phys. Rev. Lett. [**65**]{}, 2066 (1990); S. Ullah and A. T. Dorsey, Phys. Rev. B [**44**]{}, 262 (1991).
I. Ussishkin, S. L. Sondhi and D. A. Huse, Phys. Rev. Lett. [**89**]{}, 287001 (2002).
M. N. Serbyn, M. A. Skvortsov, A. A. Varlamov and V. M. Galitski, Phys. Rev. Lett. [**102**]{}, 067001 (2009); A. Sergeev, M. Reizer, and V. Mitin, Phys. Rev. Lett. [**106**]{}, 139701 (2011); M. N. Serbyn, M. A. Skvortsov, A. A. Varlamov and V. M. Galitski, Phys. Rev. Lett. [**106**]{}, 139702 (2011).
K. Michaeli and A. M. Finkelstein, Europhys. Lett. [**86**]{}, 27007 (2009).
J. M. Kosterlitz and D. J. Thouless, J. Phys. C [**6**]{}, 1181 (1973).
D. Podolsky, S. Raghu and A. Vishwanath, Phys. Rev. Lett. [**99**]{}, 117004 (2007).
M. J. Bhaseen, A. G. Green and S. L. Sondhi, Phys. Rev. Lett. [**98**]{}, 166801 (2007); Phys. Rev. B [**79**]{}, 094502 (2009).
N. R. Cooper, B. I. Halperin and I. M. Ruzin, Phys. Rev. B [**55**]{}, 2344 (1997).
D. L. Bergman and V. Oganesyan, Phys. Rev. Lett. [**104**]{}, 066601 (2010).
A. Sergeev, M. Reizer and V. Mitin, Europhys. Lett. [**92**]{}, 27003 (2010).
Devices of this type have been already fabricated and utilized to study magneto-resistance due to quantum dynamics of vortices; see, e.g., C. Bruder, L.I. Glazman, A.I. Larkin, J.E. Mooij and A. van Oudenaarden, Phys. Rev. B **59**, 1383 (1999).
The Nernst effect in a classical version of such device - a long Josephson junction between two bulk SC - has been measured and studied in detail in earlier literature; see, e.g., G. Yu. Logvenov, V. A. Larkin and V. V. Ryazanov, Phys. Rev. B [**48**]{}, 16853(R) (1993); V. M. Krasnov, V. A. Oboznov and N. F. Pedersen, Phys. Rev. B [**55**]{}, 14486 (1997).
We note that this behavior is confined to an intermediate regime of $T$, where our approximations hold (see Sec. II), and possibly changes in the limit $T\rightarrow 0$.
See, e.g., T. Giamarchi, *Quantum Physics in One Dimension*, (Oxford, New York, 2004).
E. Orignac and T. Giamarchi, Phys. Rev. B **64**, 144515 (2001).
Y. Atzmon and E. Shimshoni, Phys. Rev. B **83**, 220518(R) (2011); Phys. Rev. B **85**, 134523 (2012).
T. Giamarchi and H. J. Schulz, Phys. Rev. B [**37**]{}, 325 (1988).
U. Eckern, G. Sch[ö]{}n and V. Ambegaokar, Phys. Rev. B [**30**]{}, 6419 (1984).
E. Shimshoni, Y. Gefen and S. Fishman, Phys. Rev. B [**40**]{}, 2158 (1989).
A. Rosch and N. Andrei, Phys. Rev. Lett. [**85**]{}, 1092 (2000).
I. S. Gradshteyn and I. M. Ryzhik, *Tables of Integrals, Series and Products* (Academic Press, 1980).
A. D. Zaikin, D. S. Golubev, A. van Otterlo and G. T. Zimanyi, Phys. Rev. Lett. [**78**]{}, 1552 (1997).
Yeshayahu Atzmon, *Superconductivity in low-dimensional systems*, Ph.D. thesis (Bar-Ilan University, 2012); see Appendix C.
---
abstract: 'We study the influence of the environment on an optically induced rotation of a single electron spin in a charged semiconductor quantum dot. We analyze the decoherence mechanisms resulting from the dynamical lattice response to the charge evolution induced in a trion-based optical spin control scheme. Moreover, we study the effect of the finite trion lifetime and of the imperfections of the unitary evolution such as off-resonant excitations and the nonadiabaticity of the driving. We calculate the total error of the operation on a spin-based qubit in an InAs/GaAs quantum dot system and discuss possible optimization against the different contributions. We indicate the parameters which allow for coherent control of the spin with a single qubit gate error as low as $10^{-4}$.'
author:
- 'A. Grodecka'
- 'C. Weber'
- 'P. Machnikowski'
- 'A. Knorr'
title: |
Interplay and optimization of decoherence mechanisms\
in the optical control of spin quantum bits\
implemented on a semiconductor quantum dot
---
Introduction
============
The spin degree of freedom of an excess electron in a charged semiconductor quantum dot (QD) has been proposed as an attractive candidate for use as a qubit [@calarco03; @imamoglu99; @feng03]. The advantage of storing the logical values in spin states is their long coherence time [@hanson03]. In addition, it is possible to optically induce charge dynamics dependent on the spin state of an electron via the Pauli exclusion principle and optical selection rules [@pazy03; @emary07; @economou06]. Coupling between charge states via static [@pazy03; @calarco03] and interband [@nazir04; @lovett05] electric dipole moments allows one to perform quantum conditional operations on two spins. In this way, one can implement single- and two-qubit gates in QD systems. The utilization of optical control methods leads to shorter switching times, on a picosecond time scale, in comparison with nanosecond magnetic control of the spin, which is essential for the implementation of quantum information processing schemes. The capability of encoding and manipulating information at the single-spin level is of great importance and has been experimentally demonstrated recently: the generation [@dutt05] and optical control [@bayer06; @dutt06] of the spin coherence together with a possible read-out of the state of a single confined spin in a QD system [@atature07] can make the implementation of a quantum computer feasible.
A promising scheme of quantum optical control of a spin in a single QD was recently proposed in Ref. . It has been shown that coupling to a trion (charged exciton) state leads to an arbitrary rotation between the two Zeeman-split spin states. In this way, optical coherent control of a spin in a QD via adiabatic Raman transitions is possible. This control protocol does not require an auxiliary fourth state which was needed in a similar scheme previously proposed [@troiani03], removing the requirement for the transfer of the electron between two QDs and the delocalized hole state.
In realistic experiments, the QDs used for the proposed implementation are embedded in a solid state matrix, and confined carriers interact with the phonon bath, leading to a loss of coherence. In optical spin control schemes, spin rotation is achieved by spin-dependent charge evolution. Therefore, the spin decoherence in these schemes results mainly from the lattice response to the evolving charge density [@calarco03; @roszak05; @grodecka05; @parodi06]. This effect has been studied [@roszak05] for a control scheme using an auxiliary state [@troiani03] but not for the scheme of Ref. that seems to be advantageous in some respects. Because of the different scheme of control, the phonon-induced errors in the latter case have a different form and can be attributed to two channels: pure dephasing and phonon-assisted trion generation. Furthermore, since in this control procedure the spin rotation involves a finite trion occupation (unlike in the other scheme [@troiani03]), in addition to the phonon interaction the finite trion lifetime can also lead to decoherence, because of the nonzero probability of the radiative decay of the trion [@caillet07]. Moreover, the operation in the ideal case should be performed adiabatically and the imperfect adiabaticity of the evolution will contribute to the total error of the operation. Finally, since this control scheme involves spectral selection of transitions (in contrast to the other one), off-resonant terms can also lead to an unwanted leakage to the trion state and result in large discrepancies in the desired spin rotation. It is not clear in advance which of these factors (if any) will dominate the decoherence under specific driving conditions. So far, for the spin control scheme of Ref. , only the error resulting from imperfect adiabaticity of the driving and from the finite trion lifetime was evaluated [@caillet07], but the impact of phonons, off-resonant excitations, and the joint decoherence effect have not been studied.
In this paper, we study the combined influence of the phonon and photon environments and of the imperfections of the evolution on an optical spin control scheme based on an off-resonant coupling of the spin states to a trion state in a doped semiconductor QD [@chen04]. As a single-qubit gate, an optically induced arbitrary rotation of the electron spin state is considered. We show that the interactions with phonons and photons are the dominant sources of decoherence in this system. The phonon-induced decoherence has two origins: pure dephasing and phonon-assisted trion generation, with a different dependence on the duration and the detuning of the optical control pulses. We show that by slowing down the evolution, one can considerably decrease only the error due to pure dephasing, while the phonon-assisted trion generation part still results in a large operation error. The contributions to the error resulting from the finite lifetime of the trion and the imperfections of the evolution are also studied. We calculate the total error of the operation and study the nontrivial interplay of the different contributions. In particular, we show that for moderate detunings ($\sim 1$ meV) and long pulse durations the error is dominated by phonon-related effects. We indicate the optimal conditions for the qubit rotation for which the error is very small, with values even below $10^{-4}$, which is essential for coherent quantum control, and discuss the possible optimization against particular contributions to the error.
The paper is organized as follows. In Sec. \[sec:model\], we introduce the model for the qubit based on the spin states of a confined electron in a charged QD. Next, in Sec. \[sec:rotation\], we describe a single-qubit rotation scheme. In Sec. \[sec:imperfections\], we study the imperfections of the evolution. Section \[sec:perturbation\] presents the general perturbative method for describing the effects of the environment on an arbitrary operation on a logical qubit. Section \[sec:decoherence\] contains the results for the error contributions due to the interaction of the carriers with the phonon and photon environments. In Sec. \[sec:interplay\], we calculate the total error of the spin rotation and discuss the interplay and possible optimization of the different sources of the error. Section \[sec:conclusion\] concludes the paper with final remarks. In addition, some technical details are presented in the Appendixes.
Model system {#sec:model}
============
We consider a single semiconductor QD charged with one additional electron. A magnetic field applied in the $x$ direction \[see Fig. \[fig:stirap\](a)\] causes a Zeeman splitting $\Delta_{\rm B}$ between the electron states with spin-up and spin-down (with respect to the direction of the magnetic field) which define the two logical qubit states $|0\rangle$ and $|1\rangle$ in the proposed scheme [@chen04]. These two states are off-resonantly coupled (with a detuning $\Delta$) to a trion state $|2\rangle$ by $\sigma_{+}$–polarized laser pulses with real amplitudes $\Omega_{0}(t)$ and $\Omega_{1}(t)$ and with different phases \[see Fig. \[fig:stirap\](a,b)\]. These two electron spin states $|0\rangle$ and $|1\rangle$ together with the trion state $|2\rangle$ compose a three-level system, known in the literature as a $\Lambda$ system [@scully97].
The Hamiltonian of the system is given by $$H = H_{\mathrm{C}} + H_{\mathrm{env}} + V,$$ where the first term (control Hamiltonian) describes the carriers and their interaction with the classical driving field (the laser beam) and the second is the sum of the free phonon $H_{\mathrm{ph}}$ and photon $H_{\mathrm{rad}}$ contributions. The last part, $V = H_{\mathrm{c-ph}}+ H_{\mathrm{c-rad}}$, describes the coupling of the carriers to the environment, where the first term denotes the carrier-phonon interaction and the second describes the coupling of the carriers to the photon field.
The control Hamiltonian for this system in dipole and rotating wave approximations is $$\begin{aligned}
H_{\mathrm C} & = & \sum_{n} \epsilon_{n} |n {\rangle\!\langle}n | \\
&& + \frac{\hbar}{2}\left[ \sum_{n=0,1} \Omega_{n} (t)
e^{i(\omega_{n} t + \gamma_{n})}
(|0\rangle+|1\rangle)\langle 2| + \mathrm{H.c.} \right],\end{aligned}$$ where $\epsilon_{n}$ are the energies of the corresponding states, $\omega_{n}$ are the laser frequencies, and $\gamma_{n}$ are the phases of the pulses. The frequencies $\omega_{0}$ and $\omega_{1}$ have to satisfy the Raman conditions, namely that the detunings from the corresponding transition energies $\epsilon_{2} - \epsilon_{n}$ should be the same. To this end, we set $\omega_{n} = (\epsilon_{2} - \epsilon_{n})/\hbar - \Delta$ ($n = 0, 1$), where $\Delta$ is the common Raman detuning. We perform a unitary transformation to the rotating frame with $|\tilde n\rangle = e^{i(\omega_{n} t - \gamma_{0})} |n\rangle$ ($n=0,1$). We can set $\gamma_{0} = \gamma$, $\gamma_{1}=0$ because only the relative phase is important. In the rotating frame, the Hamiltonian reads $$\begin{aligned}
\label{hc}
H_{\mathrm C} & = & \hbar \Delta |2 {\rangle\!\langle}2 | + \frac{\hbar}{2} \Omega_{0} (t)
\left(e^{i \gamma} |\tilde 0{\rangle\!\langle}2| + e^{-i \gamma}|2 {\rangle\!\langle}\tilde 0| \right) \\ \nonumber
&& + \frac{\hbar}{2} \Omega_{1} (t)
\left(|\tilde 1{\rangle\!\langle}2| + |2 {\rangle\!\langle}\tilde 1| \right)+H_{\mathrm{C}}',\end{aligned}$$ where $$H_{\mathrm{C}}'=
\frac{\hbar}{2}\Omega_{1}(t) e^{i \Delta_{\rm B} t} |\tilde 0{\rangle\!\langle}2|
+\frac{\hbar}{2}\Omega_{0}(t)e^{-i \Delta_{\rm B} t + i \gamma} |\tilde 1{\rangle\!\langle}2|
+ {\rm H.c.}$$ contains oscillating (off-resonant) terms which can be treated as a perturbation to the ideal evolution generated by the Hamiltonian given in Eq. (\[hc\]).
The free phonon Hamiltonian has the form $$H_{\mathrm{ph}} = {\sum_{\bm{k}}}\hbar{\omega_{\bm{k}}}^{\phantom{\dag}} {\beta_{\bm{k}}^{\dag}}{\beta_{\bm{k}}}^{\phantom{\dag}},$$ where ${\beta_{\bm{k}}^{\dag}}$ and ${\beta_{\bm{k}}}$ are phonon creation and annihilation operators, respectively, with corresponding frequencies ${\omega_{\bm{k}}}$, where ${\bm{k}}$ is the phonon wave number. The unperturbed photon Hamiltonian in the absence of charges reads $$H_{\mathrm{rad}} = \sum_{{\bf q},\lambda}\hbar\omega'_{\bf q}
c_{{\bf q} \lambda}^{\dag} c_{{\bf q} \lambda}^{\phantom{\dag}}$$ with photon creation and annihilation operators $c_{{\bf q} \lambda}^{\dag}$ and $c_{{\bf q} \lambda}$, respectively. Here, $\omega'_{\bf q} = c |{\bf q}|/n_{\mathrm{r}}$ is the photon frequency, with the speed of light in vacuum $c$, the photon wave number ${\bf q}$, and the refractive index of the semiconductor medium $n_{\mathrm{r}}$; $\lambda$ labels the polarization. The unperturbed evolution of the system and of the decoupled environment is described by $H_{\mathrm{C}}+H_{\mathrm{ph}}+H_{\mathrm{rad}}$.
The interaction of the carriers with the environment includes the coupling to phonons and photons. The carrier-phonon interaction reads $$\label{hcph}
H_{\mathrm{c-ph}} = \sum_{n,n'} |n {\rangle\!\langle}n'| {\sum_{\bm{k}}}f_{nn'}({\bm{k}})
\left({\beta_{\bm{k}}}^{\phantom{\dag}} + {\beta_{-\bm{k}}}^{\dagger} \right)$$ with coupling constants $f_{nn'}({\bm{k}})$ having the symmetry $f_{nn'}({\bm{k}}) = f_{n'n}^{*}(-{\bm{k}})$. The states $|0\rangle$ and $|1\rangle$ correspond to a single electron confined in the same QD structure and differ only by the spin orientation. Therefore, they have the same orbital wave functions, and thus the coupling constants $f_{00}({\bm{k}})$ and $f_{11}({\bm{k}})$ are the same. These states have different spins so that $f_{01}({\bm{k}})$ and $f_{10}({\bm{k}})$ would describe a ‘direct’ phonon-assisted spin flip, mediated by the spin-orbit coupling [@golovach04]. However, in the optical spin-control schemes, the effect of this process is many orders of magnitude weaker than the decoherence due to the dynamical response to charge evolution [@roszak05]. Therefore, we neglect this coupling and set these coefficients equal to zero.
Initially, before the arrival of the pulses, the lattice is in a configuration where one electron is present in the QD surrounded by a lattice deformation (a polaron-like state [@vagov02; @jacak03; @machnikowski07]). We redefine the phonon modes to obtain the ground state of the carrier-phonon system corresponding to this new lattice equilibrium. In terms of the new phonon operators $${b_{\bm{k}}}= {\beta_{\bm{k}}}+ \frac{f_{00}({\bm{k}})}{\hbar{\omega_{\bm{k}}}},$$ the interaction with the lattice modes reads $$\begin{aligned}
\label{Hc-ph}
H_{\mathrm{c-ph}} & = & |2 {\rangle\!\langle}2| {\sum_{\bm{k}}}F_{22}({\bm{k}})\left({b_{\bm{k}}}^{\phantom{\dag}}+{b_{-\bm{k}}^{\dag}}\right) \\ \nonumber
&& + \left[ | \tilde 1 {\rangle\!\langle}2| {\sum_{\bm{k}}}F_{12}({\bm{k}})\left({b_{\bm{k}}}^{\phantom{\dag}}+{b_{-\bm{k}}^{\dag}}\right) + \mathrm{H.c.} \right],\end{aligned}$$ where $F_{22}({\bm{k}}) = f_{22}({\bm{k}}) - f_{00}({\bm{k}})$ and $F_{12}({\bm{k}}) = f_{12}({\bm{k}}) e^{-i(\omega_{1}t-\gamma_{0})}$. Additionally, there is a polaron-like energy shift which is included in the control Hamiltonian $H_{\mathrm{C}}$. The interband off-diagonal phonon coupling \[the second term in Eq. (\[Hc-ph\])\] has a negligible effect due to energetic reasons [@roszak05] and can be disregarded. For spectrally narrow pulses, which are needed for the adiabaticity of the procedure, only acoustic phonons contribute to the dephasing. For overlapping electron and hole wave functions, the excitation of a confined trion does not involve considerable charge redistribution and the effect of the piezoelectric coupling is very weak [@krummheuer02]. Therefore, we consider only the interaction with longitudinal acoustic phonons via the deformation potential coupling.
We assume for simplicity that the trion state is described by a product of electron and hole wave functions $\Psi_{\rm e(h)}(\bm{r})$ which are the same as those corresponding to a single confined carrier. This is a reasonable approximation in the strong confinement limit, where the Coulomb interaction is of minor influence on the wave functions and leads only to energy renormalization effects which are included in the transition energies. The coupling between the trion and phonons \[the first term in Eq. (\[Hc-ph\])\] has the form (see Appendix \[app:couplings\]) $$\label{f22}
F_{22}({\bm{k}}) = f_{22}({\bm{k}})-f_{00}({\bm{k}})=
\sqrt{\frac{\hbar k}{2 \rho V c_{\mathrm l}}}
(D_{\mathrm e} -D_{\mathrm h}) \mathcal{F}({\bm{k}}),$$ where $\rho$ is the crystal density, $V$ is the normalization volume of the phonon modes, $c_{\mathrm{l}}$ is the longitudinal speed of sound, and $D_{\mathrm{e(h)}}$ is the deformation potential constant for the electron (hole). The form factor $\mathcal F({\bm{k}})$ depends on the geometry of the wave functions $\Psi_{\rm e(h)}(\bm{r})$. We assume Gaussian wave functions $$\label{wavefunction}
\Psi_{\rm e(h)}(\bm{r}) \sim \exp{\left(-\frac{r_{\perp}^{2}}{
2l_{\rm e(h)}^{2}} -\frac{z^{2}}{2 l_{z}^{2}} \right)},$$ where $l_{\mathrm{e(h)}}$ is the confinement size for an electron (a hole) in the $xy$ plane, $l_{z}$ is the common confinement size along $z$, and $r_{\perp}$ and $z$ are the corresponding components of the position vector. Then, neglecting the small correction resulting from the difference between the electron and hole confinement sizes, one finds $$\label{formfactor}
\mathcal{F}({\bm{k}}) = e^{-(k_{\bot}l/2)^{2}-(k_{z}l_{z}/2)^{2}},$$ where $l^{2}=(l_{\mathrm{e}}^{2}+l_{\mathrm{h}}^{2})/2$ and $k_{\bot}$ and $k_{z}$ are the components of the wave vector in the $xy$ plane and along $z$, respectively (see Appendix \[app:couplings\] for details).
The carrier-photon interaction Hamiltonian in the rotating wave approximation reads (in the rotating frame) $$\begin{aligned}
\label{hcrad}
H_{\rm{c-rad}} & = & \frac{1}{\sqrt{2}}\sum_{{\bf q},\lambda}
g_{{\bf q} \lambda}^{\phantom{\dag}} c_{{\bf q} \lambda}^{\dag}
\left[ e^{i \omega_{0} t} |\tilde{0}{\rangle\!\langle}2|
\right. \\ \nonumber && \left. + e^{i (\omega_{1} t - \gamma)}
|\tilde{1}{\rangle\!\langle}2| \right] + \mathrm{H.c.}\end{aligned}$$ with the coupling constants $$g_{{\bf q} \lambda} = -i \sum_{ \alpha = 1}^{3} d_{ \alpha }
\sqrt{\frac{\hbar \omega'_{\bf q}}{2 \epsilon_{0} \epsilon_{\rm r} V}}
e_{ \alpha }^{(\lambda)}({\bf q}).$$ Here, $\alpha$ denotes Cartesian components, $d_{ \alpha }$ is the interband dipole moment, $\epsilon_{0}$ and $\epsilon_{\rm r}=n_{\mathrm{r}}^{2}$ are the vacuum dielectric constant and the semiconductor relative dielectric constant, respectively, and $\bm{e}^{(\lambda)}({\bf q})$ is the unit polarization vector.
In Tab. \[tab:param\], the material parameters (corresponding to a self-assembled InAs/GaAs system) are given.
-------------------------------------- --------------------------------- ---------------
Deformation potential coupling $D_{\mathrm{e}}-D_{\mathrm{h}}$ 8 eV
Crystal density $\rho$ 5360 kg/m$^3$
Speed of sound (longitudinal) $c_{\mathrm{l}}$ 5150 m/s
Wave function width in-plane $l$ 5 nm
Wave function width in $z$ direction $l_{z}$ 1 nm
Trion decay rate $\Gamma$ 1 ns$^{-1}$
Zeeman splitting $\hbar\Delta_{\rm B}$ 1 meV
-------------------------------------- --------------------------------- ---------------
: \[tab:param\]System parameters used in the calculations.
Unperturbed spin rotation {#sec:rotation}
=========================
In this section, we present the formal description of the spin rotation procedure [@chen04] without the interaction with the environment. In the ideal case, the evolution is slow and may be described by invoking the adiabatic theorem [@messiah66].
In the considered three-level system \[Fig. \[fig:stirap\](b)\], it is possible to perform an arbitrary rotation of the electron spin via the intermediate trion state $|2\rangle$. To show this, one sets in the control Hamiltonian $H_{\mathrm{C}}$ \[Eq. (\[hc\])\] $$\Delta = \Theta(t) \cos[2\phi(t)]$$ and $$\Omega_{0}(t) = \Omega(t)\cos\beta,\quad
\Omega_{1}(t) = \Omega(t)\sin\beta,$$ where $$\Omega(t) = \Theta(t) \sin[2\phi(t)].$$ Here, $\Theta(t)=\sqrt{\Omega^{2}(t)+\Delta^{2}}$ is the (time-dependent) effective Rabi frequency and $$\sin^{2}\phi(t) = \frac{1}{2}\left(
1-\frac{\Delta}{\sqrt{\Omega^{2}(t)+\Delta^{2}}} \right).$$ For $\Delta>0$, switching the pulses off corresponds to $\phi\to 0$.
To ensure an adiabatic evolution, the angle $\beta$, defined via $\tan \beta = \Omega_{1}/\Omega_{0}$, should vary slowly in time. We choose $\Omega_{0}$ and $\Omega_{1}$ to have the same envelope shapes, so that $\beta$ becomes time independent.
We introduce new basis states $$\begin{aligned}
|B\rangle & = & e^{i\gamma} \cos\beta |\tilde 0\rangle + \sin\beta |\tilde 1\rangle,\\
|D\rangle & = & e^{i\gamma} \sin\beta |\tilde 0\rangle - \cos\beta |\tilde 1\rangle,\end{aligned}$$ which are superpositions of the qubit states $|0\rangle$ and $|1\rangle$ selected by the laser pulses, where only the *bright* state $|B\rangle$ is coupled to the trion state and the orthogonal *dark* state $|D\rangle$ remains unaffected. The Hamiltonian $H_{\mathrm{C}}$ which generates the ideal evolution \[Eq. (\[hc\])\] now reads $$H_{\rm C} = \hbar\Delta |2{\rangle\!\langle}2| + \frac{\hbar}{2}\Omega(t)
\left( |B {\rangle\!\langle}2| + |2 {\rangle\!\langle}B| \right)$$ and has the instantaneous eigenstates
$$\begin{aligned}
\label{basis}
|a_{0}(t)\rangle & = & |D\rangle, \\
|a_{1}(t)\rangle & = & \cos\phi(t) |B\rangle -\sin\phi(t) |2\rangle,\\
|a_{2}(t)\rangle & = & \sin\phi(t) |B\rangle + \cos\phi(t) |2\rangle\end{aligned}$$
with the corresponding eigenvalues $$\begin{aligned}
\lambda_{0}(t) & = & 0,\\
\lambda_{1}(t) & = & -\hbar\Theta(t)\sin^{2}\phi(t)
= \frac{\hbar}{2}\left(\Delta - \sqrt{\Delta^2 + \Omega^{2}(t)} \right),\\
\lambda_{2}(t) & = & \hbar\Theta(t)\cos^{2}\phi(t)
= \frac{\hbar}{2}\left(\Delta + \sqrt{\Delta^2 + \Omega^{2}(t)} \right). \end{aligned}$$
The system evolution is realized by the change of the so-called tipping angle $\phi(t)$ \[see Fig. \[fig:omega\](a)\], i.e., by the change of the pulse amplitudes. The condition for adiabaticity is a slow change of $\phi(t)$ in comparison with the rate of the adiabatic motion given by the effective Rabi frequency, $|\dot{\phi}(t)|\ll \Theta(t)$. If the adiabatic condition is met, the state of the system (initially a combination of $|0\rangle$ and $|1\rangle$) remains in the subspace spanned by the two eigenstates $|a_{0}\rangle$ and $|a_{1}\rangle$ during the whole process.
The evolution operator $U_{\mathrm C}(t)$ in the absence of the environment perturbation (in the basis $|D\rangle$, $|B\rangle$, $|2\rangle$) has the form $$\label{Uan}
U_{\mathrm{C}} (t) =
\left( \begin{array}{ccc}
1 & 0 & 0 \\
0 & e^{-i\Lambda_{1}(t)} \cos\phi(t) & e^{-i\Lambda_{2}(t)} \sin\phi(t) \\
0 & -e^{-i\Lambda_{1}(t)} \sin\phi(t) & e^{-i\Lambda_{2}(t)} \cos\phi(t)
\end{array}\right),$$ where $$\Lambda_{n}(t) = \frac{1}{\hbar}
\int_{t_{0}}^{t} d\tau \lambda_{n}(\tau), \;\;\; (n = 1,2).$$ From Eq. (\[Uan\]), it is clear that after the operation, when the pulses are switched off ($\phi\to 0$), an arbitrary initial electron spin state will have acquired a phase in the $|B\rangle$ component with respect to the orthogonal dark superposition $|D\rangle$. The resulting unitary transformation in the qubit subspace can be written as $$U_{\mathrm{C}}(\infty) = e^{-\frac{i}{2}\Lambda_{1}\vec \sigma \cdot \vec n} =
\cos\frac{\Lambda_{1}}{2}\mathbf{I} -i \sin\frac{\Lambda_{1}}{2} \vec
\sigma \cdot \vec n,$$ where $\mathbf{I}$ is the unit operator and $\vec{\sigma}$ is the vector of Pauli matrices in the original qubit basis $|0\rangle$, $ |1\rangle$. This transformation corresponds to a rotation through an angle $\Lambda_{1}(\infty)$ about the axis $\vec n$, $$\vec n = [ -\cos \gamma \sin(2 \beta), \sin \gamma\sin(2\beta), -\cos(2\beta)],$$ which depends on the ratio of the pulse envelopes and on the relative phase of the pulses.
For convenience, we write the general initial state of the spin qubit in the form $$\label{psi0}
|\psi_{0}\rangle = \cos \frac{\vartheta}{2}|B\rangle
+e^{i\varphi} \sin\frac{\vartheta}{2} |D\rangle,$$ where $\vartheta$ and $\varphi$ are angles on a Bloch sphere. We will also need two states orthogonal to $|\psi_{0}\rangle$, which we choose in the form $$|\psi_{1}\rangle = \sin \frac{\vartheta}{2}|B\rangle
- e^{i\varphi} \cos\frac{\vartheta}{2} |D\rangle,
\;\;\;\;\;\;\; |\psi_{2}\rangle = |2\rangle.$$
We assume that the rotation is performed using Gaussian control pulses $$\Omega(t) = \Omega_{*} \exp{\left(-\frac{t^{2}}{2\tau_{\rm
p}^{2}}\right)},$$ where $\Omega_{*}$ is the amplitude of the control pulse and $\tau_{\rm p}$ its duration. In the following discussion, we will treat the pulse duration $\tau_{\rm p}$ and the detuning $\Delta$ as tunable parameters, while the pulse amplitude $\Omega_{*}$ will be adjusted to achieve the desired rotation of the qubit. The amplitude as a function of detuning for a $\pi/2$ rotation about the $z$ axis \[growth direction, see Fig. \[fig:stirap\](a)\] is plotted for different pulse durations in Fig. \[fig:omega\](b).
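The adjustment of $\Omega_{*}$ can be sketched numerically: for given $\Delta$ and $\tau_{\rm p}$, one solves $|\Lambda_{1}(\infty)|=\pi/2$ for the pulse amplitude. The following minimal sketch (an illustration only; the integration window and the root-bracketing interval are arbitrary choices) reproduces the type of dependence shown in Fig. \[fig:omega\](b).

```python
# Illustrative sketch: pulse amplitude Omega_* for a pi/2 rotation, obtained from
# |Lambda_1(inf)| = pi/2 with lambda_1(t) = (hbar/2)[Delta - sqrt(Delta^2 + Omega^2(t))]
# and a Gaussian pulse Omega(t) = Omega_* exp(-t^2 / (2 tau_p^2)).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

HBAR = 0.6582  # meV * ps

def rotation_angle(omega_star, delta, tau_p):
    """|Lambda_1(infinity)|; frequencies in rad/ps, tau_p in ps."""
    integrand = lambda t: 0.5 * (np.sqrt(
        delta**2 + (omega_star * np.exp(-t**2 / (2 * tau_p**2)))**2) - delta)
    val, _ = quad(integrand, -8 * tau_p, 8 * tau_p)
    return val

def amplitude_for_pi_half(delta_meV, tau_p_ps):
    delta = delta_meV / HBAR
    f = lambda om: rotation_angle(om, delta, tau_p_ps) - np.pi / 2
    return brentq(f, 1e-4, 50.0) * HBAR   # back to meV

for delta_meV in (0.5, 1.0, 2.0, 4.0):
    print(delta_meV, amplitude_for_pi_half(delta_meV, tau_p_ps=10.0))
```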
Imperfections of the unitary evolution {#sec:imperfections}
======================================
Before we study the impact of the environment, let us discuss the limitations imposed on the driving by the requirement of adiabatic evolution, as well as the effect of the oscillatory terms contained in $H_{\mathrm{C}}'$ in Eq. (\[hc\]) and neglected in the discussion presented in the previous section (the former has also been studied in Ref. ).
A perfectly adiabatic evolution does not transfer the qubit states $|0\rangle$ and $|1\rangle$ into the trion state $|2\rangle$, which is only slightly occupied during the gating. In realistic experiments, the parameters cannot be changed infinitely slowly, so that there is a nonzero probability of a jump from the ideal instantaneous (adiabatic) state to one of the other states. Representing the exact system state in terms of the adiabatic eigenstates \[Eqs. (\[basis\]-c)\], $$|\psi\rangle=\sum_{n}c_{n}(t)e^{-i\Lambda_{n}(t)}|a_{n}(t)\rangle,$$ one finds the equation for the probability amplitudes [@messiah66], $$\dot{c}_{m}(t) = -\sum_{n}e^{i\left[\Lambda_{m}(t)-\Lambda_{n}(t)\right]}
\langle a_{m}(t) | \dot{a}_{n}(t) \rangle c_{n}(t).$$ The state $|a_{0}\rangle$ is time independent and represents the dark state $|D\rangle$ decoupled from the trion state $|2\rangle$. Moreover, $\langle a_{0}(t)|\dot{a}_{1}(t)\rangle=0$. Therefore, to the leading order, the only unwanted transition is to the state $|a_{2}\rangle$, which becomes the trion state after switching off the pulses. The corresponding amplitude is $$c_{2}^{\mathrm{na}}\approx -\int_{-\infty}^{\infty} dt \;
e^{i\left[\Lambda_{2}(t)-\Lambda_{1}(t)\right]}
\langle a_{2}(t) | \dot{a}_{1}(t) \rangle c_{1}(0).$$ If the initial state is $|\psi_{0}\rangle$, as given by Eq. (\[psi0\]), then $c_{1}(0)=\cos(\vartheta/2)$ and $$\label{eq:cna}
c_{2}^{\mathrm{na}}\approx \cos \frac{\vartheta}{2}
\int_{-\infty}^{\infty} dt \; e^{i\left[\Lambda_{2}(t)-\Lambda_{1}(t)\right]} \dot{\phi}(t).$$
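Equation (\[eq:cna\]) is readily evaluated on a time grid. The sketch below (an illustration with representative, not optimized, parameter values) computes $\phi(t)$ and $\Lambda_{2}(t)-\Lambda_{1}(t)$ for a Gaussian pulse and returns the nonadiabatic leakage probability $|c_{2}^{\mathrm{na}}|^{2}$ for $\cos(\vartheta/2)=1$.

```python
# Illustrative sketch: nonadiabatic leakage amplitude of Eq. (eq:cna) for a
# Gaussian pulse. Frequencies in rad/ps, times in ps; cos(vartheta/2) = 1.
import numpy as np
from scipy.integrate import cumulative_trapezoid, trapezoid

HBAR = 0.6582  # meV * ps

def leakage_na(delta_meV, omega_star_meV, tau_p_ps, window=8, n=40001):
    delta = delta_meV / HBAR
    omega_star = omega_star_meV / HBAR
    t = np.linspace(-window * tau_p_ps, window * tau_p_ps, n)
    Omega = omega_star * np.exp(-t**2 / (2 * tau_p_ps**2))
    Theta = np.sqrt(delta**2 + Omega**2)
    phi = 0.5 * np.arccos(delta / Theta)                 # tipping angle
    phase = cumulative_trapezoid(Theta, t, initial=0)    # Lambda_2 - Lambda_1
    c2 = trapezoid(np.exp(1j * phase) * np.gradient(phi, t), t)
    return abs(c2)**2                                    # contribution to delta_u

print(leakage_na(delta_meV=1.0, omega_star_meV=0.48, tau_p_ps=10.0))
```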
The other source of imperfections in the controlled evolution are the rotating (off-resonant) terms contained in the Hamiltonian $H_{\mathrm{C}}'$, which reads in the new basis $$\begin{aligned}
\lefteqn{H_{\mathrm{C}}' = } \\
&& \frac{\hbar}{2}\Omega(t) \left[
e^{i(\Delta_{\rm B} t - \gamma)} \sin^{2}\beta
- e^{-i(\Delta_{\rm B} t - \gamma)} \cos^{2}\beta \right] |D {\rangle\!\langle}2| \\
&& + \frac{\hbar}{2} \Omega(t) \cos(\Delta_{\rm B} t - \gamma)
\sin(2 \beta) |B {\rangle\!\langle}2| + {\rm H.c.}\end{aligned}$$ We assume that the correction to the unitary evolution resulting from these terms is small and treat them perturbatively. The effects of the additional Hamiltonian $H'_{\mathrm{C}}$ may be of two kinds: additional unitary rotation within the computational space and leakage to the trion state. The former can be taken into account when designing the control pulses and compensated by a suitable modification of the control parameters. Therefore, we treat only the latter as an error. The amplitude for the trion excitation is given by
$$\begin{aligned}
c_{2}^{\mathrm{off}} & = & \int_{-\infty}^{\infty}dt
\langle 2|H_{\mathrm{C}}'(t) |\psi_{0}\rangle = \\ &&
\frac{1}{2} \cos \frac{\vartheta}{2} \sin(2\beta)
\int_{-\infty}^{\infty}dt \; e^{i\left[\Lambda_{2}(t)-\Lambda_{1}(t)\right]} \Omega (t)
\cos (\Delta_{\rm B} t - \gamma) \cos [2 \phi(t)] \\
&& + \frac{1}{2} e^{i \varphi} \sin \frac{\vartheta}{2}
\int_{-\infty}^{\infty}dt \; e^{i \Lambda_{2}(t)}\Omega (t) \cos\phi(t)
\left[ e^{-i(\Delta_{\rm B}t - \gamma)} \sin^{2}\beta
- e^{i(\Delta_{\rm B}t - \gamma)} \cos^{2}\beta \right],\end{aligned}$$
where $H_{\mathrm{C}}'(t)=U^{\dag}_{\rm C}(t)H_{\mathrm{C}}'U^{\phantom{\dag}}_{\rm C}(t)$ is the relevant Hamiltonian in the interaction picture with respect to the perfect evolution described by the evolution operator in Eq. (\[Uan\]).
The error due to both unitary corrections described above is given by the total probability of leakage to the trion state, $$\delta_{\mathrm{u}}
=\left|c_{2}^{\mathrm{na}}+c_{2}^{\mathrm{off}}\right|^{2},$$ and is plotted in Fig. \[fig:error-u\] for a $\pi/2$ rotation about the $z$ axis as a function of detuning for two values of the Zeeman splitting and different pulse durations. If the values of the detuning approach the Zeeman splitting $\Delta_{\rm B}$, the error becomes very large, since one spin state is then almost resonantly coupled to the trion state. This induces a large trion occupation which inhibits a coherent spin rotation in this scheme. If the evolution is fast (short pulse durations $\tau_{\rm p}$), the adiabatic condition is not met for small detunings, which results in larger errors.
In addition, the unitary error shows oscillations, visible in Figs. \[fig:error-u\](a,b), which are due to the nonadiabatic contribution. In order to understand their origin, let us note that, in spite of the smooth Gaussian pulse envelope, the tipping angle evolves in a step-wise manner \[see Fig. \[fig:omega\](a)\] (especially for strong pulses). Thus, $\dot{\phi}(t)$ has two peaks of opposite signs at $t=\pm t_1$, where $t_1$ is a certain time, depending on the pulse duration $\tau_{\rm p}$ ($t_1\approx 10$ ps in Fig. \[fig:omega\](a)). Hence, according to Eq. (\[eq:cna\]), the probability for a non-adiabatic jump is approximately proportional to $\sin^{2}[\Lambda_{2}(t_1)-\Lambda_{1}(t_1)-\Lambda_{2}(-t_1)+\Lambda_{1}(-t_1)]$ and is therefore an oscillating function of the control parameters. This oscillation of the transition probability reflects the interference of the amplitudes for non-adiabatic jumps when the trion admixture is switched on and off.
For reasonably long pulses, one can indicate two regimes of the detuning where the operation on the qubit has a high fidelity. The values of the detuning have to be chosen either above the Zeeman splitting or below it and larger than $0.1$ meV. To take advantage of the available fast optical control methods and ensure the adiabaticity of the evolution, one can perform the operation with short pulse durations of several picoseconds and detunings of a few meV.
Environment perturbation during operation on the qubit {#sec:perturbation}
======================================================
In this section, we summarize the general method for describing the effects of a perturbation due to the environment on an arbitrary operation on a qubit. A full account of this approach is given in earlier works [@grodecka05; @roszak05]. Here, we give a brief overview for the sake of completeness and for reference in the following sections.
The evolution in the absence of the perturbation due to the environment is described by the unitary evolution operator $U_{0}(t) = U_{\mathrm C}(t) \otimes e^{-i H_{\mathrm{env}} t}$. The effect of the interaction with the environment is calculated using the second-order Born expansion of the evolution equation for the density matrix. We include the effect of the driving field non-perturbatively and treat the carrier-environment interaction within a perturbation theory.
The reduced density matrix of the qubit reads $$\label{dens-mat}
\rho (t) = U_{0}(t) [\rho_{0}+\rho^{(2)}(t)] U_{0}^{\dag}(t),$$ where $\rho^{(2)}(t)$ is the correction to the density matrix resulting from the interaction with the environment (in the interaction picture) and $\rho_{0}$ the initial state of the qubit, which is assumed to be pure, $\rho_{0} = |\psi_{0} {\rangle\!\langle}\psi_{0} |$. The initial state of the system (qubit together with the environment) has the form $\rho_{0}\otimes \rho_{T}$, where $\rho_{T}$ is the thermal equilibrium state of the environment bath.
To quantify the quality of the operation on a qubit, we use the *fidelity* [@nielsen00] $$\label{fidel}
F = \langle \psi_{0} |U_{0}^{\dag}(\infty) \rho (\infty) U_{0}(\infty) |
\psi_{0} \rangle^{1/2},$$ which is a measure of the overlap between the ideal (pure) final state without perturbation, $U_{0}(\infty)|\psi_{0}\rangle$, and the actual final state of the system given by the density matrix $\rho (\infty)$. If the procedure is performed ideally, i.e. without discrepancies from the desired qubit operation, then $F=1$. The fidelity loss $\delta = 1 - F^{2}$ is referred to as the *error* of the quantum gate. Inserting Eq. (\[dens-mat\]) into Eq. (\[fidel\]), with $\rho_{0}=|\psi_{0}{\rangle\!\langle}\psi_{0}|$, one finds $$\label{error}
\delta = -\langle \psi_{0} | \rho^{(2)} (\infty) | \psi_{0} \rangle.$$
The density matrix correction is calculated from a perturbation expansion [@cohen98]: $$\rho^{(2)}(t) =
-\frac{1}{\hbar^{2}}\int_{t_{0}}^{t}d\tau\int_{t_{0}}^{\tau}d\tau'
\operatorname{Tr}_{\mathrm{R}}[V(\tau),[V(\tau'),\rho_{0}]],$$ where $\operatorname{Tr}_{\rm R}$ is the trace with respect to the reservoir degrees of freedom and $V(t) = U_{0}^{\dag}(t) V U_{0}(t)$ is the carrier-environment Hamiltonian in the interaction picture. It can always be written in the general form $$V = \sum_{nn'} S_{nn'} \otimes R_{nn'},$$ where $S_{nn'}$ acts on the carrier subsystem, $R_{nn'}$ acts on the environment of phonons or photons, and $R_{nn'}=R_{n'n}^{\dag}$, $S_{nn'}=S_{n'n}^{\dag}$. It is easy to see that Eqs. (\[hcph\]) and (\[hcrad\]) have this structure.
It is convenient to introduce two sets of spectral functions. The first is a family of spectral densities of the reservoir (phonons or photons), defined as $$\label{spectral}
R_{nn',mm'}(\omega)
= \frac{1}{2\pi}\int dt \; e^{i \omega t} \langle R_{nn'}(t) R_{mm'} \rangle,$$ with the operator $R_{nn'}$ transformed into the interaction picture $R_{nn'}(t) = U^{\dag}_{0}(t) R_{nn'} U_{0}(t)$. The functions from the second set are nonlinear spectral characteristics of the driven evolution, $$\label{S}
S_{nn',mm'}(\omega) =
\sum_{i} \langle \psi_{0} | Y_{n'n}^{\dag}(\omega) |
\psi_{i}{\rangle\!\langle}\psi_{i}|Y_{mm'}|\psi_{0}\rangle,$$ where $|\psi_{i}\rangle$ span the subspace orthogonal to the initial state $|\psi_{0}\rangle$ and $$\label{Y}
Y_{nn'}(\omega) = \int dt \; S_{nn'}(t) e^{-i \omega t},$$ with $S_{nn'}(t) = U^{\dag}_{0}(t) S_{nn'} U_{0}(t)$. The various terms in Eq. (\[S\]) describe transitions to different states orthogonal to the desired state $|\psi_{0}\rangle$. In the long time limit for a time-independent system, all of them either vanish or turn into energy-conserving Dirac delta functions, restoring Fermi’s golden rule for transition probabilities [@alicki02]. In the general case, they are broadened due to time dependence.
Using the definitions in Eqs. (\[spectral\]) and (\[S\]), the error \[Eq. (\[error\])\] can be written in the form [@grodecka05; @roszak05] $$\label{error-w}
\delta = \sum_{nn',mm'}\int d\omega \; R_{nn',mm'}(\omega) S_{nn',mm'}(\omega).$$ The error can thus be expressed as an overlap between spectral functions, which are the “building blocks” for the calculation of the environment effects on the quantum evolution. A detailed derivation of Eq. (\[error-w\]) can be found in Refs. and .
The perturbative approach described above obviously yields only an approximate description of decoherence. In the case of phonon-induced dephasing, comparisons with exact results (for ultrashort laser pulses) [@axt05] and with correlation expansion results [@krugel05] show that the perturbative results are very accurate as long as the overall dephasing effect is weak: the inaccuracy is of the order of $\delta^{2}$, where $\delta$ is some measure of decoherence, e.g., the fidelity loss, Eq. (\[error-w\]). The same holds for the decoherence induced by the radiative decay of the trion, as we show in Appendix \[app:lindblad\].
Thus, to calculate the error of the quantum gate due to the interaction with the environment, one needs to derive the two spectral functions. These will yield a transparent spectral interpretation for the various contributions to the qubit dephasing and provide a possibility to seek the optimal conditions depending on the system properties.
Decoherence mechanisms {#sec:decoherence}
======================
In this section, the different kinds of decoherence mechanisms (the coupling of the carriers to phonons and photons) are studied. We apply the general theory introduced in the previous section to calculate the total error of a spin rotation through an angle of $\pi/2$ about the $z$ axis. The quantitative estimates are given for a charged self-assembled InAs/GaAs quantum dot.
Interaction with the phonons
----------------------------
It follows from Eq. (\[Hc-ph\]) that the carrier-phonon interaction is described by just one pair of operators, $S_{22}=|2{\rangle\!\langle}2|$ and $R_{22}={\sum_{\bm{k}}}F_{22}({\bm{k}})({b_{\bm{k}}}^{\phantom{\dag}}+{b_{-\bm{k}}^{\dag}})$. Therefore, one needs only two spectral functions: the spectral density of the phonon reservoir $R_{22,22}(\omega)\equiv R_{\rm ph}(\omega)$ and the spectral characteristics of the driving $S_{22,22}(\omega)\equiv S_{\rm ph}(\omega)$ to calculate the phonon-induced error $$\label{error-ph}
\delta_{\rm ph} = \int d\omega R_{\rm ph}(\omega) S_{\rm ph}(\omega).$$ Using Eq. (\[spectral\]), one finds the explicit form of the former: $$\begin{aligned}
\label{Rph}
\lefteqn{R_{\rm ph}(\omega) = } \\ \nonumber
&& \frac{1}{\hbar^{2}} \left[n_{\rm B}(\omega) + 1\right] {\sum_{\bm{k}}}|F_{22}({\bm{k}})|^{2}
\left[ \delta(\omega-{\omega_{\bm{k}}}) + \delta(\omega+{\omega_{\bm{k}}}) \right],\end{aligned}$$ where $n_{\rm B}(\omega)$ is the Bose distribution function.
The function $R_{\rm ph}(\omega)$ (see Appendix \[app:couplings\] for a detailed derivation) is plotted in Fig. \[fig:spectral-phonons\] for different temperatures $T$. The negative frequency part of the function corresponds to phonon absorption by the carriers and is nonzero only for finite temperatures, while the positive part represents the emission processes and always has finite values. The phonon spectral density has a cut-off at the frequency $\omega \approx c_{\rm l}/l$, which corresponds to the inverse of the time phonons need to traverse the quantum dot.
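For orientation, $R_{\rm ph}(\omega)$ can be evaluated in the continuum limit with the parameters of Tab. \[tab:param\]. The following Python sketch is an illustration only (the continuum-limit prefactor is an independent evaluation and should be taken as indicative); it exhibits the thermally activated absorption side and the emission side that survives at $T=0$.

```python
# Illustrative sketch: continuum-limit evaluation of R_ph(omega) for
# deformation-potential coupling to LA phonons with the Gaussian form factor of
# Eq. (formfactor). SI units; the overall prefactor is indicative only.
import numpy as np
from scipy.integrate import quad

HBAR, KB = 1.0546e-34, 1.3807e-23          # J s, J/K
D = 8.0 * 1.602e-19                        # D_e - D_h in J (Tab. I)
RHO, CL = 5360.0, 5150.0                   # kg/m^3, m/s
L_PERP, L_Z = 5e-9, 1e-9                   # confinement lengths in m

def R_ph(omega, T):
    """R_ph in 1/s; omega > 0: phonon emission, omega < 0: absorption."""
    if omega == 0.0:
        return 0.0
    w = abs(omega)
    k = w / CL
    # angular average of the squared form factor over the phonon direction
    ang, _ = quad(lambda u: np.exp(-0.5 * k**2 * (L_PERP**2 * (1 - u**2)
                                                  + L_Z**2 * u**2)), -1.0, 1.0)
    dens = D**2 * w**3 / (16 * np.pi**2 * HBAR * RHO * CL**5) * ang
    n_b = 1.0 / np.expm1(HBAR * w / (KB * T)) if T > 0 else 0.0
    return (n_b + 1.0) * dens if omega > 0 else n_b * dens

# the absorption side (omega < 0) freezes out at low T, the emission side does not
w1meV = 1e-3 * 1.602e-19 / HBAR            # frequency corresponding to 1 meV
for T in (0.0, 1.0, 5.0, 10.0):
    print(T, R_ph(w1meV, T), R_ph(-w1meV, T))
```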
The carrier part of the interaction Hamiltonian $S_{\mathrm{ph}} = |2{\rangle\!\langle}2|$ leads to a spectral characteristics containing two parts corresponding to the two orthogonal states: $$\begin{aligned}
\label{sph}
\lefteqn{S_{\mathrm{ph}}(\omega) =
s_{1}^{\mathrm{ph}}(\omega) + s_{2}^{\mathrm{ph}}(\omega)} \\
\nonumber
&& = \frac{1}{4} \sin^{2}\vartheta \left|
\int dt \; e^{i\omega t} \sin^{2} \phi(t) \right|^{2} \\
\nonumber
&& \phantom{11} + \frac{1}{4} \cos^{2}\frac{\vartheta}{2} \left| \int dt \; e^{i\omega t}
e^{-i\left[\Lambda_{2}(t)-\Lambda_{1}(t)\right]} \sin \left[2 \phi(t)\right] \right|^{2}. \end{aligned}$$ The initial state, Eq. (\[psi0\]), can be an arbitrary superposition of the states $|0\rangle$ and $|1\rangle$, and the error depends on the choice of the initial qubit state. In order to obtain representative error evaluations, we average the error (thus, the contributing spectral function) over the angles $(\vartheta, \varphi)$ on the Bloch sphere of the initial states, according to $$s_{i\mathrm{(av)}}(\omega)=\frac{1}{4\pi}
\int_{0}^{\pi} d\vartheta\sin\vartheta
\int_{0}^{2\pi} d\varphi \; s_{i}(\omega).$$ The averaged contributions to the spectral characteristics, corresponding to the two terms in Eq. (\[sph\]), read $$\begin{aligned}
s_{1\mathrm{(av)}}^{\mathrm{ph}}(\omega) & = &
\frac{1}{6} \left| \int dt \; e^{i\omega t} \sin^{2}\phi(t)\right|^{2},\\ \nonumber
s_{2\mathrm{(av)}}^{\mathrm{ph}}(\omega) & = & \frac{1}{8}
\left|\int dt \; e^{i\omega t} e^{-i\left[\Lambda_{2}(t)-\Lambda_{1}(t)\right]}
\sin\left[ 2 \phi(t) \right] \right|^{2}.\end{aligned}$$ Both contributions to the phonon-induced error are independent of $\beta$.
We can derive approximate analytical formulas for the spectral characteristics under the condition that the pulse amplitude is much smaller than the detuning ($\Omega \ll \Delta$), which is met for detunings of several meV \[see Fig. \[fig:omega\](b)\]. In this detuning regime, we obtain
$$\begin{aligned}
s_{1}^{\mathrm{ph}}(\omega) & \approx &
\frac{\pi}{96} \frac{\Omega_{*}^{4} \tau_{\rm p}^{2}}{\Delta^{4}}
\exp{\left(-\frac{1}{2} \tau_{\rm p}^{2} \omega^{2} \right)},\\
s_{2}^{\mathrm{ph}}(\omega) & \approx &
\frac{\pi}{4} \frac{\Omega_{*}^{2} \tau_{\rm p}^{2}}{\Delta^{2}}
\left\{ \exp{\left[-\frac{1}{2} \tau_{\rm p}^{2} (\Delta +\omega)^{2} \right]}\right.\\ \nonumber
&& \left. \phantom{aaaaaaa} -\frac{\Omega_{*}^{2}}{2\sqrt{3}\Delta^{2}}
\exp{\left[-\frac{1}{6} \tau_{\rm p}^{2} (\Delta + \omega)^{2} \right]} \right\}^{2}.\end{aligned}$$
The symmetric function $s_{1}^{\mathrm{ph}}(\omega)$ \[see Fig. \[fig:driving12\](a,b)\] is centered at $\omega=0$ and, for longer pulse durations, covers the low frequency part (broadening $\approx 1/\tau_{\rm p}$). The broadening of this function is independent of the detuning, and, for a fixed $\tau_{\mathrm{p}}$, its area decreases for larger detunings. This part of the spectral characteristics corresponds to pure dephasing effects [@krummheuer02; @forstner03; @krugel05; @alicki04; @machnik04]. The resulting error \[Eq. (\[error-ph\])\] will decrease for longer pulse durations as well as for larger detunings.
The second part of the spectral characteristics $s_{2}^{\mathrm{ph}}(\omega)$ \[see Fig. \[fig:driving12\](c,d)\] is centered at $\omega \approx -\Delta $, and its area grows with time. Thus, the corresponding error contribution is proportional to the spectral density of the phonon reservoir around the frequency corresponding to the detuning of the laser frequency from the trion transition, with some broadening due to time dependence. Moreover, the error increases for longer operations. Therefore, this contribution may be interpreted as a real transition and describes the error due to phonon-assisted trion generation.
In order to see this more directly, let us note that the function $s_{2}^{\mathrm{ph}}(\omega)$ is relatively strongly localized around $\omega=-\Delta$, compared to the range of variation of $R_{\mathrm{ph}}(\omega)$. Therefore, the corresponding integral in Eq. (\[error-ph\]) may be approximated by its Markovian limit, $$\delta^{\mathrm{ph}}_{2}
=R_{\mathrm{ph}}(-\Delta)\int d\omega s_{2}^{\mathrm{ph}}(\omega).$$ The area of the spectral function $s_{2}^{\mathrm{ph}}(\omega)$ appearing in this formula may be calculated by noting that for any function $h(t)$, one has $$\begin{aligned}
\lefteqn{\int d\omega \left|
\int dt \; e^{i \omega t}h(t)\right|^{2}=} \nonumber \\
&&\int d\omega\int dt\int dt' e^{i \omega (t-t')}h^{*}(t)h(t')\nonumber\\
& & =2\pi \int dt \left| h(t)\right|^{2}. \label{s-area}\end{aligned}$$ Thus, we find $$\delta^{\mathrm{ph}}_{2}= 2 \pi \int dt \; R_{\mathrm{ph}}(-\Delta)\frac{1}{4}
\cos^{2}\frac{\vartheta}{2} \sin^{2} \left[2\phi(t)\right].$$ The expression under the integral is exactly Fermi’s golden rule formula for the probability that a transition from the state $|a_{1}\rangle$ to $|a_{2}\rangle$ will take place during the time $dt$ due to the (diagonal) carrier-phonon coupling given in Eq. (\[Hc-ph\]). Since the state $|a_{2}\rangle$ becomes the trion state after the laser pulse is switched off, this process indeed represents a phonon-assisted transition to the trion state.
The resulting phonon-induced errors, averaged over all initial states, as functions of the detuning and pulse duration are shown in Fig. \[fig:error-phonon\]. The error due to pure dephasing $\delta_{1}^{\rm ph}$ \[corresponding to $s_{1}^{\rm ph}(\omega)$\] favors longer pulse durations and larger detunings and strongly depends on the temperature \[Fig. \[fig:error-phonon\](a,b)\]. To perform the operation with an error smaller than $10^{-4}$, which allows for coherent quantum operation on a qubit, one needs a detuning larger than 0.06 meV (at $T = 5$ K). It is possible to avoid the pure dephasing error even for a fast evolution realized by pulse durations of several picoseconds and a detuning of $1$ meV.
The contribution to the total error related to the phonon-assisted trion generation $\delta_{2}^{\rm ph}$ \[resulting from $s_{2}^{\rm ph}(\omega)$\] has a different behavior in comparison with the previous one \[Fig. \[fig:error-phonon\](c,d)\] and depends even more strongly on the temperature. At low temperatures ($T<1$ K), it decreases with growing detuning, but for higher temperatures ($T = 1$, $5$ and $10$ K), it initially grows with detuning and decreases again only for sufficiently large detunings. The maximum values correspond to the situation when the spectral function $s_{2}^{\mathrm{ph}}(\omega)$ covers the maximum of the phonon density, and the error vanishes if it lies beyond the phonon density cut-off. The pulse duration dependence of this error differs from the one for the pure dephasing error. For relatively large detunings ($\hbar\Delta>1$ meV), this error favors shorter pulse durations, which is typical for real transitions.
In order to properly choose the conditions for the spin rotation that lead to a high fidelity of the operation, one needs to take into account these two sources of error resulting from the carrier coupling to the phonon reservoir. The sum of these two errors is plotted in Fig. \[fig:error-phonon\](e,f). To achieve values of the error below $10^{-4}$ for $\tau_{\rm p}=10$ ps, detunings of several meV are needed. For a small detuning ($\hbar\Delta=0.066$ meV), the error decreases with growing pulse duration, but for larger detunings ($\hbar\Delta=0.66$ meV and $\hbar\Delta=1.32$ meV), shorter pulse durations are more favorable. Choosing the detunings larger than several meV can suppress the influence of the phonon environment ($\hbar\Delta\gtrsim 4$ meV at $T=5$ K).
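These trends can be illustrated by combining the approximate spectral characteristics above with the phonon spectral density. The sketch below is a rough numerical illustration (it additionally assumes $\Omega_{*}^{2}\sqrt{\pi}\,\tau_{\rm p}=2\pi\Delta$ for a $\pi/2$ rotation, which holds only for $\Omega\ll\Delta$, so the resulting numbers are indicative).

```python
# Illustrative sketch: phonon-induced error contributions delta_1 (pure dephasing)
# and delta_2 (phonon-assisted trion generation) from the approximate spectral
# characteristics, assuming Omega_*^2 sqrt(pi) tau_p = 2 pi Delta. SI units.
import numpy as np
from scipy.integrate import quad

HBAR, KB = 1.0546e-34, 1.3807e-23
D, RHO, CL = 8.0 * 1.602e-19, 5360.0, 5150.0
L_PERP, L_Z = 5e-9, 1e-9
MEV = 1e-3 * 1.602e-19 / HBAR                       # 1 meV in rad/s

def R_ph(w, T):                                      # as in the earlier sketch
    if w == 0.0:
        return 0.0
    k = abs(w) / CL
    ang, _ = quad(lambda u: np.exp(-0.5 * k**2 * (L_PERP**2 * (1 - u**2)
                                                  + L_Z**2 * u**2)), -1, 1)
    dens = D**2 * abs(w)**3 / (16 * np.pi**2 * HBAR * RHO * CL**5) * ang
    n_b = 1.0 / np.expm1(HBAR * abs(w) / (KB * T)) if T > 0 else 0.0
    return (n_b + 1.0) * dens if w > 0 else n_b * dens

def phonon_errors(delta_meV, tau_p_ps, T):
    delta, tau = delta_meV * MEV, tau_p_ps * 1e-12
    om2 = 2.0 * np.pi * delta / (np.sqrt(np.pi) * tau)           # Omega_*^2
    s1 = lambda w: np.pi / 96 * om2**2 * tau**2 / delta**4 * np.exp(-0.5 * tau**2 * w**2)
    s2 = lambda w: np.pi / 4 * om2 * tau**2 / delta**2 * (
        np.exp(-0.5 * tau**2 * (delta + w)**2)
        - om2 / (2 * np.sqrt(3) * delta**2) * np.exp(-tau**2 * (delta + w)**2 / 6))**2
    d1, _ = quad(lambda w: R_ph(w, T) * s1(w), -30 / tau, 30 / tau, limit=200)
    d2, _ = quad(lambda w: R_ph(w, T) * s2(w),
                 -delta - 30 / tau, -delta + 30 / tau, limit=200)
    return d1, d2

for d in (0.5, 1.0, 2.0, 4.0):
    print(d, phonon_errors(d, tau_p_ps=10.0, T=5.0))
```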
It is also possible to excite the system above the transition energies by choosing a negative detuning. In this case, the phonon-induced total error is larger in comparison with the case for positive detuning (see Fig. \[fig:negative\]). The corresponding spectral characteristics $s_{2}^{\mathrm{ph}}(\omega)$ is now centered at the positive frequency part of the phonon density, which has larger values than the negative part and remains finite even at $T=0$ K (see Fig. \[fig:spectral-phonons\]). This corresponds to the emission of a phonon, which, unlike absorption, is possible even at zero temperature. Especially at low temperatures, the phonon-induced error is up to four orders of magnitude larger than the one for positive detunings. To suppress the influence of the phonon reservoir, one needs larger detunings. The dependence of this error on the pulse duration also differs. Only for small detunings ($\hbar\Delta \lesssim 1$ meV) is it advantageous to use longer pulses. For relatively large detunings ($\hbar\Delta > 1$ meV), short pulses are favorable.
Interaction with the photon field
---------------------------------
Since the considered procedure for spin rotation requires a trion occupation during the evolution, the radiative decay of the trion will result in an additional error. We calculate the effect of the photon interaction in the same manner as for the phonons, treating the carrier-photon interaction perturbatively.
The relevant photon energies correspond to the semiconductor band gap which is very large compared to the thermal energy. Therefore, one can use the zero temperature approximation. The carrier-photon interaction contains the following operators acting on the carrier subsystem: $S_{02} = |0 {\rangle\!\langle}2| e^{i\omega_{0}t}$, $S_{20} = S_{02}^{\dag}$, $S_{12}=|1{\rangle\!\langle}2|e^{i(\omega_{1}t-\gamma)}$, and $S_{21} =
S_{12}^{\dag}$. All the resulting contributions, calculated according to the general procedure in Sec. \[sec:perturbation\], can be combined into a single spectral characteristics $$S_{\rm rad}(\omega) = \sum_{i}\left| \langle \psi_{0}|
Y^{\dag}_{\rm rad}(\omega)| \psi_{i} \rangle \right|^{2},$$ where $$\begin{aligned}
Y_{\rm rad}(\omega) & = & \frac{1}{\sqrt{2}}\int dt e^{-i\omega t}
U^{\dag}_{0}(t)\\
&&\times\left[
\left(\cos\beta |B\rangle + \sin\beta |D\rangle
\right)e^{i\omega_{0}t}
\right.\\
&&+\left.\left(\sin\beta |B\rangle - \cos\beta |D\rangle \right)e^{i\omega_{1}t}
\right] \langle 2| U_{0}(t)\end{aligned}$$ and the sum is taken over all states orthogonal to the initial state $| \psi_{0}\rangle$ \[Eq. (\[psi0\])\].
The spectral density of the photon reservoir reads [@scully97] $$\label{R-rad}
R_{\rm rad}(\omega) =
\frac{|\vec d|^{2} \omega^{3}n_{\mathrm{r}}}{6 \pi^{2} \hbar \epsilon_{0} c^3},
\quad \omega>0,$$ and the contribution to the error due to the photon interaction has the form $$\delta_{\rm rad} = \int d\omega R_{\rm rad}(\omega) S_{\rm rad}(\omega).$$
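As a rough numerical cross-check of Eq. (\[R-rad\]), one can evaluate the corresponding decay rate $\Gamma=2\pi R_{\rm rad}(\omega_{0})$ directly. The sketch below does this in Python; the dipole moment, refractive index and transition energy are illustrative values assumed here (they are not quoted in the text).

```python
import numpy as np

# Physical constants (SI units)
hbar = 1.054571817e-34    # J s
eps0 = 8.8541878128e-12   # F/m
c    = 2.99792458e8       # m/s
e    = 1.602176634e-19    # C

# Assumed illustrative parameters for a self-assembled InAs/GaAs dot
d      = 0.7e-9 * e       # interband dipole moment |d| ~ 0.7 e*nm (assumption)
n_r    = 3.3              # refractive index of the GaAs matrix (assumption)
omega0 = 1.0 * e / hbar   # transition frequency corresponding to ~1 eV photons

def R_rad(omega):
    """Photon spectral density, Eq. (R-rad)."""
    return d**2 * omega**3 * n_r / (6 * np.pi**2 * hbar * eps0 * c**3)

Gamma = 2 * np.pi * R_rad(omega0)   # trion decay rate, since R_rad(omega0) = Gamma/(2*pi)
print(f"Gamma ~ {Gamma:.2e} s^-1, i.e. a radiative lifetime of {1e9/Gamma:.1f} ns")
```

With these assumed values one obtains a rate of the order of $1$ ns$^{-1}$, consistent with the trion decay rate used below.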
Using the definition (\[Y\]) and the explicit form of the evolution operator (\[Uan\]), one finds $$\langle \psi_{0}| Y^{\dag}_{\rm rad}(\omega)| \psi_{i} \rangle
= \frac{1}{\sqrt{2}}\int dt \; e^{i \omega t}s_{i}^{\mathrm{rad}}(t),$$ where $$\begin{aligned}
\lefteqn{s_{1}^{\mathrm{rad}}(t)=}\\
&&- \frac{1}{4} \sin \vartheta \sin[2\phi(t)]
(e^{-i \omega_{0} t}\cos \beta + e^{-i \omega_{1} t}\sin \beta ) \\
&& - e^{i[\varphi+\Lambda_{1}(t)]} \cos^{2} \frac{\vartheta}{2} \sin \phi(t)
(e^{-i \omega_{1} t}\cos \beta - e^{-i \omega_{0} t}\sin \beta ) \end{aligned}$$ and $$\begin{aligned}
s_{2}^{\mathrm{rad}}(t) & = &
- e^{-i[\Lambda_{2}(t) - \Lambda_{1}(t)]}
\cos \frac{\vartheta}{2} \sin^{2} \phi(t) \\
&&\times
(e^{-i \omega_{0} t}\cos \beta + e^{-i \omega_{1} t}\sin \beta ).\end{aligned}$$
The spectral function $S_{\rm rad}(\omega)$ is centered at the laser frequencies $\omega_{0,1}$, corresponding to photon energies of $\approx 1$ eV, and its width is of the order of 1 meV or less. The photon spectral density $R_{\rm rad}(\omega)$ \[Eq. (\[R-rad\])\] is very broad and may be assumed constant over the region of its overlap with $S_{\rm rad}(\omega)$. Therefore, we use the Markovian approximation and obtain $\delta_{\rm rad} = R_{\rm rad}(\omega_{0}) \int d\omega S_{\rm rad}(\omega)$ with $R_{\rm rad}(\omega_{0})= \Gamma/(2\pi)$, where $\Gamma \approx 1$ ns$^{-1}$ is the trion decay rate. The frequency integral can be performed using Eq. (\[s-area\]). Upon averaging over the initial states, one obtains the resulting error due to the carrier-photon interaction:
$$\begin{aligned}
\label{error-rad}
\delta_{\rm rad} & = & \Gamma \int dt \left(
\left\{\frac{1}{24}\sin^{2}[2\phi(t)]+\frac{1}{4}\sin^{4}\phi(t) \right\}
\left\{ 1 +\cos\left[(\omega_{1}-\omega_{0})t\right] \sin(2\beta) \right\}
\right. \\ \nonumber && \left. \phantom{aaaaaaa}
+\frac{1}{3}\sin^{2}\phi(t) \left\{ 1 -\cos \left[ (\omega_{1}-\omega_{0})t
\right] \sin (2\beta)\right\}\right).\end{aligned}$$
For large enough detunings ($\Omega \ll \Delta$), we can again obtain an approximate equation for the resulting error: $$\begin{aligned}
\lefteqn{\delta_{\rm rad} \approx} \\ \nonumber
&& \Gamma \left( \frac{\sqrt{\pi} \Omega^{2}\tau_{\rm p}}{8 \Delta^{2}} \left\{
\frac{5}{3}-\exp{\left[-\frac{1}{4} \tau_{\rm p}^{2} (\omega_{1}-\omega_{0})^{2} \right]} \right\}
\right. \\ \nonumber && \left. \phantom{\frac{1}{1}}+
\frac{\sqrt{\pi} \Omega^{4}\tau_{\rm p}}{8\sqrt{2} \Delta^{4}}\left\{
5-\exp{\left[-\frac{1}{8} \tau_{\rm p}^{2} (\omega_{1}-\omega_{0})^{2} \right]} \right\}
\right).\end{aligned}$$
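A minimal numerical evaluation of this approximate expression is given below; the values of $\Omega$, $\Delta$, $\tau_{\rm p}$ and the Zeeman splitting $\omega_{1}-\omega_{0}$ are illustrative choices made here, not parameters fixed by the text.

```python
import numpy as np

hbar  = 0.6582            # meV*ps, to convert energies to angular frequencies
Gamma = 1.0e-3            # trion decay rate in 1/ps (Gamma ~ 1 ns^-1)

# Assumed illustrative driving parameters
tau_p = 10.0              # pulse duration, ps
Delta = 2.0 / hbar        # detuning (hbar*Delta = 2 meV), 1/ps
Omega = 0.5 / hbar        # Rabi frequency (hbar*Omega = 0.5 meV), 1/ps
dw    = 0.1 / hbar        # Zeeman splitting omega_1 - omega_0 (0.1 meV), 1/ps

term1 = np.sqrt(np.pi) * Omega**2 * tau_p / (8 * Delta**2) * \
        (5.0 / 3.0 - np.exp(-0.25 * (tau_p * dw)**2))
term2 = np.sqrt(np.pi) * Omega**4 * tau_p / (8 * np.sqrt(2) * Delta**4) * \
        (5.0 - np.exp(-0.125 * (tau_p * dw)**2))

delta_rad = Gamma * (term1 + term2)
print(f"approximate radiative error: {delta_rad:.1e}")   # ~1e-4 for these values
```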
The resulting radiative error \[Eq. (\[error-rad\])\] as a function of detuning and pulse duration is plotted in Fig. \[fig:error-rad\](a,b). This error decreases with growing detuning since the trion occupation is reduced. For small detunings, the trion occupation is relatively large since the system is excited near the resonance, and the resulting error grows with the pulse duration due to the growing probability of the radiative decay of the trion. In this regime, the error is linear in the pulse duration, $\delta_{\rm rad} \approx \tau_{\rm p} \Gamma$. The contribution to the error due to the finite trion lifetime depends strongly on the pulse duration only for relatively small detunings; for detunings of several meV, this error is essentially independent of the pulse duration. To ensure a small radiative error, one has to choose a sufficiently large detuning, while the pulse duration can be arbitrarily short.
Interplay of the different kinds of errors {#sec:interplay}
==========================================
In the previous sections, we studied the detuning and pulse duration dependence of the contributions to the total error of the spin-based quantum gate. In this section, we calculate the resulting total error and discuss the interplay of the constituent sources of error and possible ways of optimizing against them.
We start the discussion with the dependence of the contributions to the total error of the considered spin rotation on the detuning for a fixed temperature ($T=5$ K) and two pulse durations ($\tau_{\rm p}=30$ ps and $\tau_{\rm p}=5$ ps) \[see Fig. \[fig:error-contr\](a,c)\]. For small detunings, the trion occupation is large, and the dominant source of the error is the radiative decay of the trion. This contribution decreases with growing detuning, and the phonon-induced error becomes the most important source of dephasing. If the detuning approaches the Zeeman splitting, one spin state is almost resonantly coupled to the trion state, and the probability of the leakage to the trion state is high, which leads to a large error due to the unitary corrections, especially in the case of a short pulse ($\tau_{\rm p}=5$ ps). In the detuning regime between $0.09$ meV and $2$ meV, the error resulting from the phonon coupling becomes dominant and is one order of magnitude larger than the radiative error. For detunings of several meV, above the cut-off of the phonon-induced error, the only contribution that inhibits the coherent spin rotation is the trion radiative decay, with values between $10^{-4}$ and $10^{-3}$. To achieve errors smaller than $10^{-4}$, large detunings of several to tens of meV are needed. In this regime, the limitation is that the detuning cannot be chosen arbitrarily large; in particular, it must not approach the optical-phonon frequencies, which are assumed to be well off-resonant in this paper. Thus, the interplay of the different kinds of error leads to a nontrivial detuning dependence of the total error, which in general is dominated by the error due to the interaction with the phonon and photon reservoirs.
Let us now discuss the dependence of the particular error contributions on the pulse duration for a fixed temperature ($T=5$ K) \[see Fig. \[fig:error-contr\](b,d)\]. For a small detuning ($\hbar\Delta=0.33$ meV) and fast driving fields (short pulse durations of several picoseconds), the adiabatic condition is not fulfilled, and the probability of the leakage to the trion state becomes very high, which results in large errors. To minimize the influence of the errors due to the unitary corrections, one has to choose pulse durations of at least a few picoseconds. In this regime, the total error is limited by the phonon-induced contribution. The second dominant source of the error is the radiative decay of the trion. Moreover, these two contributions are almost independent of the pulse duration. For a large detuning ($\hbar\Delta=6.6$ meV), the evolution is perfectly adiabatic, and the error due to unitary corrections does not appear. The phonon-induced error is very small and vanishes for pulse durations of a few picoseconds, since the detuning is far above the cut-off of the phonon spectral density. The only important contribution to the total error is due to the finite trion lifetime, which is independent of the pulse duration. One can see that it is advantageous to excite with larger detunings, while the choice of the pulse duration is not of great importance as long as the adiabatic condition is met and the detuning is not close to the Zeeman splitting.
In order to summarize the study of the various sources of decoherence, we calculated the total error of the spin-based qubit rotation as a function of detuning \[Fig. \[fig:error-total\](a)\] and pulse duration \[Fig. \[fig:error-total\](b)\]. For detunings smaller than $1.3$ meV, the error strongly depends on the pulse duration and is relatively large. Thus, to perform a rotation with a high fidelity, detunings of several meV are needed. Furthermore, for such detunings, the error is essentially independent of the pulse duration, which opens up the possibility of performing the qubit rotation with pulse durations three orders of magnitude shorter than the lifetime of the trion ($\Gamma^{-1} \approx 1$ ns). For $\hbar\Delta\sim 5$ meV, the error may be as low as $10^{-4}$ and is independent of the pulse duration.
Conclusions {#sec:conclusion}
===========
We have studied a theoretical proposal of the optical control of a single spin in a single doped quantum dot [@chen04]. We have investigated the sources of error of a quantum operation on a spin-based qubit and have given quantitative estimates for the implementation of spin rotation through an angle of $\pi/2$ about the $z$ axis in a self-assembled InAs/GaAs system. The dephasing mechanisms resulting from the interaction of the carriers with phonon and photon reservoirs as well as the imperfections of the unitary evolution have been considered.
We have shown that the interplay of the constituent sources of the error leads to a nontrivial dependence of the total error on the detuning, which is in general dominated by the errors due to the coupling of the carriers to the phonon and photon environments. Furthermore, small detunings or detunings approaching the Zeeman splitting should be avoided, since they lead to large trion occupations which preclude adiabatic and coherent control. Taking into account all contributions to the total error, we showed that errors as low as $10^{-4}$ can be achieved for large detunings ($\sim 5$ meV), while the pulse durations can in principle be arbitrary (but at least a few picoseconds long).
Finally, it should be noted that the calculations were performed using simple Gaussian pulses with the pulse duration and intensity as the only tunable parameters. Further reduction of the errors is very likely to be possible with pulse optimization [@wenin06; @hohenester04; @axt05].
We thank C. Emary and M. Richter for fruitful discussions. A.G. acknowledges financial support from the DAAD. This work was partly supported by Grant No. N20207132/1513 of the Polish MNiSW.
Phonon couplings, spectral density {#app:couplings}
==================================
In this Appendix, we derive the effective coupling element for the carrier-phonon interaction $F_{22}({\bm{k}})$ as well as the resulting spectral density of the phonon reservoir $R_{\rm ph}(\omega)$ for the studied QD system.
The general interaction Hamiltonian for confined states (restricted to the ground states of the carriers) reads $$\begin{aligned}
H_{\mathrm{int}} & = & {\sum_{\bm{k}}}\sum_{\sigma}
\left[D_{\mathrm{e}}\mathcal{F}_{\mathrm{e}}({\bm{k}})
a_{\mathrm{e},\sigma}^{\dag} a_{\mathrm{e},\sigma}
-D_{\mathrm{h}}\mathcal{F}_{\mathrm{h}}({\bm{k}})
a_{\mathrm{h},\sigma}^{\dag} a_{\mathrm{h},\sigma}\right]\\
&& \times\sqrt{\frac{\hbar k}{2 \rho V c_{\rm l}}}
\left(b_{{\bm{k}}}^{\phantom{\dag}} + b_{-{\bm{k}}}^{\dag}\right),\end{aligned}$$ where $a_{\mathrm{e(h)},\sigma},a_{\mathrm{e(h)},\sigma}^{\dag}$ are the annihilation and creation operators, respectively, for an electron or a hole in the confined ground state with spin $\sigma$, and the form factors are given by $$\mathcal{F}_{\mathrm{e(h)}}({\bm{k}}) = \int_{-\infty}^{+\infty} d^{3}\bm{r} \;
\Psi_{\mathrm{e(h)}}^{*}(\bm{r}) e^{i\bm{kr}} \Psi_{\mathrm{e(h)}}(\bm{r}) =
\mathcal{F}_{\mathrm{e(h)}}^{*}(-{\bm{k}}).$$ For Gaussian wave functions as in Eq. (\[wavefunction\]), one explicitly finds by a simple integration $$\mathcal{F}_{\mathrm{e(h)}}({\bm{k}}) =
\exp{\left(-\frac{1}{4} k_{\perp}^{2}l_{\mathrm{e(h)}}^{2}
-\frac{1}{4} k_{z}^{2}l_{z}^{2} \right)}.$$
For a single electron state $|0\rangle=a_{\mathrm{e},\uparrow}^{\dag}|\mathrm{vac}\rangle$ or $|1\rangle=a_{\mathrm{e},\downarrow}^{\dag}|\mathrm{vac}\rangle$ ($|\mathrm{vac}\rangle$ is the empty dot state), one immediately finds $$\begin{aligned}
\langle 0|H_{\mathrm{int}}|0\rangle & = &
\langle 1|H_{\mathrm{int}}|1\rangle \\
& = &
{\sum_{\bm{k}}}D_{\mathrm{e}}\sqrt{\frac{\hbar k}{2 \rho V c_{\rm l}}}
\mathcal{F}_{\mathrm{e}}({\bm{k}})\left({b_{\bm{k}}}+{b_{-\bm{k}}^{\dag}}\right).\end{aligned}$$ For the trion state $|2\rangle=a_{\mathrm{e},\uparrow}^{\dag}
a_{\mathrm{e},\downarrow}^{\dag}
a_{\mathrm{h},\uparrow}^{\dag}|\mathrm{vac}\rangle$, one has $$\begin{aligned}
\lefteqn{\langle 2|H_{\mathrm{int}}|2\rangle=}\\
&&{\sum_{\bm{k}}}\sqrt{\frac{\hbar k}{2 \rho V c_{\rm l}}}
\left[ 2D_{\mathrm{e}}\mathcal{F}_{\mathrm{e}}({\bm{k}})
-D_{\mathrm{h}}\mathcal{F}_{\mathrm{h}}({\bm{k}}) \right]
\left({b_{\bm{k}}}+{b_{-\bm{k}}^{\dag}}\right).\end{aligned}$$
In order to further simplify the calculations, we neglect the difference between the localization sizes of the electron and hole wave functions. Taking into account the small difference between the electron and hole confinement widths $l_{\rm e}$ and $l_{\rm h}$ leads only to minor quantitative corrections [@grodecka06]. This leads to the form factors $\mathcal{F}_{\mathrm{e}}({\bm{k}})=\mathcal{F}_{\mathrm{h}}({\bm{k}})
=\mathcal{F}({\bm{k}})$ \[Eq. (\[formfactor\])\] and to Eq. (\[hcph\]) with $f_{00}=f_{11}= \sqrt{\frac{\hbar k}{2 \rho V c_{\rm l}}}D_{\mathrm{e}}
\mathcal{F}({\bm{k}})$ and $f_{22}= \sqrt{\frac{\hbar k}{2 \rho V c_{\rm l}}}
(2D_{\mathrm{e}}-D_{\mathrm{h}})\mathcal{F}({\bm{k}})$, and thus to Eq. (\[f22\]).
With the isotropic acoustic phonon dispersion, we obtain the spectral density of the phonon reservoir $R_{\rm ph}(\omega)$ \[Eq. (\[Rph\])\]: $$\begin{aligned}
R_{\rm ph}(\omega) & = & [n_{\rm B}(\omega)+1] \frac{V}{(2\pi)^{3}\hbar^{2}}
\int_{0}^{2\pi} d\eta \int_{-\pi/2}^{\pi/2} d\zeta
\cos\zeta \\
&& \times \int dk \; k^{2}\left| F_{22} ({\bm{k}}) \right|^{2}
\left[ \delta(\omega-{\omega_{\bm{k}}}) + \delta(\omega+{\omega_{\bm{k}}}) \right],\end{aligned}$$ where the angles $\eta$ and $\zeta$ denote the orientation of the ${\bm{k}}$ vector. We can rewrite it in the form $$R_{\rm ph}(\omega) = R_{0} [n_{\rm B}(\omega)+1] \omega^{3} g(\omega),$$ where $$R_{0} = \frac{\hbar (D_{\rm e}-D_{\rm h})^{2}}{8 \pi^{2} \rho c_{\rm l}^{5}}$$ and the function $g(\omega)$ is defined as $$\begin{aligned}
\lefteqn{g(\omega) =}\\
&& \int_{-\pi/2}^{\pi/2} d\zeta \cos\zeta
\exp \left[ -\frac{l^{2}\omega^{2}}{2 c_{\rm l}^{2}} \left(
\cos^{2}\zeta + \frac{l_{z}^{2}}{l^{2}} \sin^{2}\zeta \right) \right].\end{aligned}$$
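The angular integral $g(\omega)$ has no elementary closed form for $l_z\neq l$, but it is easy to evaluate numerically together with the resulting spectral density (up to the constant $R_{0}$). The sketch below uses a GaAs-like sound velocity and dot dimensions that are assumed here for illustration only.

```python
import numpy as np
from scipy.integrate import quad

hbar = 1.054571817e-34   # J s
kB   = 1.380649e-23      # J/K
eV   = 1.602176634e-19   # J

# Assumed illustrative parameters (not quoted in the text)
c_l   = 5150.0           # m/s, longitudinal sound velocity
l, lz = 4.0e-9, 1.0e-9   # m, in-plane and growth-direction confinement widths
T     = 5.0              # K

def g(omega):
    """Angular integral g(omega) defined above, by numerical quadrature."""
    integrand = lambda zeta: np.cos(zeta) * np.exp(
        -(l**2 * omega**2) / (2.0 * c_l**2)
        * (np.cos(zeta)**2 + (lz / l)**2 * np.sin(zeta)**2))
    value, _ = quad(integrand, -np.pi / 2, np.pi / 2)
    return value

def R_over_R0(omega):
    """R_ph(omega)/R_0 = [n_B(omega)+1] * omega^3 * g(omega); omega<0 = phonon absorption."""
    n_B = 1.0 / np.expm1(hbar * omega / (kB * T))
    return (n_B + 1.0) * omega**3 * g(abs(omega))

for energy_meV in (0.5, 1.0, -1.0):            # hbar*omega in meV
    w = energy_meV * 1e-3 * eV / hbar
    print(f"hbar*omega = {energy_meV:+.1f} meV :  R_ph/R_0 = {R_over_R0(w):.3e}")
```

The Gaussian factor in $g(\omega)$ is responsible for the cut-off of the phonon spectral density at frequencies of the order of $c_{\rm l}/l$ referred to in the main text.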
Lindblad master equation for the trion recombination channel {#app:lindblad}
============================================================
In this Appendix, we derive the results for the trion recombination channel in the Lindblad formalism and compare them with those calculated by means of the perturbative method discussed in this paper.
From the interaction Hamiltonian $H_{\rm c-rad}$ \[Eq. (7)\], we derive the Lindblad equation [@breuer02] in the form $$\dot\rho = \Gamma \left( \sigma_{-} \rho \sigma_{+} - \frac{1}{2}
\sigma_{+}\sigma_{-} \rho -\frac{1}{2} \rho \sigma_{+}\sigma_{-} \right)
- i \left[ H_{\rm ad} , \rho \right],$$ where $$\begin{aligned}
\lefteqn{\sigma_{+} = } \\
&& \frac{1}{2}\left[ e^{-i\omega_{0}t} \left(
|2{\rangle\!\langle}B| + |2{\rangle\!\langle}D| \right) + e^{-i\omega_{1}t} \left(
|2{\rangle\!\langle}B| - |2{\rangle\!\langle}D| \right) \right]\end{aligned}$$ and $H_{\rm ad} = i \dot U_{\rm C}(t) U_{\rm C}^{\dag}(t)$ is the Hamiltonian generating the adiabatic evolution, with $U_{\rm C}$ given by Eq. (\[Uan\]). This equation is consistent with the perturbative approximation in the sense that the latter is reproduced upon transforming to the interaction picture and performing an expansion in the carrier-photon coupling.
The results from the Lindblad equation together with those calculated with the perturbative theory are plotted in Fig. 11 for the $\pi/2$ rotation about the $z$ axis with the initial state $|\psi_{0}\rangle = |B\rangle$. For small detunings ($\hbar\Delta < 0.1$ meV), where the error is relatively large, the perturbative method yields slightly larger errors, while in the case of larger detunings the results are the same.
[10]{}
T. Calarco, A. Datta, P. Fedichev, E. Pazy, and P. Zoller, Phys. Rev. A [**68**]{}, 12310 (2003).
A. Imamoglu, D. D. Awschalom, G. Burkard, D. P. DiVincenzo, D. Loss, M. Sherwin, and A. Small, Phys. Rev. Lett. [**83**]{}, 4204 (1999).
M. Feng, I. D’Amico, P. Zanardi, and F. Rossi, Phys. Rev. A [**67**]{}, 014306 (2003).
R. Hanson, B. Witkamp, L. M. K. Vandersypen, L. H. Willems van Beveren, J. M. Elzerman, and L. P. Kouwenhoven, Phys. Rev. Lett. [**91**]{}, 196802 (2003).
E. Pazy, E. Biolatti, T. Calarco, I. D’Amico, P. Zanardi, F. Rossi, and P. Zoller, Europhys. Lett. [**62**]{}, 175 (2003).
C. Emary and L. J. Sham, J. Phys. Cond. Matt. [**19**]{}, 056203 (2007).
S. E. Economou, L. J. Sham, Y. Wu, and D. G. Steel, Phys. Rev. B [**74**]{}, 205415 (2006).
A. Nazir, B. W. Lovett, S. D. Barrett, T. P. Spiller, and G. A. D. Briggs, Phys. Rev. Lett. [**93**]{}, 150502 (2004).
B. W. Lovett, A. Nazir, E. Pazy, S. D. Barrett, T. P. Spiller, and G. A. D. Briggs, Phys. Rev. B [**72**]{}, 115324 (2005).
M. V. G. Dutt, J. Cheng, B. Li, X. Xu, X. Li, P. R. Berman, D. G. Steel, A. S. Bracker, D. Gammon, S. E. Economou, R.-B. Liu, and L. J. Sham, Phys. Rev. Lett. [**94**]{}, 227403 (2005).
A. Greilich, R. Oulton, E. A. Zhukov, I. A. Yugova, D. R. Yakovlev, M. Bayer, A. Shabaev, A. L. Efros, I. A. Merkulov, V. Stavarache, D. Reuter, and A. Wieck, Phys. Rev. Lett. [**96**]{}, 227401 (2006).
M. V. G. Dutt, J. Cheng, Y. Wu, X. Xu, D. G. Steel, A. S. Bracker, D. Gammon, S. E. Economou, R.-B. Liu, and L. J. Sham, Phys. Rev. B [**74**]{}, 125306 (2006).
M. Atatüre, J. Dreiser, A. Badolato, and A. Imamoglu, Nature Phys. [**3**]{}, 101 (2007).
P. Chen, C. Piermarocchi, L. J. Sham, D. Gammon, and D. G. Steel, Phys. Rev. B [**69**]{}, 075320 (2004).
F. Troiani, E. Molinari, and U. Hohenester, Phys. Rev. Lett. [**90**]{}, 206802 (2003).
D. Parodi, M. Sassetti, P. Solinas, P. Zanardi, and N. Zanghi, Phys. Rev. A [**73**]{}, 052304 (2006).
K. Roszak, A. Grodecka, P. Machnikowski, and T. Kuhn, Phys. Rev. B [**71**]{}, 195333 (2005).
A. Grodecka, L. Jacak, P. Machnikowski, and K. Roszak, in [*Quantum Dots: Research Developments*]{}, edited by P. A. Ling (Nova Science Publishers, NY, 2005), p. 47.
X. Caillet and C. Simon, Eur. Phys. J. D [**42**]{}, 341 (2007).
M. O. Scully and M. S. Zubairy, [*Quantum Optics*]{} (Cambridge University Press, Cambridge, 1997).
V. N. Golovach, A. Khaetskii, and D. Loss, Phys. Rev. Lett. [**93**]{}, 016601 (2004).
A. Vagov, V. M. Axt, and T. Kuhn, Phys. Rev. B [**66**]{}, 165312 (2002).
L. Jacak, P. Machnikowski, J. Krasnyj, and P. Zoller, Eur. Phys. J. D [**22**]{}, 319 (2003).
P. Machnikowski, V. M. Axt, and T. Kuhn, Phys. Rev. A [**75**]{}, 052330 (2007).
B. Krummheuer, V. M. Axt, and T. Kuhn, Phys. Rev. B [**65**]{}, 195313 (2002).
A. Messiah, [*Quantum Mechanics*]{} (North-Holland, Amsterdam, 1966).
M. A. Nielsen and I. L. Chuang, [*Quantum Computation and Quantum Information*]{} (Cambridge University Press, Cambridge, 2000).
C. Cohen-Tannoudji, J. Dupont-Roc, and G. Grynberg, [*Atom-Photon Interactions*]{} (Wiley-Interscience, New York, 1998).
R. Alicki, M. Horodecki, P. Horodecki, and R. Horodecki, Phys. Rev. A [**65**]{}, 062101 (2002).
V. M. Axt, P. Machnikowski, and T. Kuhn, Phys. Rev. B [**71**]{}, 155305 (2005).
A. Krügel, V. M. Axt, T. Kuhn, P. Machnikowski, and A. Vagov, Appl. Phys. B [**81**]{}, 897 (2005).
R. Alicki, M. Horodecki, P. Horodecki, R. Horodecki, L. Jacak, and P. Machnikowski, Phys. Rev. A [**70**]{}, 010501(R) (2004).
P. Machnikowski and L. Jacak, Phys. Rev. B [**69**]{}, 193302 (2004).
J. Förstner, C. Weber, J. Danckwerts, and A. Knorr, Phys. Rev. Lett. [**91**]{}, 127401 (2003).
M. Wenin and W. Potz, Phys. Rev. A [**74**]{}, 022319 (2006).
U. Hohenester and G. Stadler, Phys. Rev. Lett. [**92**]{}, 196801 (2004).
A. Grodecka and P. Machnikowski, Phys. Rev. B [**73**]{}, 125306 (2006).
H.-P. Breuer and F. Petruccione, [*The Theory of Open Quantum Systems*]{} (Oxford University Press, Oxford, 2002).
[**Lie symmetries and exact solutions\
of variable coefficient mKdV equations:\
an equivalence based approach**]{}
Introduction
============
In the last decade there has been an explosion of research activity in the investigation of different generalizations of the well-known equations of mathematical physics such as the KdV and mKdV equations, the Kadomtsev–Petviashvili equation, nonlinear Schrödinger equations, etc. A number of papers devoted to the study of variable coefficient KdV or mKdV equations with time-dependent coefficients were commented in . A common feature of the commented papers is that results were obtained mainly for equations which are reducible to the standard KdV or mKdV equations by point transformations. Moreover, we noticed that even for such reducible equations the authors usually do not use equivalence transformations and instead perform complicated calculations on systems involving a number of unknown functions using computer algebra packages. It appears that the usage of equivalence transformations allows one to obtain further results in a simpler way.
The aim of this paper is to demonstrate this fact and to present the correct group classification of a class of variable coefficient mKdV equations. Namely, we investigate Lie symmetry properties and exact solutions of variable coefficient mKdV equations of the form $$\label{vc_mKdV}
u_t+u^2u_x+g(t)u_{xxx}+h(t)u=0,$$ where $g$ and $h$ are arbitrary smooth functions of the variable $t$, $g\neq0.$ It is shown in Section 2 that using equivalence transformations the function $h$ can always be set to zero, and therefore the form of $h$ does not affect the results of the group classification. So, at first we carry out the exhaustive group classification of the subclass of class singled out by the condition $h=0$. Then, using the obtained classification list and equivalence transformations, we present the group classification of the initial class .
Moreover, equivalence transformations appear to be powerful enough to present the group classification for a much wider class of variable coefficient mKdV equations of the form $$\label{EqvcmKdV}
u_t+f(t)u^2u_x+g(t)u_{xxx}+h(t)u+(p(t)+q(t)x)u_x+k(t)uu_x+l(t)=0,$$ where all parameters are smooth functions of the variable $t$, $fg\neq0$ and the parameters $f, h, k$ and $l$ satisfy the condition $$\label{condition_reduce}
2lf=k_t+kh-k\frac{f_t}f.$$ This result can easily be obtained due to the fact that the group classification problem for class can be reduced to the analogous problem for class with $h=0$ if and only if condition holds. Namely, equations whose coefficients satisfy are transformed to equations from class with $h=0$ by the point transformations (see Remark 1 for details). Equations from class are important for applications and, in particular, describe the atmospheric blocking phenomenon .
An interesting property of the above classes of differential equations is that they are normalized, i.e., all admissible point transformations within these classes are generated by transformations from the corresponding equivalence groups. Therefore, there are no additional equivalence transformations between cases of the classification lists, which are constructed using the equivalence relations associated with the corresponding equivalence groups. In other words, the same lists represent the group classification results for the corresponding classes up to the general equivalence with respect to point transformations.
Recently the authors of [@john10a] obtained a partial group classification of class (the notation $a$ and $b$ was used there instead of $h$ and $g$, respectively). The reason for the failure was neglecting the opportunity to use equivalence transformations. This is why only some cases of Lie symmetry extensions were found, namely the cases with $h={\mathop{\rm const}\nolimits}$ and $h=1/t.$
In this paper we first carry out the group classification problems for class and for the subclass of singled out by condition , up to the respective equivalence groups. (Throughout the paper we use the notation$~\eqref{EqvcmKdV}|_\eqref{condition_reduce}$ for the latter subclass.) Then, using the obtained classification lists and equivalence transformations, we present group classifications of classes and$~\eqref{EqvcmKdV}|_\eqref{condition_reduce}$ without the simplification, by equivalence transformations, of either the equations admitting extensions of Lie symmetry algebras or of these algebras themselves. The extended classification lists can be useful for applications and are convenient for comparison with the results of [@john10a].
Note that group classifications for more general classes that include class were carried out in . Nevertheless, those results, obtained up to a very wide equivalence group, seem to be inconvenient for deriving group classifications for classes and .
In Section 5 we show how equivalence transformations can be used to construct exact solutions for those equations from class and its subclass which are reducible to the standard mKdV equation.
Equivalence transformations and mapping of class (1)\
to a simpler one
=====================================================
An important step in solving a group classification problem is the construction of the equivalence group of the class of differential equations under consideration. The usage of transformations from the related equivalence group often gives an opportunity to essentially simplify a group classification problem and to present the final results in a closed and concise form. Moreover, sometimes this appears to be a crucial point in the exhaustive solution of such problems [@IPS2007a; @VJPS2007; @VPS_2009].
There exist several kinds of equivalence groups. The *usual equivalence group* of a class of differential equations consists of the nondegenerate point transformations in the space of the independent and dependent variables and the arbitrary elements of the class, such that the transformation components for the variables do not depend on the arbitrary elements and each equation from the class is mapped by these transformations to equations from the same class. If any point transformation between two fixed equations from the class belongs to its (usual) equivalence group, then this class is called *normalized* in the usual sense. See theoretical background on normalized classes in .
We find the equivalence group $G^\sim_{1}$ of class using the results obtained in for a more general class of variable coefficient mKdV equations. Namely, a hierarchy of normalized subclasses of the general third-order evolution equations was constructed in . The equivalence group for the normalized class of variable coefficient mKdV equations (without restrictions on the values of the arbitrary elements), as well as the criterion of reducibility of equations from this class to the standard mKdV equation, were found therein.
The equivalence group $G^\sim$ of class consists of the transformations $$\label{EqvcKdVEquivGroup}
\tilde t=\alpha(t),\quad
\tilde x=\beta(t)x+\gamma(t),\quad
\tilde u=\theta(t)u+\psi(t),\quad$$ where $\alpha$, $\beta$, $\gamma$, $\theta$ and $\psi$ run through the set of smooth functions of $t$, $\alpha_t\beta\theta\ne0$. The arbitrary elements of are transformed by the formulas $$\label{EqvcKdVEquivGroupArbitraryElementTrans}
\begin{split}&
\tilde f=\frac{\beta}{\alpha_t\theta^2}f, \quad
\tilde g=\frac{\beta^3}{\alpha_t}g, \quad
\tilde h=\frac1{\alpha_t}\left(h-\frac{\theta_t}\theta\right), \\&
\tilde q=\frac1{\alpha_t}\left(q+\frac{\beta_t}\beta\right), \quad
\tilde p=\frac1{\alpha_t}\left(\beta p-\gamma q+\beta\frac{\psi^2}{\theta^2} f-\beta\frac\psi\theta k+\gamma_t-\gamma\frac{\beta_t}\beta\right), \\&
\tilde k=\frac\beta{\alpha_t\theta}\left(k-2\frac\psi\theta f\right), \quad
\tilde l=\frac1{\alpha_t}\left(\theta l-\psi h-\psi_t+\psi\frac{\theta_t}\theta\right).
\end{split}$$ The criterion of reducibility to the standard mKdV equation obtained in reads as follows.
An equation of form is similar to the standard (constant coefficient) mKdV equation if and only if its coefficients satisfy the conditions $$\label{EqvcmKdVEquivToKdV}
2h-2q=\frac{f_t}f-\frac{g_t}g, \quad
2lf=k_t+kh-k\frac{f_t}f.$$
Class is a subclass of class singled out by the conditions $f=1$ and $p=q=k=l=0.$ Substituting these values of the functions $f, p, q, k$ and $l$ to we obtain the following assertion.
An equation from class is reduced to the standard mKdV equation by a point transformation if and only if $$2h=-\frac{g_t}g,$$ i.e. if and only if $g(t)=c_0\exp(-2\int h(t) dt),$ where $c_0$ is an arbitrary nonzero constant.
As class (2) is normalized, its equivalence group $G^\sim$ generates the entire set of admissible (form-preserving) transformations for this class. Therefore, to describe the set of admissible transformations for class we should set $\tilde f=f=1,$ $\tilde p=p=\tilde q=q=\tilde k=k=\tilde l=l=0$ in and solve the resulting equations with respect to the transformation parameters. It appears that the projection of the obtained transformations to the space of the variables $t, x$ and $u$ can be applied to an arbitrary equation from class . This means that the set of admissible transformations of class is generated by transformations from its equivalence group, and therefore this class is also normalized.
Summing up the above consideration, we formulate the following theorem.
Class is normalized. The equivalence group $G^{\sim}_1$ of this class consists of the transformations $$\begin{array}{l}
\tilde t=\beta\int\dfrac{dt}{\theta(t)^2},\quad
\tilde x=\beta x+\gamma, \quad \tilde u=\theta(t)u,\\[1ex]
\tilde h=\dfrac{\theta}{\beta}\left(\theta h-\theta_t\right), \quad
\tilde g=\beta^2\theta^2 g,
\end{array}$$ where $\beta$ and $\gamma$ are arbitrary constants, $\beta\neq0$ and the function $\theta$ is an arbitrary nonvanishing smooth function of the variable $t$.
The parameterization of transformations from the equivalence group $G^{\sim}_1$ by the arbitrary function $\theta(t)$ allows us to simplify the group classification problem for class via reducing the number of arbitrary elements. For example, we can gauge arbitrary elements via setting either $h=0$ or $g=1$. Thus, the gauge $h=0$ can be made by the equivalence transformation $$\label{gauge_h=0}
\tilde t=\int e^{-2\int h(t)\, dt}dt,\quad \tilde x=x, \quad
\tilde u=e^{\int h(t)\, dt}u,$$ that connects equation with the equation $\tilde u_{\tilde t}+\tilde u^2\tilde u_{\tilde x}+\tilde g(\tilde t){\tilde u}_{\tilde x\tilde x\tilde x}=0.$ The new arbitrary element $\tilde g$ is expressed via $g$ and $h$ in the following way: $$\tilde g(\tilde t)=e^{2\int h(t)\, dt}g(t).$$
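As a simple consistency check, the sketch below (using sympy) applies this transformation to an equation whose coefficients satisfy the reducibility criterion above, for the particular choice of a constant $h(t)=h_{0}$ (this choice is ours, made only for illustration), and confirms that the transformed coefficient $\tilde g$ is constant.

```python
import sympy as sp

t, c0, h0 = sp.symbols('t c_0 h_0', positive=True)

h = h0                                    # assumed: constant coefficient h(t) = h_0
H = sp.integrate(h, t)                    # int h dt = h_0*t
g = c0 * sp.exp(-2 * H)                   # g(t) satisfying the reducibility criterion

t_new = sp.integrate(sp.exp(-2 * H), t)   # \tilde t (antiderivative, fixed up to a constant)
g_new = sp.exp(2 * H) * g                 # \tilde g = e^{2 int h dt} g(t)

print(sp.simplify(g_new))                 # -> c_0: a constant-coefficient mKdV equation
print(t_new)                              # -> -exp(-2*h_0*t)/(2*h_0)
```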
This is why without loss of generality we can restrict the study to the class $$\begin{gathered}
\label{vc_mKdV_h=0}
u_t+u^2u_{x}+g(t)u_{xxx}=0,\end{gathered}$$ since all results on symmetries and exact solutions for this class can be extended to class with transformations of the form .
The equivalence group for class can be obtained from Theorem 1 by setting $\tilde h=h=0$. Note that class is also normalized.
The equivalence group $G^{\sim}_0$ of class is formed by the transformations $$\begin{array}{l}
\tilde t=\dfrac{\delta_2}{\delta_4^{\,2}} t+\delta_1,\quad \tilde x=\delta_2x+\delta_3,\quad
\tilde u=\delta_4u, \quad
\tilde g=\delta_2^{\,2}\delta_4^{\,2}g,
\end{array}$$ where $\delta_j,$ $j=1,\dots,4,$ are arbitrary constants, $\delta_2\delta_4\not=0$.
The equivalence algebra $\mathfrak g^\sim$ of class is spanned by the operators ${\partial}_t$, ${\partial}_x$, $t{\partial}_t-\frac12u{\partial}_u-g{\partial}_g$ and $t{\partial}_t+x{\partial}_x+2g{\partial}_g$.
An equation from class is reducible to an equation from class by a point transformation if and only if its coefficients $f, h, k$ and $l$ satisfy the second condition of , i.e., condition . The corresponding transformation from $G^{\sim}$ has the form $$\begin{gathered}
\label{gauge2}
\begin{array}{l}\arraycolsep=0ex
\tilde t=\int fe^{-\int(q+2h)dt}dt,\quad \tilde x=e^{-\int q dt}x-
\int \left(p-\frac{k^2}{4f}\right)e^{-\int q dt}dt,\\
\tilde u=e^{\int h dt}\left(u+\tfrac k{2f}\right),\quad\tilde g=\dfrac gf e^{2\int(h-q)dt}.
\end{array}\end{gathered}$$ In particular, condition implies that all equations from class with $k=l=0$ are reducible to equations from class .
Lie symmetries
==============
We first carry out the group classification of class up to $G_0^\sim$-equivalence. In this way we simultaneously solve the group classification problems for class up to $G^\sim_1$-equivalence and for the class$~\eqref{EqvcmKdV}|_\eqref{condition_reduce}$ up to $G^\sim$-equivalence (see the explanations below). Then, using the obtained classification lists and equivalence transformations, we present group classifications of classes and$~\eqref{EqvcmKdV}|_\eqref{condition_reduce}$ without the simplification of the equations with wider Lie invariance algebras by equivalence transformations. These extended classification lists can be useful for applications and are convenient for comparison with the results of [@john10a].
Group classification of class is carried out in the framework of the classical approach [@Ovsiannikov1982]. All required objects (the equivalence group, the kernel and all inequivalent cases of extension of the maximal Lie invariance algebras) are found.
Namely, we look for operators of the form $Q=\tau(t,x,u)\partial_t+\xi(t,x,u)\partial_x+\eta(t,x,u)\partial_u$, which generate one-parameter groups of point symmetry transformations of equations from class . These operators satisfy the necessary and sufficient condition of infinitesimal invariance, i.e., the action of the $r$-th prolongation $Q^{(r)}$ of $Q$ on the ($r$-th order) differential equation (DE) vanishes identically modulo the DE under consideration. See, e.g., for details. Here we require that $$\label{c1}
Q^{(3)}\big(u_t+u^2u_{x}+g(t)u_{xxx}\big)=0$$ identically, modulo equation .
After elimination of $u_t$ due to , condition (\[c1\]) becomes an identity in eight variables, namely, the variables $t$, $x$, $u$, $u_x$, $u_{xx}$, $u_{tx}$, $u_{txx}$ and $u_{xxx}$. In fact, equation (\[c1\]) is a multivariable polynomial in the variables $u_x$, $u_{tx}$, $u_{xx}$, $u_{txx}$ and $u_{xxx}$. The coefficients of the different powers of these variables must be zero, giving the determining equations on the coefficients $\tau$, $\xi$ and $\eta$. Since equation has a specific form (it is a quasi-linear evolution equation, the right hand side of is a polynomial in the pure derivatives of $u$ with respect to $x$ etc), the forms of the coefficients can be simplified. That is, $\tau=\tau(t),$ $\xi=\xi(t,x)$ and, moreover, $\eta=\zeta(t,x)u$. Then splitting with respect to $u$ leads to the equations $\zeta_t=\zeta_x=0,$ $\xi_t=\xi_{xx}=0$, $\tau_t-\xi_x+2\zeta=0$ and $\tau g_t=(3\xi_x-\tau_t)g$. Therefore, we obtain the coefficients of the infinitesimal operator $Q$ in the form $$\tau=c_1t+c_2,\quad
\xi=c_3 x+c_4, \quad
\eta=\frac12(c_3-c_1)u,$$ and the *classifying* equation which includes arbitrary element $g$ $$\label{Eqvc_mKdV_h=0ClassifyingEq}
\left(c_1t+c_2\right)g_t=(3c_3-c_1)g.$$ The study of the classifying equation leads to the following theorem.
The kernel $\mathfrak g^\cap$ of the maximal Lie invariance algebras of equations from class coincides with the one-dimensional algebra $\langle\partial_x\rangle$. All possible $G_0^\sim$-inequivalent cases of extension of the maximal Lie invariance algebras are exhausted by the cases 1–3 of Table 1.
\[TableLieSymHF\] **Table 1.** The group classification of the class $u_t+u^2u_{x}+g\,u_{xxx}=0$, $g\neq0$.\
  N   $g(t)$                  Basis of $A^{\max}$
  --- ----------------------- ------------------------------------------------------------------
  0   $\forall$               $\partial_x$
  1   $\delta t^n,\,n\neq0$   $\partial_x,\,6t\partial_t+2(n+1)x\partial_x+(n-2) u\partial_u$
  2   $\delta e^{t}$          $\partial_x,\,6\partial_t+2x\partial_x+u\partial_u$
  3   $\delta$                $\partial_x,\,\partial_t,\,3t\partial_t+x\partial_x-u\partial_u$
\
As class is normalized, it is also convenient to use a version of the algebraic method of group classification or to combine this method with the direct investigation of the classifying equation . The procedure which we use is the following. We consider the projection $\mathrm P\mathfrak g^\sim$ of the equivalence algebra $\mathfrak g^\sim$ of class to the space of the variables $(t,x,u)$. It is spanned by the operators ${\partial}_t$, ${\partial}_x$, $D^t=t{\partial}_t-\frac12u{\partial}_u$ and $D^x=x{\partial}_x+\frac12u{\partial}_u$. For any $g$ the maximal Lie invariance algebra of the corresponding equation from class is a subalgebra of $\mathrm P\mathfrak g^\sim$ in view of the normalization of this class and contains the kernel algebra $\mathfrak g^\cap=\langle\partial_x\rangle$. The algebra $\mathrm P\mathfrak g^\sim$ can be represented in the form $\mathrm P\mathfrak g^\sim=\mathfrak g^\cap{\mathbin{\mbox{$\lefteqn{\hspace{.77ex}\rule{.4pt}{1.2ex}}{\in}$}}}\mathfrak g^{\rm ext}$, where $\mathfrak g^\cap$ and $\mathfrak g^{\rm ext}=\langle D^t,D^x,{\partial}_t\rangle$ are an ideal and a subalgebra of $\mathrm P\mathfrak g^\sim$, respectively. Therefore, each extension of the kernel algebra $\mathfrak g^\cap$ is associated with a subalgebra of $\mathfrak g^{\rm ext}$. In other words, to classify Lie symmetry extensions in class up to $G_0^\sim$-equivalence it is sufficient to classify $G_0^\sim$-inequivalent subalgebras of $\mathfrak g^{\rm ext}$ and then to check which subalgebras are consistent with the classifying equation and correspond to maximal extensions. The complete list of $G_0^\sim$-inequivalent subalgebras of $\mathfrak g^{\rm ext}$ is exhausted by the following subalgebras: $$\begin{gathered}
\mathfrak g_0 =\{0\},\quad
\mathfrak g_{1.1}^a=\langle D^t+aD^x \rangle,\quad
\mathfrak g_{1.2}^b=\langle D^x+b{\partial}_t \rangle,\quad
\mathfrak g_{1.3} =\langle {\partial}_t \rangle,\quad
\mathfrak g_{2.1} =\langle D^t,D^x \rangle,\\
\mathfrak g_{2.2}^a=\langle D^t+aD^x,{\partial}_t\rangle,\quad
\mathfrak g_{2.3} =\langle D^x,{\partial}_t \rangle,\quad
\mathfrak g_3 =\langle D^t,D^x,{\partial}_t \rangle,\quad\end{gathered}$$ where the parameter $b$ can be scaled to any appropriate value if it is nonzero. We fix a subalgebra from the above list and substitute the coefficients of each basis element of this subalgebra into the classifying equation . As a result, we obtain a system of ordinary differential equations with respect to the arbitrary element $g$. The systems associated with the subalgebras $\mathfrak g_{1.2}^0$, $\mathfrak g_{2.2}^a$, where $a\ne1/3$, $\mathfrak g_{2.3}$ and $\mathfrak g_3$ are not consistent with the condition $g\ne0$. The extensions given by the subalgebras $\mathfrak g_{1.3}$ and $\mathfrak g_{1.1}^{1/3}$ are not maximal since the maximal Lie invariance algebra in the case $g_t=0$ coincides with $\mathfrak g_3$. The subalgebras $\mathfrak g_0$, $\mathfrak g_{1.1}^a$, $\mathfrak g_{1.2}^b$ and $\mathfrak g_{2.2}^{1/3}$, where $a\ne1/3$ and $b\ne0$, correspond to cases 0, 1, 2 and 3, respectively. The parameter $n$ appearing in case 1 is connected with the parameter $a$ via the formula $n=3a-1$; in case 2 the parameter $b$ is scaled to the value $b=3.$
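The two nontrivial branches of the classifying equation can also be solved directly, which reproduces the forms of $g$ in cases 1 and 2 of Table 1. A short sympy sketch (the symbols are kept general):

```python
import sympy as sp

t = sp.symbols('t')
c1, c2, c3 = sp.symbols('c1 c2 c3', nonzero=True)
g = sp.Function('g')

# c1 != 0: (c1*t + c2) g' = (3*c3 - c1) g  ->  power law in (c1*t + c2), cf. case 1
print(sp.dsolve(sp.Eq((c1*t + c2)*g(t).diff(t), (3*c3 - c1)*g(t))))

# c1 = 0, c2 != 0: c2 g' = 3*c3 g  ->  exponential in t, cf. case 2
print(sp.dsolve(sp.Eq(c2*g(t).diff(t), 3*c3*g(t))))
```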
For any equation from class there exists an image equation in class with respect to transformation (resp. in class$~\eqref{EqvcmKdV}|_\eqref{condition_reduce}$ with respect to transformation ). The equivalence group $G_0^\sim$ of class is induced by the equivalence group $G^\sim_1$ of class which, in turn, is induced by the equivalence group $G^\sim$ of class . This guarantees that Table 1 also presents the group classification list for class up to $G^\sim_1$-equivalence (resp. for the class$~\eqref{EqvcmKdV}|_\eqref{condition_reduce}$ up to $G^\sim$-equivalence). As all of the above classes are normalized, we can state that we have obtained the Lie symmetry classifications of these classes up to general point equivalence. This leads to the following corollary of Theorem 3.
An equation from class (resp. class ) admits a three-dimensional Lie invariance algebra if and only if it is reduced by a point transformation to constant coefficient mKdV equation, i.e., if and only if $g(t)=c_0\exp(-2\int h(t) dt),$ where $c_0$ is an arbitrary nonzero constant (resp. if and only if conditions hold).
There exists a connection between cases 1 and 2 of Table 1 which is realized via a limit process called contraction. Examples of such connections arising as limits between equations and their Lie invariance algebras are presented in . The precise definition and mathematical background for contractions of equations, algebras of symmetries and solutions were first formulated in [@IPS2007b]. To make the limit process from case 1 to case 2, we apply equivalence transformation $\mathcal T$: $\tilde t =n(t-1),$ $\tilde x=n^{1/3}x,$ $\tilde u=n^{-1/3}u$ to the equation $u_t+u^2u_{x}+\delta t^n u_{xxx}=0$ (case 1 of Table 1), which results in the equation $\tilde u_{\tilde t}+{\tilde u}^2\tilde u_{\tilde x}+\delta \left({\tilde t}/n+1\right)^n {\tilde u}_{\tilde x\tilde x\tilde x}=0.$ Then we proceed to the limit $n\rightarrow +\infty$ and obtain the equation $\tilde u_{\tilde t}+{\tilde u}^2\tilde u_{\tilde x}+\delta e^{\tilde t} {\tilde u}_{\tilde x\tilde x\tilde x}=0$ (case 2 of Table 1). The same procedure allows one to obtain contraction between the corresponding Lie invariance algebras.
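The limit behind this contraction is elementary and can be confirmed, e.g., in one line with sympy:

```python
import sympy as sp

t, n = sp.symbols('t n', positive=True)
print(sp.limit((t/n + 1)**n, n, sp.oo))   # -> exp(t): case 1 contracts to case 2
```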
To derive the group classification of class in a form not simplified by equivalence transformations, we first apply equivalence transformations from the group $G_0^\sim$ to the classification list presented in Table 1 and obtain the following extended list:
0\. arbitrary $\tilde g\colon$ $\langle\partial_{\tilde x}\rangle$;
1\. $\tilde g=c_0(\tilde t+c_1)^n\colon$ $\langle\partial_{\tilde x},\,6(\tilde t+c_1)\partial_{\tilde t}+2(n+1)
{\tilde x}\partial_{\tilde x}+(n-2){\tilde u}\partial_{\tilde u}\rangle$;
2\. $\tilde g=c_0e^{m\tilde t}\colon$ $\langle\partial_{\tilde x},\,6\partial_{\tilde t}+2m{\tilde x}\partial_{\tilde x}
+m{\tilde u}\partial_{\tilde u}\rangle$;
3\. $\tilde g=c_0\colon$ $\langle\partial_{\tilde x},\,\partial_{\tilde t},\,3\tilde t\partial_{\tilde t}+
{\tilde x}\partial_{\tilde x}-{\tilde u}\partial_{\tilde u}\rangle$.
Here $c_0$, $c_1$, $m$ and $n$ are arbitrary constants, $c_0m n\neq0$.
Then we find preimages of equations from class $\tilde u_{\tilde t}+\tilde u^2\tilde u_{\tilde x}+\tilde g(\tilde t){\tilde u}_{\tilde x\tilde x\tilde x}=0$ with arbitrary elements collected in the above list with respect to transformation . The last step is to transform basis operators of the corresponding Lie symmetry algebras. The results are presented in Table 2.
\[TableLieSymHF2\] **Table 2.** The group classification of the class $u_t+u^2u_{x}+g\,u_{xxx}+h\,u=0$, $g\neq0$.\
  N   $h(t)$      $g(t)$                                                             Basis of $A^{\max}$
  --- ----------- ------------------------------------------------------------------ -----------------------------------------------------------------------------------------------------------
  0   $\forall$   $\forall$                                                          $\partial_x$
  1   $\forall$   $c_0 \left(\int e^{-2\int h\, dt}dt+c_1\right)^ne^{-2\int h dt}$   $\partial_x,\,H\partial_t+2(n+1)x\partial_x+(n-2-hH) u\partial_u$
  2   $\forall$   $c_0 e^{\int\left( m e^{-2\int h\, dt}-2 h\right) dt}$             $\partial_x,\,6e^{2\int h dt}\partial_t+2m x\partial_x+\left(m-6he^{2\int h dt}\right)u\partial_u$
  3   $\forall$   $c_0 e^{-2\int h dt}$                                              $\partial_x,\,e^{2\int h dt}\left(\partial_t-hu\partial_u\right),\,H\partial_t+2x\partial_x-(2+hH) u\partial_u$
\
Now it is easy to see that Table 2 includes all cases presented in [@john10a] as particular cases.
In a similar way, using transformations we obtain group classification of class$~\eqref{EqvcmKdV}|_\eqref{condition_reduce}$ without simplification by equivalence transformations. The corresponding results are collected in Table 3.
\[TableLieSymHF2\] **Table 3.** The group classification of the class $u_t+f\, u^2u_x+g\,u_{xxx}+h\,u+(p +q\,x)u_x+k\,uu_x+\frac1{2f}\left(k_t+kh-k\frac{f_t}f\right)=0$, $fg\neq0$.\
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
N $g(t)$ Basis of $A^{\max}$
----------------------- ---------------------------------------------------------------- ------------------------------------------------------------------------------------------
  0                       $\forall$                                                        $e^{\int q dt}\partial_x$
  1                       $c_0 fe^{2\int(q- h) dt}\left(\dfrac HF\right)^n$                $e^{\int q dt}\partial_x,\,H\partial_t+
\Bigl[(qH+2n+2)x+H\left(
p-\frac{k^2}{4f}\right)-$
$2(n+1)Q\Bigr]\partial_x+\left[(n-2-hH) u+\frac k{2f}(n-2)-lH\right]\partial_u$
  2                       $c_0 fe^{\int\left( m fe^{-\int(q+2h)\, dt}+2q-2 h\right) dt}$   $e^{\int q dt}\partial_x,\,F\partial_t+\Bigl[(qF+2m)x+F\left(
p-\frac{k^2}{4f}\right)-2mQ\Bigr]\partial_x+$
$\left[(m-hF)u+\frac m2\frac kf-lF\right]\partial_u$
3 $c_0 fe^{2\int(q- h) dt}$ $e^{\int q dt}\partial_x,\,F\Bigl[\partial_t+\left(qx+p-\frac{k^2}{4f}\right)\partial_x-
\left(hu+l\right)\partial_u\Bigr],\,H\partial_t+$
$
\Bigl[(qH+2)x+H\left(
p-\frac{k^2}{4f}\right)-2Q\Bigr]\partial_x-\Bigl[(2+hH) u+\frac k{f}+lH\Bigr]\partial_u$
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
\
Construction of exact solutions using\
equivalence transformations
======================================
A number of recent papers concern the construction of exact solutions to different classes of KdV- or mKdV-like equations using, e.g., such methods as the “generalized $(G'/G)$-expansion method”, the “Exp-function method”, the “Jacobi elliptic function expansion method”, etc. A number of references are presented in . Moreover, we have noticed that the authors of the above papers usually did not use equivalence transformations and carried out complicated calculations to solve systems involving a number of arbitrary functions using computer algebra packages. Nevertheless, in almost all cases exact solutions were constructed only for equations which are reducible to the standard KdV or mKdV equations by point transformations, and usually these were only solutions similar to the well-known one-soliton solutions. In this section we show that the usage of equivalence transformations allows one to obtain more results in a simpler way.
The $N$-soliton solutions of the mKdV equation in the canonical form $$\label{canonical_mKdV}
U_t+6U^2U_{x}+U_{xxx}=0$$ were constructed as early as the seventies using Hirota’s method . The one- and two-soliton solutions of equation have the form $$\begin{gathered}
\label{sol1soliton}
U=a+\frac{k_0^2}{\sqrt{4a^2+k_0^2}\cosh z+2a}, \quad z=k_0x-k_0(6a^2+k_0^2)t+b,\\[2ex]\label{sol2soliton}
U=\frac{e^{\theta_1}\left(1+\dfrac{A}{4a_2^2}\,e^{2\theta_2}\right)+e^{\theta_2}\left(1+
\dfrac{A}{4a_1^2}\,e^{2\theta_1}\right)}{\left(
\dfrac1{2a_1}\,e^{\theta_1}+\dfrac1{2a_2}\,e^{\theta_2}\right)^2+\left(1-\dfrac{A}{4a_1a_2}\,e^{\theta_1+\theta_2}\right)^2},\end{gathered}$$ where $k_0, a, b, a_i, b_i$ are arbitrary constants, $\theta_i=a_ix-a_i^3t+b_i,$ $i=1,2;$ $A=\left(\dfrac{a_1-a_2}{a_1+a_2}\right)^2$. Rational solutions, which can be recovered by taking a long-wave limit of soliton solutions, have also been known for a long time . Thus, the one- and two-soliton solutions give the rational solutions $$\begin{gathered}
\label{sol_rational}
U=a-\frac{4a}{4a^2z^2+1}\quad\mbox{and}\quad
U=a-\frac{12a\left(z^4+\dfrac{3}{2a^2}z^2-\dfrac{3}{16a^4}-24tz\right)}{4a^2\left(
z^3+12t-\dfrac{3}{4a^2}\,z\right)^2+9\left(z^2+\dfrac1{4a^2}\right)^2},\end{gathered}$$ respectively, where $z=x-6a^2t$ and $a$ is an arbitrary constant. These solutions can be found also in . Note that solution and the second solution of are presented in with misprints.
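These solutions are straightforward to verify. The sketch below substitutes the one-soliton solution into $U_{t}+6U^{2}U_{x}+U_{xxx}=0$ with exact symbolic derivatives and evaluates the residual at a few sample points; the particular values of $k_{0}$, $a$, $b$ and of the sample points are arbitrary choices.

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
k0, a, b = sp.Rational(3, 2), sp.Rational(1, 4), sp.Rational(1, 3)   # arbitrary test values

z = k0*x - k0*(6*a**2 + k0**2)*t + b
U = a + k0**2 / (sp.sqrt(4*a**2 + k0**2)*sp.cosh(z) + 2*a)           # one-soliton solution

residual = sp.diff(U, t) + 6*U**2*sp.diff(U, x) + sp.diff(U, x, 3)

for xv, tv in [(0, 0), (1, 2), (-3, sp.Rational(1, 2))]:
    print(float(residual.subs({x: xv, t: tv}).evalf()))              # ~0 up to rounding
```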
Combining the simple transformation $\tilde u=\sqrt6 U$ that connects the form of the mKdV equation with the form $$\label{mKdV_canonical}
\tilde u_{\tilde t}+{\tilde u}^2\tilde u_{\tilde x}+\tilde u_{\tilde x\tilde x\tilde x}=0$$ and transformation , we obtain the formula $$\textstyle u=\sqrt{6}e^{-\int h(t)dt}\,U\left(\int e^{-2\int h(t)\, dt}dt,\,x\right).$$ Using this formula and solutions – we can easily construct exact solutions for the equations of the general form $$\label{mKdV_canonical_preimage}
u_t+u^2u_{x}+e^{-2\int h\, dt}u_{xxx}+hu=0,$$ which are preimages of with respect to transformation . Here $h=h(t)$ is an arbitrary nonvanishing smooth function of the variable $t$.
For example, the two-soliton solution leads to the following solution of $$\begin{gathered}
u=\sqrt{6}e^{-\int h\, dt}\frac{e^{\theta_1}\left(1+\dfrac{A}{4a_2^2}\,e^{2\theta_2}\right)+e^{\theta_2}\left(1+
\dfrac{A}{4a_1^2}\,e^{2\theta_1}\right)}{\left(
\dfrac1{2a_1}\,e^{\theta_1}+\dfrac1{2a_2}\,e^{\theta_2}\right)^2+\left(1-\dfrac{A}{4a_1a_2}\,e^{\theta_1+\theta_2}\right)^2},\end{gathered}$$ where $a_i, b_i$ are arbitrary constants, $\theta_i=a_ix-a_i^3\int e^{-2\int h\, dt}dt+b_i,$ $i=1,2$; $A=\left(\dfrac{a_1-a_2}{a_1+a_2}\right)^2$. In a similar way one can easily construct one-soliton and rational solutions for equations from class .
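The same kind of check works for the mapped solutions. For brevity, the sketch below takes the one-soliton seed and a constant $h(t)=h_{0}$ (both choices are ours, made only for illustration) and verifies numerically that the resulting $u$ satisfies $u_{t}+u^{2}u_{x}+e^{-2\int h\,dt}u_{xxx}+hu=0$.

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
h0 = sp.Rational(1, 5)                         # assumed constant h(t) = h_0
k0, a, b = sp.Rational(3, 2), sp.Rational(1, 4), sp.Rational(1, 3)

H   = h0*t                                     # int h dt
tau = (1 - sp.exp(-2*h0*t)) / (2*h0)           # int e^{-2 int h dt} dt, with tau(0) = 0

def U(T, X):
    """One-soliton solution of U_T + 6 U^2 U_X + U_XXX = 0."""
    z = k0*X - k0*(6*a**2 + k0**2)*T + b
    return a + k0**2 / (sp.sqrt(4*a**2 + k0**2)*sp.cosh(z) + 2*a)

u = sp.sqrt(6) * sp.exp(-H) * U(tau, x)        # mapped solution of the preimage equation
g = sp.exp(-2*H)                               # g(t) = e^{-2 int h dt}

residual = sp.diff(u, t) + u**2*sp.diff(u, x) + g*sp.diff(u, x, 3) + h0*u

for xv, tv in [(0, 0), (2, 1), (-1, sp.Rational(1, 2))]:
    print(float(residual.subs({x: xv, t: tv}).evalf()))   # ~0 up to rounding
```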
More complicated transformation of the form $$\begin{gathered}
\textstyle
u=\sqrt{6}e^{-\int hdt}\,U\left(\int fe^{-\int(q+2h)dt}dt,\,e^{-\int q dt}x-
\int \left(p-\frac{k^2}{4f}\right)e^{-\int q dt}dt\right)-\dfrac k{2f}\end{gathered}$$ allows us to use solutions (11)–(13) of equation for construction of exact solutions of equations of the form $$\label{l_eq}
u_t+f u^2u_x+fe^{2\int(q-h)dt}u_{xxx}+h\, u+(p+q x)u_x+k\, uu_x+\frac1{2f}\left(\!k_t+kh-k\frac{f_t}f\!\right)=0,$$ which are preimages of equation with respect to transformation . Here $f, h, k, p$ and $q$ are arbitrary smooth functions of the variable $t$, $f\neq0.$
For example, the solution of obtained from the one-soliton solution (11) has the form $$\begin{gathered}
u=\sqrt{6}e^{-\int hdt}\left(a+\frac{k_0^2}{\sqrt{4a^2+k_0^2}\cosh z+2a}\right)-\dfrac k{2f}.\end{gathered}$$Here $z=k_0e^{-\int q dt}x-
k_0\int \left(p-\frac{k^2}{4f}\right)e^{-\int q dt}dt-k_0(6a^2+k_0^2)\int fe^{-\int(q+2h)dt}dt+b,$ where $k_0, a$ and $b$ are arbitrary constants. In a similar way one can easily construct two-soliton and rational solutions for equations from class .
Conclusion
==========
In this paper, group classification problems for class and for two more classes of variable coefficient mKdV equations which are reducible to class by point transformations have been carried out with respect to the corresponding equivalence groups. Using the normalization property, it is proved that these classifications coincide with the ones carried out up to general point equivalence. The classification lists extended by equivalence transformations are also presented. Such lists are convenient for applications.
It is shown that the usage of equivalence groups is a crucial point for the exhaustive solution of the problem. Moreover, equivalence transformations allow one to construct exact solutions of different types in a much easier way than by direct solving. These transformations can also be used to obtain conservation laws, Lax pairs and other related objects for equations reducible to well-known equations of mathematical physics by point transformations, without direct calculations.
Acknowledgments {#acknowledgments .unnumbered}
---------------
The author thanks Prof. Roman Popovych for useful discussions and valuable comments.
[99]{} =-.3ex
M.J. Ablowitz and J. Satsuma, Solitons and rational solutions of nonlinear evolution equations, [*J. Math. Phys.*]{} [**19**]{} (1978) 2180–2186.
M.J. Ablowitz and H. Segur, [*Solitons and Inverse Scattering Transform*]{}, Cambridge University Press, 1981.
G.W. Bluman and S. Kumei, [*Symmetries and Differential Equations*]{}, Springer, New York, 1989.
G.W. Bluman, G.J. Reid and S. Kumei, New classes of symmetries for partial differential equations, [*J. Math. Phys.*]{} [**29**]{} (1988) 806–811.
E. Dos Santos Cardoso-Bihlo, A. Bihlo and R.O. Popovych, Enhanced preliminary group classification of a class of generalized diffusion equations, [*Commun. Nonlinear Sci. Numer. Simulat.*]{} [**16**]{} (2011) 3622–3638, arXiv:1012.0297.
F. Güngör, V.I. Lahno and R.Z. Zhdanov, Symmetry classification of KdV-type nonlinear evolution equations, [*J. Math. Phys.*]{} [**45**]{} (2004) 2280–2313, arXiv:nlin/0201063.
N.M. Ivanova, R.O. Popovych and C. Sophocleous, Group analysis of variable coefficient diffusion–convection equations. I. Enhanced group classification, [*Lobachevskii J. Math.*]{} [**31**]{} (2010) 100–122, arXiv:0710.2731.
N.M. Ivanova, R.O. Popovych and C. Sophocleous, Group analysis of variable coefficient diffusion–convection equations. II. Contractions and Exact Solutions, 19 p., arXiv:0710.3049.
A.G. Johnpillai and C.M. Khalique, Lie group classification and invariant solutions of mKdV equation with time-dependent coefficients, [*Commun. Nonlinear Sci. Numer. Simulat.*]{} [**16**]{} (2011) 1207–1215.
J.G. Kingston, On point transformation of evolution equations, [*J. Phys. A: Math. Gen.*]{} [**24**]{} (1991) L769–L774.
J.G. Kingston and C. Sophocleous, On form-preserving point transformations of partial differential equations, [*J. Phys. A: Math. Gen.*]{} [**31**]{} (1998) 1597–1619.
V.I. Lahno, S.V. Spichak, V.I. Stognii, [*Symmetry analysis of evolution type equations*]{}, Institute of Computer Science, Moscow-Izhevsk, 2004 (in Russian).
B.A. Magadeev, On group classification of nonlinear evolution equations, [*Algebra i Analiz*]{} [**5**]{} (1993) 141–156 (in Russian); translation in [*St. Petersburg Math. J.*]{} [**5**]{} (1994) 345–359.
P. Olver, [*Applications of Lie groups to differential equations*]{}, Springer-Verlag, New York, 1986.
H. Ono, Algebraic soliton of the modified Korteweg-de Vries equation, [*J. Phys. Soc. Jpn.*]{} [**41**]{} (1976) 1817–1818.
L.V. Ovsiannikov, [*Group analysis of differential equations*]{}, Academic Press, New York, 1982.
A.D. Polyanin and V.F. Zaitsev, [*Handbook of Nonlinear Partial Differential Equations*]{}, Chapman & Hall/CRC Press, Boca Raton, 2004.
R.O. Popovych, Classification of admissible transformations of differential equations, in [*Collection of Works of Institute of Mathematics*]{} (Institute of Mathematics, Kyiv, Ukraine) [**3**]{}, no. 2 (2006) 239–254. (Available at http://www.imath.kiev.ua/$\sim$appmath/Collections/collection2006.pdf)
R.O. Popovych and N.M. Ivanova, Potential equivalence transformations for nonlinear diffusion-convection equations, [*J. Phys.A.*]{} [**38**]{} (2005) 3145–3155, arXiv:math-ph/0402066.
R.O. Popovych, M. Kunzinger and H. Eshraghi, Admissible point transformations and normalized classes of nonlinear Schrödinger equations, [*Acta Appl. Math.*]{} [**109**]{} (2010) 315–359, arXiv:math-ph/0611061.
R.O. Popovych and O.O. Vaneeva, More common errors in finding exact solutions of nonlinear differential equations: Part I, [*Commun. Nonlinear Sci. Numer. Simulat.*]{} [**15**]{} (2010) 3887–3899, arXiv:0911.1848.
X.Y. Tang, J. Zhao, F. Huang, S.Y. Lou, Monopole blocking governed by a modified KdV type equation, [*Studies in Appl. Math.*]{} [**122**]{} (2009) 295–304, arXiv:0812.0134.
O.O. Vaneeva, A.G. Johnpillai, R.O. Popovych and C. Sophocleous, Enhanced group analysis and conservation laws of variable coefficient reaction–diffusion equations with power nonlinearities, [*J. Math. Anal. Appl.*]{} [**330**]{} (2007) 1363–1386; arXiv:math-ph/0605081.
O.O. Vaneeva, R.O. Popovych and C. Sophocleous, Enhanced group analysis and exact solutions of variable coefficient semilinear diffusion equations with a power source, [*Acta Appl. Math.*]{} [**106**]{} (2009) 1–46, arXiv:0708.3457.
---
abstract: 'We analyze SDSS spectra of 568 obscured luminous quasars. The \[OIII\]$\lambda$5007Å emission line shows blueshifts and blue excess, indicating that some of the narrow-line gas is undergoing an organized outflow. The velocity width containing 90% of line power ranges from 370 to 4780 km/sec, suggesting outflow velocities up to $\sim$2000 km/sec, and is strongly correlated with the radio luminosity among the radio-quiet quasars. We propose that radio emission in radio-quiet quasars is due to relativistic particles accelerated in the shocks within the quasar-driven outflows; star formation in quasar hosts is insufficient to explain the observed radio emission. The median radio luminosity of the sample of $\nu L_{\nu}$\[1.4GHz\]$=10^{40}$ erg/sec suggests a median kinetic luminosity of the quasar-driven wind of $L_{\rm wind}=3\times 10^{44}$ erg/sec, or about 4% of the estimated median bolometric luminosity $L_{\rm bol}=8\times 10^{45}$ erg/sec. Furthermore, the velocity width of \[OIII\] is positively correlated with mid-infrared luminosity, which suggests that outflows are ultimately driven by the radiative output of the quasar. Emission lines characteristic of shocks in quasi-neutral medium increase with the velocity of the outflow, which we take as evidence of quasar-driven winds propagating into the interstellar medium of the host galaxy. Quasar feedback appears to operate above the threshold luminosity of $L_{\rm bol}\sim 3\times 10^{45}$ erg/sec.'
author:
- 'Nadia L. Zakamska'
- 'Jenny E. Greene'
bibliography:
- 'master.bib'
title: 'Quasar feedback and the origin of radio emission in radio-quiet quasars'
---
Introduction {#sec:intro}
============
Black hole feedback – the strong interaction between the energy output of supermassive black holes and their surrounding environments – is routinely invoked to explain the absence of overly luminous galaxies, the black hole vs. bulge correlations and the similarity of black hole accretion and star formation histories [@tabo93; @silk98; @spri05; @hopk06]. After years of intense observational effort, specific examples of black-hole-driven winds have now been identified using a variety of observational techniques, both at low and at high redshifts [@nesv06; @arav08; @nesv08; @moe09; @dunn10; @alex10; @harr12; @fabi12].
How these outflows are launched near the black hole and established over the entire host galaxy remains a topic of active research. In particular, it is becoming clear that as the outflow impacts an inhomogeneous interstellar medium of the galaxy, the winds are expected to contain gas at a wide range of physical conditions, and these different “phases” of the winds require different types of observations [@veil05]. As a result, the determination of the physical parameters of these outflows – including such basic parameters as the mass, the momentum and the kinetic energies they carry – remains challenging. In the last few years, a lot of progress in this area has been made by observations of the coldest components of the outflows which are in the form of neutral or even molecular gas [@fisc10; @feru10; @rupk11; @stur11; @aalt12; @veil13; @rupk13a; @rupk13b; @cico14; @sun14]. It remains unclear how common this cold component is in winds driven by a powerful active nucleus and how the mass, momentum and energy carried by the wind are distributed across the different phases.
Several years ago, we embarked on an observational program to determine whether radio-quiet, luminous quasars have observable effects on their galaxy-wide environment. One of our lines of investigation is to determine the extent and kinematics of the warm ($T\sim 10^4$ K) ionized gas – the so-called narrow-line region of quasars. We simplify the observational task by looking at obscured quasars [@zaka03; @reye08] – those where the line of sight to the nucleus is blocked by intervening material, allowing us to study the distribution of matter in the galaxy unimpeded by the bright central source. We find strong evidence that ionized gas is extended over scales comparable to or exceeding that of the host galaxy; furthermore, it is kinematically disturbed and is not in equilibrium with the gravitational potential of the galaxy [@gree09; @gree11; @hain13].
More recently, we surveyed a sample of obscured radio-quiet quasars using a spectroscopic integral field unit [@liu13a; @liu13b]. We found extended ionized gas encompassing the entire host galaxy (median diameter of nebulae of 28 kpc), suggestive of wide-angle outflows, and determined kinetic energies of these outflows to be well in excess of $10^{44}$ erg/sec, with a median 2% conversion rate from the bolometric luminosity to the kinetic energy of warm ionized gas; more energy can be carried by other components. Furthermore, we identified several candidate objects where the wind has “broken out” of the denser regions of the galaxy and is now expanding into the intergalactic medium, sometimes in bubble-like structures [@gree12; @liu13b].
These observations demonstrate the presence of extended ionized gas in host galaxies of type 2 quasars, which is apparently out of dynamical equilibrium with the host galaxy and is likely in an outflow on the way out of the host. In this paper, we examine spectra of several hundred obscured quasars and we study the relationships between gas kinematics and other physical properties of these objects. In Section \[sec:data\] we describe the sample selection, the dataset and the measurements. In Section \[sec:optical\], we conduct kinematic analysis of the optical emission lines. In Section \[sec:multiwv\] we discuss the relationships between multi-wavelength properties of quasars and kinematic measures of their ionized gas nebulae. In Section \[sec:compo\] we present composite spectra and discuss trends in weak emission lines. We present qualitative models for radio emission and emission lines in Section \[sec:discussion\], and we summarize in Section \[sec:conclusions\].
We use a $h$=0.7, $\Omega_m$=0.3, $\Omega_{\Lambda}$=0.7 cosmology throughout this paper. SDSS uses vacuum wavelengths, but for consistency with previous literature we use air wavelengths in Angstroms to designate emission lines. Wavelengths are obtained from NIST [@kram13] and Atomic Line List[^1] and converted between air and vacuum as necessary using @mort91. Objects are identified as SDSS Jhhmm+ddmm, with full coordinates given in the catalog by @reye08. We use ‘1D’, ‘2D’ and ‘3D’ abbreviations for one-, two-, and three-dimensional values.
Data and measurements {#sec:data}
=====================
Sample selection and host galaxy subtraction {#sec:selection}
--------------------------------------------
The obscured quasar candidates studied here were selected from the spectroscopic data of the Sloan Digital Sky Survey [@york00] based on their emission line ratios and widths to be the luminous analogs of Seyfert 2 galaxies [@zaka03]; the most recent sample contains 887 objects at $z<0.8$ [@reye08]. Infrared observations demonstrate that these sources have high bolometric luminosities (up to $10^{47}$ erg/sec, @zaka04 [@zaka08; @liu09]). Chandra and XMM-Newton observations show that they contain luminous X-ray sources with large amounts of obscuration along the line of sight [@zaka04; @ptak06; @vign10; @jia13]. HST imaging and ground-based spectropolarimetry demonstrate the presence of scattered light – a classical signature of a buried broad-line active nucleus [@anto85; @zaka05; @zaka06]. In other words, all follow-up observations thus far are consistent with these objects being luminous obscured quasars. Our estimate of the number density of these objects suggests that they are at least as common as unobscured quasars at the same redshifts and line luminosities [@reye08].
In this paper we examine the kinematic structure of narrow emission line gas in 568 objects (out of the entire sample of 887 by @reye08) selected to have \[OIII\] luminosities above $10^{8.5}L_{\odot}$. Their distribution in \[OIII\] luminosity / redshift space is shown in Figure \[pic\_sample\]. At the median redshift of the sample presented here, $z=0.397$, the SDSS fiber (3$''$ in diameter) covers the galaxies out to 8 kpc away from the center.
![The distribution of the entire @reye08 sample of type 2 quasars in the redshift / \[OIII\] luminosity space (grey) and the 568 objects with kinematic analysis in this paper (black).[]{data-label="pic_sample"}](picture_selection1.eps)
Ideally, we would like to measure the kinematics of the ionized gas relative to the host galaxy potential. The SDSS spectroscopic pipeline provides a high-quality redshift based on fits of observed spectra to a variety of library templates. In the cases of our objects, this pipeline latches onto the strong emission lines, so the redshifts are affected by narrow line kinematics and may be offset from the redshifts of the host galaxies. Thus, our first step is to determine the host galaxy redshifts based on the absorption features produced in stellar photospheres.
Even though the quasars are obscured, the continuum from the host galaxy is very difficult to detect. One component of the continuum is due to the stars in the host galaxy. Furthermore, while the direct emission from the quasar is completely blocked in most cases, some quasar light reaches the observer after scattering off of the interstellar medium of the host galaxy and becomes an important continuum contribution at high quasar luminosities. Finally, in very luminous cases the Balmer continuum produced by free-bound transitions in the extended narrow emission line gas is also seen [@zaka05], and the emission lines tend to confuse the search for stellar features in the continuum.
We use the stellar velocity dispersion code described in detail in @gree06 and @gree09 to model the host galaxy continuum and to establish the systemic velocity. The continuum of each quasar is modeled as the linear combination of three stellar models plus a power-law component to mimic a possible scattered light contribution [@zaka06; @liu09]. For templates, we use @bruz03 single stellar population models rather than individual stellar spectra.
We first shift the spectra into the approximate rest-frame by using the SDSS pipeline redshifts. Then we fit the continuum over the wavelength range 3680-5450Å, allowing for the stellar models to have a velocity of up to 300 km/sec relative to the SDSS frame and to be broadened with a Gaussian function that represents the stellar velocity dispersion of the host. When the host galaxy is well detected, typical stellar absorption features visible in the spectra include Ca H+K, G-band, and Mg I[*b*]{} lines. After this procedure, the entire host continuum is subtracted and the spectrum is shifted into the fine-tuned host galaxy frame.
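The continuum model just described amounts to a constrained linear fit of broadened stellar templates plus a power law. The sketch below illustrates the idea only; it is not the code of @gree06 and @gree09, and it assumes templates and data share a common log-wavelength grid with SDSS-like 69 km/sec pixels, skipping the velocity-shift search.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import nnls

def broaden(template, sigma_kms, pix_kms=69.0):
    """Broaden a template sampled on a log-lambda grid (pix_kms per pixel)
    by a Gaussian with dispersion sigma_kms (the stellar velocity dispersion)."""
    return gaussian_filter1d(template, sigma_kms / pix_kms)

def fit_continuum(flux, templates, sigma_kms):
    """Non-negative linear combination of broadened stellar templates plus a
    power law mimicking scattered quasar light; returns the model continuum."""
    npix = flux.size
    powerlaw = np.linspace(1.0, 0.5, npix)      # stand-in for a power-law component
    basis = np.vstack([broaden(t, sigma_kms) for t in templates] + [powerlaw])
    coeffs, _ = nnls(basis.T, flux)             # amplitudes constrained to be >= 0
    return basis.T @ coeffs

# toy call: three synthetic templates and a noisy 'observed' continuum
rng = np.random.default_rng(0)
templates = np.ones((3, 500)) + 0.01 * rng.standard_normal((3, 500))
observed = 1.2 * np.ones(500) + 0.02 * rng.standard_normal(500)
model = fit_continuum(observed, templates, sigma_kms=200.0)
```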
We are able to identify some host galaxy features in 271 objects. In most of these cases, the absorption features are so weak that we do not consider the reported host velocity dispersions to be accurate. The greater benefit of the host subtraction procedure is that in these 271 objects we can analyze narrow line kinematics relative to an accurately determined host frame. In the remaining objects we find no evidence for stellar features, so we subtract a featureless continuum. For the majority of objects, our workable wavelength range covers \[OII\]$\lambda\lambda$3726,3729, \[OIII\]$\lambda$5007 and everything in between.
Fitting functions and non-parametric measurements
-------------------------------------------------
We aim to use non-parametric measures that do not depend strongly on the specific fitting procedure. We need robust measures or robust analogs of the first four moments of the line profile: typical average velocity, velocity dispersion, and the skewness and the kurtosis of the velocity distribution. We fit the profiles with one to three Gaussian components in velocity space, but in principle other fitting functions could be used. We assign no particular physical significance to any of the parameters of the individual components; rather, the goal is to obtain a noiseless approximation to the velocity profile.
We use relative change in reduced $\chi^2$ values to evaluate which fit should be accepted; if adding an extra Gaussian component leads to a decrease in $\chi^2$ of $<$10%, we accept the fit with a smaller number of components. The single-Gaussian fit is accepted for 36 objects, a two-Gaussian fit is accepted for 132 and the remaining 400 objects are fit with three Gaussians. Almost all objects that have high signal-to-noise observations have reduced $\chi^2$ values that are too high to be statistically acceptable and thus would require either a larger number of components or different fitting functions to be fitted to statistical perfection. Fortunately, the non-parametric measures that we derive are rather robust: adding the third Gaussian component changes our second moment measure $w_{80}$ by less than 10% in 83% of objects. Examples of line fits are shown in Figure \[pic\_example\]. The objects selected for this figure have the top ten highest values of $w_{80}$ (our analog of the velocity dispersion defined below).
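As an illustration of this model-selection rule, the sketch below (not the actual fitting code used here) adds Gaussian components one at a time and keeps the simpler model unless the reduced $\chi^2$ drops by more than 10%; the starting guesses and the use of `scipy.optimize.curve_fit` are arbitrary choices. In practice the \[OIII\]$\lambda\lambda$4959,5007 doublet is fit jointly with a fixed amplitude ratio, which the sketch omits.

```python
import numpy as np
from scipy.optimize import curve_fit

def multi_gauss(v, *p):
    """Sum of N Gaussians in velocity space; p = (amp1, cen1, sig1, amp2, ...)."""
    model = np.zeros(v.shape)
    for amp, cen, sig in zip(p[0::3], p[1::3], p[2::3]):
        model += amp * np.exp(-0.5 * ((v - cen) / sig) ** 2)
    return model

def fit_profile(v, flux, err, max_components=3, improvement=0.10):
    """Add Gaussian components one at a time; keep the simpler model unless
    the reduced chi^2 decreases by more than `improvement` (10% in the text)."""
    best_popt, best_chi2 = None, np.inf
    for n in range(1, max_components + 1):
        p0 = []
        for i in range(n):                      # crude, spread-out starting guesses
            p0 += [flux.max() / (i + 1), 200.0 * (i - 0.5 * (n - 1)), 150.0 * (i + 1)]
        popt, _ = curve_fit(multi_gauss, v, flux, p0=p0, sigma=err, maxfev=20000)
        chi2 = np.sum(((flux - multi_gauss(v, *popt)) / err) ** 2) / (flux.size - 3 * n)
        if chi2 < (1.0 - improvement) * best_chi2:
            best_popt, best_chi2 = popt, chi2   # the extra component is justified
        else:
            break
    return best_popt, best_chi2
```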
![Spectra of the \[OIII\]$\lambda\lambda$4959,5007 doublet in the ten objects with the highest $w_{80}$ values ($2314\le w_{80}\le 2918$ km/sec), with their multi-Gaussian fits. The two lines in the doublet are fit simultaneously, under the assumption that the kinematic structure of both lines is the same and that the ratio of amplitudes is 0.337. Dashed lines show the positions of $v_{10}$, $v_{50}$ and $v_{90}$. []{data-label="pic_example"}](picture_selection11.eps)
Armed with fitting functions performed in velocity space $f(v)$, we construct the normalized cumulative velocity distribution $F(v)=\int_{-\infty}^vf(v'){\rm d}v'/\int_{-\infty}^{+\infty}f(v'){\rm d}v'$. Since the velocity profile is a noiseless non-negative function, $F(v)$ is monotonically increasing. We then determine the velocities at which 5%, 10%, 25%, 50%, 75%, 90% and 95% of the line flux accumulates. The median velocity $v_{50}$ is the solution of the equation $F(v)=0.5$. The width comprising 90% of the flux is $w_{90}=v_{95}-v_{05}$, the width at 80% is $w_{80}=v_{90}-v_{10}$ and the width comprising 50% of the flux is $w_{50}=v_{75}-v_{25}$. All these values have dimensions of velocity (km/sec). For a Gaussian profile, the value $w_{80}$ is close to the conventionally used full width at half maximum ($w_{80}=2.563\sigma=1.088$FWHM; $w_{90}=3.290\sigma$). For a typical object in our sample (median $w_{80}=752$ km/sec) the instrumental dispersion of the SDSS ($\sigma_{\rm inst}=70$ km/sec) contributes only a few per cent to the line width [@liu13b], and as the line profiles are typically non-Gaussian, we do not attempt to deconvolve the resolution, except in a couple of explicitly noted cases where we use Gaussian components individually.
We can measure the asymmetry of the velocity profile relative to the median velocity by computing a dimensionless relative asymmetry $R=((v_{95}-v_{50})-(v_{50}-v_{05}))/(v_{95}-v_{05})$. Negative values correspond to cases where the blueshifted wing of the line extends to higher velocities than the redshifted one, and positive values correspond to cases where the redshifted wing dominates. This measure is a non-parametric analog of skewness and is equal to 0 for any symmetric profile (including a single Gaussian). Furthermore, we can measure the prominence of the wings of the profile, or the non-parametric analog of the kurtosis, by computing $r_{9050}\equiv w_{90}/w_{50}$. For a Gaussian profile, this value is equal to 2.4389. Values higher than this indicate profiles with relatively more extended wings than a Gaussian function: for example, a Lorentzian profile $f(v)=1/(\gamma^2+v^2)$ (where $\gamma$ is the measure of the profile width) has $r_{9050}=6.3138$. Values lower than the Gaussian value indicate a profile with a stronger peak-to-wings ratio and are rarely encountered in our sample.
Finally, we compute the absolute asymmetry of the profile, which is $A=$(flux($v>0$)-flux($v<0$))/total flux. In terms of the normalized cumulative velocity distribution, $A=1-2F(0)$. This asymmetry is dimensionless and it is positive for profiles with more flux at redshifted wavelengths than at blueshifted wavelengths.
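All of the non-parametric measures above are percentiles of the normalized cumulative distribution $F(v)$ of a noiseless model profile, so they can be computed in a few lines. The sketch below assumes the profile is tabulated on a regular velocity grid in the host frame; the printed sanity check recovers $w_{80}\simeq 2.563\sigma$ for a single Gaussian, as quoted above.

```python
import numpy as np

def nonparametric_measures(v, f):
    """v: velocity grid (km/s, increasing); f: noiseless model profile (>= 0).
    Returns the percentile-based measures defined in the text."""
    cdf = np.cumsum(f)
    cdf = cdf / cdf[-1]                                   # normalized F(v)
    pct = lambda q: np.interp(q, cdf, v)                  # velocity where F(v) = q
    v05, v10, v25, v50, v75, v90, v95 = (pct(q) for q in
        (0.05, 0.10, 0.25, 0.50, 0.75, 0.90, 0.95))
    w50, w80, w90 = v75 - v25, v90 - v10, v95 - v05
    R = ((v95 - v50) - (v50 - v05)) / (v95 - v05)         # relative asymmetry
    r9050 = w90 / w50                                     # kurtosis-like shape parameter
    A = 1.0 - 2.0 * np.interp(0.0, v, cdf)                # absolute asymmetry, 1 - 2F(0)
    return dict(v50=v50, w50=w50, w80=w80, w90=w90, R=R, r9050=r9050, A=A)

# sanity check on a single Gaussian with sigma = 300 km/s: w80 should be ~2.563 sigma
v = np.linspace(-5000.0, 5000.0, 20001)
print(nonparametric_measures(v, np.exp(-0.5 * (v / 300.0) ** 2))['w80'])
```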
Values $A$ and $v_{50}$ critically depend on an accurate determination of the host galaxy redshift, because this is what we use to fix the $v=0$ point. If no absorption features in the composite stellar light of the host are detected, the redshift can only be determined from the emission lines themselves, which renders the absolute velocity and skewness meaningless. Values of $R$, $w_{90}$, $w_{80}$, $w_{50}$, and $r_{9050}$ include only differences between velocities and do not hinge on the accurate determination of the host velocity.
Robustness of non-parametric measures {#sec:robust}
-------------------------------------
In this section we evaluate the performance of the non-parametric measures. The theoretical advantage of the non-parametric measures is in their relative insensitivity to the fitting functions used. We test this assumption by repeating all the fits using a set of one, two or three Lorentzian ($f(v)=1/(\gamma^2+v^2)$) profiles. The Lorentzian function has significantly more flux in the faint wings than does the Gaussian function, and this shape is not borne out in the observations of the line profiles in our sample. The quality of the fits with Lorentzian profiles is significantly poorer than that of the fits with Gaussian profiles, and therefore all our final non-parametric measures are based on multi-Gaussian fitting as described in the previous subsection.
Nevertheless, we carry out the comparison between the non-parametric measures derived from the two methods. We find that both sets of fits yield nearly identical absolute asymmetries and median velocities $v_{50}$, which is not surprising because both these measures are most sensitive to the correct identification of the line centroid. All other measures (the widths, relative asymmetry and $r_{9050}$) are strongly correlated between the two sets of fits, but the specific values are systematically different. The line width $w_{80}$ as measured from the Lorentzian fits is about 25% higher than that from the Gaussian fits; the relative asymmetry is significantly weaker as measured by Lorentzian profiles than the one measured by the Gaussian ones; and $r_{9050}$(Lorentzian) is approximately equal to $r_{9050}$(Gaussian)$+2$. All these differences are as expected from the fitting functions with different amounts of power in the extended wings.
The conclusions we derive from this comparison are two-fold. First, since the multi-Lorentzian fits are not only statistically but also visibly inferior to the multi-Gaussian ones, the real systematic uncertainty on the $w_{80,90}$ – the key measurements discussed in this paper – is significantly smaller than the 25% difference between line widths calculated from these two methods. This is very encouraging. (For the majority of objects, $w_{80}$ is accurate to 10% or better, as measured from the comparison of non-parametric measures derived from two-Gaussian and three-Gaussian fits.) Second, we confirm that the non-parametric measures are relatively robust: although the Lorentzian profiles do not yield statistically good fits, they nevertheless give reasonable estimates of the non-parametric measures.
We perform an additional test to determine the effect of the signal-to-noise ratio (S/N) of the spectra on our measurements. A narrow Gaussian emission line with a weak broad base observed with a high S/N is represented by two Gaussians in our multi-Gaussian fit, and its non-parametric measures include the power contributed by the weak broad base. On the contrary, if the same object is observed in a lower quality observation, the weak base is not necessarily recognized as such because the $\chi^2$ of the two-Gaussian fit may be indistinguishable from the one-Gaussian one and thus the latter will be preferred.
We conduct the following Monte Carlo test to explore the effect of noise on our measurements. We take eight of the highest S/N objects in our catalog, four with $w_{80}>1000$ km/sec and four with $w_{80}<500$ km/sec. We then downgrade the quality of these spectra by adding progressively higher Gaussian random noise to the original (essentially noiseless) spectra and conduct all our multi-Gaussian and non-parametric measures in the manner identical to that used for real science observations. The results are shown in Figure \[pic\_noise\]. Both absolute asymmetry and $v_{50}$ are relatively insensitive to the noise and do not show systematic trends. The line width $w_{80}$ does not depend on the noise for the four objects with relatively narrow ($w_{80}<500$ km/sec) lines. But for the broad-line objects the measured $w_{80}$ noticeably declines as the quality of observations worsens, reflecting the ‘missing broad base’ phenomenon. Measurements with peak S/N$>10$ are relatively safe from this phenomenon: only one of the eight objects shows a noticeable decline of $w_{80}$ at S/N$\la 30$. The relative asymmetry and the kurtosis-like $r_{9050}$ quickly drop to single-Gaussian values as the S/N decreases below 20 or so.
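A schematic version of this Monte Carlo exercise, building on the `fit_profile`, `multi_gauss`, and `nonparametric_measures` sketches above; the target peak-S/N values and the number of noise realizations are illustrative, not the settings used for Figure \[pic\_noise\].

```python
import numpy as np

def degrade_and_measure(v, clean_flux, peak_snr_values, n_real=50, seed=1):
    """Add Gaussian noise so that the line peak reaches each target S/N,
    refit with the multi-Gaussian procedure, and track the recovered w80."""
    rng = np.random.default_rng(seed)
    results = {}
    for snr in peak_snr_values:
        sigma_noise = clean_flux.max() / snr
        w80_vals = []
        for _ in range(n_real):
            noisy = clean_flux + sigma_noise * rng.standard_normal(clean_flux.size)
            err = np.full(noisy.size, sigma_noise)
            popt, _ = fit_profile(v, noisy, err)             # multi-Gaussian sketch above
            model = multi_gauss(v, *popt)
            w80_vals.append(nonparametric_measures(v, model)['w80'])
        results[snr] = (np.mean(w80_vals), np.std(w80_vals))  # bias and scatter vs S/N
    return results
```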
Kinematic analysis of integrated spectra {#sec:optical}
========================================
Outflow signatures {#sec:outflows}
------------------
Interpreting the line-of-sight gas kinematic measurements in terms of physical 3D motions of the gas is highly non-trivial. This is true even when spatial information is available, for example, via integral-field observations [@liu13b], but is even more so when we have to rely only on one spatially integrated spectrum. The reason is that if an object exhibits a spherically symmetric optically thin outflow, then its emission line profiles are symmetric and peaked at zero velocity, since there is a large amount of gas moving close to the plane of the sky. Therefore, there is simply no “smoking gun” outflow signature in the emission line profile of such a source. If an outflow has non-zero optical depth, then its presence can be inferred by its absorption of the background light at wavelengths blueshifted relative to the quasar rest-frame, as happens in quasar absorption-line systems [@cren03; @arav08], but this is not our case. Therefore, proving that a given line profile is due to gaseous outflow is often difficult and relies on indirect arguments [@liu13b].
To investigate the relationship between the observed line-of-sight velocity dispersion of the lines and the typical outflow velocity, we consider for the moment a spherically symmetric outflow with a constant radial velocity $v_0$. Because different streamlines have different inclinations to the line of sight, the observer sees a range of velocities – in this simplest case of the spatially integrated spectrum, the line profile is a top-hat between $-v_0$ and $v_0$, so that $w_{80}=1.6v_0$. In @liu13b, we consider an outflow with a power-law luminosity density and calculate the velocity profiles in a spatially resolved observation, finding a typical $w_{80}\simeq 1.3v_0$ in the outer parts of the narrow line region. If a flow consists of clouds moving on average radially with velocity $v_0$, but also having an isotropic velocity dispersion $\sigma$, then the observed $w_{80}$ can be approximately calculated as $w_{80}\simeq \sqrt{(1.6v_0)^2+(2.42\sigma)^2}$, as long as $v_0$ and $\sigma$ are not too dissimilar.
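The $w_{80}\simeq 1.6\,v_0$ scaling for a constant-velocity, optically thin spherical outflow (a top-hat profile between $-v_0$ and $+v_0$) and the quadrature combination with an isotropic cloud dispersion can be verified numerically, reusing the `nonparametric_measures` sketch above; the values of $v_0$ and $\sigma$ below are arbitrary illustrative choices.

```python
import numpy as np

v0, sigma = 1000.0, 400.0          # illustrative outflow speed and cloud dispersion (km/s)
v = np.linspace(-5000.0, 5000.0, 4001)

# thin spherical shell moving radially at v0: a top-hat line profile between -v0 and +v0
tophat = np.where(np.abs(v) <= v0, 1.0, 0.0)
print(nonparametric_measures(v, tophat)['w80'])    # ~1.6 * v0 = 1600 km/s

# add an isotropic cloud velocity dispersion by convolving with a Gaussian
kernel = np.exp(-0.5 * (v / sigma) ** 2)
smeared = np.convolve(tophat, kernel, mode='same')
print(nonparametric_measures(v, smeared)['w80'])   # ~sqrt((1.6*v0)^2 + (2.42*sigma)^2)
```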
Another simple case is a radial flow, in which at every point there are clouds with a range of radial velocities. As an example, we consider the results of the 2D simulations of quasar feedback by @nova11, in which at every distance from the quasar the higher density regions (‘clouds’) have a wide distribution of radial velocities (G.Novak, private communication) which ranges from 0 to $\ga 1000$ km/sec, with a median among all clouds of $v_0=220$ km/sec. We use the velocities of clouds straight from these simulations and several different luminosity density profiles to produce mock emission line profiles.
The resulting profiles are shown in Figure \[pic\_novak\]; they are insensitive to the adopted luminosity density profile because the velocity distribution of clouds does not change appreciably as a function of distance in these simulations. The profiles are peaked at zero velocity because there is a large population of clouds with small radial velocities, while the velocity dispersion of clouds is neglected in our calculations. If the clouds have an isotropic velocity dispersion which does not vary with distance, a more faithful profile can be obtained by convolving the profiles in Figure \[pic\_novak\] with a Gaussian, which would make the profiles broader and less peaky. The measured velocity width of the profile is $w_{80}\simeq 1.4 v_0$, similar to the scaling obtained in other simple cases, despite the broadness of the velocity distribution in this example.
![Mock emission line velocity profiles constructed from the cloud velocity distribution from @nova11, using six different emissivity profiles (from linearly declining, to Gaussian, to flat, to centrally-tapered power law). All six curves are essentially on top of one another because the velocity distribution of the clouds hardly varies with the distance from the quasar in these simulations. The actual median radial velocity of the clouds is 220 km/sec, and we measure $w_{80}=310$ km/sec and $w_{90}=470$ km/sec for the simulated profile. []{data-label="pic_novak"}](picture_science18.eps)
To sum up, (i) there is no tell-tale outflow signature in an optically thin, spherically symmetric outflow; (ii) deviations from spherical symmetry (and moreover from axial symmetry) are required to produce asymmetric line profiles; and (iii) the velocity width of the emission line can be used to estimate the outflow velocity, $w_{80}\simeq (1.4-1.6)\times v_0$. The most natural way in which the symmetry may be expected to be broken is due to dust obscuration, either by dust embedded in the outflow itself or by dust concentrated in the galactic disk. In either case, the redshifted part is more affected by extinction, and thus excess blueshifted emission is considered a sufficient indicator of an outflow [@heck81; @dero84; @whit85a; @wils85]. In such case, the apparent $w_{80}$ decreases typically by $\la 30\%$ if the extinction is $\la 2.5$ mag and concentrated in a disk [@liu13b], reducing $w_{80}/v_0$ by the same amount. Thus for a given $w_{80}$, an asymmetric profile indicates a somewhat higher $v_0$ than a symmetric one.
Double-peaked profiles are expected in some geometries for a bi-conical outflow or more complex outflow kinematics [@cren00], but can also be due to the rotation of the galaxy disk or two (or more) active nuclei in a merging system of galaxies, each illuminating its own narrow-line region. Distinguishing these possibilities usually requires follow-up observations at high spatial resolution, and the relative frequency of these scenarios remains a matter of debate [@come09a; @liu10a; @shen11; @fu12; @barr13; @blec13], but it appears that outflows dominate over dual active nuclei. It is likely that complex outflow kinematics is responsible for the majority of split-line profiles in our sample, and we conduct non-parametric kinematic measurements of such objects in the same way we do for the rest of the sample and include them in all our analyses.
Analysis of \[OIII\] kinematics {#sec:analysis}
-------------------------------
In Figure \[pic\_np\] we present the results of the non-parametric measurements of the \[OIII\] line in our sample of type 2 quasars. Both the relative asymmetry $R$ and the absolute asymmetry $A$ demonstrate slight preference for negative values, i.e., for blue excess. The blue asymmetries indicate that there is at least some outflow component in the \[OIII\]-emitting gas in type 2 quasars. The sample means and standard deviations for these values are $A=-0.03 \pm 0.15$ and $R=-0.08\pm 0.15$. Line widths, relative asymmetries, kurtosis parameters and (to a lesser extent) median velocities are correlated with one another, in the sense that objects with broader lines also have a more pronounced blue excess (negative $R$), higher $r_{9050}$ and more negative $v_{50}$ (for the latter, the sample mean and standard deviation are $-14\pm 75$ km/sec). Thus, as we discussed in the previous section, it is plausible that the \[OIII\] velocity width can serve as a proxy for the outflow velocity. Similar relationships have been reported by other authors in type 1 quasars, e.g., by @stei13, who find a stronger blueshift of \[OIII\] relative to broad H$\beta$ and \[OII\]$\lambda\lambda$3726,3729 as a function of \[OIII\] width.
![image](picture_science4.eps)
From these correlations, a picture emerges in which the outflow component, or at least the component of the outflow which is more affected by the obscuration, tends to be broad. We further explore this notion in Figure \[pic\_vel\], where we split the line profiles into a ‘broad’ and a ‘narrow’ component. For objects with 2-Gaussian fits, the designation is straightforward, but the majority of objects require three Gaussians, in which case we pick the two most luminous ones and designate them ‘broad’ and ‘narrow’ according to their velocity dispersions. In Figure \[pic\_vel\], we show that indeed the broader of the Gaussian components is the one that tends to be blueshifted relative to the narrow ones. The narrow cores tend to be well-centered in the host galaxy frame; the mean and standard deviation of the centroids of the narrower Gaussian components is $3\pm 150$ km/sec for the 271 objects with accurately determined host redshifts. On the contrary, the broader components tend to be slightly blueshifted, both relative to the host galaxy frame (velocity centroid of $-60\pm 210$ km/sec in the 271 objects with accurate host redshifts) and relative to the narrow components ($v_{\rm c, broad}-v_{\rm c, narrow}=-90 \pm 270$ km/sec for the entire sample of 568 sources).
![Velocity offset between the centroids of the broad component from the narrow component (black for profiles decomposed into two Gaussians, grey for the two Gaussians that dominate the flux in three-Gaussian decompositions). $v_c$ is for the velocity centroids. Broad components tend to be blueshifted relative to the narrow ones. The velocity dispersions of individual Gaussian components have been corrected for the instrumental resolution ($\sigma_{\rm inst}=70$ km/sec subtracted in quadrature).[]{data-label="pic_vel"}](picture_science2.eps)
Because the narrow component is well-centered in the host frame, it is tempting to postulate that the narrow Gaussian component tends to be produced by gas in dynamical equilibrium with the host galaxy, e.g., in rotation in the galaxy disk, and is simply illuminated by the quasar, whereas the broad component is due to the outflow. However, we hesitate to make this inference, as there is no particular reason to assign any physical meaning to the individual parameters of the Gaussian components. We again draw a lesson here from Figure \[pic\_novak\], where the mock emission line profile is due entirely to the outflowing clouds and can be decomposed into several Gaussian components, none of which correspond to the gas in rotation in the host galaxy.
We go further in Figure \[pic\_stellar\], where we show that neither the overall line width nor the width of the narrower Gaussian component show any correlation with the stellar velocity dispersion. Since even among the 271 objects where the host galaxy was detected many of the stellar velocity dispersions are rather poorly determined for the reasons discussed in Section \[sec:selection\], for this figure we use only the better determined stellar velocity dispersions from @gree09. That the overall line width (left panel) shows no relationship with stellar velocity dispersion is not surprising if most of the line width arises due to the outflow. But the narrow cores do not appear to show any relationship with the stellar velocity dispersion either. As a result, it seems likely to us that virtually none of the \[OIII\]-emitting gas in type 2 quasars is in dynamical equilibrium with the host galaxy. This is in contrast to the situation in lower luminosity active galaxies in which \[OIII\] width strongly correlates with galaxy rotation and / or bulge velocity dispersion [@wils85; @whit92b; @nels96; @gree05o3]. In such objects, it appears that the gas motions are in accord with the gravitational forces in the galaxy, and the gas is simply illuminated and photo-ionized by the active nucleus to produce the narrow-line region.
![Taking just the objects with well-determined stellar velocity dispersions from @gree09, we plot the overall width of the \[OIII\] emission line in the left panel and the dispersion of just the narrow component in the right panel as a function of the stellar velocity dispersion (grey points for the narrower of the two dominant components in a three-Gaussian fit, black points for the narrower of the two components in a two-Gaussian fit). The left panel is similar to Fig 7 of @gree09 even though the exact non-parametric measures of dispersion used in that paper were defined differently. The error bars on $w_{80}$ reflect non-Gaussianity of lines and are calculated by converting from $w_{50}$ and $w_{90}$ assuming a Gaussian profile; thus, for a Gaussian profile they would be zero. Stellar velocity dispersions and $\sigma_{\rm narrow}$ have been corrected for instrumental dispersion; since lines are non-Gaussian, $w_{80}$ values are plotted as observed (typical correction is a few per cent).[]{data-label="pic_stellar"}](picture_science3a.eps)
The relative shifts between the narrower and the broader components could arise if the narrow component is produced on all scales in the host galaxy, where it is less likely to suffer from strong extinction, whereas the broader component is produced closer to the nucleus, where it is more likely to be affected by extinction. This is consistent with an outflow that is driven close to the nucleus and then gradually slowed down by the interactions with the interstellar medium [@wagn13]. Furthermore, this picture is consistent with the apparent decline of the line width in the outer parts of the outflow seen in the integral field unit observations of type 2 quasars [@liu13b], although the effect is small, with velocity width declining only by 3% per projected kpc.
The sample mean and standard deviation of line width is $w_{90}=1230\pm 590$ km/sec, with a median of 1060 km/sec, minimum of 370 km/sec and maximum of 4780 km/sec ($w_{80}=880\pm 430$ km/sec, median 752 km/sec, min 280 km/sec, max 2918 km/sec), much higher than that of local ultraluminous infrared galaxies (ULIRGs; median $w_{90}\simeq 800$ km/sec) and especially of those without a powerful active nucleus in their center (median $w_{90}\simeq 600$ km/sec for pure starbursts, @hill14). For ULIRGs, the line widths and the outflow velocities strongly correlate with the power source (higher for active galaxies, lower for starbursts; @rupk13a [@hill14]), and the majority of the objects in our sample show line widths consistent with quasar-driven outflows, as expected. In @liu13b, we estimated that for gas disks rotating in the potential of the most massive galaxies line widths do not exceed $w_{80}\simeq 600$ km/sec. Thus, the line-of-sight gas velocities that we see in our sample are too high to be confined even by the most massive galaxy potential, and this gas cannot be in dynamical equilibrium with the host galaxy.
The range of \[OIII\] luminosities in our sample is not all that large, with 90% of sources between $\log(L{\rm [OIII]}/L_{\odot})=8.5$ and 9.5, and the remaining 10% of sources spanning the next decade up in luminosity. There is some tendency of objects with more pronounced outflow signatures (higher width, higher $r_{9050}$) to have higher \[OIII\] luminosity (Spearman rank correlation coefficient $r_{\rm S}\simeq 0.19$ for both relationships, probability of the null hypothesis of uncorrelated datasets $P_{\rm NH}=10^{-5}$); however, no correlations are seen between $L$\[OIII\] and absolute asymmetry, relative asymmetry or median velocity (Table \[tab:pnh\]). Furthermore, any correlations between $L$\[OIII\] and kinematic measures can be strongly affected by the cutoff in the \[OIII\] luminosity distribution which is due to our sample selection ($\log(L{\rm [OIII]}/L_{\odot})\ge 8.5$). We report the significance of correlations at face value, without trying to account for the effects of the luminosity cutoff.
\[tab:pnh\] Correlations between luminosity indicators and the non-parametric kinematic measures. Each entry lists the number of objects; the Spearman rank correlation coefficient $r_{\rm S}$; and the probability $P_{\rm NH}$ of the null hypothesis that the two quantities are uncorrelated. The indicators $\nu L_\nu$\[5\] and $\nu L_\nu$\[12\] denote rest-frame mid-infrared luminosities at 5$\mu$m and 12$\mu$m.
Luminosity indicator abs. asym. $A$ med. vel. $v_{50}$ width $w_{50}$ width $w_{90}$ rel. asym. $R$ shape par. $r_{9050}$
----------------------------------------------------- ------------------------------- ---------------------------------------------------------------------------------------------------------- ------------------------------ ----------------------- ------------------ -----------------------
$\nu L_\nu$\[1.4 GHz\], upper limits at face value 271; -0.37; $<10^{-5}$ 271; -0.35; $<10^{-5}$ 568; 0.30; $<10^{-5}$ 568; 0.29; $<10^{-5}$ 568; 0.06; 0.18 568; -0.01; 0.88
$\nu L_\nu$\[1.4 GHz\], upper limits decreased by 2 271; -0.36; $<10^{-5}$ 271; -0.35; $<10^{-5}$ 568; 0.34; $<10^{-5}$ 568; 0.34; $<10^{-5}$ 568; 0.04; 0.30 568; 0.02; 0.72
$L$\[OIII\] 271; -0.05; 0.42 271; -0.03; 0.65 568; 0.12; $3\times 10^{-3}$ 568; 0.19; $10^{-5}$ 568; 0.06; 0.18 568; 0.19; $10^{-5}$
$\nu L_\nu$\[5\] 270; -0.15; 0.02 270; -0.16; $7\times 10^{-3}$ 562; 0.36; $<10^{-5}$ 562; 0.37; $<10^{-5}$ 562; -0.11; 0.01 562; 0.04; 0.35
$\nu L_\nu$\[12\] 259; -0.18; $4\times 10^{-3}$ 259; -0.19; $2\times 10^{-3}$; $v_{50}<0$: 151; -0.33; $4\times 10^{-5}$; $v_{50}\ge 0$: 108; 0.19; 0.04 539; 0.41; $<10^{-5}$ 539; 0.44; $<10^{-5}$ 539; -0.10; 0.02 539; 0.10; 0.02
mid-infrared slope $\beta$ (higher is redder) 259; -0.03; 0.59 259; -0.04; 0.54 539; 0.16; 3$\times 10^{-4}$ 539; 0.23; $<10^{-5}$ 539; -0.02; 0.69 539; 0.21; $<10^{-5}$
$L_{\rm X}$, all available objects 24; 0.05; 0.82 24; 0.12; 0.56 54; 0.14; 0.32 54; 0.11; 0.43 54; 0.08; 0.56 54; 0.01; 0.93
$L_{\rm X}$, Compton-thin objects 12; -0.29; 0.36 12; -0.13; 0.68 30; -0.05; 0.78 30; -0.05; 0.79 30; 0.38; 0.04 30; -0.31; 0.09
Fainter lines {#sec:shapes}
-------------
We compare the kinematic structure of the brightest lines, \[OIII\]$\lambda$5007, \[OII\]$\lambda\lambda$3726,3729, and H$\beta$. It has long been known that line kinematics often vary as a function of the ion ionization potential or the line critical density [@whit85c]. On the basis of largely anecdotal evidence, we previously established that in type 2 quasars with highly asymmetric or split-line \[OIII\]$\lambda$5007 profiles the \[OII\]$\lambda\lambda$3726,3729 profiles seemed less complex [@zaka03]. The difficulty of this analysis is illustrated in Figure \[pic\_noise\]: because \[OIII\]$\lambda$5007 is by far the brightest line, it is much easier to miss a weak broad component in \[OII\] than in \[OIII\], giving the impression that the \[OII\] is narrower or lacks kinematic structures present in \[OIII\]. To remedy this problem, in what follows we use only the emission lines detected with peak S/N$>$10 (about 230 objects), and we prefer S/N$>$20 (about 140 objects).
We also need to take into account the doublet nature of \[OII\]$\lambda\lambda$3726,3729. Because the velocity spacing of the doublet (220 km/sec) is smaller than the typical line widths, we usually cannot deblend the two components. Thus observationally the most robust course of action is to measure the non-parametric width of the entire doublet which is then expected to be slightly higher than the width of a single (non-doublet) line with the same kinematic structure. To estimate the magnitude of this bias, we take the best multi-Gaussian fits for \[OIII\]$\lambda$5007 for our entire sample and we simulate noiseless \[OII\]$\lambda\lambda$3726,3729 of the exact same kinematic structure assuming a 1:1, 1:1.2 and 1:1.4 line ratios within the doublet. We then refit the resulting \[OII\] profile, calculate its non-parametric widths without deblending and compare it with the ‘true’ input \[OIII\] non-parametric widths. We find the following relationships: $$\begin{aligned}
w_{50, {\rm apparent}}{\rm [OII]}=\sqrt{w_{50,{\rm true}}^2+(175{\rm km/sec})^2};\nonumber\\
w_{80, {\rm apparent}}{\rm [OII]}=\sqrt{w_{80,{\rm true}}^2+(290{\rm km/sec})^2};\nonumber\\
 w_{90, {\rm apparent}}{\rm [OII]}=\sqrt{w_{90,{\rm true}}^2+(340~{\rm km/sec})^2}. \label{eq:woii}\end{aligned}$$ The accuracy of these fitting formulae is 7% (standard deviation). Once we have the observed \[OII\] doublet, we need only measure its overall non-parametric widths and then invert equations (\[eq:woii\]) to correct for the doublet nature of the \[OII\] line. The magnitude of relative asymmetries and the kurtosis parameter $r_{9050}$ of the simulated doublets are both slightly smaller than those of the input \[OIII\]$\lambda$5007 profiles, as expected.
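Inverting equations (\[eq:woii\]) to remove the doublet broadening from a measured \[OII\] width is a subtraction in quadrature; a minimal sketch (the quoted $\sim$7% accuracy of the fitting formulae applies):

```python
import numpy as np

# quadrature offsets (km/s) from the simulation of the blended doublet described above
DOUBLET_OFFSET = {'w50': 175.0, 'w80': 290.0, 'w90': 340.0}

def correct_oii_width(w_apparent, which='w80'):
    """Remove the [OII]3726,3729 doublet broadening from an apparent
    non-parametric width; both widths are in km/s."""
    c = DOUBLET_OFFSET[which]
    return np.sqrt(np.maximum(w_apparent ** 2 - c ** 2, 0.0))

print(correct_oii_width(900.0, 'w80'))   # an apparent 900 km/s corresponds to ~852 km/s intrinsic
```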
In Figure \[pic\_width\], we show the comparison between \[OIII\], \[OII\] and H$\beta$ line widths. H$\beta$ is slightly (8%) systematically narrower than \[OIII\] on average. Much of the difference is likely attributable to the signal-to-noise effect described in Section \[sec:robust\], since it is easier to miss a weak broad component in a noisy profile of H$\beta$ than in the much higher S/N profile of \[OIII\]. There are only a few cases where the “by eye” examination of the \[OIII\] and H$\beta$ profiles superposed on one another reveals that the \[OIII\] is genuinely significantly broader than the noisier H$\beta$ (Figure \[pic\_wexamples\]).
![Comparison between non-parametric width measurements for \[OIII\]$\lambda$5007, H$\beta$ and \[OII\]$\lambda\lambda$3726,3729. In light grey are sources with estimated peak signal-to-noise of H$\beta$ (left) and \[OII\] (right) between 10 and 20; in black are sources with S/N$>20$. Because of the minimal S/N requirement, 223 sources appear in the left panel and 237 in the right. For \[OII\], the top of each bar corresponds to the non-parametric measure of the width of the entire non-deblended doublet, whereas the bottom of the bar includes the correction for the doublet splitting according to eq. (\[eq:woii\]).[]{data-label="pic_width"}](picture_science13a.eps)
In contrast, \[OII\] is noticeably narrower than \[OIII\] for high-width objects (Figure \[pic\_width\]). The average doublet-corrected width of \[OII\] over all 237 objects with peak S/N\[OII\]$>10$ is only 5% smaller than that of \[OIII\]; however, when we consider only objects with $w_{90}$\[OIII\]$>1500$ km/sec, the difference in width increases to 27%. Overall while \[OII\] sometimes displays asymmetries and complicated profiles, it does not show the extremely broad features seen in \[OIII\], with H$\beta$ demonstrating kinematics that are intermediate between \[OII\] and \[OIII\].
We conduct a similar examination of He II $\lambda$4686 and \[OIII\]$\lambda$4363 profiles, but only a handful of objects with S/N$>10$ in these lines have relatively broad \[OIII\]. In these objects, \[OIII\]$\lambda$5007, He II $\lambda$4686 and \[OIII\]$\lambda$4363 kinematic structures look, within the uncertainties, consistent with one another.
![image](picture_science15a.eps)
Line flux ratios involving \[OIII\]$\lambda$5007, H$\beta$, \[OII\]$\lambda\lambda$3726,3729, He II $\lambda$4686 and \[OIII\]$\lambda$4363 [@oste06; @liu13a] are diagnostic of the ionization state and temperature of the gas. We look for trends in these flux ratios as a function of all kinematic parameters and luminosity of \[OIII\]. Unfortunately, the variations in these line fluxes are subtle enough that high S/N values in the fainter lines are required to measure their fluxes to the required accuracy. In particular, if it is the weak broad components that vary as a function of kinematics and / or luminosity, a S/N ratio of several tens in these lines would be required to detect these trends, as shown in Figure \[pic\_noise\]. As a result, we do not find any definitive trends in any of these ratios even when restricting the analysis to the small number of the highest signal-to-noise objects. Instead, we perform such measurements in Section \[sec:compo\] using composite spectra.
Kinematics and multi-wavelength properties {#sec:multiwv}
==========================================
Kinematic indicators and radio emission {#sec:radio}
---------------------------------------
We cross-correlate our sample within 2$''$ against the Faint Images of the Radio Sky at Twenty Centimeters (FIRST) survey at 1.4 GHz [@beck95], which has typical $5\sigma$ sensitivity of $\sim 1$ mJy. When FIRST coverage is not available (about 6% of objects), we use the NRAO VLA Sky Survey (NVSS) at the same frequency [@cond98], which has typical $5\sigma$ sensitivity of $\sim 2.5$ mJy. Out of the 568 objects in our sample, 386 have radio detections above the catalog sensitivity. For every source, we calculate the k-corrected radio luminosity at rest-frame 1.4 GHz, $$\nu L_{\nu}=4 \pi D_L^2 \nu F_{\nu}(1+z)^{-1-\alpha},$$ where $\nu=1.4$ GHz, $D_L$ is the luminosity distance, $F_{\nu}$ is the observed FIRST / NVSS flux density (which corresponds to an intrinsically higher frequency in the rest-frame of the source), and $\alpha$ is the slope of the radio spectrum ($F_{\nu}\propto \nu^{\alpha}$), assumed to be $-0.7$.
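For concreteness, this k-correction can be evaluated with astropy as in the sketch below, assuming the $h=0.7$, $\Omega_m=0.3$, $\Omega_\Lambda=0.7$ cosmology of Section \[sec:intro\] and a FIRST/NVSS flux density given in mJy; this is an illustration, not the exact code used to build the catalog.

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)

def radio_luminosity(f_nu_mjy, z, alpha=-0.7, nu_obs_ghz=1.4):
    """k-corrected nu*L_nu at rest-frame 1.4 GHz (erg/s), assuming
    F_nu ~ nu^alpha and an observed 1.4 GHz flux density in mJy."""
    d_l = cosmo.luminosity_distance(z).to(u.cm)
    f_nu = (f_nu_mjy * u.mJy).to(u.erg / u.s / u.cm ** 2 / u.Hz)
    nu = nu_obs_ghz * 1e9 * u.Hz
    return (4.0 * np.pi * d_l ** 2 * nu * f_nu * (1.0 + z) ** (-1.0 - alpha)).to(u.erg / u.s)

print(radio_luminosity(2.0, 0.4))   # a ~2 mJy source at z = 0.4 gives a little over 10^40 erg/s
```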
The exact shape of the radio luminosity function of active nuclei remains a topic of much debate, including whether there is a very broad distribution of intrinsic luminosities or whether there is a true dichotomy in this property [@kell89; @xu99; @ivez02; @jian07a; @cond13]. In any case, the objects on the high end are traditionally called “radio-loud” and are sometimes very extended in the radio band. In these cases, collimated relativistic jets propagate out to several hundred kpc from the host galaxy, and the radio emission of these sources is dominated by the lobes where the jet energy finally dissipates. The majority of our matches are weak (a few mJy) point sources at the 5$''$ resolution of the FIRST survey. Since the optical broad-band emission of type 2 quasars is a poor proxy for their luminosity, we use the distribution of our sources in the \[OIII\] luminosity / radio luminosity plane to define the radio-loud / radio-quiet boundary [@xu99] and find that about 10% qualify as classical radio-loud sources, and a similar fraction are significantly ($\ga 10$ kpc) extended [@zaka04]. For Seyfert galaxies, @hopeng01 find that some radio-loud candidates can be missed when using the spatially integrated optical or infrared luminosity in defining the radio-loud / radio-quiet boundary and suggest using the nuclear properties in such definitions. Fortunately, in our case the quasars are extremely luminous and dominate the bolometric output, so our definition is unlikely to be affected by this bias.
We find a strong correlation between the \[OIII\] line width and the radio luminosity (Figure \[pic\_radio\]; Table \[tab:pnh\]). For nearby lower-luminosity active galaxies, similar relationships were previously reported by many authors [@heck81; @wils85; @whit92b; @nels96] and by @veil91b whose sample is shown in Figure \[pic\_radio\] for comparison. While there is a small tail of objects with very high radio luminosities, most of the correlation is due to the cloud of points at $\nu L_{\nu}[1.4{\rm GHz}]=10^{39}-10^{41}$ erg/sec. Although these radio luminosities seem high by comparison to those of the local active galaxies (e.g., the red points from the @veil91b sample), we need to keep in mind that the \[OIII\] luminosities of our type 2 quasars are at the extreme end of the luminosity distribution (right panel of Figure \[pic\_radio\]). Thus in the space of \[OIII\] vs radio luminosities [@xu99] most of the type 2 quasars in our sample follow the radio-quiet, rather than the radio-loud, locus. Recently, @mull13 also found a trend of increasing line width with radio luminosity using composite spectra of type 1 quasars with a wide range of \[OIII\] luminosities which overlaps with ours. Similarly, @spoo09 reported a correlation between outflow signatures in mid-infrared emission lines and radio luminosities among ultraluminous infrared galaxies. Because these objects tend to be more dust obscured on galaxy-wide scales and have high rates of star formation which contributes to their radio emission, a direct comparison between our sample and theirs is complicated, but at face value the radio luminosities of the two samples do overlap.
![image](picture_science9b.eps)
Taking all sources and radio upper limits at face value (i.e., assuming that the non-detections are close to the survey limit) yields a Spearman rank correlation coefficient of $r_{\rm S}=0.29$. Excluding radio-loud sources using their distribution in the \[OIII\]-radio plane [@zaka04] yields $r_{\rm S}=0.33$. If the non-detections are significantly below the survey limit, then the correlation is even stronger because there are hardly any non-detections on the high-$w_{90}$ end of the diagram. In Section \[sec:origin\] we demonstrate that the radio fluxes of objects not detected by FIRST are likely within a factor of two of the FIRST survey limit. If we suppress all upper limits by a factor of two (but keep radio-loud sources in), the correlation has $r_{\rm S}=0.34$.
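The rank statistics quoted here (and in Table \[tab:pnh\]) can be reproduced with `scipy.stats.spearmanr`; the sketch below shows the face-value treatment of upper limits and the factor-of-two suppression test, using placeholder arrays rather than the catalog values.

```python
import numpy as np
from scipy.stats import spearmanr

def rank_correlation(log_lum, detected, w90, suppress_limits=1.0):
    """Spearman rank correlation between a luminosity indicator and w90.
    Non-detections enter at the survey limit (face value) or are lowered
    by `suppress_limits` (e.g. 2) to mimic the test described in the text."""
    lum = np.array(log_lum, dtype=float)
    lum[~np.asarray(detected)] -= np.log10(suppress_limits)
    rho, p_null = spearmanr(lum, w90)
    return rho, p_null

# placeholder arrays just to show the call; these are not the catalog values
rng = np.random.default_rng(2)
w90 = rng.uniform(400.0, 3000.0, 568)
log_lum = 39.0 + 1e-3 * w90 + rng.normal(0.0, 0.4, 568)
detected = log_lum > 39.5
print(rank_correlation(log_lum, detected, w90, suppress_limits=2.0))
```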
We also find an anti-correlation between radio luminosity and absolute asymmetry, $r_{\rm S}=-0.37$, in the sense that objects with stronger blue asymmetry tend to have stronger radio emission. A similar relationship exists between radio luminosity and the median velocity $v_{50}$. In all these cases the null hypothesis (that the two datasets are uncorrelated) is rejected with $P_{\rm NH}<10^{-5}$. Radio luminosity is not correlated with the other kinematic measures of outflow activity (relative asymmetry and kurtosis $r_{9050}$). For comparison, we also show the weaker trend between $w_{90}$ and \[OIII\] luminosity reported in Section \[sec:analysis\] in the right panel of Figure \[pic\_radio\].
Kinematic indicators and infrared luminosity
--------------------------------------------
We cross-correlate the entire sample of SDSS type 2 quasars [@reye08] against the Wide-field Infrared Survey Explorer (WISE) catalog within 6$''$. Out of 887 objects in the catalog, 876 objects have matches in W1 (3.6$\mu$m) and W2 (4.5$\mu$m); 829 objects in W3 (12$\mu$m); and 773 objects in W4 (22$\mu$m) with signal-to-noise ratio above 2.5. The 11 objects without W1 matches are visually examined; in almost all cases there is an actual detection at the position of the quasar, but it is blended with a brighter nearby object and is thus not reported in the catalog. We interpolate between the WISE fluxes using piece-wise power-laws to calculate $\nu L_{\nu}$ at rest-frame 5$\mu$m and 12$\mu$m and the index between these two, $\nu L_{\nu}\propto \lambda^{\beta}$ (higher index means redder spectral energy distribution). As the majority of the sources are well above the detection limit for the survey, the analysis of the WISE matches is not affected by non-detections to the same extent as the analysis of radio emission in the previous section.
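A minimal sketch of this piece-wise power-law interpolation (linear interpolation in log-log space) for the rest-frame monochromatic luminosities and the slope $\beta$; the band wavelengths, unit conventions and cosmology below are our assumptions for illustration, not the exact pipeline used here.

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)
WISE_WAVELENGTHS_UM = np.array([3.4, 4.6, 12.0, 22.0])     # nominal W1-W4 wavelengths

def rest_frame_nuLnu(wise_flux_mjy, z, lam_rest_um):
    """nu*L_nu (erg/s) at a rest-frame wavelength, from piece-wise power-law
    (log-log linear) interpolation of the four observed WISE flux densities."""
    lam_rest_grid = WISE_WAVELENGTHS_UM / (1.0 + z)         # rest wavelengths sampled by WISE
    logf = np.interp(np.log10(lam_rest_um), np.log10(lam_rest_grid), np.log10(wise_flux_mjy))
    f_nu = (10.0 ** logf * u.mJy).to(u.erg / u.s / u.cm ** 2 / u.Hz)
    nu_rest = (2.998e14 / lam_rest_um) * u.Hz               # c / lambda for lambda in microns
    d_l = cosmo.luminosity_distance(z).to(u.cm)
    return (4.0 * np.pi * d_l ** 2 * nu_rest * f_nu / (1.0 + z)).to(u.erg / u.s)

def mid_ir_slope(wise_flux_mjy, z):
    """Index beta with nu*L_nu ~ lambda^beta between rest-frame 5 and 12 microns."""
    l5 = rest_frame_nuLnu(wise_flux_mjy, z, 5.0)
    l12 = rest_frame_nuLnu(wise_flux_mjy, z, 12.0)
    return np.log10((l12 / l5).value) / np.log10(12.0 / 5.0)
```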
The mid-infrared luminosities of type 2 quasars in our sample strongly correlate with their radio luminosities, \[OIII\] velocity widths and \[OIII\] luminosities (Figure \[pic\_wise\]). The relationships between infrared luminosities, radio luminosities and \[OIII\] luminosities in low-luminosity active galaxies have been pointed out by many authors before, most recently by @rosa13. For direct comparison with their work, we show the locus of normal star-forming galaxies and the @rosa13 line separating two branches of active nuclei in the left panel of Figure \[pic\_wise\]. Using $\nu L_{\nu}$ at 5$\mu$m instead results in similar relationships, albeit with somewhat larger scatter.
Mid-infrared luminosities $\nu L_{\nu}$\[12$\mu$m\] are correlated with all kinematic measures, in the sense of higher mid-infrared luminosity in objects with stronger outflow signatures, with significance ranging between $P_{\rm NH}=0.02$ and $<10^{-5}$ (Table \[tab:pnh\]). The strongest correlation is with $w_{90}$ ($r_{\rm S}=0.44$, $P_{\rm NH}<10^{-5}$). There is a hint ($P_{\rm NH}=0.04$) that the mid-infrared luminosity correlates positively with positive values of $v_{50}$, while correlating negatively with the negative values, suggesting that either a strong blue asymmetry or a strong red asymmetry may be a sign of an outflow. Objects with higher $w_{90}$ tend to have redder mid-infrared spectral energy distributions (higher $\beta$), with $r_{\rm S}=0.23$ and $P_{\rm NH}<10^{-5}$.
@rosa13 point out that the Seyfert galaxies in their sample lie almost exactly on top of the locus of the star-forming galaxies in the radio / infrared diagrams. These authors conclude that only 15% of the infrared flux of these objects is due to the active nucleus and that the correlation between infrared and radio fluxes seen among Seyfert galaxies is simply a reflection of the standard radio / infrared correlation due to star formation. Type 2 quasars appear to lie on the luminous extension of the locus of the star forming galaxies, and thus it is tempting to postulate that the same arguments apply in our sample, except the star formation rates of the host galaxies must be much higher than those seen in Seyferts by @rosa13.
However, this explanation is unlikely to extend to the objects in our sample. The mid-infrared colors and fluxes of type 2 quasars at these luminosities are dominated by the quasar, not by the host galaxy [@lacy04; @ster05; @zaka08]. Thus the strong correlation between radio and mid-infrared in this regime (and the excess of the radio emission over the amount seen in nearby star-forming galaxies) suggests that the radio emission in radio-quiet quasars is related to the quasar activity, not to the star formation in its host.
Our sample has 54 objects in common with @jia13 who analyzed XMM-Newton and Chandra snapshots of a large sample of obscured quasars deriving their X-ray luminosities, spectral slopes and amount of intervening neutral gas absorption. These objects were either targeted by X-ray observatories or serendipitously lie in the fields of view of other targets. Similarly to @veil91b, we do not find any correlations between any of the kinematic indicators and any of the X-ray spectral fitting parameters. In particular, there is no correlation between the optical line width and the absorption-corrected (intrinsic) X-ray luminosity. Removing the 24 Compton-thick candidates (in which obtaining intrinsic X-ray luminosities is particularly difficult) still reveals no relationship between X-ray parameters and gas kinematics.
One possibility is that the lack of such correlation implies the lack of strong influence of X-ray emission on the launching of the winds. Another possibility (which we find more likely) is that the existing X-ray observations of obscured quasars are not yet of sufficient quality to probe this relationship. The uncertainties in the intrinsic X-ray luminosities are rather high, because the observed fluxes need to be corrected for intervening absorption. As a result, even the correlation between X-ray and mid-infrared luminosities – which are both supposed to be tracers of the bolometric luminosity – is rather weak ($P_{\rm NH}\simeq 0.02$). Among the 30 Compton-thin sources, the X-ray to mid-infrared luminosity ratio has a dispersion of 0.7 dex, similar to the value reported for local Seyfert 2 nuclei by @lama11, who argue that correcting for intrinsic absorption is difficult even when high-quality X-ray observations are available.
We perform a simulation in which we randomly draw 30 points from the infrared vs kinematics correlation in Figure \[pic\_wise\], middle. We find that the correlation is still detected with $P_{\rm NH}\la 0.01$ significance. But if we add a Gaussian random variable with an 0.7 dex dispersion to the log of the infrared luminosity, the correlation is no longer detected. Thus either an intrinsic dispersion or observational uncertainties (related to difficulties of correcting for intervening absorption) of this magnitude are sufficient to destroy a correlation in a sample of 30 objects (the number of Compton-thin obscured quasars with available X-ray luminosities). It will be interesting to probe the connections between the ionized gas kinematics and ultra-violet, optical and X-ray luminosities in type 1 quasars, where correcting for intervening absorption is not a significant problem and where the relative strengths of these correlations could elucidate the primary driving mechanism of the ionized gas outflows.
Composite spectra {#sec:compo}
=================
Constructing composites
-----------------------
To further test the trends we find in Sections \[sec:optical\] and \[sec:multiwv\] and to study weak emission features, we produce several sets of composite spectra. We choose a quantity that is easily measurable in every object (e.g., \[OIII\] line width in our first example) and bin the sample into five equal-size bins (114 objects) in this quantity. We then arithmetically average all host-subtracted spectra within each bin. This allows us to obtain high signal-to-noise composites while being able to tease out the dependencies on the chosen parameter. The composites produced in five bins of the \[OIII\] velocity width are presented in Figure \[pic\_comp\_width\] and the composites in five bins of \[OIII\] luminosity in Figure \[pic\_comp\_lum\]. Below we also discuss composites made in bins of infrared and radio luminosity, although these are not shown.
Even though the spectra are already host-subtracted, for accurate measurements of very weak lines we need a more accurate continuum subtraction. We select a dozen continuum-dominated wavelength intervals and spline-interpolate between them to produce the model of the continuum which is then subtracted from the composite spectrum. Continuum subtraction is the dominant source of systematic uncertainty in measuring the weakest lines.
We then construct the velocity profile of the narrow-line region from the \[OIII\]$\lambda\lambda$4959,5007 lines and we use this profile to fit about 15 emission lines. Several doublets which have wavelength separations that are too small to be resolved in our spectra are fitted with fixed ratios between the components, as follows: \[OII\]$\lambda\lambda$3726,3729 with a 1:1 ratio, \[SII\]$\lambda\lambda$4069,4076 with a 3:1 ratio (the doublet structure of this line is clearly visible in the lower $w_{80}$ composites), and \[NI\]$\lambda\lambda$5198,5200 with a 1:1 ratio. For each emission feature, given the velocity profile from the \[OIII\] line, there is only one adjustable parameter – its amplitude. The fit is linear in all 15 amplitudes.
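Because the kinematic template is fixed, each emission line contributes one column to a design matrix and the amplitudes follow from ordinary least squares; doublets with fixed ratios can be folded into a single template column. A minimal sketch, with hypothetical inputs (`wave`, `flux`, the \[OIII\]-derived velocity profile and a list of rest wavelengths):

```python
import numpy as np

C_KMS = 2.998e5  # speed of light, km/s

def fit_line_amplitudes(wave, flux, vel_grid, profile, line_waves):
    """Least-squares amplitudes for emission lines that all share the
    [OIII]-derived velocity profile (`vel_grid`, `profile`)."""
    templates = []
    for w0 in line_waves:
        vel = C_KMS * (wave - w0) / w0            # pixel velocities relative to this line
        templates.append(np.interp(vel, vel_grid, profile, left=0.0, right=0.0))
    design = np.vstack(templates).T               # (n_pixels, n_lines) design matrix
    amplitudes, *_ = np.linalg.lstsq(design, flux, rcond=None)
    return amplitudes
```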
A few features are close blends: He I $\lambda$3889 is blended with H$\zeta$ (8$\rightarrow$2 transition), and \[NeIII\]$\lambda$3968 is blended with H$\varepsilon$ (7$\rightarrow$2 transition). In both cases, the non-hydrogen emission makes a larger contribution to the blend, but not an overwhelmingly dominant one. To measure He I $\lambda$3889 and \[NeIII\]$\lambda$3968, we estimate the Balmer decrement from H$\beta$, H$\gamma$ and H$\delta$ and use the derived extinction values to estimate H$\varepsilon$ and H$\zeta$, assuming Case B recombination [@oste06]. We then subtract the extrapolated H$\varepsilon$ and H$\zeta$ fluxes from the corresponding blends to obtain He I $\lambda$3889 and \[NeIII\]$\lambda$3968 fluxes separately.
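The deblending step can be sketched as follows; the Case B ratios used here are approximate and the wavelength dependence of the extinction is a crude placeholder for the SMC curve of @wein01:

```python
import numpy as np

# approximate Case B ratios relative to Hbeta, and rest wavelengths in Angstrom
CASE_B = {"Hgamma": 0.47, "Hdelta": 0.26, "Hepsilon": 0.16, "Hzeta": 0.11}
WAVE = {"Hbeta": 4861.3, "Hgamma": 4340.5, "Hdelta": 4101.7,
        "Hepsilon": 3970.1, "Hzeta": 3889.1}

def k_ratio(wave):
    """Placeholder for the wavelength dependence of the extinction curve,
    normalized to the V band (steeper extinction in the blue)."""
    return (5500.0 / wave) ** 1.3

def fit_av(obs_ratio):
    """Grid fit of A_V to the observed Hgamma/Hbeta and Hdelta/Hbeta ratios."""
    av_grid = np.linspace(0.0, 3.0, 301)
    resid = [sum((obs_ratio[l] - CASE_B[l] *
                  10 ** (-0.4 * av * (k_ratio(WAVE[l]) - k_ratio(WAVE["Hbeta"])))) ** 2
                 for l in ("Hgamma", "Hdelta")) for av in av_grid]
    return av_grid[int(np.argmin(resid))]

def reddened_flux(line, f_hbeta, av):
    """Expected (reddened) flux of a faint Balmer line extrapolated from Hbeta;
    this is what gets subtracted from the He I 3889 and [NeIII] 3968 blends."""
    return f_hbeta * CASE_B[line] * 10 ** (-0.4 * av * (k_ratio(WAVE[line]) - k_ratio(WAVE["Hbeta"])))
```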
The values of extinction we find using Balmer decrement and the Small Magellanic Cloud extinction curve from @wein01 are in the range $A_V=1.0-1.5$ mag, in agreement with our previous estimates for type 2 quasars [@reye08] and with typical values in the literature for narrow-line regions of Seyfert galaxies [@benn06]. Extinction values are higher ($A_V\simeq 1.5$ mag) for the two highest width composites than for the other ones ($A_V\simeq 1.0-1.1$ mag), somewhat reminiscent of the results by @veil91a who finds higher extinction values for objects with stronger line asymmetries. As a function of luminosity, extinction decreases steeply and monotonically, from $A_V=1.7$ mag to 0.9 mag. Because it is not clear that the extinction values derived from Balmer decrements apply to all other emission features (which may originate in a different spatial region), we do not apply extinction correction to any measurements, unless explicitly stated otherwise.
The optimal composite-making practices depend on the goal of the composite [@vand01; @lacy13]. The typical per-spectroscopic-pixel errors of the SDSS spectra are the result of the overall plate reductions and thus are not too dissimilar from one object to the next, so in an error-weighted average all weights would be roughly the same. Therefore we chose to use simple arithmetic averages to produce our composites [@lacy13]. It is encouraging that the \[OIII\]/\[OII\] line ratios measured from the composites ($\log$\[OIII\]/\[OII\]$=0.67-0.74$, depending on \[OIII\] width) are consistent with the one measured from individual spectra (mean=median and sample standard deviation $0.70\pm 0.23$) and \[OIII\]$\lambda$4959/\[OIII\]$\lambda$5007 is within 1% of its theoretical value in all composites. This means that our composite-making procedure does a reasonable job of preserving line ratios, both for cases when the intrinsic distribution of the line ratio is very narrow (e.g., \[OIII\]$\lambda$4959/\[OIII\]$\lambda$5007) and when it is fairly broad (e.g., \[OIII\]/\[OII\]).
Wolf-Rayet features
-------------------
One striking result is the appearance of a broad complex around He II $\lambda$4686 in the composite with the highest $w_{80}$. This feature is somewhat reminiscent of the broad emission signatures of Wolf-Rayet stars [@brin08; @liu09] – massive young stars which can radiatively drive outflows from their photospheres with velocities reaching 2000 km/sec. A detection (or lack thereof) of young stellar populations is of particular importance in our work, since supernova explosions are capable of driving powerful outflows. Thus, a significant star-forming population could be responsible for at least some of the outflow activity, and therefore this feature deserves particular scrutiny.
The emission-line region photo-ionized by the quasar produces a multitude of forbidden and recombination (‘nebular’) features in the wavelength range between 4600Å and 4750Å [@liu09]. The broad lines produced by the Wolf-Rayet stars at these wavelengths are N V $\lambda$4613, N III $\lambda$4640, C III/IV $\lambda$4650, and He II $\lambda$4686 [@brin08]. Thus the challenge is to disentangle the two sets of lines, a task made even more difficult by the fact that, at the high velocities of the “narrow” lines in our highest $w_{80}$ objects, the velocities in the quasar-ionized set are not dissimilar from those in the Wolf-Rayet set.
In Figure \[pic\_wr\] we zoom in on the wavelength range of interest. We find that only four nebular emission features (\[Fe III\]$\lambda$4658, He II $\lambda$4686, \[Ar IV\]$\lambda$4711 and \[Ar IV\]$\lambda$4740) are sufficient to explain most of the observed flux. For each composite, we assume that these four features are associated with the narrow-line region of the quasar and have the same kinematic structure as H$\beta$. We take the H$\beta$ velocity profile from the same composite and we adjust its amplitude to fit each of these four features. We see no evidence that the relative fluxes of the four lines vary from one composite to the next; thus the models shown in Figure \[pic\_wr\] have \[Fe III\]/He II fixed at 0.15 and each \[Ar IV\]/He II fixed at 0.20, making it essentially a one-parameter fit (the overall ratio of He II to H$\beta$). No broad component in He II is required to produce an adequate fit. There is some evidence in the last composite of some emission filling in between the two argon features, possibly due to \[Ne IV\]$\lambda$4725.
Although most of the “broad” feature turns out to be due to a superposition of relatively broad nebular lines, in the last panel we see excess emission centered at around 4640Å. This is close to the wavelength of one of the more prominent Wolf-Rayet features, N III $\lambda$4640. To estimate the flux of this feature, we model it using the same H$\beta$ velocity profile (although in this case there is no particular physical reason to do so, since the N III line is expected to reflect the kinematics of the stellar winds, rather than be produced in the extended low-density medium of the host galaxy and reflect the kinematics of quasar-ionized gas). We find a reasonable fit with N III/He II=0.15, but we rather doubt the identification of this feature as N III associated with the Wolf-Rayet stars in the host galaxy. The feature is noisy and is somewhat offset to the blue from the nominal expected centroid. When all spectra are coadded together into a single composite, the evidence for N III disappears.
We assume for the moment that the N III feature is in fact detected and estimate its median luminosity. To this end, we use the observed N III / \[OIII\] ratios in the composite and the median \[OIII\] luminosity of the objects in the composite $\log L$\[OIII\]/$L_{\odot}=8.91$ to find $L$(N III)$\simeq 3\times 10^6L_{\odot}$. We then use N III fluxes and starburst models from @scha99 and @leit99 to estimate star-formation rates of $\sim 6 M_{\odot}$/year. This is significantly lower than the estimates of star formation rates in type 2 quasar hosts by @zaka08 (a few tens $M_{\odot}$/year). The difference between the two methods is not alarming because there are still significant discrepancies between modeled and observed fluxes of Wolf-Rayet features, especially as a function of metallicity [@brin08]. But the low fluxes of Wolf-Rayet features in our composites – if they are detected at all – further reinforce our understanding that the star formation rates in type 2 quasar hosts are not adequate for producing the observed radio emission. At 10 $M_{\odot}$/year, using calibrations by @bell03 and @rosa13 we find that typical star-forming galaxies would produce $\nu L_{\nu}$\[1.4GHz\]$=2.5\times 10^{38}$ erg/sec and $\nu L_{\nu}$\[12\]$=2.9\times 10^{43}$ erg/sec, more than an order of magnitude below the values seen in our sample.
Line ratios as a function of line width and line luminosity
-----------------------------------------------------------
Figure \[pic\_comp\_ratio\] summarizes our analysis of the line ratios as a function of \[OIII\] width and luminosity. We report the trends we see among the composite spectra at face value, but we caution that many of the lines we discuss are too weak to be detected in individual objects. As a result, in most cases it is not possible at the moment to evaluate whether the trends we see in the composite spectra accurately represent the trends in the population; a couple of exceptions are noted below.
For every composite, we compute errors in the line ratios using bootstrapping: we randomly resample (100 times) with replacement the set of spectra that contribute to each composite, recompute the composite, and recompute the line ratios. The error bars shown in Figure \[pic\_comp\_ratio\] encompass 68% of the resampled line ratios and represent the statistical error; the systematic uncertainties (those due to continuum subtraction, due to fitting procedure and due to assuming the same kinematic shape for all emission lines) are not included. Typical error bars for the line ratios of the brighter lines such as \[OII\] and \[NeIII\] are about $\sim 0.03$ dex, reaching 0.2 dex for fainter lines such as \[FeVII\]. For brighter lines the dispersion of line ratios within the population can be measured from individual objects and is typically $\sim 0.3-0.4$ dex [@ster12]; thus our bootstrapping procedure confirms that the error in the mean is consistent with $\sim 1/\sqrt{N}$ of the dispersion within the population ($N=114$ is the number of the spectra in each composite). For fainter lines measurement uncertainties dominate over the statistical ones.
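A minimal sketch of this bootstrap, with `spectra` a hypothetical array of the 114 spectra contributing to one composite and `measure_ratio` any function that returns a line ratio from a composite spectrum:

```python
import numpy as np

def bootstrap_ratio_interval(spectra, measure_ratio, n_boot=100, seed=1):
    """68% bootstrap interval of a line ratio measured on composites."""
    rng = np.random.default_rng(seed)
    ratios = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(spectra), len(spectra))     # resample with replacement
        ratios.append(measure_ratio(spectra[idx].mean(axis=0)))  # recompute composite and ratio
    return np.percentile(ratios, [16, 84])                    # 68% interval
```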
We find an increase of \[SII\]$\lambda\lambda$4069,4076 (and to a lesser extent, \[NI\]$\lambda\lambda$5198,5200) as a function of \[OIII\] width, which is our proxy for outflow velocity. Over the range of \[OIII\] widths that we probe, these lines increase respectively by factors of 2.6 and 1.6 (or by 0.4 and 0.2 dex) relative to \[OIII\]. Line ratios that vary with kinematics are often a tell-tale sign of shock excitation in star-burst galaxies and LINERs [@veil94; @veil95]. \[SII\] and \[NI\] trace warm weakly ionized gas phase, which is common in shocks that penetrate deep into clouds and that can produce extended low-ionization regions [@tiel05]. The increase in the relative prominence of these lines suggests an increase in shock-ionization contribution, perhaps in direct response to an increase of the quasar wind velocity.
The \[OIII\]$\lambda$4363/\[OIII\]$\lambda$5007 ratio is $\simeq 1.7\times 10^{-2}$ and does not vary in a regular fashion with either \[OIII\] width or luminosity. This value is significantly higher than that predicted by photo-ionization models [@vill08], and is higher still if corrected for extinction, suggesting some contribution from shocks. Combined shock- and photo-ionization models [@moy02] can help explain this ratio, but because photo-ionization makes the dominant contribution to the \[OIII\] emission the lack of dependence on line kinematics is not too surprising.
Both \[SII\]$\lambda\lambda$4069,4076 and especially \[NI\]$\lambda\lambda$5198,5200 appear to decline as a function of \[OIII\] luminosity, by factors 1.3 and 2.4 (0.1 and 0.4 dex), respectively. A similar decline of stronger low-ionization lines such as \[NII\]$\lambda$6583, \[SII\]$\lambda\lambda$6716,6731 and \[OI\]$\lambda$6300 relative to \[OIII\] is seen in type 1 quasars by @ster12. Unfortunately, these particular lines are not accessible in our higher-redshift objects, but applying their scalings to our lines and the range of \[OIII\] luminosities probed by our data, one would predict a decline by factors of 1.3$-$1.8, consistent with our observations. This decline suggests that the low-ionization regions are being destroyed as $L$\[OIII\] increases, likely as a result of the increase in the bolometric luminosity and thus the availability of ionizing photons. The same effect could also be produced by the increase in the opening angle of quasar obscuration, which would increase the size of the photo-ionized regions and lead to obliteration of the low-ionization regions by direct quasar radiation, but the strong positive \[OIII\]-infrared correlation suggests that the bolometric luminosity increase is the more likely driver for the observed correlations.
\
The \[OII\]/\[OIII\] ratio decreases by a factor of 1.4 (0.14 dex) as a function of \[OIII\] luminosity, which was previously reported by @kim06, with our values being in agreement with theirs. Quasar photo-ionization models produce poor fits to this ratio [@vill08]. Explanations usually involve a combination of high-ionization regions photo-ionized by the quasar and low-ionization regions photo-ionized by star formation [@kim06], so the decrease in \[OII\]/\[OIII\] with \[OIII\] luminosity then indicates the decrease in the relative contribution of the lower-ionization regions, which is similar to the behavior of \[SII\] and \[NI\] lines. However, unlike \[SII\] and \[NI\], \[OII\]/\[OIII\] decreases with \[OIII\] line width. We speculate that this trend may be an indirect hint that the quasar-driven wind of increasing velocity (as measured by \[OIII\] width) suppresses star formation in the host galaxy, a phenomenon which on occasion can be observed in action in individual sources [@cano12].
We do not find any noticeable trends in line ratios with radio luminosity, except that the lines are getting broader as the radio luminosity increases. This confirms our previous finding of the correlation between \[OIII\] width and radio luminosity (Figure \[pic\_radio\]), but suggests that radio emission is not one of the principal drivers of the state of the ionized gas. Furthermore, there is no evidence that Wolf-Rayet features appear in higher radio luminosity objects, which is in qualitative agreement with our previous conclusion that radio emission is not dominated by the star formation in the host. We also examine composite spectra made in bins of infrared luminosity and find their behavior similar to those made in bins of \[OIII\] luminosity, except that the width of the lines is a significantly stronger function of infrared than \[OIII\] luminosity, as we already know from Figures \[pic\_radio\] and \[pic\_wise\].
In the highest line width composite, the profile of \[OII\] is visibly narrower than that of \[OIII\], and indeed we find $w_{90}$\[OIII\]$=2132$ km/sec ($w_{80}=1498$ km/sec), whereas that of \[OII\] (corrected for doublet structure using eq. \[eq:woii\]) is 1410 km/sec (1040 km/sec). This confirms the trend we see in Figure \[pic\_width\] and may mean that the \[OII\]/\[OIII\] ratios calculated from the composite spectra are overestimated (as they are computed with a fixed velocity profile of \[OIII\]). For any lines other than \[OII\], even with the high S/N of the composite spectrum it is not clear whether the kinematic structures are consistent from one line to the next. There is a hint that \[NI\]$\lambda\lambda$5198,5200 is narrower than \[OIII\] as well. We do not detect any obvious overall blueshifts of any features relative to \[OIII\].
Another curious tidbit is the increase in the He I $\lambda$3889 / He I $\lambda$4471 ratio with \[OIII\] width. This ratio ranges from 1.2 to 2.1, which is significantly lower than the standard Case B values [@oste06]. If we correct these ratios for extinction, we find values in the range 1.8 to 2.6 (still somewhat lower than Case B) and increasing monotonically with $w_{80}$ (in the last $w_{80}$ bin, He I $\lambda$3889/He I $\lambda$4471 is likely underestimated because He I $\lambda$3889 starts to blend with \[NeIII\]$\lambda$3869), without any trend with $L$\[OIII\]. It is possible that He I is affected by self-absorption [@monr13], which declines as a function of line width.
Coronal lines
-------------
We detect two very high-ionization features, \[Fe VII\]$\lambda$3759 and \[Fe VII\]$\lambda$5158, which show a tendency to increase with \[OIII\] width. Such lines, termed “coronal”, arise from ions with high ionization potential, 99 eV in the case of FeVII. The most frequently discussed mechanism for their production is excitation by quasar photo-ionization fairly close to the nucleus, akin to an “intermediate” region between the very compact broad-line region and a more extended narrow-line one [@gran78; @gelb09; @mull09; @mull11]. In photo-ionization models, high electron and photon densities are typically required to produce coronal lines, which explains why they arise close to the nucleus, e.g., at sub-pc scale in the case of a Seyfert galaxy modeled by @mull09. These authors propose that the inner edge of the obscuring material is the plausible location of the coronal emission regions. Similarly, the rapid variability of coronal line emission in another Seyfert galaxy suggests that the size of the emission region is just a few light years [@komo08].
Such small distances present a challenge to the study by @rodr11 who find that the coronal lines are detected at similar rates in type 1 and type 2 objects, indicating that they originate outside the obscuring material, although @rodr11 still require very high densities, $n_e=10^8-10^9$ cm$^{-3}$. If coronal lines originate somewhere near the inner edge of the obscuring material, depending on the orientation we may see them in an occasional type 2 object, but overall one would expect to see lower detection rates of coronal lines in type 2 objects than in type 1s in this scenario.
It would be surprising to see a region close to or inside of the obscuring material in type 2 quasars, and even more surprising to see its flux correlate with the kinematics of much lower-ionization, much more extended gas. Some recent models demonstrate the possibility of producing coronal lines on the surface of narrow-line region clouds via photo-ionization from the quasar [@ster14], and while this approach solves the problem of visibility of coronal lines in type 2 quasars, it may not explain the dependence on line width. Therefore, we suggest that instead of circumnuclear photo-ionization in type 2 quasars some or most of the coronal lines are due to shocks, possibly those produced when the radiatively-accelerated wind from the quasar runs into dense clouds.
We draw inspiration from the observations by @mazz13 who find that the extended ($\la 170$ pc) coronal line emission in NGC 1068 is likely dominated by shock excitation. These authors find that coronal line ratios are more easily explained by shock models than by photo-ionization models. Furthermore, they find that the kinematic structure of coronal lines is similar to that of several lower ionization features. Most intriguingly, the spatial and kinematic distribution of coronal lines is similar to that of very low ionization lines, such as \[FeII\], characteristic of shocks propagating into partly ionized medium. The latter point is particularly interesting as we observe the simultaneous rise of coronal lines and \[SII\] and \[NI\] in our sources.
In NGC 1068, coronal lines are co-spatial with the radio jet, and thus @mazz13 suggest that jet-driven shocks may play a role in line excitation. This is also a possibility for our sources and for type 1 quasars [@huse13; @mull13], especially in light of the fact that radio emission is strongly correlated with line kinematics (Section \[sec:radio\]). However, we do not favor this scenario. While radio luminosity increases with \[OIII\] widths, and so do the fluxes of \[FeVII\], \[SII\] and \[NI\], there is no evidence that the latter lines increase in the composites made in bins of radio luminosity. What this suggests to us is that the primary driver for all these correlations is not the power of the radio emission, but the kinematics of the gas. Higher outflow velocity implies higher velocities of shocks driven into dense clouds, which enhances coronal emission, while the radio emission follows as discussed in Section \[sec:origin\].
What sort of conditions would be required to produce coronal lines by driving low-density wind into high-density clouds? The typical shock velocities inside the shocked clouds implied by the observed coronal line ratios in NGC 1068 are $v_{\rm shock}=300-1000$ km/sec for cloud densities $n_{\rm cloud}=100-300$ cm$^{-3}$ [@mazz13]. To produce the same values in our picture, the incident wind velocity must be even higher, $v_{\rm wind}\sim v_{\rm shock}\sqrt{n_{\rm cloud}/n_{\rm wind}}$, where $n_{\rm wind}$ is the much lower density of the volume-filling component of the wind, so the required $v_{\rm wind}$ can be several thousand km/sec. Such high velocities are not achievable at large distances in low-luminosity active galaxies, where the radiatively-driven wind would get quenched by the interaction with the interstellar medium, but could be typical of quasar-driven winds [@moe09; @zubo12; @fauc12b] even on galaxy-wide scales [@arav13; @liu13b].
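As a rough numerical check of this scaling (the wind density $n_{\rm wind}$ is not constrained by our data, so the values below are purely illustrative assumptions):

```python
import numpy as np

v_shock = np.array([300.0, 1000.0])   # km/s, shock velocities inside the clouds (@mazz13)
n_cloud = np.array([100.0, 300.0])    # cm^-3, cloud densities (@mazz13)
for n_wind in (1.0, 10.0):            # cm^-3, assumed densities of the volume-filling wind
    v_wind = v_shock * np.sqrt(n_cloud / n_wind)
    print(f"n_wind = {n_wind:4.1f} cm^-3 -> v_wind = {v_wind[0]:.0f} - {v_wind[1]:.0f} km/s")
```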
Discussion {#sec:discussion}
==========
The origin of the line kinematics / radio / infrared relationships {#sec:origin}
------------------------------------------------------------------
The tantalizing relationship between radio and infrared luminosities and the \[OIII\] line kinematics may contain interesting clues about the structure and the driving mechanism of ionized gas outflows in quasars, but its interpretation is not straightforward. For quite some time, it was thought that the extended ionized gas around active galaxies was much more likely to be found around radio-loud objects than around radio-quiet ones [@stoc87]. This was in line with theoretical ideas: powerful relativistic jets can inflate over-pressured cocoons which in turn sweep up galactic and inter-galactic medium plausibly resulting in narrow-line emission [@bege89].
With more sensitive observations, extended ionized gas emission has now been detected around radio-loud and radio-quiet quasars alike, indicating that gas nebulae around both types have roughly similar sizes [@liu13a; @liu13b; @liu14], which reach 10 kpc or greater at the highest luminosities [@gree11; @liu13a; @hain13; @hain14; @harr14]. The major difference appears to be that the nebulae around radio-loud objects tend to be more disturbed (lumpy) or more elongated, whereas the nebulae around radio-quiet objects are smooth and featureless [@liu13a]. The gas kinematics are disturbed and show outflow signatures not only in the central brightest parts of the nebulae, but all the way out to several kpc from the nucleus [@gree11; @liu13b].
The recent discovery of the relationship between gas kinematics and radio emission within the radio-quiet population goes to the heart of the origin of the outflows in the majority of quasars, as well as of the origin of radio-quiet emission itself. Within the last year, there have been several new results on these issues [@cond13; @huse13; @mull13]. In this section we discuss pros and cons of various mechanisms proposed to explain the radio-quiet emission of quasars and the correlation between radio emission and gas kinematics.
The observed gas kinematics / radio correlation is reminiscent of the classical work on narrow-line kinematics by @veil91a [@veil91b; @veil91c] who found a correlation between \[OIII\]$\lambda$5007 line width and radio luminosity within a sample of 16 nearby Seyfert galaxies. In these and other nearby objects, high-resolution radio observations sometimes reveal a strong relationship between the position angle and morphology of the radio emission and the ionized gas emission, especially for the kinematically disturbed gas component (e.g., @veil91c [@bowe95; @cape97; @falk98; @cape99]). In some cases the jet appears to be at least in part responsible for the physical conditions in the narrow-line region (e.g., @krae98). Interestingly, @leip06 find a correlation between the size of the narrow-line region and the size of the radio source in nearby radio-quiet type 1 quasars. Thus, we first consider the possibility that the gas kinematics / radio relationship is established via direct interactions between the jet and the gas.
In sources where the jet and the ionized gas emission are aligned, it is tempting to postulate that radio jets are either driving the outflow or providing shock-ionization, making them responsible for the morphology and / or excitation of the ionized gas, which may provide some basis for the radio / kinematics correlation. However, there may be a certain bias in reporting only those objects where high-resolution radio observations reveal a jet, especially if its position angle is not far off the orientation of the line emission. Far from every source shows evidence of jets in high-resolution radio observations [@veil91c; @ho01]. In large unbiased samples the alignment between jets and ionized gas is statistically weak at best [@priv08], although some authors find a stronger relationship [@schm03], especially in compact radio sources [@shih13]. This remains a difficult issue because the morphology and the position angle of the radio emission and the ionized gas emission are a function of scale, and it is unclear which scales are relevant. Additionally, in principle the direction of photo-ionization by the active nucleus (which often determines the orientation of the nebula) and the jet (which may be responsible for the disturbed kinematics) can be mis-aligned, further complicating the picture.
We examined 20 objects from our sample that are located in Stripe 82 and were mapped with the VLA by @hodg11. These observations are deeper than FIRST (rms$=52\mu$Jy/beam vs 130$\mu$Jy/beam) and have higher spatial resolution (FWHM$=1.8''$ vs $5''$), better matched to the sizes of the quasar host galaxies at the typical redshifts of our sample ($1''$ at $z=0.5$ is 6.1 kpc). Of the 20 sources, one object is a giant FRII radio galaxy in both datasets; 13 are point sources both in FIRST and in Stripe 82; and the remaining 7 are not detected in FIRST, but are all detected in Stripe 82 as point sources with fluxes between 0.4-0.7 mJy, i.e., just a factor of a couple below the catalog limit of the FIRST survey. None of the 19 objects showed resolved morphology in the higher resolution observations. Among type 2 quasars, @lal10 find a mix of unresolved and slightly resolved sources at $0.8''$ resolution, but only a small fraction ($\sim 1/7$) with linear jet-like signatures. The radio emission of the $\sim 10$ kpc ‘super-bubble’ quasar outflow [@gree12] is compact ($<1$ kpc, J.Wrobel, private communication). Overall we find it unlikely that large-scale ($\ga 5$ kpc) jets are responsible for the correlation between gas kinematics and radio emission we see in our sample.
A more promising hypothesis is that in the radio-quiet objects the radio emission is due to a compact jet ($\la 1$ kpc) not resolved in the current radio observations which is driving the outflow of ionized gas [@spoo09; @huse13; @mull13]. Because the kinematics of the gas is disturbed on galaxy-wide scales (out to $\sim 10$ kpc, @liu13b), the effect of the jet on the gas would need to be indirect in this scenario: the jet would need to inject the energy into the interstellar medium of the galaxy on small unresolved scales, with the energy then converted into a wide-angle outflow that engulfs the entire galaxy as observed. Although the observational test of this hypothesis seems straightforward (every radio-quiet quasar should have a jet), in practice testing this paradigm is difficult, in part because it is not clear that every extended radio structure represents a jet. Furthermore, radio emission of radio-quiet active galaxies tends to be compact [@naga03], so it is often necessary to zoom in all the way into the central several pc in order to resolve the radio emission [@naga05]. On those scales, even when a linear structure seems unambiguously jet-like, it is not clear that it is sufficient to power an energetic galaxy-wide outflow.
A completely different approach is followed by @cond13 who argue, on the basis of the shape of the radio luminosity function, that the radio emission in radio-quiet quasars is mostly or entirely due to star formation in their host galaxies. In our objects, in favor of this hypothesis is the strong correlation between the infrared and the radio fluxes, which furthermore appear to lie on the extension of the locus of normal star-forming galaxies (Figure \[pic\_wise\]). However, at the luminosity of the radio-quiet subsample of $\log(\nu L_{\nu}[1.4{\rm GHz}], {\rm erg/sec})=40.0\pm 0.7$ (mean$=$median and standard deviation), the median star formation rate suggested by the classical radio / star-formation correlation [@helo85; @bell03] would be an astonishing 400 $M_{\odot}$/year. Although the star formation rates of type 2 quasar hosts are among the highest in the population of active galaxies, in the ballpark of a few tens $M_{\odot}$/year [@zaka08], they fall short of the values required to explain the observed radio luminosity, as was also noticed by @lal10. Furthermore, as was already noted above, the mid-infrared fluxes of our objects shown in Figure \[pic\_wise\] are dominated by the emission of the quasar-heated dust. Thus both radio luminosity and mid-infrared luminosity in Figure \[pic\_wise\] are unlikely to be due to star formation.
Although compact jets cannot be ruled out by the existing data, we put forward an alternative suggestion for the origin of the radio-quiet emission. We propose that luminous quasars radiatively accelerate winds [@murr95; @prog00] which then slam into the surrounding medium and drive shocks into the host galaxy. This shock blast initiated by the quasar-driven wind accelerates particles, just as happens in a supernova remnant, and these particles then produce synchrotron emission [@stoc92; @jian10; @fauc12b; @zubo12]. To estimate the energetics of the wind required in this scenario, we assume for the moment that the efficiency of converting the kinetic energy of the outflow $L_{\rm wind}$ into radio synchrotron emission is similar in starburst-driven and quasar-driven winds. In a galaxy forming $\psi$ $M_{\odot}$/year worth of stars, the kinetic energy of the starburst-driven wind is $7\times 10^{41}\psi$ erg/sec [@leit99]. The same galaxy produces $\nu L_{\nu}$\[1.4GHz\]$=2.5\times 10^{37}\psi$ erg/sec worth of radio emission [@bell03], corresponding to an efficiency of $3.6\times 10^{-5}$ for converting kinetic energy into radio luminosity.
Applying the same efficiency to our objects, we find that to reproduce our median radio luminosity of $10^{40}$ erg/sec we need $L_{\rm wind}=3\times 10^{44}$ erg/sec. The bolometric luminosities of our objects are poorly known, but using scalings presented by @liu09 we can estimate that for the median \[OIII\] luminosity of our sample ($\log L$\[OIII\]/$L_{\odot}=8.9$) the bolometric luminosity is $8\times 10^{45}$ erg/sec. Thus the median fraction of the bolometric luminosity converted to the kinetic luminosity of the wind is 4%, in rough agreement with our previous estimates based on observations of spatially resolved winds [@liu13b]. These ideas are further supported by our observation that $\nu L_{\nu}$\[1.4GHz\] has a close-to-quadratic dependence on \[OIII\] width (Figure \[pic\_radio\]), suggesting that the relationship between the kinetic energy of the ionized gas and radio luminosity is linear.
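The bookkeeping of the last two paragraphs can be reproduced in a few lines (all numbers are taken from the text); it recovers $L_{\rm wind}\simeq 3\times 10^{44}$ erg/sec and a kinetic fraction of a few per cent of the bolometric luminosity:

```python
L_radio_per_sfr = 2.5e37   # erg/s per Msun/yr of star formation (@bell03)
L_wind_per_sfr = 7e41      # erg/s per Msun/yr, starburst wind kinetic power (@leit99)
efficiency = L_radio_per_sfr / L_wind_per_sfr       # ~3.6e-5

L_radio_median = 1e40      # erg/s, median radio luminosity of the radio-quiet subsample
L_wind = L_radio_median / efficiency                # ~3e44 erg/s
L_bol = 8e45               # erg/s, bolometric luminosity from the [OIII] scaling of @liu09

print(f"efficiency = {efficiency:.1e}")
print(f"L_wind     = {L_wind:.1e} erg/s ({L_wind / L_bol:.1%} of L_bol)")
```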
In such heavily obscured quasars we cannot directly investigate the effect of the optical luminosity or even X-ray luminosity on gas kinematics or radio emission, which is an interesting direction to pursue using samples of type 1 quasars. In these objects, there is a known correlation between radio luminosity and their accretion power derived from ultra-violet luminosity [@falk95]. While this correlation is usually interpreted as a physical connection between accretion and resulting jets, an alternative explanation is that the accretion power leads to winds which in turn generate radio emission as described above without involving jets.
Radiatively-driven quasar winds
-------------------------------
In Figure \[pic\_model\] we present a schematic of the components of the narrow-line regions of quasars discussed in this paper. The quasar is in the center and is characterized by its bolometric luminosity $L_{\rm bol}$. Obscuring material is distributed anisotropically around it (dark grey “torus”), with $\Omega$ being the opening angle of the obscuration which determines the directions that can be photo-ionized by the direct quasar radiation. The relationship between the two values $L_{\rm bol}$ and $\Omega$ is not well known. Demographic studies of quasars continue to disagree on whether the obscured fraction stays constant or decreases as a function of luminosity [@trei08; @lawr10], but it is likely that at a given luminosity there is a wide distribution of $\Omega$ [@zaka06].
Another component of the model is the quasar-driven wind. The wind may be initially driven anisotropically, e.g., close to the equatorial plane of the accretion disk [@murr95; @prog00], and when it is first launched near the quasar, it is very fast, with velocities up to $\sim 0.1c$. The wind runs into the interstellar medium close to the nucleus and interacts with it, producing shock waves that propagate through the interstellar medium of the galaxy. If this medium is clumpy, the propagation of the wind proceeds along paths of least resistance, and the shape of the large-scale wind is determined largely by the distribution of the interstellar medium [@wagn13], rather than by the initial anisotropies. Thus the wind is shown propagating in all directions, even those that are affected by the circumnuclear obscuration.
![A cartoon of our model for the narrow-line region of a quasar (central 5-point star). The quasar is surrounded by circumnuclear obscuring material (dark grey “torus”) whose opening angle is $\Omega$. The wind propagates more isotropically than the ionizing radiation from the quasar which is concentrated in bi-cones marked “photo-ionized”. Optical emission lines are produced by dense clouds (small circles). []{data-label="pic_model"}](picture_science23.eps)
The wind velocity $v_{\rm wind}$ likely varies as a function of the location in the galaxy and is not well-known observationally. In our data, the line-of-sight velocity dispersion of the optical emission lines is characteristic of the velocities of the clouds $v_0$ (light grey), not the velocity of the wind. The acceleration of clouds and their survival as they are impacted by the wind is a topic of active research [@mell02; @coop09; @aluz12; @bara12; @fauc12a], but it seems natural that the velocity of the wind should be at least as high as the velocity of the clouds that it accelerates ($v_{\rm wind}\ga v_0$) and that they should be strongly related. Theoretical arguments suggest wind velocities of $v_{\rm wind} \ga 1000$ km/sec [@king05; @zubo12; @fauc12b], in rough agreement with our inferred values of $v_0$.
We do not observe $v_0$ directly; rather, we infer these values from the line-of-sight velocity width of the emission lines. The increase in shock-diagnostic lines \[SII\], \[NI\] and possibly \[FeVII\] as a function of \[OIII\] width provides evidence that the observed velocity width is a good proxy for the outflow velocity. Furthermore, the \[OIII\] width positively correlates with mid-infrared luminosity (Figure \[pic\_wise\]), which we take to suggest that the outflow is ultimately driven by the radiative pressure near the quasar [@murr95; @prog00]. After the winds are launched, they interact with the interstellar medium in complex ways, which perhaps explains why the correlation between infrared luminosity and \[OIII\] velocity width (which our observations probe on galaxy-wide scales, @liu13b) has a large scatter.
Conclusions {#sec:conclusions}
===========
Gas kinematics and outflows
---------------------------
In this paper we study the kinematics of ionized gas emission in 568 luminous obscured quasars from the sample of @reye08, primarily using their \[OIII\]$\lambda\lambda$4959,5007 emission lines. For every object, we determine a set of non-parametric measures of line asymmetry, velocity width and shape. We find that objects with blueshifted emission and line asymmetries are more prevalent in our sample, which we take as a signature that outflows in which the redshifted part is affected by dust extinction are common. The velocity widths we see in our sample (median $w_{80}=752$ km/sec and $w_{90}=1060$ km/sec, max $w_{80}=2918$ km/sec and $w_{90}=4780$ km/sec) are much higher than those in starburst galaxies (median $w_{90}\simeq 600$ km/sec, @rupk13a [@hill14]).
Using the conversion between the line-of-sight velocity width of the line and the outflow velocity from Section \[sec:outflows\], we can estimate outflow velocities as $v_0\simeq w_{80}/1.5$. We find that for about half of the objects in our sample, $v_0$ ranges from $\sim$500 to $\sim$2000 km/sec. It is likely that in every object a range of cloud velocities is present; these estimates should be thought of as the median velocities in each source. From the observations presented here, it is not known which spatial scales dominate these estimates, but from our spatially resolved observations [@liu13b] it is clear that at least some of the clouds maintain similar velocities all the way out to $\sim 10$ kpc from the quasar.
Neither the overall width nor the width of the narrower component in the multi-Gaussian line decomposition correlates with the velocity dispersion of the host galaxy. We therefore find no evidence for significant amounts of gas in dynamical equilibrium with the host (e.g., a component associated with the rotating galaxy disk). The higher velocity gas tends to be on average blueshifted by $\sim 100$ km/sec relative to the lower velocity gas. This may mean that the higher velocity gas is found on somewhat smaller spatial scales which are more prone to dust extinction. This conclusion is in line with the slight observed decrease of line-of-sight velocity dispersions of the gas as a function of the distance from the nucleus seen in spatially resolved observations of quasar winds [@liu13b]. The effect is very subtle (a 3% decrease per projected kpc), and overall high gas velocities are maintained over the entire host galaxy [@liu13b]. Other strong lines, such as \[OII\] and H$\beta$, also show outflow signatures, but significantly milder than those seen in the \[OIII\] line. In the objects with the most extreme \[OIII\] kinematics, \[OII\] is much narrower than \[OIII\], with the kinematics of H$\beta$ being in between the two.
The increase in shock-diagnostic lines \[SII\]$\lambda\lambda$4069,4076 and \[NI\]$\lambda\lambda$5198,5200 as a function of \[OIII\] width provides strong support for using the \[OIII\] width as a proxy for the outflow velocity. \[OII\] declines with both \[OIII\] width and \[OIII\] luminosity, suggesting that the \[OII\]-emitting regions are over-ionized by photo-ionization and quenched by increasing shock velocity. We also find evidence for a shock-ionization contribution to the \[FeVII\] coronal lines.
Outflows and radio emission
---------------------------
\[OIII\] velocity width correlates strongly with radio luminosity. We suggest that the radio emission is a by-product of the outflow activity, with particles accelerated on the shock fronts as the radiatively driven quasar wind propagates into the interstellar medium of the host galaxy (this mechanism was first hypothesized by @stoc92 to explain the low levels of radio emission in broad absorption line quasars). The median radio luminosity in our sample, $\nu L_{\nu}$\[1.4GHz\]$=10^{40}$ erg/sec, requires kinetic energy of the outflow of $L_{\rm wind}=3\times 10^{44}$ erg/sec if the efficiency of conversion in quasar-driven winds is similar to that in supernova-driven winds. The bolometric luminosities of our objects are poorly known, but we estimate that the median value for our sample is $8\times 10^{45}$ erg/sec and thus the kinetic luminosity of the wind is 4% of the bolometric power. This idea is further reinforced by the approximately quadratic dependence $\nu L_{\nu}$\[1.4GHz\]$\propto$(width\[OIII\])$^2$, which implies a linear relationship between radio luminosity and kinetic energy of the outflow. \[OIII\] velocity is positively correlated with the infrared luminosity, which suggests that the ionized gas outflow is ultimately driven by the radiative pressure close to the accreting black hole.
Another possible explanation for the \[OIII\] width / radio correlation is that the mechanical energy of the relativistic jet (which has not yet broken out of its host galaxy) is used to heat an overpressured cocoon [@bege89] which then launches the wind of ionized gas [@mull13]. As long as the jet is still compact, it appears as an unresolved radio core in FIRST observations. We use the scalings between the core radio luminosity and the jet kinetic energy by @merl07 to estimate how much kinetic power would be required to produce the observed median radio luminosity. We find $L_{\rm jet}=3\times 10^{44}$ erg/sec, a value which is almost identical to the wind kinetic power obtained in the previous paragraph, even though they were estimated using completely different methods. This is likely not a coincidence: the efficiency of conversion of mechanical luminosity into synchrotron radiation is determined by the fraction of energy that can be converted into relativistic particles on shock fronts, which is likely independent of the origin of these shocks.
The similarity of energy requirements underscores the difficulty of distinguishing between these two mechanisms. In the jet scenario one expects to see a jet in the high-resolution radio observations, whereas in the wind scenario radio emission is more diffuse and is present everywhere where shocks are propagating; however, in practice the collimated part of the jet does not dissipate energy very efficiently and so can be hard to detect. In both models, the morphology of the ionized gas depends much more strongly on the distribution of the interstellar medium than on the exact driving mechanism [@gaib11; @wagn12; @wagn13]. Thus, the morphology of ionized gas is not necessarily a useful clue.
Radio spectral index could be a useful measurement for identifying recent or on-going particle acceleration (flatter synchrotron spectra for freshly accelerated, more energetic particles), but again particle acceleration may be happening in both scenarios. Interestingly, @lal10 find flatter spectral indices in type 2 quasars than expected for jets in the standard geometry-based unification models, so a combination of spectral and morphological investigations in the radio may be worth pursuing further.
One uncomfortable consequence of the jet scenario is that it requires every radio-quiet quasar to have a powerful jet, with only a minority of them being active long enough to break out of the galaxy (otherwise there would be too many extended radio sources). Another problem is that the jet scenario provides no ready explanation for the correlation between \[OIII\] width and infrared luminosity, unless jets are also directly connected to the accretion power [@falk95].
In the wind scenario, if the initial distribution of matter is roughly spherically symmetric, so will be the ionized gas emission and the radio emission. In a disk galaxy, once the size of the wind reaches the vertical scaleheight of the disk it propagates largely along the path of least resistance perpendicular to the disk [@wagn13], forming two symmetric bubbles on either side. Ionized gas emission is concentrated in shells and filaments, whereas the radio emission is filling the bubbles. In low-resolution data, one would see both the line emission and the radio emission oriented roughly in the same direction, so distinguishing this mechanism from jet-induced outflow requires high-quality observations.
Such bubbles are directly seen in the radio in external galaxies [@ceci01; @hota06] and in our own Milky Way, where the bubbles are known as “the microwave haze” [@fink04; @su10]. These structures are also seen in X-rays, which often closely trace the morphology of the radio emission and / or the ionized gas filaments [@ceci01; @cros08; @wang09]. In our Galaxy, not only are X-rays observed [@sofu00], but the structures are also seen in gamma-rays, where they are known as “the Fermi bubbles” [@su10]. For comparison, the mechanical luminosity necessary to inflate the bubbles in NGC 3079 [@ceci01] is $\sim 30$ times lower than our median $L_{\rm wind}$, whereas that inferred for “the Fermi bubbles” in the Milky Way is $10^4-10^5$ times lower than our $L_{\rm wind}$. It is interesting that all of the examples above host jets [@ceci01; @cros08; @wang09; @su12], but it remains unclear whether this is universally true and whether they are contributing most of the required power.
Feedback in low- and high-luminosity active galaxies
----------------------------------------------------
Our investigation focuses on the most luminous type 2 quasars at $z\la 1$. In these sources, we find strong evidence for quasar-driven winds on galaxy-wide scales, for radio emission associated with these winds, and for ionized gas velocities in excess of escape velocity from the galaxy. In lower luminosity active galaxies, many previous studies have demonstrated that ionized gas is in dynamical equilibrium with the host galaxy and that radio emission is likely a by-product of star-formation processes. Thus quasar-driven feedback may be present above some threshold in luminosity and absent below this threshold. We estimate this threshold by observing that in Figure \[pic\_wise\], our sources separate well from star-forming galaxies at about $\nu L_{\nu}$\[12\]$=2\times 10^{44}$ erg/sec. Using bolometric corrections from @rich06, @liu09 and @liu13b, we estimate the corresponding bolometric luminosity to be $L_{\rm bol}=3\times 10^{45}$ erg/sec (uncertainty $\pm 0.4$ dex).
Given the crudeness of this estimate, we consider it to be consistent with the value of $L_{\rm bol}=2.4\times 10^{45}$ erg/sec (uncertainty $\pm 0.3$ dex) suggested by @veil13 based on incidence of molecular outflows in ultraluminous infrared galaxies [@veil13]. In principle the threshold value should be dependent on the depth of the potential (and therefore the stellar velocity dispersion), as well as the amount of gas that needs to be accelerated [@zubo12], and it will be interesting to see whether these ideas are borne out in future analyses.
NLZ would like to acknowledge useful conversations with M.J.Collinge (who suggested the diffuse nature of radio emission) and S.Tremaine (who suggested the analogy with Fermi bubbles), as well as with B.Groves, J.Krolik, D.Kushnir and J.Ostriker. The authors are grateful to the anonymous referee, H.Falcke, C.Norman, G.Richards, H.Spoon, J.Stern, J.Stocke, S. van Velzen and S.Veilleux for constructive comments during the review process. NLZ is thankful for the continued hospitality of the Institute for Advanced Study (Princeton) where part of this work was performed.
[^1]: <http://www.pa.uky.edu/~peter/atomic/>
|
---
abstract: 'Mutualistic communities have an internal structure that makes them resilient to external perturbations. Recent research has focused on their stability and on the topology of the relations between the different organisms to explain the reasons for the robustness of these systems. Much less attention has been devoted to analyzing their dynamics. The main population models in use are modifications of the *r - K* formulation of the logistic equation, with additional terms to account for the benefits produced by the interspecific interactions. These models have shortcomings, as the so-called *r - K* formulation diverges under some conditions. In this work, we introduce a model for population dynamics under mutualism that preserves the original logistic formulation. It is mathematically simpler than the widely used type II models, although it shows similar complexity in terms of fixed points and stability of the dynamics. We perform an analytical stability analysis and numerical simulations to study the model behavior in general interaction scenarios, including tests of the resilience of its dynamics under external perturbations. Despite its simplicity, our results indicate that the model dynamics shows an important richness that can be used to gain further insights into the dynamics of mutualistic communities.'
author:
- 'Javier García-Algarra'
- Javier Galeano
- Juan Manuel Pastor
- José María Iriondo
- 'José J. Ramasco'
bibliography:
- 'ref-mutualism.bib'
title: Rethinking the logistic approach for population dynamics of mutualistic interactions
---
Introduction {#intro}
============
Despite its long history, there are still several open issues in the research of ecological population dynamics. Some of these questions were highlighted in the 125th anniversary issue of the journal [*Science*]{} [@Kennedy05; @Pennisi05; @Stokstad05]. For example, aspects such as the mechanisms determining species diversity in an ecosystem are under a very active scrutiny by an interdisciplinary scientific community [@williams00; @dunne02; @olensen07; @allesina08; @bascompte09; @saavedra09; @bastolla09; @fortuna2010nestedness; @encinas12]. Quantitative population dynamics goes back to $1202$ when Leonardo Fibonacci, in his [*Liber Abaci*]{}, described the famous series that follows the growth of rabbit population [@Sigler02]. Classical population theory began, however, in $1798$ with Robert Malthus’ [*An Essay on the Principle of Population*]{} [@Malthus98]. Malthus argued that population growth is the result of the difference between births and deaths, and that these magnitudes are proportional to the current population. Mathematically, this translates in the differential equation: $$\frac{dN}{dt}=r_0\, N ,
\label{eq:malthus}$$ where $N$ is the population size, $r_0$ is the [*intrinsic rate*]{} of growth of the population and equals the difference between the rates of birth and death (assuming no migrations).
The Malthusian model predicts an exponential variation of the population, which if $r_0 > 0$ translates into an unbounded growth. In this model, $r_0$ remains constant along the process ignoring thus limiting factors on the population such as the lack of nutrients or space. In $1838$ Verhulst introduced an additional term, proposing the so-called *logistic* equation (see [@Verhulst1845]). The growth rate must decrease as $N$ increases to limit population growth and the simplest way to achieve this is by making $r_0$ a linear function of $N$: $ r_0 = r - \alpha N$, where $r$ is the intrinsic growth rate and $\alpha$ a positive (friction) coefficient that is interpreted as the intraspecific competition. This approach leads to the $r-\alpha$ model: $$\frac{dN}{dt}=r \, N \, - \alpha \, N^2 .
\label{eq:primitiveverhulst}$$ The term with $\alpha$ acts as a biological *brake* leading the system to a point of equilibrium for the dynamics with a population value approaching $ K = r / \alpha$, usually called the *carrying capacity* of the system.
The logistic equation is best known in the form that Raymond Pearl introduced in $1930$ (see [@mallet2012struggle] for an excellent historical review). In this formulation, the carrying capacity appears explicitly, and so it is known as $r-K$:
$$\frac{dN}{dt}=r \, N \, \left(1-\frac{N}{K}\right) .
\label{pearl}$$
The solution of this equation is a sigmoid curve that asymptotically tends to $K$. This formulation has some major mathematical drawbacks [@kuno1991some; @gabriel2005paradoxes]. The most important is that it is not valid when the initial population is higher than the carrying capacity and $r$ is negative. Under those conditions, it predicts an unbounded population growth. This issue was noted by Richard Levins, and consequently is called the Levins’ paradox [@gabriel2005paradoxes]. It is important to stress that all mutualistic models derived from Pearl’s formula inherit its limitations in this sense.
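The paradox is easy to reproduce numerically. In the sketch below (arbitrary parameter values), the $r-K$ form with $r<0$ and $N(0)>K$ grows without bound, while the $r-\alpha$ form with the same negative $r$ and a positive friction coefficient decays towards extinction as expected:

```python
from scipy.integrate import solve_ivp

r, K, alpha, N0 = -0.5, 100.0, 0.005, 150.0     # arbitrary illustrative values

rK = solve_ivp(lambda t, N: r * N * (1 - N / K), (0.0, 2.0), [N0])   # Pearl's r-K form
ra = solve_ivp(lambda t, N: r * N - alpha * N**2, (0.0, 2.0), [N0])  # Verhulst's r-alpha form

print(f"r-K form:     N(2) = {rK.y[0, -1]:7.1f}  (grows without bound)")
print(f"r-alpha form: N(2) = {ra.y[0, -1]:7.1f}  (declines towards extinction)")
```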
These seminal models of population dynamics did not take into account interactions between species. When several species co-occur in an community there can be a rich set of relationships among them that can be represented as a complex interaction network. In $1926$, Vito Volterra proposed a two-species model to explain the behavior of some fisheries in the Adriatic sea [@Volterra26]. Volterra’s equations describe prey $N(t)$ and predator populations $P(t)$ in the following way: $$\begin{aligned}
\displaystyle &\frac{dN}{dt}=N\, \left(a-b \,P\right), \nonumber\\
\displaystyle &\frac{dP}{dt}=P\, \left(c\, N-d\right) ,
\label{myeq1}
\end{aligned}$$ where $a$, $b$, $c$, and $d$ are positive constants. In the Lotka-Volterra model, as it is known today, the prey population growth is limited by the predator population, while the latter benefits from the prey and is limited by its own death rate. This pair of equations has an oscillatory solution that in the presence of further species can even become chaotic.
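A minimal numerical integration of Eq. \[myeq1\] with arbitrary positive constants illustrates the oscillatory solution:

```python
from scipy.integrate import solve_ivp

a, b, c, d = 1.0, 0.1, 0.02, 0.5      # arbitrary positive constants

def lotka_volterra(t, y):
    N, P = y
    return [N * (a - b * P), P * (c * N - d)]

# start away from the equilibrium (N*, P*) = (d/c, a/b) = (25, 10)
sol = solve_ivp(lotka_volterra, (0.0, 60.0), [30.0, 5.0], max_step=0.05)
print(f"prey oscillates between     {sol.y[0].min():5.1f} and {sol.y[0].max():5.1f}")
print(f"predator oscillates between {sol.y[1].min():5.1f} and {sol.y[1].max():5.1f}")
```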
While prey-predator and competition interactions have been extensively studied, mutualistic interactions, which are beneficial for all the species involved, have received a lower level of attention. Interestingly, back in the XIX century, Charles Darwin had already noticed the importance of a mutualistic interaction between orchids and their pollinators [@Darwin62]. Actually, the relations between plants and their pollinators and seed dispersers are the paradigmatic examples of mutualism. In this context, @Ehr64 alluded to the importance of plant-animal interactions in the generation of Earth’s biodiversity. The simplest mutualistic model without [*’an orgy of mutual benefaction’*]{} was proposed by @may1981models. Each of May’s equations for two species is a logistic model with an extra term accounting for the mutualistic benefit. It is the same idea as in the Lotka-Volterra model but interactions between species always add to the resulting population. May’s equations for two species can be written as $$\begin{aligned}
\frac{dN_1}{dt}=r_1 \,N_1\,\left(1-\frac{N_1}{K_1}\right)+r_1\, N_1\,\beta_{12}\, \frac{N_2}{K_1} , \nonumber \\
\frac{dN_2}{dt}=r_2\, N_2\, \left(1-\frac{N_2}{K_2}\right)+r_2\, N_2\, \beta_{21} \, \frac{N_1}{K_2} ,
\label{myeq2}\end{aligned}$$ where $N_1\,(N_2)$ is the population of species $1\,(2)$; $r_1 \, (r_2)$ is the intrinsic growth rate of population $1\, (2)$ and $K_1\, (K_2)$ the carrying capacity. This is the maximum population that the environment can sustain indefinitely, given the food, habitat, water and other supplies available in the environment. Finally, $\beta_{12}\,(\beta_{21})$ is the coefficient that embodies the benefit for population $1\,(2)$ of each interaction with population $2\,(1)$. The major drawback of May’s model is that it also leads to unbounded growth. It has nonetheless been an inspiration for subsequent mutualistic models that incorporate terms to solve this problem.
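The unbounded growth is easy to reproduce numerically: for $\beta_{12}\beta_{21}<1$ the populations settle at the finite equilibrium $(K_1+\beta_{12}K_2)/(1-\beta_{12}\beta_{21})$, whereas for $\beta_{12}\beta_{21}\geq 1$ they diverge in finite time. A sketch with arbitrary parameter values:

```python
from scipy.integrate import solve_ivp

def may(t, y, r1, r2, K1, K2, b12, b21):
    N1, N2 = y
    return [r1 * N1 * (1 - N1 / K1) + r1 * N1 * b12 * N2 / K1,
            r2 * N2 * (1 - N2 / K2) + r2 * N2 * b21 * N1 / K2]

def blowup(t, y, *params):            # stop the integration once N1 reaches 10^4
    return y[0] - 1e4
blowup.terminal = True

for label, beta in [("beta12*beta21 = 0.25", 0.5), ("beta12*beta21 = 1.44", 1.2)]:
    sol = solve_ivp(may, (0.0, 10.0), [50.0, 50.0],
                    args=(0.5, 0.5, 100.0, 100.0, beta, beta), events=blowup)
    print(f"{label}: N1(t = {sol.t[-1]:4.1f}) = {sol.y[0, -1]:.3g}")
```

In the first case the populations approach the analytic equilibrium ($200$ for these values); in the second case the integration is stopped by the event because the populations diverge.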
Different strategies to avoid the unlimited growth have been adopted. @Wright89 proposed a two-species model with saturation as a result of restrictions on handling time, $T_H$, which corresponds to the time needed to process resources (food) produced by the mutualistic interaction. The mutualistic term can be included as a type II functional response $$\begin{aligned}
\frac{dN_1}{dt}=r_1\, N_1\, - \alpha_1 \, N_1^2+ \frac{a\, b\, N_1\,N_2}{1+ a\, N_2\,T_H} \nonumber,\\
\frac{dN_2}{dt}=r_2\, N_2\, - \alpha_2 N_2^2 + \frac{a\,b\,N_1\,N_2}{1+a\, N_1\, T_H} ,
\label{eq_typeII}\end{aligned}$$ where $a$ is the effective search rate and $b$ is a coefficient that accounts for the rate of encounters between individuals of species $1$ and $2$. Wright analyzes two possible behaviors of mutualism: *facultative* and *obligatory*. In the facultative case, $r_{1,2}$ are positive, [*i.e.*]{}, mutualism increases the population but is not indispensable for the subsistence of the species. If $r_{1,2}$ are negative, mutualism is mandatory for the survival of the species. This model shows different dynamics depending on the parameter values, but only in a limited region of parameter space does it display three fixed points: a stable one at the extinction of both species, another stable one at large population values, and a saddle point separating the two basins of attraction. Using a mutualistic model with a type II functional response, @Bastolla05 [@bastolla09] show the importance of the structure of the interaction network to minimize competition between species and to increase biodiversity. The type II models are, however, hard to treat analytically due to the fractional nature of the mutualistic term. Other alternatives have been proposed recently, for instance that of @johnson13. Still, these works go in the direction of adding extra features to the type II functional, making an eventual analytical treatment even more difficult.
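For comparison with the models above, a minimal integration of Eq. \[eq\_typeII\] with arbitrary facultative parameters shows how the saturating mutualistic term keeps the populations bounded:

```python
from scipy.integrate import solve_ivp

r, alpha, a, b, TH = 0.5, 0.01, 0.1, 0.2, 0.5     # arbitrary facultative parameters

def type2(t, y):
    N1, N2 = y
    return [r * N1 - alpha * N1**2 + a * b * N1 * N2 / (1 + a * N2 * TH),
            r * N2 - alpha * N2**2 + a * b * N1 * N2 / (1 + a * N1 * TH)]

sol = solve_ivp(type2, (0.0, 100.0), [10.0, 10.0])
# the mutualistic term saturates at b/TH per capita, so N* <= (r + b/TH)/alpha
print(f"N1 -> {sol.y[0, -1]:.1f}   (upper bound (r + b/TH)/alpha = {(r + b / TH) / alpha:.0f})")
```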
Recently, the research in this area has focused on system stability, looking for an explanation of the resilience of these communities in the structure of their interaction networks [@saavedra09; @bastolla09; @thebault2010stability; @fortuna2010nestedness; @staniczenko2013ghost]. The dynamics is, however, equally important, since changes in the parameters that govern the equations, induced by external factors, can lead the systems to behave differently and modify their resilience to perturbations in the population levels. Here, we revisit the basic model describing the population dynamics and propose a set of new equations that combines simplicity in its formulation with the richness of dynamical behaviors of the type II models.
Having introduced the classical population dynamics equations and reviewed the mutualistic models, the rest of the paper is organized as follows. In Section 2, we propose a modified logistic model for mutualism; its stability analysis is presented in Section 3. Numerical simulations of our model, studying the resilience to external perturbations or to changes in the interaction networks, are presented in Section 4. The work is then closed in Section 5 with the conclusions. More technical aspects are considered in Appendices A, B and C, with details on the stability analysis and on the numerical treatment of the equations in stochastic form, as well as the tables with the parameters used for the simulations.
A logistic equation for population dynamics with mutualistic interactions {#model}
=========================================================================
Our basic hypothesis is that mutualism contributes to a variation in the species’ intrinsic growth rate. This assumption is based on empirical observations in which the growth rate of populations (or the fertility) correlates with the availability of resources (see, for instance, [@stenseth98; @krebs02; @rueness03; @tyler08; @jones08]). In our context, the resources are provided by the mutualistic interactions. The simplest way to express this idea mathematically is to expand the intrinsic growth rate $r$ in terms of the populations with which the mutualistic interactions occur. To be more specific, let us assume that the community is composed of $n_a$ animal species with populations $\{N_i^a\}$ and $n_p$ plant species with populations $\{N_j^p\}$. The rate of mutualistic interactions between a species $i$ and another $j$ is given by $b_{ij}$, which can be seen as the elements of a matrix encoding the mutualistic interaction network. Note that the matrix is not necessarily symmetric if the benefit of the interaction is different for the two species involved. Considering a generic animal species $i$, its growth rate can then be written as $$\begin{aligned}
r_{i} = r_{i}^{0} + \sum_{k=1}^{n_{p}} b_{ik}\, N^{p}_k,
\label{eq:expr}\end{aligned}$$ where $r_{i}^{0}$ is the initial vegetative growth rate. To avoid unrealistic divergence in the population levels, the effect of mutualism must saturate at a certain point. Following Verhulst’s idea for the logistic equation, this implies that the friction term, $\alpha_i$, must also depend on the mutualistic interactions. In order to keep the model simple, we assume that the effect of the mutualism on $\alpha$ is proportional to the benefit. This means that $$\begin{aligned}
\alpha_i = \alpha_{i}^{0}+ c_{i} \sum_{k=1}^{n_{p}} b_{ik}\, N^{p}_k ,
\stepcounter{equation}\tag{\theequation}
\label{eq:alphavariable}\end{aligned}$$ where $c_{i}$ is a proportionality constant. The expansions for the plants are similar, but with the sums running over the animal species instead of over the plants. The expansions of $r$ and $\alpha$ could have been taken to higher orders in $N^{p}_k$. However, the linear version of the model should be enough to capture the qualitative features of the population dynamics as long as the higher-order terms contribute in a similar way to $\alpha$ and $r$ (with the same sign).
For the sake of notational simplicity, whenever there is no possible confusion the superscript zeros will be dropped from $\alpha_{i}^{0}$ and $r_{i}^{0}$. Under these assumptions, the system dynamics is described by the following set of differential equations: $$\begin{aligned}
\frac{1}{N^{a}_{i}}\frac{dN^{a}_{i}}{dt} = r_{i}+ \sum_{k=1}^{n_{p}} b_{ik}\, N^{p}_k - \left( \alpha_{i}+ c_{i} \sum_{k=1}^{n_{p}} b_{ik}\, N^{p}_k \right) N^{a}_{i} \nonumber\\
\frac{1}{N^{p}_{j}}\frac{dN^{p}_{j}}{dt} = r_{j}+ \sum_{\ell=1}^{n_{a}} b_{j\ell}\, N^{a}_\ell - \left( \alpha_{j}+ c_{j} \sum_{\ell=1}^{n_{a}} b_{j\ell}\, N^{a}_\ell \right) N^{p}_{j}
\stepcounter{equation}\tag{\theequation}\label{eq:modeloralphaconmut}\end{aligned}$$ The terms on the right-hand side of these equations can be interpreted as *effective growth rates*. Since we will use this concept later, it is important to define it explicitly. The effective growth rate of an animal species $i$ is defined as $$r_{ef,i} = r_{i}+ \sum_{k=1}^{n_{p}} b_{ik}\, N^{p}_k - \left( \alpha_{i}+ c_{i} \sum_{k=1}^{n_{p}} b_{ik}\, N^{p}_k \right) N^{a}_{i} .
\label{eq:effrate}$$ The plants’ effective growth rates are defined analogously, substituting $a$ by $p$. The *carrying capacities* of the system are given by the non-zero fixed points of Eqs. (\[eq:modeloralphaconmut\]). It is easy to see that in the absence of mutualism $K_i = r_i/\alpha_i$ for species $i$, as in the original logistic equations. On the other hand, in the presence of very strong mutualism $K_i$ tends to $1/c_{i}$. The role of the proportionality constant $c_i$ is thus to establish a maximum population for species $i$ in the strong-interaction limit $c_{i} \sum_{k=1}^{n_p} b_{ik} \, N^p_{k} \gg \alpha_{i}$.
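As a purely illustrative aid (not code from this work), the structure of Equations (\[eq:modeloralphaconmut\]) and (\[eq:effrate\]) can be written compactly with interaction matrices; the names `b_ap` and `b_pa` below are hypothetical labels for the animal-plant and plant-animal benefit matrices.

```python
import numpy as np

def effective_rates(Na, Np, r_a, r_p, alpha_a, alpha_p, c_a, c_p, b_ap, b_pa):
    """Per-capita effective growth rates of Eq. (eq:effrate).

    Na, Np are the animal and plant population vectors; b_ap[i, k] is the
    benefit of plant k to animal i, and b_pa[j, l] that of animal l to plant j.
    """
    mut_a = b_ap @ Np                      # sum_k b_ik N^p_k for each animal i
    mut_p = b_pa @ Na                      # sum_l b_jl N^a_l for each plant j
    r_ef_a = r_a + mut_a - (alpha_a + c_a * mut_a) * Na
    r_ef_p = r_p + mut_p - (alpha_p + c_p * mut_p) * Np
    return r_ef_a, r_ef_p

def rhs(t, y, n_a, *params):
    """dN/dt = r_ef * N, in a form usable by a standard ODE integrator."""
    Na, Np = y[:n_a], y[n_a:]
    r_ef_a, r_ef_p = effective_rates(Na, Np, *params)
    return np.concatenate([r_ef_a * Na, r_ef_p * Np])
```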
![image](Figure1h.pdf)
Stability Analysis {#stability}
==================
A two species community
-----------------------
For simplicity, we start the stability analysis by considering a two-species model for which we can obtain full analytical results. Let the plant species correspond to the index $1$ and the animal species to the index $2$. The equations then become $$\begin{aligned}
\frac{dN^p_{1}}{dt} = \left( r_{1}+ b_{12}\, N^a_{2}\right) \ N^p_{1} - \left(\alpha_{1}+ c_{1} \, b_{12} \, N^a_{2} \right) {N^p_{1}}^2 ,\nonumber\\
\frac{dN^a_{2}}{dt} = \left( r_{2}+ b_{21}\, N^p_{1}\right)N^a_{2} - \left(\alpha_{2}+ c_{2} \, b_{21}\, N^p_{1} \right) {N^a_{2}}^2 .
\stepcounter{equation}\tag{\theequation}\label{eq:dos_especies}\end{aligned}$$ Some examples of the flow diagrams for this equation system under different parameter conditions are depicted in Figure \[diagram\].
Setting $\frac{dN^p_{1}}{dt} = \frac{dN^a_{2}}{dt} = 0$, one can find the fixed points for the system dynamics. The first, obvious one is total extinction at $({N^p_{1}}^*,{N^a_{2}}^*) = (0,0)$, which is always a fixed point regardless of the parameter values. If any of the intrinsic growth rates $r_1$, $r_2$ is positive, there exist additional fixed points accounting for partial extinctions. The dynamics of the surviving population with positive $r$ follows a decoupled logistic equation, as can be seen from . Therefore, its population will tend to the limit given by a non-interacting system: either $K_1 = r_1/\alpha_1$ or $K_2 = r_2/\alpha_2$. This means that there are partial extinction fixed points at $(K_1,0)$ or $(0,K_2)$, or both if mutualism is facultative only for species $1$ ($r_1 >0$), only for species $2$ ($r_2 >0$) (see Figure \[diagram\]c) or for both ($r_1>0$ and $r_2 >0$) (see Figure \[diagram\]d), respectively.
Besides total or partial extinction, other non-trivial fixed points may appear whenever the condition $r_{ef,i} = r_{ef,j} = 0$ is satisfied. At those points, the following relations are fulfilled $$\begin{aligned}
{N^p_{1}}^* = \frac{ r_{1}+ b_{12} \, {N^a_{2}}^* }{\alpha_{1}+ c_{1}\, b_{12}\, {N^a_{2}}^* } , \nonumber\\
{N^a_{2}}^* = \frac{ r_{2}+ b_{21}\, {N^p_{1}}^* }{\alpha_{2}+ c_{2} \, b_{21}\, {N^p_{1}}^* } .
\label{eq:puntosfijos}\end{aligned}$$ Substituting the expression for ${N^{a}_2}^*$ on the upper equation, one finds that ${N^p_1}^*$ must satisfy a quadratic equation at the fixed points: $$A\, {{N^p_1}^*}^2 + B \, {N^p_1}^* + C=0 ,
\label{eq:quadra}$$ where the coefficients $A$, $B$ and $C$ are given by $$\begin{aligned}
\displaystyle A &= c_{2}\, b_{21}\, \alpha_{1}+c_{1}\, b_{12}\, b_{21} , \nonumber \\
\displaystyle B &= \alpha_{1}\, \alpha_{2}+ c_{1}\, b_{12}\, r_{2} - c_{2}\, b_{21}\, r_{1} - b_{12}\, b_{21} ,\nonumber\\
\displaystyle C &= - r _{1}\, \alpha_{2} - b_{12}\, r_{2} .
\label{eq:puntos_n1}\end{aligned}$$ The fixed points of ${N^a_2}^*$ are found by substituting in turn ${N^p_1}^*$ into the bottom expression of Equation . There are several possible scenarios depending on the solutions of Equation :
1. Both roots are complex. There are no additional fixed points, except for total or partial extinction.
2. A unique real root. This is a bifurcation point for the system dynamics: the solutions are real but degenerate. In this case, there exists a single fixed point besides extinction. The final fate of the system depends on the stability of this point; however, the most likely outcome is that the populations eventually go extinct.
3. Both roots are real and different. The situation is similar to the one displayed in Figure \[diagram\]a. There are two non-trivial fixed points, typically one stable and one saddle point that lies on the boundary between the two attraction basins. The position of the saddle point determines the extension of the extinction basin and, therefore, the resilience of the system to external perturbations. We call this point the *extinction threshold* and denote its position by $({N_{1}^{p}}^\bullet,{N_{2}^{a}}^\bullet)$.
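Which of these scenarios holds for a given parameter set can be checked directly from Equation (\[eq:quadra\]) and the coefficients (\[eq:puntos\_n1\]); the sketch below (our own, with no parameter values implied) returns the biologically meaningful roots, of which the smaller one typically corresponds to the saddle point (the *extinction threshold*) and the larger one to the stable equilibrium.

```python
import numpy as np

def nontrivial_fixed_points(r1, r2, alpha1, alpha2, c1, c2, b12, b21):
    """Solve Eq. (eq:quadra) for N1* and recover N2* from Eq. (eq:puntosfijos)."""
    A = c2 * b21 * alpha1 + c1 * b12 * b21
    B = alpha1 * alpha2 + c1 * b12 * r2 - c2 * b21 * r1 - b12 * b21
    C = -r1 * alpha2 - b12 * r2
    points = []
    for N1 in np.roots([A, B, C]):
        if np.isreal(N1) and N1.real > 0:
            N1 = N1.real
            N2 = (r2 + b21 * N1) / (alpha2 + c2 * b21 * N1)
            if N2 > 0:
                points.append((N1, N2))
    return points   # zero, one or two biologically meaningful fixed points
```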
In order to study the linear stability of the fixed points, we can expand the Equations in a Taylor series around them and calculate the Jacobian of the system (see Appendix A for details). If both eigenvalues are negative, the fixed point is stable. Otherwise, it is a saddle point if one eigenvalue is positive and the other negative, or unstable if both are positive. Starting with total extinction, the Jacobian can be written as $$J = \left(
\begin{array}{ll}
r_{1} & 0 \\
0 & r_{2}
\end{array}
\right) .\stepcounter{equation}\tag{\theequation}\label{eq:J00}$$ The eigenvalues are $\lambda_{1,2} = r_{1,2}$, which means that the extinction point is linearly stable under the assumption that $ r_{1}<0$ and $r_{2}<0$, i.e. when both species rely on mutualism for survival. In this case, total extinction has an attraction basin covering a range of population values. If the system falls within these population levels, the only possible fate is extinction.
On the other hand, if mutualism is facultative for one or both species, total extinction becomes a saddle or an unstable point. However, two other fixed points can appear, corresponding to partial extinctions. In this case, the condition for the stability of $(r_1/\alpha_1, 0)$ is that $r_{1}>0$ and $r_{2}<-b_{21}\, r_{1}/\alpha_{1}$. Similarly, $(0, r_2/\alpha_2)$ is stable only if $r_{2}>0$ and $r_{1}<-b_{12}\, r_{2}/\alpha_{2}$.
The same analysis for the remaining non-trivial fixed points leads us to the Jacobian matrix: $$J = \left(
\begin{array}{ll}
- {N^{p}_{1}}^* \, (\alpha_{1}+ c_{1}\, b_{12} \, {N^a_2}^* ) & {N_{1}^{p}}^* \, b_{12} \, (1 - c_{1}\, {N_{1}^{p}}^* ) \\
{N_{2}^{a}}^* \, b_{21}\, (1 - c_{2}\, {N_{2}^{a}}^* ) & - {N_{2}^{a}}^* \, (\alpha_{2}+ c_{2}\,b_{21}\,{N_{1}^{p}}^* )
\end{array}
\right)\stepcounter{equation}\tag{\theequation}\label{eq:J}$$ Since the parameters $c_{1}$ and $c_2$ are always positive (remember that they are the inverse of the maximum population in the limit of strong mutualism), all the terms of $J$ have the sign shown in Equation . The diagonal terms are negative, while the off-diagonal are always positive. A similar configuration for the Jacobian matrix was observed in mutualistic models in [@goh79]. It implies that the eigenvalues of $J$ are both real and that they can be either both negative (*stable fixed points*) or one positive and another negative (*saddle point*). The condition for the existence of a *saddle point* is that the determinant of the Jacobian matrix at the *extinction threshold* is negative, $J_{11} \, J_{22} < J_{12}\, J_{21}$, which in terms of ${N_{1}^{p}}^\bullet$ and ${N_{2}^{a}}^\bullet$ means that $$1-c_{1}\, {N_{1}^{p}}^\bullet - c_{2}\, {N_{2}^{a}}^\bullet > 0 .$$
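A direct numerical check of these conditions is straightforward; the following sketch (ours, not part of the original analysis) evaluates the Jacobian of Equation (\[eq:J\]) at a given fixed point and classifies it from the signs of the eigenvalues.

```python
import numpy as np

def classify_fixed_point(N1, N2, alpha1, alpha2, c1, c2, b12, b21):
    """Classify a non-trivial fixed point (N1*, N2*) of the two-species model."""
    J = np.array([
        [-N1 * (alpha1 + c1 * b12 * N2),  N1 * b12 * (1 - c1 * N1)],
        [ N2 * b21 * (1 - c2 * N2),      -N2 * (alpha2 + c2 * b21 * N1)],
    ])
    lam = np.linalg.eigvals(J)           # real for this sign structure
    if np.all(lam.real < 0):
        return "stable"
    if np.any(lam.real > 0) and np.any(lam.real < 0):
        return "saddle"
    return "unstable"
```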
![Flow diagram for the dynamics of the type II Equations . The fixed points are marked as red circles, while the color of the arrows indicates the intensity of the flow. Finding this dynamical configuration took a considerable effort in parameter tuning. The equation parameters used here are $r_1 = r_2 = -0.1$, $\alpha_1 = \alpha_2 = 0.001$, $a = 0.066$, $b = 0.2$ and $T_H = 1$.[]{data-label="typeII"}](Figure2.pdf)
All these results for two species show that our model displays rich dynamics. Still, it is simple enough that its different regimes, and where they appear in parameter space, can be well understood. In this sense, it overcomes shortcomings inherent to the type II formulation. For instance, finding a dynamical configuration such as the one shown in Figure \[typeII\] for the model of Equation requires a notable effort in terms of parameter tuning. This dynamical configuration, with two attractors and a saddle point, is ideal to study issues such as system resilience, the capacity to bear a high biodiversity, or the evolution of the mutualistic interaction networks (see, for example, [@bastolla09] or [@suweis13]). Such a regime appears naturally in our model, as in Figure \[diagram\]a, without the need for an elaborate parameter search.
Survival watershed {#watershed}
------------------
We will refer to the repelling boundary between trajectories that evolve towards full system capacity and those that evolve towards extinction as the *survival watershed*. In Figure \[diagram\]a, it corresponds to the curve delimiting the attraction basin of total extinction. The watershed includes the non-trivial saddle point $({N_1^p}^\bullet,{N_2^a}^\bullet)$. Its location in phase space is important because it determines the fragility or robustness of the system by establishing the extension of the extinction basin. Some characteristics of the points lying on the watershed can be found analytically, at least for the case of two-species communities. The points of the watershed correspond to population pairs $({N_1^p},{N_2^a})$ for which the system dynamics remains on the watershed and ends at $({N_1^p}^\bullet,{N_2^a}^\bullet)$.
By definition, at $({N_1^p}^\bullet,{N_2^a}^\bullet)$ both effective growth rates are zero. To reach this point from any other initial populations, the effective growth rates of both species need to have different signs and evolve similarly in time. If both had the same sign (positive or negative), the system dynamics would be attracted towards full capacity or towards total extinction. Let us assume that the system is approaching $({N_1^p}^\bullet,{N_2^a}^\bullet)$, that the initial populations were $({N_1^p}^0,{N_2^a}^0)$ on the watershed and that we can write the effective growth rates as $$\begin{aligned}
r_{ef,1} = & \, A \, e^{-\gamma\, t} ,\nonumber\\
r_{ef,2} = & -B\, e^{-\gamma\, t} ,
\label{eq:coeffsreffs}\end{aligned}$$ where $A$, $B$ and $\gamma$ are constants, unknown at the moment. Equations then become $$\begin{aligned}
\frac{dN^p_{1}}{dt} & = N^p_{1} \, A \, e^{-\gamma \, t} , \nonumber \\
\frac{dN^a_{2}}{dt} & = -N^a_{2}\, B \, e^{-\gamma \, t} .
\label{eq:coeffsreffs_2}\end{aligned}$$ Integrating these equations between $t = 0$ and infinity, we find that $$\begin{aligned}
\ln \frac{{N_1^p}^\bullet}{{N_{1}^p}^0} & = \frac{A}{\gamma} , \nonumber\\
\ln \frac{{N_2^a}^\bullet}{{N_{2}^a}^0} & = - \frac{B}{\gamma} .
\label{eq:coeffsreffs_3}\end{aligned}$$ Equating the value of $\gamma$ in both expressions, we get the condition for $({N_1^p}^0,{N_2^a}^0)$ to be part of the survival watershed: $$\begin{aligned}
\frac{1}{B} \ln \left(\frac{{N_{2}^a}^\bullet}{{N_{2}^a}^0} \right) + \frac{1}{A} \ln \left(\frac{{N_1^p}^\bullet}{{N_1^p}^0} \right) = 0 ,
\label{eq:coeffsreffs_4}\end{aligned}$$ which means that the functional form of the watershed is given by the power-law $$\begin{aligned}
{N_2^a}^0 = C\, ({N_1^p}^0)^\frac{-B}{A}.
\label{eq:powerlaw}\end{aligned}$$ $C$ is a constant that, taking into account that the watershed includes the fixed point $({N_1^p}^\bullet,{N_2^a}^\bullet)$, can be written as $$\begin{aligned}
C = {N_2^a}^\bullet / ({N_1^p}^\bullet)^\frac{-B}{A} .\end{aligned}$$
To find the value of the exponent $\frac{B}{A}$, we must return to the definition of the effective growth rates, $r_{ef,1}$ and $r_{ef,2}$. According to Equations , at $t=0$ we have that $$\begin{aligned}
A = & \, r_{1}+ b_{12}\, {N_2^a}^0 - (\alpha_{1}+ c_{1} \, b_{12}\, {N_{2}^a}^0) \, {N_1^p}^0 , \nonumber\\
-B = &\, r_{2} + b_{21} \, {N_{1}^p}^0-(\alpha_{2}+ c_{2}\, b_{21}\, {N_{1}^p}^0)\, {N_{2}^a}^0 .
\label{eq:reffs_2especies}\end{aligned}$$ If we know that our initial points are part of the watershed, we can obtain the value of the exponent by dividing these two expressions. Alternatively, if other points of the watershed apart from $({N_1^p}^\bullet,{N_2^a}^\bullet)$ need to be found, we can divide the previous expressions, one by the other, and, using Equation , reach the following implicit equation $$\begin{aligned}
\frac {\ln \left( \frac{{N_2^a}^\bullet}{{N_2^a}^0} \right)}{\ln \left( \frac{{N_1^p}^\bullet}{{N_1^p}^0} \right)} = \frac{( r_{2}+ b_{21}\, {N_1^p}^0) - (\alpha_{2}+ c_{2} \, b_{21}\, {N_1^p}^0 ) \, {N_2^a}^0}{( r_{1}+ b_{12}\, {N_2^a}^0) - (\alpha_{1}+ c_{1} \, b_{12}\, {N_2^a}^0 ) \, {N_1^p}^0 } .
\label{eq:implicita_watershed}\end{aligned}$$ Solving this equation numerically, we can find other points of the watershed and, with them, an estimate of $\frac{B}{A}$. Figure \[fig:powerlaw\] shows an example of the watershed and a comparison between the curve obtained with Equations and and numerical estimates obtained by integrating the system dynamics.
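A possible numerical recipe for tracing the watershed, sketched below under the assumption that a sign change can be bracketed, is to fix ${N_1^p}^0$ and solve Equation \[eq:implicita\_watershed\] for ${N_2^a}^0$ with a standard root finder; the bracket `(lo, hi)` is a hypothetical search interval that must enclose the solution.

```python
import numpy as np
from scipy.optimize import brentq

def watershed_point(N1_0, N1_s, N2_s, r1, r2, alpha1, alpha2, c1, c2, b12, b21,
                    lo=1.0, hi=1e6):
    """Given N1^0, solve Eq. (eq:implicita_watershed) for the matching N2^0.

    (N1_s, N2_s) is the saddle point; lo and hi must bracket the root.
    """
    def F(N2_0):
        lhs = np.log(N2_s / N2_0) / np.log(N1_s / N1_0)
        rhs = ((r2 + b21 * N1_0) - (alpha2 + c2 * b21 * N1_0) * N2_0) / \
              ((r1 + b12 * N2_0) - (alpha1 + c1 * b12 * N2_0) * N1_0)
        return lhs - rhs
    return brentq(F, lo, hi)
```

Repeating this for a grid of ${N_1^p}^0$ values yields a curve that can be compared with the power law of Equation \[eq:powerlaw\], as in Figure \[fig:powerlaw\].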
![Survival watershed for two species. The dots were found by performing a numerical scan of the system dynamics, determining for which initial conditions the final outcome was extinction or full capacity. The grey solid line is the power law found with Equations and . In this case, $\frac{B}{A}=1.2944,~{N_1^p}^\bullet=989,~{N_2^a}^\bullet=1232,~b_{12}=0.000041850,~c_{1}=0.00004,~\alpha_{1}=0.000035,~r_1=-0.016,~b_{21}=0.00008750,~c_{2}=0.0001,~\alpha_{2}=0.000035,~r_2 =-0.02$.[]{data-label="fig:powerlaw"}](Figure3.pdf)
General communities
-------------------
The generalization of the stability analysis for an arbitrary number of species is straightforward. The fixed points of Equations comprise the trivial solution $(N_{i}^p,\cdots, N_{j}^a) = (0, \cdots,0)$, i.e., total extinction, *partial extinction* points if mutualism is facultative for any species, and the populations $({N^{a}_{i}}^*,\cdots,{N^{p}_{j}}^*)$ for which the effective growth rates vanish: $$\begin{aligned}
r^{*}_{ef,i} = (r_{i}+ \sum_{k=1}^{n_{p}}\, b_{ik}\, {N^p_{k}}^*)- (\alpha_{i}+c_{i}\, \sum_{k=1}^{n_{p}} b_{ik}\, {N^{p}_k}^* )\, {N^{a}_{i}}^* = 0 \nonumber ,\\
r^{*}_{ef,j} = (r_{j}+ \sum_{\ell=1}^{n_{a}} b_{j\ell}\, {N^{a}_{\ell}}^*)- (\alpha_{j}+c_{j}\, \sum_{\ell=1}^{n_{a}} b_{j\ell}\, {N^{a}_\ell}^* )\, {N^{p}_{j}}^*
=0 ,
\label{eq:effrate2}\end{aligned}$$ for animals and plants, respectively. These equations can be rewritten as $$\begin{aligned}
{N^{a}_{i}}^* = \frac{r_{i}+\sum_{k=1}^{n_{p}}b_{ik}\, {N^{p}_{k}}^*}{\alpha_{i}+c_{i}\,\sum_{k=1}^{n_{p}}b_{ik}\, {N^{p}_{k}}^*} =
\frac{r_{i}+r_{i}^{mut}}{\alpha_{i}+c_{i}\, r_{i}^{mut}} =
\frac{r_{i}^{*+}}{r_{i}^{*-}} , \nonumber\\
{N^{p}_{j}}^*=\frac{r_{j}+\sum_{\ell=1}^{n_{a}}b_{j\ell}\, {N^{a}_{\ell}}^*}{\alpha_{j}+c_{j}\,\sum_{\ell=1}^{n_{a}}b_{j\ell}\, {N^{a}_{\ell}}^*} =
\frac{r_{j}+r_{j}^{mut}}{\alpha_{j}+c_{j}r_{j}^{mut}} =
\frac{r_{j}^{*+}}{r_{j}^{*-}} .\end{aligned}$$ The rates $r_{i}^{mut}$ account for the effect of the mutualism on species $i$, while the rates $r^{*+}$ stand for the terms increasing the population growth and $r^{*-}$ for those decreasing it via intra-specific competition.
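Because the fixed-point populations appear on both sides of these relations, a simple way to evaluate them numerically is by direct iteration, as in the sketch below (ours; convergence is not guaranteed in general and the result should be checked against Eqs. (\[eq:effrate2\]), e.g. by verifying that the effective growth rates vanish).

```python
import numpy as np

def interior_fixed_point(Na0, Np0, r_a, r_p, alpha_a, alpha_p, c_a, c_p,
                         b_ap, b_pa, n_iter=5000, tol=1e-10):
    """Iterate N* = (r + r_mut) / (alpha + c * r_mut) from an initial guess."""
    Na, Np = np.asarray(Na0, float), np.asarray(Np0, float)
    for _ in range(n_iter):
        mut_a = b_ap @ Np
        mut_p = b_pa @ Na
        Na_new = (r_a + mut_a) / (alpha_a + c_a * mut_a)
        Np_new = (r_p + mut_p) / (alpha_p + c_p * mut_p)
        if max(np.max(np.abs(Na_new - Na)), np.max(np.abs(Np_new - Np))) < tol:
            return Na_new, Np_new
        Na, Np = Na_new, Np_new
    return Na, Np   # may not have converged; caller should verify r_ef ~ 0
```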
Equations can be linearized around the fixed points. The corresponding Jacobian matrix has the same structure as its counterpart for a two-species community (Equation ), with negative entries on the diagonal and positive (or null) entries for the off-diagonal elements. For the non-trivial fixed points (those without total or partial extinctions), the diagonal terms can be written for animals and plants, respectively, as (see Appendix A) $$\begin{aligned}
\displaystyle & J_{ii}= - {N^{a}_{i}}^* \left(\alpha_{i} + c_{i} \, \sum_{k=1}^{n_{p}} b_{ik} {N^{p}_{k}}^* \right), \nonumber\\
\displaystyle & J_{jj}= - {N^{p}_{j}}^* \left(\alpha_{j} + c_{j} \, \sum_{\ell=1}^{n_{a}} b_{j\ell}\, {N^{a}_{\ell}}^*\right).
\label{eq:Jii}\end{aligned}$$ The non-diagonal terms, in turn, are $$\begin{aligned}
\displaystyle & J_{ij}={N^{a}_{i}}^* \, b_{ij}\, \left( 1-c_{i}\, {N^{a}_{i}}^*\right)
\label{eq:Jij1}\end{aligned}$$ for interactions between a generic animal species $i$ and a plant $j$, and $$\begin{aligned}
\displaystyle & J_{ji}={N^{p}_{j}}^* \, b_{ji}\, \left( 1-c_{j}\, {N^{p}_{j}}^*\right)
\label{eq:Jij2}\end{aligned}$$ for the opposite interactions between plant $j$ and animal $i$. Given the invariance of the trace of a matrix to change in the vector basis, the sum of the eigenvalues of the Jacobian matrix must satisfy the relation $$\sum_{k}^{n_{a}+n_{p}} \lambda_{k}= - \left(\sum_{k}^{n_{a}+n_{p}} |J_{kk}| \right) .
\stepcounter{equation}\tag{\theequation}\label{eq:sum_lambdas2}$$ The trace is negative, which means that if there are any positive or null eigenvalues, their effect must be compensated by other, negative eigenvalues. Therefore, the non-trivial fixed points can be either stable (if all the eigenvalues are negative) or saddle points (if at least one is positive). They cannot be purely unstable.
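Assembling the full Jacobian from Equations (\[eq:Jii\])-(\[eq:Jij2\]) and inspecting its eigenvalues makes this classification operational; a sketch (ours, assuming an interior fixed point `(Na, Np)` has already been obtained):

```python
import numpy as np

def community_jacobian(Na, Np, alpha_a, alpha_p, c_a, c_p, b_ap, b_pa):
    """Jacobian of the full community model at an interior fixed point."""
    n_a, n_p = len(Na), len(Np)
    J = np.zeros((n_a + n_p, n_a + n_p))
    J[:n_a, :n_a] = np.diag(-Na * (alpha_a + c_a * (b_ap @ Np)))   # animal diagonal
    J[n_a:, n_a:] = np.diag(-Np * (alpha_p + c_p * (b_pa @ Na)))   # plant diagonal
    J[:n_a, n_a:] = (Na * (1 - c_a * Na))[:, None] * b_ap          # animal-plant block
    J[n_a:, :n_a] = (Np * (1 - c_p * Np))[:, None] * b_pa          # plant-animal block
    return J

# lam = np.linalg.eigvals(community_jacobian(...))
# np.all(lam.real < 0) -> stable; any positive eigenvalue -> saddle point
```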
Another question to discuss is what occurs in the case of partial extinctions. The effect of the extinction of some species is to reduce the dimensionality of the set of Equations . To fix ideas, let us assume, for instance, that one animal species $e$ goes extinct. This implies that the possible fixed points for the system dynamics must now include ${N_e^a}^* = 0$. The collapse of $e$ can trigger the extinction of some plant species relying on it for reproduction. After these plants, some other animals depending on them can in turn go extinct, and so on, forming an extinction cascade. Note that, although the extinction event can be produced by factors external to the system, such as a disease or a famine, the population dynamics for the remaining species is still governed by the full system of equations. The new non-trivial fixed points correspond to the partial extinction points of the original complete set of equations. The stability of these points can change substantially. The entries of the Jacobian matrix for the extinct species at the new non-trivial fixed points become $J_{ee} = r_e + \sum_{k =1}^{n_p }b_{ek}\,{N_k^p}^*$ on the diagonal and $J_{ej} = 0$ off the diagonal. These terms do not contribute to eigenvalues relevant for the stability analysis. The rest of the entries of the Jacobian are given by Equations , , and adapted to the surviving species. This means that the sums of Equations do not run over all the species as before, and that the diagonal terms can be closer to zero. The stability of the new fixed points can thus change depending on the parameters of the equations ruling the population dynamics of the surviving species. In fact, depending on how the species in the remaining community interact, the system can become more robust to external perturbations after a partial extinction event.
Numerical results {#results}
=================
The previous analytical results are general and can thus be applied to any mutualistic community. However, to fix ideas, it is useful to focus on a particular example. To be able to follow the system dynamics, we implement a numerical technique to integrate Equations \[eq:modeloralphaconmut\]. We have used a stochastic approach to take into account the discrete nature of the individuals in a population. A similar technique has been applied before to epidemiological studies (see, for instance, [@balcan2009multiscale]). Details on the model implementation are given in \[NumSim\].
![a) Mutualistic community with four species of plants and five species of pollinators. b) Simulation results with the population trends for the different species (each species is color-coded). The numerical solution shows that the initial populations are below the *extinction threshold*; in this scenario, the system tends to total extinction. The parameters of the simulation can be found in \[DataTables\], Table \[tab:experiment1\].[]{data-label="fig:red_exper_stab1"}](Figure4.pdf)
The intrinsic growth rates are fixed at negative values for all the simulations, which implies that mutualism is always obligatory for all species. Figure \[fig:red\_exper\_stab1\]a shows a small mutualistic community created for the purpose of this analysis (see the numerical details in \[DataTables\]; the simulation parameters are in Table \[tab:experiment1\]). In many empirical studies, the number of interacting species in each class is of the order of tens. The network of this example has fewer species but already displays the main behaviors of larger communities. The population dynamics for the first simulation is depicted in Figure \[fig:red\_exper\_stab1\]b. The conditions are such that seven out of the nine species have negative effective growth rates. This leads to a decrease in all the populations except those of plant species $1$ and $4$. Still, despite their initial growth, the decline of their mutualistic partners turns their $r_{ef,i}$ negative and they eventually go extinct. This scenario shows how the system is attracted to extinction if the populations are initially below the extinction threshold.
![Population dynamics and evolution of effective rates for the different species (each species is color-coded). The interaction network is the same as in Figure \[fig:red\_exper\_stab1\]a. Despite the initial negative effective growth rates for some species, the system dynamics in this scenario tends to full capacity. The numerical details on the simulation are in \[DataTables\], table \[tab:experiment2\].[]{data-label="fig:exper_stab2"}](Figure5.pdf)
The next simulation explores other fixed points of the model dynamics. Again, all intrinsic rates are negative, but the mutualistic interaction weights (terms $b_{ij}$) and the initial populations are selected in such a way that the effective growth rates of plant $1$ and pollinators $1$ and $4$ are positive, while the effective growth rates of all the other species are negative (see Table B.2). Despite this initial disadvantage, the populations of these species recover and the system dynamics tends to the fixed point at full capacity (Figure \[fig:exper\_stab2\]). The speed of the recovery process is different for each species, and some of them even show an initial decline in population size. These short-time tendencies can deceive an observer unless the observation period is long enough to capture the full system dynamics.
![ a) A plant-pollinator network with high nestedness. b) Simulation results of population trends obtained with this network. An external perturbation attacks plant species 7, which leads to its extinction. The rest of the community reaches a stationary state at full capacity in the reduced system. Numerical details on the simulation are in \[DataTables\], including the simulation parameters in Table \[tab:exper\_resilience\_strong\].[]{data-label="fig:exper_resilience_strong"}](Figure6.pdf)
System stability analyses are usually performed under the assumption of constant external conditions. However, these conditions may vary strongly in more realistic scenarios due to factors such as diseases, famines or droughts. The resilience of mutualistic networks and foodwebs has traditionally been related to a network property named *nestedness* [@bascompte2003nested]. Two types of species can be found in interaction networks: *generalists*, linked to several species of the other class, and *specialists*, tied only to a small number of them. In nested networks, there is a core of generalist species that are highly coupled, whereas specialists are much more likely to be connected to generalists than to other specialists. Specialists can suffer more in an adverse scenario, but the core of generalists is able to sustain the community. In the next numerical simulations, we explore the effect of nestedness on the system resilience using our model. The objective is to check whether its dynamics responds similarly to an increase in the nestedness level of the interaction network. These are simple examples, but they already help to fix ideas.
In the first example, a network with seven species of plants and five of pollinators is considered (Fig. \[fig:exper\_resilience\_strong\]a). We are not going to develop a formal justification, but this network is strongly nested, with an easy-to-identify core of *generalist* species and *specialists* tied to *generalists* of the other class. The initial populations have been chosen to be above the survival threshold. The system is evolved until it reaches full population capacity and is then left to run until year $100$ (day $36500$, see Fig. \[fig:exper\_resilience\_strong\]b). Then, a disruption is introduced in the form of a plague attacking plant species $6$. This plant suffers an additional $0.20$ yearly death rate and it becomes extinct. Plant species $6$ is only linked to pollinator species $1$, the most generalist of its class. The effect of its extinction is negligible, since the mutualistic benefit provided by the rest of the plant species is high enough to compensate for it.
In the last example, a slight modification of the network is used (Fig. \[fig:red\_exper\_resilience\_weak\]a) that breaks the strong nestedness of the previous example. This time, plant species $6$ is linked to pollinator species $5$, a specialist. We also remove the link connecting plant $1$ and pollinator $5$ and add a link between plant $7$ and pollinator $5$. The numerical values of the rates for the interaction network are described in \[DataTables\], Table \[tab:exper\_resilience\_weak\]. The simulation is then repeated, but this time with the less nested network. All the effective rates eventually turn positive as the system grows. At year 100, plant species $6$ suffers the same attack as before, an additional $0.20$ yearly death rate, which triggers its extinction. However, the effect is different this time. Pollinator species $5$ depends on plant species $6$ for its survival, so the slope of its population becomes negative and it will eventually vanish. Plant $7$, connected to the specialist pollinator $5$ and with only a weak tie to pollinator $1$, loses its main source of mutualistic benefit and also faces extinction. So, an external event on plant $6$ has dragged plant $7$ to extinction because they were indirectly linked by the specialist pollinator $5$. If both plant species shared links with a generalist pollinator, this cascade effect would be less likely.
Conclusions {#discuss}
===========
In this work, we have introduced a model derived from the logistic approach to study population dynamics under mutualistic interactions. The proposed equations overcome the drawbacks of May’s model when dealing with negative growth rates, an important issue when the system is far from equilibrium and mutualism is obligatory. Our model also allows for an easier analytical treatment, since its nonlinearities are simpler than, for instance, those of the type II models. This simplicity also makes it easier to estimate the different rates involved in the equations from empirical data, or to assign them an ecological interpretation. This is a key point because empirical mutualistic interaction datasets are scarce, since their compilation is a painstaking task.
![ a) A low nested interaction network. b) Simulation results depicting population trends in the low nested network. As before, an external perturbation attacks plant species 6. The system, however, does not recover and a small scale extinction event is triggered. Numerical details of the simulation can be found in \[DataTables\], the simulation parameters are included in Table \[tab:exper\_resilience\_weak\].[]{data-label="fig:red_exper_resilience_weak"}](Figure7.pdf)
We have studied the dynamics of the model, finding its fixed points and their stability analytically for a simple case, and numerically for a more involved community. Our model shows the fixed-point structure of May’s model, with the notable addition of a saddle point that controls the stability of the whole system. In this regard, the model is as rich in dynamical behaviors as the type II models, but with a much simpler mathematical structure. We have numerically analyzed the resilience of our model by introducing external perturbations in a simple but relatively involved mutualistic network. As in other communities described in the literature, the system resilience is a function of the structure of the network. We hope that, due to its rich dynamics and simplicity, this new model can be used to gain further insights into mutualistic communities.
Acknowledgements {#acknowledgements .unnumbered}
================
We have received partial financial support from the Spanish Ministry of Economy (MINECO) under projects MTM2012-39101, MODASS (FIS2011-24785), LIMITES (CGL2009-07229), and AdAptA(CGL2012-33528); from the project PGUI of Comunidad de Madrid MODELICO-CM/S2009ESP-1691 and from the EU Commmission through projects EUNOIA and LASAGNE. JJR acknowledges funding from the Ramón y Cajal program of MINECO.
Detailed Linear Stability Analysis
==================================
For the sake of simplicity, we drop the use of the superscripts for plants and animals. The equation system (\[eq:dos\_especies\]) can be expanded in a Taylor series around the singular point ($N^{*}_{1}, N^{*}_{2}$), writing $N_{1}= N^{*}_1+\tilde{N}_{1}$ and $N_{2}= N^{*}_2+\tilde{N}_{2}$ [@murraymathematical]: $$\begin{array}{lcr}
\displaystyle \frac{d\tilde{N}_{1}}{dt} = \left(r_{1}+ b_{12}(N^{*}_2+\tilde{N}_{2})\right)( N^{*}_1+\tilde{N}_{1})-(\alpha_{1}+ c_{1} b_{12} (N^{*}_2+\tilde{N}_{2}))( N^{*}_1+\tilde{N}_{1})^{2}\nonumber\\
\\
\displaystyle \frac{d\tilde{N}_{2}}{dt} = \left(r_{2}+ b_{21}( N^{*}_1+\tilde{N}_{1})\right)(N^{*}_2+\tilde{N}_{2})-(\alpha_{2}+ c_{2} b_{21}(N^{*}_1+\tilde{N}_{1}))(N^{*}_2+\tilde{N}_{2})^{2}
\stepcounter{equation}\tag{\theequation}\label{eq:effrateTaylor}
\end{array}$$
and, using the fixed-point conditions and retaining only the linear terms, we get: $$\begin{array}{lcr}
\displaystyle \frac{d\tilde{N}_{1}}{dt}= \tilde{N}_{2}\, N^{*}_{1} b_{12}( 1 - c_{1}\, N^{*}_1)-\tilde{N}_{1}\, N^{*}_{1}(\alpha_{1}+ c_{1} b_{12} \, N^{*}_2) \equiv f_{1}(\tilde{N}_{1},\tilde{N}_{2}) \nonumber\\
\\
\displaystyle \frac{d\tilde{N}_{2}}{dt} = \tilde{N}_{1}\, N^{*}_{2} b_{21}( 1 - c_{2}\, N^{*}_2)-\tilde{N}_{2}\, N^{*}_{2}(\alpha_{2}+ c_{2} b_{21} \, N^{*}_1) \equiv f_{2}(\tilde{N}_{1},\tilde{N}_{2})
\stepcounter{equation}\tag{\theequation}\label{eq:effrateTaylor2}
\end{array}$$
In terms of the positive quantities $J_{kl}$, the Jacobian matrix entries are:
$$\begin{array}{l}
J_{11}= -\frac{\partial f_{1}}{\partial \tilde{N}_{1}} = N^{*}_{1}\left(\alpha_{1}+ c_{1} b_{12} \, N^{*}_2\right) \\
\\
J_{12}= \frac{\partial f_{1}}{\partial \tilde{N}_{2}} = N^{*}_{1}b_{12}\left(1 - c_{1}\, N^{*}_1\right) \\
\\
J_{21}= \frac{\partial f_{2}}{\partial \tilde{N}_{1}} = N^{*}_{2}b_{21} \left(1 - c_{2}\, N^{*}_2\right) \\
\\
J_{22}= -\frac{\partial f_{2}}{\partial \tilde{N}_{2}} = N^{*}_{2}\left(\alpha_{2}+ c_{2} b_{21}\,N^{*}_{1}\right)
\end{array}
\stepcounter{equation}\tag{\theequation}\label{eq:J11}$$
so that the Jacobian matrix, with negative diagonal and positive off-diagonal entries, reads
$$J = \left(
\begin{array}{rr}
-J_{11} & J_{12} \\ J_{21} & -J_{22}
\end{array}
\right)$$
The eigenvalues $\lambda_{1,2}$ can be obtained from: $$\lvert J - \lambda I \rvert =0
\stepcounter{equation}\tag{\theequation}\label{eq:lambda0App}$$
whose solutions are $$\begin{array}{lcl}
\lambda _{1,2}=\frac{1}{2}\left(tr(J)\pm \sqrt{tr^{2}(J)-4\,\mathrm{Det}(J)}\right)\\ =
\frac{1}{2}\left(-\left(J_{11}+J_{22}\right)\pm \sqrt{\left(J_{11}+J_{22}\right)^{2}-4\,\mathrm{Det}(J)}\right)\\ =
\frac{1}{2}\left(-\left(J_{11}+J_{22}\right)\pm \sqrt{\left(J_{11}-J_{22}\right)^{2} +4 \,\left( J_{12}J_{21} \right) }\right)
\end{array}
\stepcounter{equation}\tag{\theequation}\label{eq:lambda12}$$
The last expression indicates that the two eigenvalues are real. In addition, eigenvalues satisfy: $$\prod_{k}\lambda_{k}=\mathrm{Det}(J)$$
so the singular point will be a *saddle point* when $\mathrm{Det}(J)<0$. Expanding the determinant of the Jacobian matrix we obtain a condition for the singular point:
$$1-c_{1}N^{*}_{1}-c_{2}N^{*}_{2} >0$$
The partial extinctions are also singular points, corresponding to $N^{*}_{1}=0$ or $N^{*}_{2}=0$. For the sake of simplicity, we only write the equations for the singular point ($N^{*}_{1}=r_{1}/\alpha_{1},N^{*}_{2}=0$). With the Taylor expansion around this point, the system equations can be written as:
$$\begin{array}{ll}
\displaystyle \frac{d\tilde{N}_{1}}{dt} = & r_{1} N^{*}_{1}-\alpha_{1}N^{*2}_{1}+r_{1}\tilde{N}_{1}+ b_{12}\tilde{N}_{2}N^{*}_1-2\alpha_{1}N^{*}_1\tilde{N}_{1} + \\
\, & - c_{1} b_{12}\tilde{N}_{2}N^{*2}_1\nonumber\\
\displaystyle \frac{d\tilde{N}_{2}}{dt} = & r_{2}\tilde{N_{2}}+ b_{21} N^{*}_1\tilde{N_{2}}
\end{array}
\label{eq:effrateTaylorN2=0}$$
The Jacobian is now
$$J = \left(
\begin{array}{rr}
-r_{1} & b_{12}N^{*}_{1}\left(1-c_{1}N^{*}_{1}\right) \\
0 & r_{2}+b_{21}N^{*}_{1}
\end{array}
\right)$$
The eigenvalues are the diagonal entries. This singular point will be a stable node when $r_{1}>0$ and $r_{2}<-b_{21}r_{1}/\alpha_{1}$. The symmetric solution is ($N^{*}_{1}=0,N^{*}_{2}=r_{2}/\alpha_{2}$) and it will be a stable node when $r_{2}>0$ and $r_{1}<-b_{12}r_{2}/\alpha_{2}$.
The generalization for $n_{a} + n_{p}$ species is
$$\begin{aligned}
\frac{dN_{i}}{dt} = \left( r_{i}+ \sum_{j=1}^{n_{a}} b_{ij}N_{j}\right)N_{i} - \left(\alpha_{i}+ c_{i} \sum_{j=1}^{n_{a}} b_{ij}N_{j} \right) N^{2}_{i} \nonumber\\
\frac{dN_{j}}{dt} = \left( r_{j}+ \sum_{i=1}^{n_{p}} b_{ji}N_{i}\right)N_{j} - \left(\alpha_{j}+ c_{j} \sum_{i=1}^{n_{p}} b_{ji} N_{i} \right) N^{2}_{j}
\stepcounter{equation}\tag{\theequation}\label{eq:N_especies}\end{aligned}$$
where the subscript $i$ runs over all plant species and the subscript $j$ runs over all animal species.
The singular points of this set of equations are: the trivial solution ($N_{i=1\cdots n_{p}}=0, N_{j=1\cdots n_{a}}=0$), i.e. the total extinction point, and the solution of *effective growth rates* equal to zero:
$$\begin{array}{lcr}
\displaystyle r^{*}_{ef,i} =\left(r_{i}+ \sum_{j=1}^{n_{a}} b_{ij}N^{*}_{j}\right)- \left(\alpha_{i}+c_{i}\sum_{j=1}^{n_{a}} b_{ij}N^{*}_j\right)N^{*}_{i}
=0 \nonumber\\
\displaystyle r^{*}_{ef,j} = \left(r_{j}+ \sum_{i=1}^{n_{p}} b_{ji}N^{*}_{i}\right)- \left(\alpha_{j}+c_{j}\sum_{i=1}^{n_{p}} b_{ji}N^{*}_i\right)N^{*}_{j}
=0
\stepcounter{equation}\tag{\theequation}\label{eq:effrateN}
\end{array}$$
that can be rewritten as an implicit equation set.
$$\begin{aligned}
\begin{array}{lcc}
N^{*}_{i}=\frac{r_{i}+\sum_{j=1}^{n_{a}}b_{ij}N^{*}_{j}}{\alpha_{i}+c_{i}\sum_{j=1}^{n_{a}}b_{ij}N^{*}_{j}} =
\frac{r_{i}+r_{i}^{Mut}}{\alpha_{i}+c_{i}r_{i}^{Mut}} =
\frac{r_{i}^{*+}}{r_{i}^{*-}} \nonumber\\
\\
N^{*}_{j}=\frac{r_{j}+\sum_{i=1}^{n_{p}}b_{ji}N^{*}_{i}}{\alpha_{j}+c_{j}\sum_{i=1}^{n_{p}}b_{ji}N^{*}_{i}} =
\frac{r_{j}+r_{j}^{Mut}}{\alpha_{j}+c_{j}r_{j}^{Mut}} =
\frac{r_{j}^{*+}}{r_{j}^{*-}}
\end{array}\end{aligned}$$
where the rates $r^{*+}$ and $r^{*-}$ stand for the *positive effective growth rate* and the *per capita negative effective growth rate*, respectively.
The system of Equations \[eq:N\_especies\] can also be expanded around the singular point:
$$\begin{array}{lcl}
\textstyle \frac{d\tilde{N}_{i}}{dt}=\left(r_{i}+\sum\limits_{j=1}^{n_{a}}b_{ij}(N^{*}_{j}+\tilde{N}_{j})\right)(N^{*}_i+\tilde{N}_{i})- (\alpha_{i}+c_{i}\sum\limits_{j=1}^{n_{a}}b_{ij}(N^{*}_j+\tilde{N}_{j}))(N^{*}_i+\tilde{N}_{i})^{2} \nonumber\\
\textstyle \frac{d\tilde{N}_{j}}{dt}=\left(r_{j}+\sum\limits_{i=1}^{n_{p}}b_{ji}(N^{*}_{i}+\tilde{N}_{i})\right)(N^{*}_j+\tilde{N}_{j})-(\alpha_{j}+c_{j}\sum\limits_{i=1}^{n_{p}}b_{ji}(N^{*}_i+\tilde{N}_{i}))(N^{*}_j+\tilde{N}_{j})^{2}
\stepcounter{equation}\tag{\theequation}\label{eq:effrateTaylorN}
\end{array}$$
where the subscript $i$ stands for plant species and the subscript $j$ stands for animal species.
The set of $n_{a} + n_{p}$ equations can be rewritten, using the fixed-point conditions and retaining only the linear terms, as: $$\begin{aligned}
\begin{array}{lcl}
\displaystyle \frac{d\tilde{N}_{i}}{dt} = \sum_{j=1}^{n_{a}} \tilde{N}_{j} \, N^{*}_{i} b_{ij}\left( 1 - c_{i}\, N^{*}_i\right) - \tilde{N}_{i}\, N^{*}_{i}(\alpha_{i}+ c_{i} \sum_{j=1}^{n_{a}} b_{ij} \, N^{*}_{j})\nonumber\\
\displaystyle \frac{d\tilde{N}_{j}}{dt} = \sum_{i=1}^{n_{p}} \tilde{N}_{i} \, N^{*}_{j} b_{ji}\left( 1 - c_{j}\, N^{*}_j\right) - \tilde{N}_{j}\, N^{*}_{j}(\alpha_{j}+ c_{j} \sum_{i=1}^{n_{p}} b_{ji} \, N^{*}_{i})\stepcounter{equation}\tag{\theequation}\label{eq:effrateTaylor2N}
\end{array}\end{aligned}$$
The coefficients of $\tilde{N}_{i,j}$ are the entries of the Jacobian matrix. The absolute values of the diagonal terms (for any plant species $i$ and any animal species $j$) are:
$$\begin{aligned}
\displaystyle & J_{ii}=N^{*}_{i}\left(\alpha_{i} + c_{i} \sum_{j=1}^{n_{a}} b_{ij} N^{*}_{j}\right) \nonumber\\
\displaystyle & J_{jj}=N^{*}_{j}\left(\alpha_{j} + c_{j} \sum_{i=1}^{n_{p}} b_{ji} N^{*}_{i}\right)
\label{eq:Jii2}\end{aligned}$$
and the non-diagonal terms:
$$\begin{aligned}
\displaystyle & J_{ij}=N^{*}_{i}b_{ij}\left( 1-c_{i}N^{*}_{i}\right)\nonumber\\
\displaystyle & J_{ji}=N^{*}_{j}b_{ji}\left( 1-c_{j}N^{*}_{j}\right)
\label{eq:Jij}\end{aligned}$$
So, the Jacobian matrix can be written as:
$$J=\left(
\begin{array}{ccccc}
\ddots & \cdots & \cdots & \cdots & \cdots \\
\cdots & -J_{ii} & \cdots & J_{ij} & \cdots \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
\cdots & J_{ji} & \cdots & -J_{jj} & \cdots \\
\cdots & \cdots & \cdots & \cdots & \ddots
\end{array}
\right)$$
where the diagonal entries are all negative and the off-diagonal terms are all positive.
The sum of the eigenvalues satisfies:
$$\sum_{k}^{n_{a}+n_{p}} \lambda_{k}= -\left(\sum_{k}^{n_{a}+n_{p}} J_{kk}\right)
\stepcounter{equation}\tag{\theequation}\label{eq:sum_lambdas}$$
This means that not all the eigenvalues can be positive, so the *singular point* is not an asymptotically unstable node. On the other hand, the eigenvalues cannot be complex because all the off-diagonal terms of the Jacobian matrix are zero or positive; therefore, the singular points are either stable nodes or saddle points.
Numerical treatment of the equations {#NumSim}
====================================
Population models deal with sets of discrete entities such as animals or plants, and computer simulation is a powerful tool to describe their dynamics and stochastic behavior. The choice of a specific simulation method depends on its accuracy and computational efficiency, and is sometimes a challenge.
For instance, discrete Markov models have frequently been used for this kind of simulation, but this approach has a number of disadvantages compared with discrete stochastic simulation (Poisson or Binomial simulations). In Markov models of even moderate size, the set of states may be huge, whereas Binomial or Poisson simulations aggregate the state variables, which makes them much faster [@gustafsson2007bringing; @balcan2009multiscale].
We have chosen Binomial simulation to solve the equations of our mutualistic population model. This technique is a stochastic extension of continuous system simulation and a reasonable choice when the outcome of the random process has only two values. For instance, survival over a finite time interval is a Bernoulli process: the individual either lives or dies. Breeding may also be described by a Bernoulli trial if the time interval is small.
For a species with intrinsic growth rate $r$, we can assume that the waiting time to a breeding event is exponentially distributed with mean $1/r$. The probability of reproduction over an interval $\Delta T$ is then: $$\label{eq:probbreeding}
P = \int_0^{\Delta T} \! r\, e^{-r\, t} \, dt = 1 - e^{-r\, \Delta T}$$ In particular, a population of $N$ individuals at time $t$, with pure exponential growth, will become at time $t+\Delta T$: $$N(t+\Delta T)=N(t) + sgn \left(r \right) Binomial \left( N(t),P \right)$$ In stochastic form, the set of equations becomes: $$\begin{split}
N^{a}_{j}(t+\Delta T)=N^{a}_{j}(t) + sgn \left(\hat{r}^{a}_{ef,j} \right) Binomial \left( N^{a}_{j}(t),P^{a}_{j}\right)\\
N^{p}_{l}(t+\Delta T)=N^{p}_{l}(t) + sgn \left(\hat{r}^{p}_{ef,l} \right) Binomial \left(N^{p}_{l}(t),P^{p}_{l} \right)
\end{split}$$ where $\hat{r}^{a}_{ef,j}$ is the effective growth rate of the $j$th species of class *a* over the simulation step, and $P^{a}_{j}, P^{p}_{l}$ are the probabilities of growth according to Equation \[eq:probbreeding\]. In particular, working with one-day steps, as we do: $$\hat{r}_{ef} = e^{r_{ef}/365}-1$$
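For concreteness, a single one-day update of this binomial scheme could be coded as below (a sketch; we assume the populations are stored as an integer array and that `r_ef` holds the yearly effective growth rates).

```python
import numpy as np

rng = np.random.default_rng(0)

def binomial_step(N, r_ef):
    """One-day stochastic update N(t) -> N(t + 1 day) of the binomial scheme."""
    r_hat = np.exp(np.asarray(r_ef) / 365.0) - 1.0   # per-day effective rate
    P = 1.0 - np.exp(-np.abs(r_hat))                 # event probability per individual
    change = rng.binomial(np.asarray(N, dtype=int), P)
    return N + np.sign(r_hat).astype(int) * change
```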
Data tables {#DataTables}
===========
\[exp1\]
Pl 1 Pl 2 Pl 3 Pl 4
---------------------------------------- ------- ------ ------ --------
$b_{1j}$[$\left(10^{-6}\right)$]{} 1 12 12 16
$b_{2j}$[$\left(10^{-6}\right)$]{} 12 4 11 0
$b_{3j}$[$\left(10^{-6}\right)$]{} 12 10 0 0
$b_{4j}$[$\left(10^{-6}\right)$]{} 6 10 0 0
$b_{5j}$[$\left(10^{-6}\right)$]{} 10 0 0 0
$N_{init\,j}$ 700 600 500 200
$c_{j}$[$\left(10^{-4}\right)$]{} 1 1 1 1
$\alpha_{j}$[$\left(10^{-6}\right)$]{} 7 12 12 10
$r_{birth\, j}$ 0.004 0.01 0.01 0.005
$r_{death\, j}$ 0.005 0.04 0.05 0.0055
: Mutualistic coefficients and conditions for the first simulation (fig. \[fig:red\_exper\_stab1\]). Top, pollinator-plant interaction matrix; bottom, plant-pollinator matrix[]{data-label="tab:experiment1"}
Pol 1 Pol 2 Pol 3 Pol 4 Pol 5
---------------------------------------- ------- ------- ------- ------- -------
$b_{1m}$[$\left(10^{-6}\right)$]{} 14 13 10 10 20
$b_{2m}$[$\left(10^{-6}\right)$]{} 12 6 1 10 0
$b_{3m}$[$\left(10^{-6}\right)$]{} 2 5 1 0 0
$b_{4m}$[$\left(10^{-6}\right)$]{} 10 1 0 0 0
$N_{init\,m}$ 500 300 500 200 150
$c_{m}$[$\left(10^{-4}\right)$]{} 1 1 1 1 1
$\alpha_{m}$[$\left(10^{-6}\right)$]{} 10 10 8 10 30
$r_{b\, m}$ 0.28 0.02 0.05 0.02 0.02
$r_{d\, m}$ 0.44 0.058 0.065 0.034 0.038
: Mutualistic coefficients and conditions for the first simulation (fig. \[fig:red\_exper\_stab1\]). Top, pollinator-plant interaction matrix; bottom, plant-pollinator matrix[]{data-label="tab:experiment1"}
\[exp2\]
Pl 1 Pl 2 Pl 3 Pl 4
---------------------------------------- ------- ------ ------ --------
$b_{1j}$[$\left(10^{-6}\right)$]{} 1 12 12 16
$b_{2j}$[$\left(10^{-6}\right)$]{} 12 4 11 0
$b_{3j}$[$\left(10^{-6}\right)$]{} 12 10 0 0
$b_{4j}$[$\left(10^{-6}\right)$]{} 6 10 0 0
$b_{5j}$[$\left(10^{-6}\right)$]{} 10 0 0 0
$N_{init\,j}$ 1500 2000 1200 1500
$c_{j}$[$\left(10^{-4}\right)$]{} 1 1 1 1
$\alpha_{j}$[$\left(10^{-6}\right)$]{} 7 12 12 10
$r_{birth\, j}$ 0.004 0.01 0.01 0.005
$r_{death\, j}$ 0.005 0.04 0.05 0.0055
: Mutualistic coefficients and conditions for the second simulation (fig. \[fig:exper\_stab2\]).[]{data-label="tab:experiment2"}
Pol 1 Pol 2 Pol 3 Pol 4 Pol 5
---------------------------------------- ------- ------- ------- ------- -------
$b_{1m}$[$\left(10^{-6}\right)$]{} 14 13 10 10 20
$b_{2m}$[$\left(10^{-6}\right)$]{} 12 6 1 10 0
$b_{3m}$[$\left(10^{-6}\right)$]{} 2 5 1 0 0
$b_{4m}$[$\left(10^{-6}\right)$]{} 10 1 0 0 0
$N_{init\,m}$ 700 600 1000 700 500
$c_{m}$[$\left(10^{-4}\right)$]{} 1 1 1 1 1
$\alpha_{m}$[$\left(10^{-6}\right)$]{} 10 10 8 10 30
$r_{b\, m}$ 0.28 0.02 0.05 0.02 0.02
$r_{d\, m}$ 0.44 0.058 0.065 0.034 0.038
: Mutualistic coefficients and conditions for the second simulation (fig. \[fig:exper\_stab2\]).[]{data-label="tab:experiment2"}
Pl 1 Pl 2 Pl 3 Pl 4 Pl 5 Pl 6 Pl 7
---------------------------------------- ------- ------ ------ ------- ------- ------ -------
$b_{1j\, }$[$\left(10^{-6}\right)$]{} 20 12 16 16 19 25 35
$b_{2j\, }$[$\left(10^{-6}\right)$]{} 12 14 4.1 2 22 0 0
$b_{3j\, }$[$\left(10^{-6}\right)$]{} 20 11 3.1 20 0 0 0
$b_{4j\, }$[$\left(10^{-6}\right)$]{} 11 24 0 0 0 0 0
$b_{5j\, }$[$\left(10^{-6}\right)$]{} 1 0 0 0 0 0 0
$N_{init\,j}$ 1200 1500 800 770 700 800 400
$c_{j}$[$\left(10^{-4}\right)$]{} 1 0.5 1 2 1 1 1
$\alpha_{j}$[$\left(10^{-6}\right)$]{} 20 30 10 10 50 10 10
$r_{birth\, j}$ 0.004 0.01 0.02 0.005 0.004 0.02 0.025
$r_{death\, j}$ 0.03 0.04 0.04 0.055 0.03 0.03 0.028
: Mutualistic coefficients and conditions for the simulation of a high nested network (fig. \[fig:exper\_resilience\_strong\]).[]{data-label="tab:exper_resilience_strong"}
Pol 1 Pol 2 Pol 3 Pol 4 Pol 5
---------------------------------------- ------- ------- ------- ------- -------
$b_{1m\,}$[$\left(10^{-6}\right)$]{} 14 13 23 30 23
$b_{2m\,}$[$\left(10^{-6}\right)$]{} 19 26 10 10 0
$b_{3m\,}$[$\left(10^{-6}\right)$]{} 2 25 10 0 0
$b_{4m\,}$[$\left(10^{-6}\right)$]{} 1 11 10 0 0
$b_{5m\,}$[$\left(10^{-6}\right)$]{} 1 1 0 0 0
$b_{6m}$[$\left(10^{-6}\right)$]{} 1 0 0 0 0
$b_{7m}$[$\left(10^{-6}\right)$]{} 1 0 0 0 0
$N_{init\,m}$ 1200 1500 1300 1000 700
$c_{m}$[$\left(10^{-4}\right)$]{} 1 1 1 0.7 2
$\alpha_{m}$[$\left(10^{-6}\right)$]{} 10 10 20 10 20
$r_{b\, m}$ 0.08 0.02 0.02 0.05 0.02
$r_{d\, m}$ 0.11 0.078 0.068 0.07 0.028
: Mutualistic coefficients and conditions for the simulation of a high nested network (fig. \[fig:exper\_resilience\_strong\]).[]{data-label="tab:exper_resilience_strong"}
Pl 1 Pl 2 Pl 3 Pl 4 Pl 5 Pl 6 Pl 7
---------------------------------------- ------- ------ ------ ------- ------- ------- -------
$b_{1j\, }$[$\left(10^{-6}\right)$]{} 20 12 16 16 19 0 45
$b_{2j\, }$[$\left(10^{-6}\right)$]{} 12 14 4.1 2 22 0 0
$b_{3j\, }$[$\left(10^{-6}\right)$]{} 20 11 3.1 20 0 0 0
$b_{4j\, }$[$\left(10^{-6}\right)$]{} 11 24 0 0 0 0 0
$b_{5j\, }$[$\left(10^{-6}\right)$]{} 0 0 0 0 0 25 1
$N_{init\,j}$ 1200 1500 800 770 700 400 1000
$c_{j}$[$\left(10^{-4}\right)$]{} 1 0.5 1 2 1 1 1
$\alpha_{j}$[$\left(10^{-6}\right)$]{} 20 30 10 10 50 10 10
$r_{birth\, j}$ 0.004 0.01 0.02 0.005 0.004 0.02 0.025
$r_{death\, j}$ 0.03 0.04 0.04 0.055 0.03 0.024 0.04
: Mutualistic coefficients and conditions for the simulation of low nested network (fig. \[fig:red\_exper\_resilience\_weak\]).[]{data-label="tab:exper_resilience_weak"}
Pol 1 Pol 2 Pol 3 Pol 4 Pol 5
---------------------------------------- ------- ------- ------- ------- -------
$b_{1m\,}$[$\left(10^{-6}\right)$]{} 14 13 23 30 0
$b_{2m\,}$[$\left(10^{-6}\right)$]{} 19 26 10 10 0
$b_{3m\,}$[$\left(10^{-6}\right)$]{} 2 25 10 0 0
$b_{4m\,}$[$\left(10^{-6}\right)$]{} 1 11 10 0 0
$b_{5m\,}$[$\left(10^{-6}\right)$]{} 1 1 0 0 0
$b_{6m}$[$\left(10^{-6}\right)$]{} 0 0 0 0 5
$b_{7m}$[$\left(10^{-6}\right)$]{} 1 0 0 0 30
$N_{init\,m}$ 1200 1500 1300 1000 700
$c_{m}$[$\left(10^{-4}\right)$]{} 1 1 1 0.7 2
$\alpha_{m}$[$\left(10^{-6}\right)$]{} 10 10 20 10 20
$r_{b\, m}$ 0.09 0.02 0.02 0.05 0.02
$r_{d\, m}$ 0.11 0.058 0.04 0.07 0.025
: Mutualistic coefficients and conditions for the simulation of low nested network (fig. \[fig:red\_exper\_resilience\_weak\]).[]{data-label="tab:exper_resilience_weak"}
|
---
abstract: 'We present the star cluster catalogs for 17 dwarf and irregular galaxies in the $HST$ Treasury Program “Legacy ExtraGalactic UV Survey" (LEGUS). Cluster identification and photometry in this subsample are similar to that of the entire LEGUS sample, but special methods were developed to provide robust catalogs with accurate fluxes due to low cluster statistics. The colors and ages are largely consistent for two widely used aperture corrections, but a significant fraction of the clusters are more compact than the average training cluster. However, the ensemble luminosity, mass, and age distributions are consistent suggesting that the systematics between the two methods are less than the random errors. When compared with the clusters from previous dwarf galaxy samples, we find that the LEGUS catalogs are more complete and provide more accurate total fluxes. Combining all clusters into a composite dwarf galaxy, we find that the luminosity and mass functions can be described by a power law with the canonical index of $-2$ independent of age and global SFR binning. The age distribution declines as a power law, with an index of $\approx-0.80\pm0.15$, independent of cluster mass and global SFR binning. This decline of clusters is dominated by cluster disruption since the combined star formation histories and integrated-light SFRs are both approximately constant over the last few hundred Myr. Finally, we find little evidence for an upper-mass cutoff ($<2\sigma$) in the composite cluster mass function, and can rule out a truncation mass below $\approx10^{4.5}$M$_{\odot}$ but cannot rule out the existence of a truncation at higher masses.'
author:
- |
D.O. Cook$^{1,2}$, J.C. Lee$^{2}$, A. Adamo$^{3}$, H. Kim$^{4}$, R. Chandar$^{5}$, B.C. Whitmore$^{6}$, A. Mok$^{5}$, J.E. Ryon$^{6}$, D.A. Dale$^{7}$, D. Calzetti$^{8}$, J.E. Andrews$^{9}$, A. Aloisi$^{6}$, G. Ashworth$^{10}$, S.N. Bright$^{6}$, T.M. Brown$^{6}$, C. Christian$^{6}$, M. Cignoni$^{11}$, G.C. Clayton$^{12}$, R. da Silva$^{13}$, S.E. de Mink$^{14}$, C.L. Dobbs$^{15}$, B.G. Elmegreen$^{16}$, D.M. Elmegreen$^{17}$, A.S. Evans$^{18,19}$, M. Fumagalli$^{20}$, J.S. Gallagher III$^{21}$, D.A. Gouliermis$^{22,23}$, K. Grasha$^{8}$, E.K. Grebel$^{24}$, A. Herrero$^{25,26}$, D.A. Hunter$^{27}$, E.I. Jensen$^{7}$, K.E. Johnson$^{18}$, L. Kahre$^{28}$, R.C. Kennicutt $^{29,30}$, M.R. Krumholz$^{31}$, N.J. Lee$^{7}$, D. Lennon$^{32}$, S. Linden$^{18}$, C. Martin$^{1}$, M. Messa$^{3}$, P. Nair$^{33}$, A. Nota$^{6}$, G. Östlin$^{3}$, R.C. Parziale$^{7}$, A. Pellerin$^{34}$, M.W. Regan$^{6}$, E. Sabbi$^{6}$, E. Sacchi$^{6}$, D. Schaerer$^{35}$, D. Schiminovich$^{36}$, F. Shabani$^{24}$, F.A. Slane$^{7}$, J. Small$^{7}$, C.L. Smith$^{7}$, L.J. Smith$^{6}$, S. Taibi$^{25}$, D.A. Thilker$^{37}$, I.C. de la Torre$^{7}$, M. Tosi$^{38}$, J.A. Turner$^{7}$, L. Ubeda$^{6}$ S.D. Van Dyk$^{2}$, R.A.M. Walterbos$^{28}$, A. Wofford$^{39}$\
$^1$Department of Physics & Astronomy, California Institute of Technology, Pasadena, CA 91101, USA; dcook$@$astro.caltech.edu\
$^2$Infrared Processing and Analysis Center, California Institute of Technology, Pasadena, CA\
$^3$Dept. of Astronomy, The Oskar Klein Centre, Stockholm University, Stockholm, Sweden\
$^{4}$Gemini Observatory, Casilla 603, La Serena, Chile\
$^5$Dept. of Physics and Astronomy, University of Toledo, Toledo, OH\
$^6$Space Telescope Science Institute, Baltimore, MD\
$^7$Dept. of Physics and Astronomy, University of Wyoming, Laramie, WY\
$^8$Department of Astronomy, University of Massachusetts, Amherst, MA 01003, USA\
$^9$Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721\
$^{10}$Institute for Computational Cosmology and Centre for Extragalactic Astronomy, University of Durham, Durham, UK\
$^{11}$Department of Physics, University of Pisa, Largo B. Pontecorvo 3, 56127, Pisa, Italy\
$^{12}$Dept. of Physics and Astronomy, Louisiana State University, Baton Rouge, LA\
$^{13}$Dept. of Astronomy & Astrophysics, University of California – Santa Cruz, Santa Cruz, CA\
$^{14}$Astronomical Institute Anton Pannekoek, University of Amsterdam, Amsterdam, The Netherlands\
$^{15}$School of Physics and Astronomy, University of Exeter, Exeter, United Kingdom\
$^{16}$IBM Research Division, T.J. Watson Research Center, Yorktown Hts., NY\
$^{17}$Dept. of Physics and Astronomy, Vassar College, Poughkeepsie, NY\
$^{18}$Dept. of Astronomy, University of Virginia, Charlottesville, VA\
$^{19}$National Radio Astronomy Observatory, Charlottesville, VA\
$^{20}$Institute for Computational Cosmology and Centre for Extragalactic Astronomy, Durham University, Durham, UK\
$^{21}$Dept. of Astronomy, University of Wisconsin–Madison, Madison, WI\
$^{22}$Zentrum für Astronomie der Universität Heidelberg, Institut für Theoretische Astrophysik, Albert-Ueberle-Str.2, 69120 Heidelberg, Germany\
$^{23}$Max Planck Institute for Astronomy, Königstuhl17, 69117 Heidelberg, Germany\
$^{24}$Astronomisches Rechen-Institut, Zentrum für Astronomie der Universität Heidelberg, Mönchhofstr. 12–14, 69120 Heidelberg, Germany\
$^{25}$Instituto de Astrofisica de Canarias, La Laguna, Tenerife, Spain\
$^{26}$Departamento de Astrofisica, Universidad de La Laguna, Tenerife, Spain\
$^{27}$Lowell Observatory, Flagstaff, AZ\
$^{28}$Dept. of Astronomy, New Mexico State University, Las Cruces, NM\
$^{29}$Institute of Astronomy, University of Cambridge, Cambridge, United Kingdom\
$^{30}$Dept. of Astronomy, University of Arizona, Tucson, AZ\
$^{31}$Research School of Astronomy and Astrophysics, Australian National University, Canberra, ACT Australia $^{32}$European Space Astronomy Centre, ESA, Villanueva de la Cañada, Madrid, Spain\
$^{33}$Dept. of Physics and Astronomy, University of Alabama, Tuscaloosa, AL\
$^{34}$Dept. of Physics and Astronomy, State University of New York at Geneseo, Geneseo, NY\
$^{35}$Observatoire de Geneve, University of Geneva, Geneva, Switzerland\
$^{36}$Dept. of Astronomy, Columbia University, New York, NY\
$^{37}$Dept. of Physics and Astronomy, The Johns Hopkins University, Baltimore, MD\
$^{38}$Department of Physics and Astronomy, Bologna University, Bologna, Italy\
$^{39}$Instituto de Astronomia, Universidad Nacional Autonoma de Mexico, Unidad Académica en Ensenada, Km 103 Carr. Tijuana-Ensenada, Ensenada 22860, Mexico\
bibliography:
- 'tex/all.bib'
title: Star Cluster Catalogs for the LEGUS Dwarf Galaxies
---
Local Group – galaxies: photometry – galaxies: dwarf – galaxies: irregular – galaxies: spiral
INTRODUCTION
============
Dwarf galaxies are interesting laboratories with which to study the process of star formation. The extreme environments found in dwarfs (low mass, low metallicity, and low star formation rate; SFR) can provide leverage to test observational scaling relationships and predictions from theoretical models. Star clusters can be especially conspicuous in dwarf galaxies and are a prominent tracer of the star formation process. For example, young massive clusters (M$>$10$^5~M_{\odot}$) are the products of extreme star formation periods [i.e., high star formation efficiencies (SFE) $>$60%; @turner15] and have been found in several local bursting dwarf galaxies [@billett02; @johnson03; @johnson04; @calzetti15]. In addition, several cluster properties appear to scale with host-galaxy properties [e.g., the luminosity of the brightest cluster and the total number of clusters; @larsen02; @whitmore03; @goddard10], which can provide clues to the physics of star formation.
Despite the important environmental conditions found in dwarf galaxies, their star cluster populations are often challenging to study due to the low SFRs and consequently the reduced numbers of clusters. The low number statistics can add scatter to established cluster-host relationships and cause difficulties in interpreting the results, even in larger samples of dwarf galaxies [@cook12]. Thus, a large sample of dwarf galaxies whose star clusters have been uniformly identified and whose properties have been uniformly derived can provide key insights into the star formation process.
There are two key factors that can act to reduce the amount of scatter found in the cluster-host relationships of dwarf galaxies: 1) uniform identification of a more complete sample of clusters, and 2) measurement of accurate total fluxes and consequently more accurate cluster ages and masses. There are other factors that can affect the accuracy of derived cluster properties (e.g., single stellar population model uncertainties, stochastic effects for low-mass clusters, and reddening law uncertainties); however, if implemented uniformly across a sample of clusters, the scatter introduced by these other factors will be reduced.
The first factor that can add scatter into cluster-host relationships in dwarf galaxies is cluster identification. Traditionally, clusters have been identified by visually inspecting images to produce a cluster catalog. However, this method can result in missed clusters and contain biases depending on what an individual might identify as a cluster, which can depend on the size, shape, color, and luminosity of the cluster as well as the background and crowding environments nearby. Automated methods of cluster identification [@bastian12a; @whitmore14; @adamo17] could improve the completeness of clusters as these methods can flag all extended objects in the galaxy as cluster candidates. Unfortunately, these candidates require vetting by human classifiers where, in some cases, the number of candidates can be large compared to the number of real clusters. However, the number of candidates produced in dwarfs will likely be small, thus, making an automated identification method (with subsequent human vetting) practical in dwarf galaxies.
The second factor that can add cluster-host relationship scatter is inaccurate total fluxes and cluster properties. It is clear that high resolution imaging is required across at least 4 photometric bands to obtain clean photometry and accurate physical properties of clusters in nearby galaxies [@anders04a; @bastian14]. Even with high resolution imaging, the aperture correction used for clusters can produce significantly different total fluxes [@chandar10b] and the subsequently derived physical properties (age, mass, and extinction).
In this paper we examine various methods to identify and measure total fluxes of clusters in a sample of 17 dwarf and irregular galaxies from the Legacy ExtraGalactic UV Survey [LEGUS; @legus] where high-resolution HST images in 5 bands have been acquired by LEGUS for each of these dwarfs. We focus on the comparisons between automated and human-based identification as well as which photometric methods produce accurate total cluster fluxes and consequently produce accurate physical properties (i.e., age, mass, and extinction). We then compare our results of identification and photometry with those from previous cluster studies in dwarf galaxies. Finally, we conclude by examining the basic properties of clusters in these extreme environments and test if these properties change with galaxy-wide properties.
Data & Sample\[sec:data\]
=========================
In this section we describe the LEGUS dwarf and irregular galaxy sub-sample and how it compares to the full LEGUS sample. The data and properties of the entire LEGUS sample are fully described in @legus, but we provide an overview here. The LEGUS sample consists of 50 nearby galaxies within a distance of 12 Mpc to facilitate the study of both individual stars and star clusters. A combination of new WFC3 and existing ACS HST imaging constitute the LEGUS data resulting in 5 bands for each galaxy that cover near UV and optical wavelengths. The HST filters available in the LEGUS galaxies are F275W, F336W, F438W/F435W, F555W/F606W, and F814W, and are hereafter referred to as $NUV$, $U$, $B$, $V$, $I$, respectively. The global properties of the full LEGUS sample span a range in SFR ($-2.30<\rm{log(SFR; M_{\odot}yr^{-1})}<0.84$), stellar mass ($7.3<\rm{log(M_{\star}; M_{\odot})}<11.1$), and SFR density ($-3.1<\rm{log(\Sigma_{\rm SFR}; M_{\odot}yr^{-1}kpc^{-2})}<-1.5$). The normalized areas used for SFR densities are the D25 isophotal ellipses from RC3 [@rc3] as tabulated by NED[^1].
The dwarf and irregular galaxy sub-sample was chosen based on the absence of obvious spiral arms and dust lanes in the HST color images. As a result of this morphological selection, the galaxies studied here have irregular morphologies and may not strictly be considered dwarf galaxies. However, the majority (15 out of 17) have stellar masses of $\rm{log(M_{\star}; M_{\odot})}\leq9$ (see Figure \[fig:genprop\]). There are 23 galaxies in the LEGUS sample that meet the morphological criteria, and the global properties of all 23 galaxies are presented in the subsequent paragraphs of this section. However, in the cluster analysis sections of this study (§\[sec:clustcat\] and beyond) we utilize only the 17 dwarfs available in the public cluster catalog release of June 2018.
We used published physical properties to verify that the dwarf sample ($N$=23) tended to have low SFRs, low stellar masses (M$_{\star}$), and low metallicities; these properties are presented in Table \[tab:genprop\]. The FUV-derived SFRs are taken from @lee09b and @legus. The stellar masses are taken from @cook14c and are computed from mass-to-light ratios of the Spitzer 3.6$\mu m$ fluxes from @dale09. The metallicities are taken from the compilation of @cook14c, which favors oxygen abundances derived from direct methods but includes strong-line measurements when direct-method values were not available. The global properties of the dwarf sub-sample span a range in SFR ($-2.30<\rm{log(SFR; M_{\odot}yr^{-1})}<-0.03$), stellar mass ($7.3<\rm{log(M_{\star}; M_{\odot})}<9.5$), and SFR density ($-3.1<\rm{log(\Sigma_{\rm{SFR}}; M_{\odot}yr^{-1}kpc^{-2})}<-1.5$); note that the dwarfs span the same range of $\Sigma_{\rm{SFR}}$ as the full LEGUS sample.
Figure \[fig:genprop\] illustrates the galaxy-wide physical properties of both the full LEGUS sample and the dwarf sub-sample. Panel ‘a’ shows the distribution of the SFR versus galaxy morphological type ($T$), where the dwarf sub-sample tends to populate the later-type and lower SFR range of the entire sample. Panel ‘b’ shows the distribution of the SFR versus $\Sigma_{\rm{SFR}}$, where the dwarf sub-sample spans a large range of $\Sigma_{\rm{SFR}}$. Panels ‘c’ and ‘d’ show the distribution of SFR versus stellar mass and metallicity, respectively. The dwarf sub-sample tends to have lower stellar mass and metallicity.
![image](figures/genprop.pdf)
Cluster Catalogs {#sec:clustcat}
================
In this section we describe how the cluster catalogs for the LEGUS dwarf galaxies are constructed. The procedures used to produce the cluster catalogs follow that of @adamo17, and involve multiple steps: detection of candidates, classification, photometry and extrapolation to total flux, and SED fitting to obtain physical properties (e.g., age, mass, and extinction).
One of the main goals of the LEGUS project is to determine whether the properties of star clusters depend on the galactic environment in which they live. Dwarf galaxies offer an environment distinct from more massive spiral galaxies in which to explore this possibility. Given the possibility that star clusters in dwarfs may exhibit different properties, care must be taken when applying the methods of cluster detection and characterization developed on the LEGUS spiral galaxies to the dwarf galaxies, to ensure that systematics are not introduced. In this section we highlight two additional steps that were taken to check for possible systematics: a visual search for clusters, and an alternate method to compute total cluster fluxes.
Cluster Identification \[sec:clustid\]
--------------------------------------
### Automated Cluster Candidate Detection \[sec:autoID\]
The LEGUS cluster pipeline allows the user to tailor parameters for selection and photometry to appropriate values for each galaxy. The pipeline begins by utilizing SExtractor [@sex96] to identify point and point-like objects in the V-band image to create an initial catalog of both stars and star cluster candidates.
A key step in the overall process is the visual identification of isolated stars and star clusters which serve as training sets to guide separation of clusters from stars, and to determine appropriate photometric parameters (e.g., aperture radius and aperture correction; see § \[sec:phot\]). Stars and star clusters are separated based on the extent of their radial profiles as measured by the concentration index (CI). In the LEGUS cluster pipeline, CI is defined as the difference in magnitudes as measured from two radii (1 pixel minus 3 pixels). For each galaxy, a CI separation value is chosen by the user via comparison of CI histograms for training stars and clusters, where the high end of the stellar CI histogram helps to set the stellar-cluster CI threshold. Typical CI thresholds in the LEGUS dwarfs are 1.2 to 1.4 mag, similar to the values used for the LEGUS spirals.
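For illustration, the sketch below shows one way such a two-aperture concentration index could be computed with numpy from a background-subtracted cutout; the function name and the simple pixel-mask photometry are stand-ins for the pipeline's actual SExtractor-based measurements, not a reproduction of them.

```python
import numpy as np

def concentration_index(cutout, x0, y0, r_inner=1.0, r_outer=3.0):
    """CI = mag(r_inner) - mag(r_outer); larger CI means a more extended source.

    `cutout` is a background-subtracted 2D image and (x0, y0) is the source
    centroid in pixel coordinates.  Simple pixel masks stand in for proper
    sub-pixel aperture photometry.
    """
    yy, xx = np.indices(cutout.shape)
    r = np.hypot(xx - x0, yy - y0)
    flux_inner = cutout[r <= r_inner].sum()   # flux within a 1-pixel radius
    flux_outer = cutout[r <= r_outer].sum()   # flux within a 3-pixel radius
    mag_inner = -2.5 * np.log10(flux_inner)
    mag_outer = -2.5 * np.log10(flux_outer)
    return mag_inner - mag_outer              # point sources give small CI values
```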
After cluster candidates are identified, a final cut is made by the LEGUS pipeline after the photometry is completed (see §\[sec:phot\]) where sources with an absolute magnitude fainter than $M_V{=}-6$ are excluded. Previous studies have shown that separation of stars and clusters in absolute magnitude occurs in the range of $M_V{=}-6$ to $M_V{=}-8$ mag, where stars can be as bright as $M_V{=}-8$ mag and clusters can be identified as faint as $M_V{=}-6$ mag [@larsen04; @chandar10b]. Thus, a $M_V{=}-6$ mag cut is employed by the LEGUS pipeline to remove potential stellar contamination while minimizing the loss of potential star clusters.
The total number of cluster candidates found in the LEGUS dwarfs with this process is 3475. The number of candidates per galaxy is presented in Table \[tab:clustprop\] and spans from over a thousand in NGC4449 to 18 in NGC5238.
### Classification of Cluster Candidates \[sec:visclass\]
After the automated detection is complete, the resulting sources are vetted for contaminants (background galaxies, stars, artifacts, etc.) and classified based on morphology and symmetry.
![An HST color mosaic of example clusters. The four rows present three examples each for classes 1, 2, 3, and 4, from top to bottom, where the three examples across each row represent sources with low (compact), average, and large CI values. The class 4 examples across the bottom row represent a star with nearby contamination, a star with contaminating nebular emission, and a background galaxy.[]{data-label="fig:manidmosaic"}](figures/MosaicFinal2.jpeg)
A full description of the classification scheme can be found in @adamo17 and Kim et al. (2018; in prep), but we provide a brief overview here. Classifications were performed by at least 3 LEGUS team members, where the final classification is defined as the mode of all classifications. Class 1 sources are those that have extended radial profiles with spherical symmetry. Class 2 sources are those that are extended, but have some degree of asymmetry in their radial profiles. Class 3 sources are those with multiple peaks in their radial profiles. Class 4 sources are those considered to be contaminants (e.g., obvious stars, background galaxies, random overdensities of nebular emission, etc.). The morphologies potentially provide insight into the evolutionary status of the clusters. Class 1 and 2 sources may be gravitationally bound star clusters while class 3 sources (showing multiple stellar peaks) are referred to as compact stellar associations, which may be in the process of being disrupted [@grasha15; @adamo17].
Figure \[fig:manidmosaic\] shows the HST color image cutouts of three example clusters for classes 1, 2, 3, and 4 from the top to the bottom panels. Examples from left-to-right in Figure \[fig:manidmosaic\] show representative CI values near the minimum, average, and maximum for class 1, 2, and 3. The class 4 examples in Figure \[fig:manidmosaic\] from left-to-right show a star with a nearby contaminating object, a star that is spatially coincident with an overdense nebulous region, and a background galaxy, respectively. The majority of class 4 cluster candidates are stars whose CI values are inflated due to light from nearby sources.
Integrated over the LEGUS dwarf galaxy sample, the cluster pipeline finds 944, 495, and 2036 sources for classes 1-2, 3, and 4, respectively (i.e., the majority are determined to be contaminants). The number of confirmed candidates in each class in individual galaxies is presented in Table \[tab:clustprop\].
### Visual Cluster Search \[sec:manID\]
A visual search of the HST color images was also performed to provide a check on the LEGUS cluster pipeline. One of the authors (DOC) used images created from the $V$- and $I$-bands to search for clusters in the LEGUS dwarfs using procedures similar to @cook12. Clusters were identified as: a close grouping of stars within a few pixels with an unresolved component, or a single extended source with spherical symmetry. Sources exhibiting evidence of spiral structure (indicating a background galaxy) were excluded. The clusters were subsequently classified by multiple LEGUS team members as class 1, 2, or 3.
In total, 193 clusters were found in the visual search that were missing from the catalog produced by the LEGUS extraction tool, and these were added to the LEGUS catalogs.[^2] Figure \[fig:absci\] is a plot of absolute $V-$band magnitude versus CI for clusters found via the LEGUS pipeline and for the visually identified clusters missed by the pipeline. We find that the majority (74%) of clusters missed by the LEGUS pipeline are fainter than the $M_V=-6$ cut imposed by the pipeline. In addition, we find that the LEGUS cluster pipeline successfully recovers the majority (88%) of clusters to its stated limits (i.e., those brighter than $M_V=-6$ mag).
The small number of visually identified clusters brighter than $M_V=-6$ that were missed by the LEGUS pipeline can be explained by user-defined limits, the 3$\sigma$ detection limit imposed by the pipeline, or poor source extraction in high density environments. The compact cluster missed at CI$\sim$1.25, $M_V=-7.2$ mag was cut by the pipeline due to the user-imposed CI cut (CI=1.3). The missing clusters with the highest CI values (CI$>$2.1) were missed due to photometric errors just above the LEGUS pipeline detection threshold of 0.3 mag (i.e., low surface brightness). The remaining handful of clusters are located in a rapidly varying background region in NGC4449. All of these missing clusters were added into the final catalog.
![The absolute $V-$band magnitude versus concentration index (CI) for the clusters found in the LEGUS dwarf galaxy sub-sample. The blue X’s represent the human-identified clusters not found by the LEGUS cluster pipeline. The majority of these clusters are fainter than the $M_V{=}-6$ magnitude cut imposed by the pipeline.[]{data-label="fig:absci"}](figures/absmag_ci.pdf)
Cluster Photometry \[sec:phot\]
-------------------------------
In this section we describe how training clusters are used to determine the radius at which we perform aperture photometry on the clusters, and the aperture correction applied to obtain total fluxes. These two photometry parameters can greatly affect the total flux of each cluster and, consequently, the derived physical properties.
It is relatively straightforward to determine the aperture radius, which is based on the normalized radial flux curves of the training clusters, where the median radial profile provides the profile of a typical star cluster in each galaxy. The aperture is chosen to be the radius at which 50% of the median flux is contained within the aperture. The aperture radii allowed in our analysis take discrete values of 4, 5, and 6 pixels.
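As an illustration, the sketch below picks the aperture radius from the median normalized curve of growth of the training clusters; the array names are hypothetical and the simple interpolation stands in for the pipeline's actual procedure.

```python
import numpy as np

def choose_aperture_radius(growth_curves, radii, allowed=(4, 5, 6)):
    """Select the photometric aperture from training-cluster curves of growth.

    `growth_curves` is an (N_clusters, N_radii) array of enclosed flux, each
    row normalized to its value at the largest radius; `radii` are the
    corresponding radii in pixels.  The adopted aperture is the allowed radius
    (4, 5, or 6 pixels) closest to the radius at which the median curve
    encloses 50% of the flux of a typical cluster.
    """
    median_curve = np.median(growth_curves, axis=0)
    r_half = np.interp(0.5, median_curve, radii)   # radius enclosing 50% of the flux
    return min(allowed, key=lambda r: abs(r - r_half))
```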
The challenge in this process particular to dwarf galaxies is that galaxies with low SFRs have small populations of clusters overall, and the isolated clusters that can be used for the training set may be few. Table \[tab:apcorr\] shows the number of training clusters found in each of the dwarf galaxies, where the number ranges from 2 to 55. It is possible that this could lead to aperture corrections that are not well determined for low SFR galaxies, so we have investigated two methods: 1) an average aperture correction as measured from the isolated clusters in the training set for each galaxy, and 2) a correction based on the measured CI of each cluster, where the correction to total flux is derived from a suite of artificial star clusters embedded in our HST imaging.
### Average Aperture Correction
The first method adopts an average aperture correction of training clusters. This method has been widely used [@chandar10b; @adamo17] and has the advantage of being resistant to outliers in the training set. Here, we take the difference in magnitudes measured in a 20 pixel radius aperture minus that measured in the “half light” aperture (i.e., a 4, 5, or 6 pixel radius as determined above) as the correction. Figure \[fig:apcorrcomp\] presents the aperture correction histogram in the V-band for the training clusters in IC4247 (panel a) and NGC4449 (panel b), which shows that the average aperture correction is not well defined for galaxies with low numbers of training clusters.
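A minimal sketch of this average correction for one filter is given below; the magnitude arrays are hypothetical, and the rejection of outliers against the allowed limits of the correction histogram (discussed next) is omitted.

```python
import numpy as np

def average_aperture_correction(mag_20px, mag_halflight):
    """Average aperture correction from the training clusters in one filter.

    `mag_20px` and `mag_halflight` are the magnitudes of the isolated training
    clusters measured in a 20 pixel radius aperture and in the adopted
    half-light aperture (4, 5, or 6 pixel radius).  The correction is negative
    and is added to the small-aperture magnitudes to estimate total magnitudes.
    """
    corr = np.asarray(mag_20px) - np.asarray(mag_halflight)
    return corr.mean(), corr.std(ddof=1)   # mean correction and its scatter
```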
The average aperture correction in NGC4449 follows the peak of the histogram, thus recovering the aperture correction of a typical cluster. However, the histogram in IC4247 is not well determined as there are only 2 training clusters, with corrections that differ by a factor of $\sim$2 ($\sim$0.75 mag). Furthermore, since 1 of the 2 training clusters in IC4247 lies outside the allowed limits of the aperture correction histogram, the average aperture correction is based on only a single training cluster, making the aperture correction highly uncertain. When the average is based on the radial profiles of only 1 or 2 training clusters, it may not reflect a typical cluster in a dwarf galaxy, and this can cause the correction to vary across filters within a galaxy.
For example, Table \[tab:apcorr\] shows the average aperture corrections in each filter across the LEGUS dwarf galaxies, where column 10 gives the range in the aperture correction across the filters (Columns 2$-$9). The range in aperture corrections for a single galaxy is as low as 0.09 mag and as high as 0.45 mag. The top panel of Figure \[fig:apcorrN\] graphically presents the average aperture corrections for all filters in each galaxy versus the number of training clusters. The bottom panel of Figure \[fig:apcorrN\] demonstrates that there is a larger spread of aperture corrections in galaxies with lower numbers of training clusters (N$<$10). We note that we found no correlations between distance and the average aperture corrections, the range in aperture corrections, or the number of training clusters.
![image](figures/ApCorrComp.png)
![Top panel: the average aperture correction for all filters in the LEGUS dwarf galaxies plotted against the number of training clusters used to derive the average aperture correction. Some of the y-axis shifts can be accounted for by different photometry radii. However, there exist large aperture correction spreads (0.4 mag) for individual galaxies with fewer training clusters. Bottom panel: the range in average aperture correction across the filters in each galaxy. The range increases significantly below 10 training clusters.[]{data-label="fig:apcorrN"}](figures/AvgApcorrPaper.pdf)
The larger scatter in the average aperture corrections at lower numbers of training clusters may artificially change the shape of a cluster’s SED, making the cluster bluer or redder, which can affect the derived age and extinction. In the next section, we explore a second aperture correction, based on the CI of each cluster, to mitigate the effects of low training cluster numbers on the aperture corrections.
### CI-based Aperture Correction \[sec:ciapcorr\]
An alternative method used to derive an aperture correction is based on the radial profile of each cluster as quantified by the concentration index (CI) in each filter. This method has also been widely used in the literature [@chandar10b; @bastian12a; @adamo15], but has the drawback that uncertain aperture corrections can result for faint sources with marginal detections in some filters.
We derive a relationship between the aperture correction and the CI for model clusters in each filter image. The model clusters are generated using the MKSYNTH task in the BAOLAB package [@baolab] following the procedure of @chandar10b, where different sized clusters are constructed by convolving a KING30 profile [@king66] of various FWHM values with an empirically-derived stellar PSF made from isolated stars found in each image [see also @anders06]. The PSF sizes for the WFC3 and ACS cameras are 2.1 pixels (0$\farcs$105) and 2.5 pixels (0$\farcs$125), respectively. Model clusters are then injected into relatively sparse regions of all filter images for several LEGUS galaxies (both dwarfs and spirals). In addition, we inject the empirically-derived PSF into these same regions to define the expected CI threshold between stars and star clusters. After injecting both model clusters and model stars (i.e., the empirical PSF) into each image, we extract the resulting photometry using the “half light” apertures and measure the CI and aperture correction. We note that King profiles are often used for globular clusters (i.e., self-gravitating systems) and that younger clusters show better empirical fits to Moffat profiles [@elson87]. However, we find no difference in the aperture correction–CI relationships between King and Moffat profiles at low and high CI values and little difference (0.1–0.2 mag) at intermediate CI values (1.5–1.7).
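As a rough illustration of this step (not a reproduction of the BAOLAB/MKSYNTH machinery), the sketch below builds a model cluster by convolving an analytic King profile with an empirical PSF stamp and injects it into a science image; the simple King (1962) form truncated at $r_t/r_c=30$ is used here as a stand-in for the KING30 model, and all names and sizes are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def king_image(r_core, shape=(51, 51), concentration=30.0):
    """Analytic King (1962) surface-brightness model, truncated at the tidal radius."""
    r_tide = concentration * r_core
    yy, xx = np.indices(shape)
    r = np.hypot(xx - (shape[1] - 1) / 2.0, yy - (shape[0] - 1) / 2.0)
    term = (1.0 + (r / r_core) ** 2) ** -0.5 - (1.0 + (r_tide / r_core) ** 2) ** -0.5
    img = np.where(r < r_tide, term ** 2, 0.0)
    return img / img.sum()                               # normalize to unit total flux

def model_cluster(r_core, psf_stamp, total_flux):
    """Convolve the King model with an empirical PSF stamp and scale to a total flux."""
    stamp = fftconvolve(king_image(r_core), psf_stamp, mode="same")
    return total_flux * stamp / stamp.sum()

def inject(image, stamp, x0, y0):
    """Add a model stamp into a copy of a science image at integer pixel (x0, y0).

    Assumes an odd-sized stamp placed well away from the image edges.
    """
    out = image.copy()
    ny, nx = stamp.shape
    out[y0 - ny // 2:y0 + ny // 2 + 1, x0 - nx // 2:x0 + nx // 2 + 1] += stamp
    return out
```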
Figure \[fig:ciapcorrex\] shows a plot of the aperture correction versus CI for model clusters and stars inserted into one of the ACS-F555W filter images, where we find the expected relationship in that higher CI values (i.e., more extended) have larger aperture corrections (i.e., more negative). The cubic polynomial fit to both the model stars and clusters is consistent with previous studies [@chandar10b]. In addition, we find a model star-cluster CI threshold of 1.2 and 1.3 mag for the WFC3 and ACS cameras, respectively.
![The aperture correction measured for model stars (red pluses) and clusters (grey diamonds) plotted against the measured CI. The black filled circles represent the median and standard deviation of all model clusters in CI bins. The dashed blue line represents the cubic polynomial fit to both the model stars and the median extracted model clusters.[]{data-label="fig:ciapcorrex"}](figures/CIapcorr_ugc4305_king30_f555w_5px.png)
In total, we have injected model stars and clusters into the images of 7 LEGUS galaxies (4 spirals and 3 dwarfs) whose imaging contains all camera-filter image combinations present in the entire LEGUS survey. We derive polynomial relationships between aperture correction and CI for all camera-filter image combinations since the WFC3 and ACS PSFs are different. Figure \[fig:ciapcorrall\] shows the polynomial fits for all images (WFC3 and ACS) in the 7 galaxies where we find similar polynomial fits for each camera (regardless of the filter). Thus, we have defined a single polynomial fit for each camera as the median of the fits for all filters, which are represented as solid lines in Figure \[fig:ciapcorrall\].
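A compact sketch of this fitting step is shown below, assuming per-filter arrays of measured CI and aperture correction for the injected model sources; taking the coefficient-wise median of the per-filter cubic fits is one simple reading of adopting "the median of the fits" for each camera.

```python
import numpy as np

def fit_apcorr_vs_ci(ci, apcorr, degree=3):
    """Cubic polynomial fit of aperture correction versus CI for one filter image."""
    return np.polyfit(ci, apcorr, degree)          # coefficients, highest order first

def camera_relation(per_filter_fits):
    """Adopt a single relation per camera as the median of the per-filter fits."""
    return np.median(np.vstack(per_filter_fits), axis=0)

# To apply the adopted relation to a cluster with measured concentration index ci_meas:
#     apcorr = np.polyval(camera_coeffs, ci_meas)
```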
![The aperture correction-CI polynomial fits for all filter-camera combinations for the 7 LEGUS galaxies used to derive these fits. The fits for each camera show good agreement across filters. Thus, the final CI-based aperture correction polynomial fits are given as the median of the polynomial fits in each camera (see Table \[tab:fakeclust\]). The vertical dotted and dashed lines represent the maximum measured model star CI (CI min) and the maximum measured model cluster CI (CI max) for the ACS and WFC3 cameras, respectively. These limits represent the range of CI values measured for model clusters.[]{data-label="fig:ciapcorrall"}](figures/CompAllFits-1.png)
Finally, we repeat the analysis for different aperture radii since the aperture correction will depend on the aperture radius. We derive a median relationship for each camera for each of the three aperture radii allowed in the LEGUS cluster pipeline (4, 5, and 6 pixels). We do not show the aperture correction versus CI plots for the other 2 aperture radii since they are similar to that in Figure \[fig:ciapcorrall\], but with shifted aperture corrections (i.e., y-axis). We present the cubic polynomial fits for all three apertures in Table \[tab:fakeclust\].
As a check on the model cluster polynomial fits, we compare these fits to the aperture corrections and CI values for the real isolated stars and clusters of NGC4449 in the $V-$band in Figure \[fig:ciapcorrreal\]. We do not show the other filters since they show similar agreement with similar scatter. We find that the measured values of both the real stars and clusters show good agreement with the polynomial fit generated from model stars and clusters, including those clusters with large CI values near CI=2.1 mag. One of these clusters has an extended radial profile (CI=2.14 mag) and a measured CI-based aperture correction of –1.55 mag. The average aperture correction in this filter is –0.85 mag, which is a 0.7 mag difference.
![The measured aperture correction versus CI values for the training clusters in NGC4449. Both the training stars and clusters show good agreement with the polynomial fits derived from model stars and clusters. The horizontal dashed line represents the average aperture correction where the error is represented by the gray shaded area. The difference in average and measured aperture correction at low and high CI values can differ by as much as 1 magnitude. []{data-label="fig:ciapcorrreal"}](figures/ngc4449_ApCorrCI-2.pdf)
The main panel of Figure \[fig:CIerrCI\] shows the CI values versus their uncertainties for all clusters in all filters, where each symbol represents the photometry information from the five filters. In addition, we plot the histograms of the CI and CI error values in the top and right histogram panels, respectively. We find that the measured CI values of all clusters span a range in values where the median value is 1.7 mag with a standard deviation of 0.27 mag. We also find that the majority of the CI errors are relatively low, where the median value is 0.04 mag with a standard deviation of 0.12 mag. The low CI errors suggest that the CI measurements for the majority of our clusters are well defined and suitable for deriving aperture corrections.
![The measured CI and errors for all clusters in all filters in the LEGUS dwarfs. We find a median CI of 1.7 mag with a standard deviation of 0.27 mag, and a median CI error of 0.04 mag with a standard deviation of 0.12 mag. []{data-label="fig:CIerrCI"}](figures/All_CIerr_CI.pdf)
A caveat to using the CI-based aperture correction method is that large aperture correction uncertainties can exist for clusters with marginal detections in various filter images. We find that the CI errors increase sharply for the faintest clusters near 24 mag. Typically this occurs in our bluest filters ($NUV$- or $U$-band) due to the lower sensitivity of these observations and/or clusters with a redder SED. The fraction of clusters with a CI error greater than 0.1 mag is 34% and 4% for the $NUV-$ and $V-$band filters, respectively. Thus, the CI-based aperture corrections may change the SED shape of clusters with poor detections in some bands; this is more likely to occur for older/redder clusters.
RESULTS {#sec:results}
=======
In this section we provide a detailed comparison of the two aperture corrections and their effects on cluster colors and derived physical properties (age, mass and extinction). We also compare the LEGUS dwarf galaxy cluster catalogs to those previously identified in another large sample of dwarf galaxies [@cook12]. Finally, we present the luminosity, mass, and age distributions along with an investigation of observable cluster properties across galaxy environment.
CI-based versus Average Aperture Correction
-------------------------------------------
The goal of this section is to test which aperture correction provides the most accurate total fluxes, colors, and physical properties (age, mass, and extinction) for clusters in dwarf galaxies. Here, we provide methodology guidelines for future cluster investigations in galaxies with small cluster populations.
### Photometric Property Comparison {#sec:photcomp}
We first examine how the total cluster fluxes compare given the two aperture corrections studied here. Figure \[fig:apcorrhist\] plots the distribution of the differences between the CI and average aperture corrections in the $V$-band for the aggregate dwarf galaxy cluster sample, where the different histograms represent different cluster classes. As shown in Figure \[fig:apcorrhist\] (and previously in Figure \[fig:ciapcorrreal\]), corrections inferred from the CI can differ from the average by as much as $\approx$1 mag.
Figure \[fig:apcorrhist\] illustrates how common the extremely extended or compact clusters are. Since the aperture corrections in this figure are in magnitudes, we note that objects with negative aperture-correction differences (to the left) have greater CI-based aperture corrections than the average and are thus more extended sources. Conversely, more compact sources have positive values (to the right) in Figure \[fig:apcorrhist\].
![The CI minus the average aperture correction histogram for all clusters in the LEGUS dwarfs as measured in the $V-$band. The distribution shows that the majority of clusters are more compact than the average while there exists a small tail of more extended clusters. The histograms are further broken down into the class 1, 2, and 3 as red-filled, blue line-filled, and green line-filled histograms, respectively. []{data-label="fig:apcorrhist"}](figures/All_dApcorr_hist.pdf)
Overall the distributions are not centered on zero, but are shifted to positive values (i.e., more clusters tend to have smaller CI-based aperture corrections) and thus are more compact relative to the isolated training clusters. The median difference for all clusters is 0.2 mag, and the differences for class 1, 2, and 3 are similar. As might be expected, there is a larger tail of negative values for the class 2 and 3 clusters, indicating that these classes include objects with more extended profiles relative to the isolated training clusters.
The Class 3 sources show a large number of more compact objects; however, there still exists a significant number of extended class 3 sources. Class 3 objects are defined by the groupings of stars (i.e., multiple radial profile peaks), but they exhibit a wide range in morphology and density of these stellar groupings. The more extended Class 3s exhibit a more pronounced unresolved component, while the compact Class 3s tend to have a pronounced stellar object near the center.
![The significance of the difference between the CI and average aperture corrections given their combined measured uncertainties. The significance is defined as the difference in aperture corrections divided by their uncertainties and added in quadrature. Between 40–50% of all clusters (depending on the filter) show a $>3\sigma$ aperture correction difference indicating that roughly half show a true variation in their radial profile compared to the average training cluster.[]{data-label="fig:dapcorrsig"}](figures/dApcorrrSig_CI.pdf)
A natural question to ask is whether the large spread of CI-based aperture corrections is a true reflection of the diversity of radial profiles in the cluster population or whether the spread is mainly due to photometric uncertainties in the measurement of the CI. Thus, we next determine the significance of the differences between the two aperture corrections. In other words, how many sigma ($\sigma$) apart are the two aperture corrections? Figure \[fig:dapcorrsig\] presents this significance versus the CI values of all clusters in the $V-$band, which shows that the low (CI$<$1.6) and high (CI$>$2.1) CI clusters are significantly different ($>3\sigma$) from the average. We note that we see similar distributions in all other filters. Between 40–50% of all clusters in the different filters have significant CI-based aperture corrections compared to the average, and thus are not consistent with the average corrections given their measured errors. *In other words, nearly half of the clusters show true variations in their radial profiles.* Conversely, the other half of the clusters have CI-based corrections which are consistent with the average corrections within the uncertainties and generally have CIs between 1.8 and 2.0 mag.
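Written out explicitly, the significance plotted in Figure \[fig:dapcorrsig\] for a given cluster and filter is $$\frac{\left|\Delta m_{\rm CI} - \Delta m_{\rm avg}\right|}{\sqrt{\sigma_{\rm CI}^{2} + \sigma_{\rm avg}^{2}}},$$ where $\Delta m_{\rm CI}$ and $\Delta m_{\rm avg}$ denote the CI-based and average aperture corrections and $\sigma_{\rm CI}$ and $\sigma_{\rm avg}$ their measured uncertainties; the symbols are introduced here only for clarity.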
![image](figures/PaperExample_CCplot.pdf)
Next, we examine the impact of the aperture corrections on the cluster colors in the LEGUS dwarfs. Figure \[fig:ccplotcomp\] shows different color-color plots for IC4247 and NGC4449. The top panels show IC4247 (a galaxy with only 2 training clusters) while the bottom panels show NGC4449 (a galaxy with many training clusters). For comparison, we also include colors measured with no aperture correction applied, as they may better represent the true cluster colors. We have excluded class 3 sources in Figure \[fig:ccplotcomp\] for clarity.
The left pair of panels show the $B-V$ versus $V-I$ color-color plots for the two galaxies, while the right pair show the $U-B$ versus $NUV-U$ colors. The colors uncorrected for aperture tend to have the least amount of scatter around the model tracks, while the colors derived from CI-based total fluxes have the most scatter; as might be expected from the relative uncertainties in the colors. Overall, the ensemble populations have consistent color distributions, but clearly the colors can vary significantly for individual clusters. For instance, the average aperture correction colors show systematic offsets with the uncorrected colors, while the CI-based colors tend to show larger scatter in the $NUV$- and $U$-bands.
In the next section we test how the range in total fluxes and colors between the aperture corrections translate into a difference in the derived physical properties (i.e., age, mass, and extinction).
### Physical Property Comparison
The cluster ages and masses are determined via SED fitting to single-aged stellar population models. The methods are detailed fully in @adamo17, but we provide a brief overview here. The cluster photometry is fit via two methods: 1) with Yggdrasil [@zackrisson11] SSP models under the assumption that the IMF is fully sampled, and 2) with a Bayesian fitting method based on SLUG [Stochastically Lighting Up Galaxies; @slug] where the IMF is stochastically sampled via <span style="font-variant:small-caps;">cluster\_slug</span> [@krumholzb; @krumholz15]. Since the goal of this paper is to compare the properties of clusters in dwarf galaxies to those in spirals, we have chosen to use the physical properties produced by the Yggdrasil method for a more direct comparison to the results of the LEGUS spirals studied in @adamo17 and @messa18a.
The Yggdrasil method uses the model parameters available in Starburst99 [@starburst99; @vazquez05], where two commonly used stellar libraries (Padova-AGB and Geneva tracks) with a Kroupa IMF that ranges from 0.1 to 100 $M_{\odot}$ are provided as well as three extinction laws: Milky Way [@cardelli89], starburst [@calzetti00], and starburst with differential extinction for stars and gas. These models are input into Cloudy [@cloudy] to produce fluxes from nebular emission lines and continuum. In this analysis, we use the SED output based on the following assumptions: the Padova-AGB libraries, a starburst extinction law with differential reddening, and the measured gas phase metallicity of each galaxy (see Table \[tab:genprop\]).
![The age-mass diagram for all clusters in the LEGUS dwarf galaxies. The solid line is the Padova isochrone corresponding to the absolute $V-$band magnitude cut of $M_V=-6$ mag.[]{data-label="fig:agemass"}](figures/agemass_all.pdf)
Figure \[fig:agemass\] shows the age-mass diagram for all clusters found in the LEGUS dwarf galaxies for both CI- and average-based photometry. The majority of the clusters below the $M_V{=}-6$ mag cut line are those that have been identified via visual inspection. There is broad agreement between the CI- and average-based aperture corrected fluxes in the coverage of this diagram.
![image](figures/agemasscomp.pdf)
Figure \[fig:agemasscomp\] is a six panel plot comparing ages (top), masses (middle), and extinctions (bottom) derived from the CI and average aperture corrected fluxes for all clusters across the LEGUS dwarf galaxy sample. Scatter plots are shown on the left, and histograms are shown on the right. A comparison of ages shows overall agreement, but with large scatter. We find a median difference of 0.0 with a standard deviation of 0.64 dex for all cluster classes. The age histograms show a similar distribution. This is similar to what @adamo17 found for the LEGUS spiral galaxy NGC628. We note that the apparent age gap between log(age/yr) of 7.2 and 7.6 is a well known artifact [@maiz09] and does not imply a real deficit in this age range. This feature arises because the models loop back on themselves during this time period, covering a fairly large range in age but a small range of colors. This also explains the pile-up of clusters at log(age/yr) of 7.8.
A comparison of masses shows overall agreement with a smaller degree of scatter (0.37 dex). However, the masses derived from the CI-based aperture corrections are systematically lower where the median difference is 0.1 dex. This small overall shift to lower masses can also be seen in the histograms. The shift in masses can be understood from inspection of Figure \[fig:apcorrhist\], where we found that the median CI-based aperture correction for all clusters was 0.2 mag fainter than those derived from the average-based aperture corrections. A comparison of extinction values shows overall agreement for the majority of clusters with some scatter, where the median difference is 0.0 with a standard deviation of 0.18 dex. The histogram comparisons also show little difference between the derived extinctions for the CI- and average-based aperture corrections. We note that the median extinction for all clusters in the LEGUS dwarf galaxies is 0.1 mag which suggests that these dwarf galaxies have low extinction environments [@lee09b; @hao11; @kahre18].
As can be seen in the scatter plots, the age and mass distributions are different for the Class 1, 2, and 3 clusters [@grasha15; @grasha17; @adamo17]. We find that class 1, 2, and 3 sources have a median log(age,yr) of 8.0, 7.0, and 6.7, respectively; these values are similar for ages derived using both aperture corrections. The trend between age and class suggests that the associations (Class 3) are the youngest population while the more compact population (Class 1) is the oldest. We also find that Class 1, 2, and 3 have a median log(mass,M$_{\odot}$) of 3.9, 3.6, and 3.2, respectively. Thus, the Class 1 clusters are the oldest and most massive, while the associations (Class 3) are the youngest and least massive.
![The age (top panel) and mass (bottom panel) ratios for the average- or CI-to-constant aperture correction as represented by open blue and filled red histograms, respectively. The constant aperture correction is defined as the median of all average corrections with a value of –0.85 mag. The hashed red histograms represent the clusters whose CI-average aperture correction differences are greater than 3$\sigma$ of the combined correction errors. The average ages and masses show good agreement with the constant correction ages and masses. The CI ages agree with the constant ages with increased scatter, but the CI masses are 0.2 dex smaller than those derived with a constant correction. The majority of the clusters with low CI-to-constant mass ratios are those with a significantly different CI-based aperture correction suggesting that many of the lower CI-based masses reflect a real difference in cluster mass.[]{data-label="fig:agemass4px"}](figures/dAgeMassHist_1ap.pdf)
To further explore how our aperture corrections can affect the derived ages and masses, we re-compute them using fluxes where a single, constant aperture correction of 0.85 mag has been applied across all filters for all clusters (i.e., the median average aperture correction for all filters in all dwarf galaxies). This process leaves the colors unchanged, and provides a useful comparison as they may better represent the true cluster colors as illustrated in Figure \[fig:ccplotcomp\]. In Figure \[fig:agemass4px\] we show histograms for the ratios of ages and masses computed using the CI or average aperture corrected fluxes relative to those computed with a 0.85 mag constant correction. As might be expected based upon the color-color diagrams and model tracks shown in Figure \[fig:ccplotcomp\], the ages are in overall agreement (the distribution of age ratios are centered upon a value of unity), but show large scatter (factor of $>$2). The ages derived from the CI-based aperture corrected fluxes show a larger spread in values.
Examination of the mass histograms shows broad agreement between those based on the constant and average aperture corrected fluxes, but there exists an offset between those based on the constant- and CI-based aperture corrections (the median ratio is –0.22 dex). This is a consequence of the fact that many of the clusters are more compact compared to the training clusters, and thus have total fluxes that are overestimated by the average (and similarly the constant) aperture correction. In addition, the clusters with low CI-to-constant mass ratios are dominated by those with a statistically significant CI minus average correction difference (as represented by the hashed histogram). Thus, the lower CI-based masses likely reflect a real difference in the derived masses from either the average or the constant aperture correction.
### Cluster Distribution Functions (Age, Luminosity, and Mass) {#sec:distfunc}
In this section we explore the effects of our two aperture corrections on the distribution functions of age, luminosity, and mass in the LEGUS dwarf galaxy clusters. Here we focus on class 1 and 2 clusters; however, we note that we find similar results when including the class 3 sources.
Figure \[fig:LFci\] shows the LFs for the clusters in all LEGUS dwarfs with an age less than 100 Myr using both CI- and average-based aperture corrections. Each filter’s LF is color coded and the y-axis has been normalized to an arbitrary number for clarity. The binned LFs have been constructed with an equal number of clusters in each luminosity bin [@miaz05], where the y-axis is calculated as the number of clusters per bin divided by the bin width. For more details on the construction of these distributions see §5.1 of @cook16. We derive the LF slope by fitting a power law to bins with luminosities brighter than the peak of the luminosity histogram [@cook16]. We note that the peak of the luminosity histogram agrees with the turnover found in the LFs for each filter.
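A minimal sketch of this construction and fit for one filter is given below, assuming an array of absolute magnitudes for the young clusters; the equal-number binning, bin normalization, and restriction to bins brighter than the histogram peak follow the description above, while the number of clusters per bin and the completeness magnitude are placeholders.

```python
import numpy as np

def luminosity_function_slope(abs_mag, n_per_bin=10, mag_complete=None):
    """Power-law slope alpha of dN/dL for one filter's cluster sample.

    Equal-number luminosity bins (roughly `n_per_bin` clusters each) are built
    from the absolute magnitudes; dN/dL is the number per bin divided by the
    bin width, and the slope is fit only to bins brighter than `mag_complete`
    (e.g., the peak of the luminosity histogram).
    """
    lum = np.sort(10.0 ** (-0.4 * np.asarray(abs_mag, dtype=float)))
    edges = np.unique(np.append(lum[::n_per_bin], lum[-1]))
    counts, edges = np.histogram(lum, bins=edges)
    widths = np.diff(edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    dndl = counts / widths
    keep = dndl > 0
    if mag_complete is not None:
        keep &= centers > 10.0 ** (-0.4 * mag_complete)   # brighter than the peak
    slope, _ = np.polyfit(np.log10(centers[keep]), np.log10(dndl[keep]), 1)
    return slope
```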
![The luminosity functions (LFs) for all clusters with an age less than 100 Myr in all LEGUS dwarf galaxies. The LF slopes show agreement across all filters within the errors. However, the bluer filter LF slopes tend to be flatter than the redder wavelength LF slopes.[]{data-label="fig:LFci"}](figures/All_LFabs.pdf)
A comparison of the LF slopes between the CI- and average-based aperture corrections reveals no difference within the fitted errors for each of the five filters. However, we do find that the bluer ($NUV$ and $U$) LF slopes tend to be flatter than those at longer wavelengths ($BVI$) as was found by other studies of spiral galaxies [@DolphinKennicut02; @elmegreen02; @gieles06a; @haas08; @cantiello09; @gieles10; @chandar10b; @adamo17]. The median $NUV-$ and $U-$band slopes are 2.8$\sigma$ flatter than the median $BVI-$band slopes. We note that we find similar results when using more conservative magnitude cuts (a few tenths of a magnitude) than the peak when fitting the LF slopes.
![The mass functions (MFs) for all clusters in the LEGUS dwarf galaxies broken into three age ranges: 1-10 Myr, 10-100 Myr, and 100-400 Myr. The MF slopes for all three age bins agree with the canonical $-2$ power-law slope. []{data-label="fig:MFci"}](figures/All_MF_Ages.pdf)
Figure \[fig:MFci\] shows the cluster MFs in different age ranges (1-10, 10-100, and 100-400 Myr) using both CI- and average-based aperture corrections. We find no difference in the MF slopes between the two aperture corrections for all three age ranges given the uncertainties. We also find a similar MF slope for all three age bins of $-1.9\pm0.1$, which is consistent with a canonical $-2$ power-law slope found by many previous clusters studies [@battinelli94; @elmegreen97; @zhang99; @hunter03; @bik03; @degrijs03b; @mccrady07; @chandar10b; @cook12; @adamo17].
![The age distributions of all clusters in the LEGUS dwarfs broken into three logarithmic mass bins: 3.7-4.0, 4.0-4.5, and $>$4.5. We make a mass cut above log(mass) of 3.7 to avoid incompleteness at older ages and to avoid variations due to stochastic IMF sampling of low-mass clusters. Following @adamo17 and @messa18a, we exclude the youngest age bin. We find a power-law slope of $-0.8\pm0.15$ for all mass bins and for both aperture corrections. []{data-label="fig:dndtall"}](figures/All_dndt_masses.pdf)
Figure \[fig:dndtall\] shows the cluster age distributions in different logarithmic mass bins (3.7$-$4.0, 4.0$-$4.5, and $>$4.5) using both CI- and average-based aperture corrections. We use a log(mass/M$_{\odot}$) cut of 3.7 to avoid incompleteness at older ages and to avoid variations in the derived physical properties of clusters due to stochastic sampling of the cluster IMF [@fouesneau10; @krumholz15]. Note that we use a mass cut instead of a luminosity cut since the derived masses will have taken into account the fading of clusters over time [@fall05]. We exclude the youngest age bin following the methodology of @adamo17 and @messa18a. However, we note that the youngest age bin data agree with the fitted distribution at older ages. We find no significant difference between the average- and CI-based age distributions. In addition, we find no difference across the populations in mass bins, and find a median $-0.8\pm0.15$ power-law slope for all three mass bins.
It is possible that our choice of age bin size and age range might affect our fitted age distribution slopes. As such, we have tested the bin sizes and age ranges used in our age distribution fits. Neither smaller nor larger bin sizes show significant differences in their distribution slopes, but we do find larger fluctuations in the smallest bin sizes ($\Delta t$=0.2) most of which reflect the known age gap artifacts due to model fitting. We also test fitting a power law to ages above 10$^7$ yrs, and find a steeper slope of –1.1, but find no difference between the aperture corrections nor across the mass bins. It is also possible that young bursts of star formation in some of our dwarfs with N$>$100 clusters (NGC4449, NGC4656, and NGC3738) may dominate the age distributions and artificially create a steeper slope. We test this by removing each and all of these three galaxies and find no differences in the age distribution slopes within the errors.
[*We conclude that any discrepancies found in the total fluxes, colors, ages, and masses of individual clusters when using different aperture corrections do not translate into a measurable effect in the LFs, MFs, nor the age distributions of ensembles of star clusters.* ]{}
### Aperture Correction Comparison Take-Away Points
We have performed an analysis regarding the effects of two commonly-used cluster aperture corrections on both the observable and physical properties of star clusters. Both methods show consistent luminosity, mass, and age distributions for ensembles of clusters, but both have drawbacks when measuring the properties of individual clusters.
The average aperture correction can produce systematic color offsets when too few training clusters ($N<10$) are available to define the average correction. In addition, the CI-based aperture corrections show increased color scatter for clusters with marginal detections in some filters (usually the $NUV$- or $U$-bands). The ages from both the CI- and average-based aperture corrections show larger scatter compared to those derived from a constant correction, where the scatter for the CI-based ages ($\sigma$(CI-to-constant)=0.6 dex) is larger than for the average-based ages ($\sigma$(Avg-to-constant)=0.35 dex).
The median relative difference in total flux resulting from the two aperture corrections is 0.2 mag (in the sense that the average correction is larger, indicating that most of the clusters are more compact than the training clusters), and the difference for individual clusters can be as large as $\sim$1 mag (Figure \[fig:apcorrhist\]). For half of the clusters in our dwarf galaxy sample, these differences are within the photometric uncertainties; for the other half the difference points to a true variation in the radial profile of the clusters relative to those characterized by the training sample. The median difference in total fluxes translates into a median mass difference of 0.22 dex, where the masses derived from CI-based corrections are smaller than the average.
From these experiments we have found that the total fluxes of individual clusters are more accurately recovered with a CI-based aperture correction, but that the CI-based aperture corrections result in increased scatter around the predicted colors when applied individually to each filter. Based on these results, we recommend the following hybrid strategy for aperture corrections. Measure the CI using the filter in which the clusters are detected (the V-band in this case), assume the same CI for all other bands, and compute the appropriate aperture correction given the HST camera (i.e., the appropriate aperture correction-CI relationship from Table \[tab:fakeclust\]). This method will introduce a small amount of scatter in the final fluxes across filters due to the PSF variation across the two HST cameras, but this scatter will be smaller than the scatter added by either aperture correction studied here. We have implemented these recommendations on a single galaxy (NGC4449) with a significant cluster population and find similar age and mass distributions within the fitted errors.
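A concrete sketch of this hybrid scheme is given below; the coefficient values are placeholders standing in for the per-camera relations of Table \[tab:fakeclust\], and the mapping of filters to cameras is hypothetical.

```python
import numpy as np

# Placeholder cubic coefficients (highest order first); the adopted values are
# the per-camera fits tabulated in Table [tab:fakeclust], not these numbers.
CAMERA_COEFFS = {
    "ACS":  np.array([-0.30, 1.20, -1.80, 0.30]),
    "WFC3": np.array([-0.25, 1.00, -1.60, 0.25]),
}

def hybrid_total_mags(ap_mags, cameras, ci_v):
    """Apply one CI-based correction, from the detection (V) band CI, to all filters.

    `ap_mags` maps filter name -> half-light aperture magnitude of one cluster,
    `cameras` maps filter name -> the camera of that image ("ACS" or "WFC3"),
    and `ci_v` is the cluster's V-band concentration index.
    """
    return {band: mag + np.polyval(CAMERA_COEFFS[cameras[band]], ci_v)
            for band, mag in ap_mags.items()}

# Example with hypothetical values:
# hybrid_total_mags({"F555W": 22.4, "F336W": 22.9},
#                   {"F555W": "ACS", "F336W": "WFC3"}, ci_v=1.45)
```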
Comparison to ANGST Dwarf Galaxy Clusters \[sec:angstcomp\]
-----------------------------------------------------------
In this section we compare the cluster populations found in the LEGUS dwarf galaxies to those found in the ANGST dwarf galaxies [@cook12]. The cluster catalogs in these two programs represent two of the largest dwarf galaxy samples to have uniformly identified and characterized clusters. However, these two programs use two different identification methods. Thus, a comparison of their cluster populations can yield insights into effective identification methods in these extreme environments.
The main difference in cluster identification methods between ANGST and LEGUS is the generation of star cluster candidates. The ANGST cluster candidates were identified via visual inspection of HST images whereas the LEGUS cluster candidates were generated via automated methods. Both programs then used visual classification, with similar classification definitions, to produce final cluster catalogs.
The ANGST dwarf sample consisted of 37 galaxies whose global SFRs extended down to log(SFR) of –5 where 144 clusters were found at all ages. There are three galaxies in common between ANGST and LEGUS: UGC4305, UGC4459, and UGC5139. In these three galaxies LEGUS found 2.5 and 11 times the number of clusters found in ANGST for all ages and $<$100 Myr, respectively (see Table \[tab:clustnum\]). However, it should be noted that the LEGUS pipeline produces many more candidates that are rejected by visual classification.
![Six ANGST clusters missed by the LEGUS identification methods, where 3 are from UGC4305 (top) and 3 are from UGC4459 (bottom). All of these sources except the one at top-right are fainter than the $M_V{=}-6$ magnitude cut employed by the LEGUS cluster pipeline. The source at top-right exhibits a stellar CI value and is likely a bright star on top of a stellar field in the galaxy. The size of the bars is $\sim$0.8. Five of these sources are older than 100 Myr and the one at upper right is 15 Myr old.[]{data-label="fig:legusmissed"}](figures/legusmissed.jpeg)
A cross-match of the clusters in both programs shows that all but six of the ANGST clusters were found in the LEGUS catalog. Figure \[fig:legusmissed\] shows HST color cutouts of these clusters, where 5 are fainter than the LEGUS pipeline magnitude cut ($M_V{=}-6$) and the sixth (upper-right object) exhibits a small CI value consistent with a stellar PSF. Thus, these clusters do not make either the magnitude cut or the CI cut of the LEGUS pipeline. However, we note that these “missed” clusters would be visually classified as class 3 sources or contaminants in the LEGUS classification scheme.
![Two-color images of six representative LEGUS clusters that are not in the ANGST cluster catalog (2 from each of the three galaxies). Nearly all of the LEGUS clusters not in the ANGST catalog are compact sources with CI values near 1.6 mag.[]{data-label="fig:angstmissed"}](figures/angstmissed.png)
The LEGUS pipeline found over twice as many clusters in these three galaxies. We show 2 representative LEGUS clusters missed by ANGST in Figure \[fig:angstmissed\] for each of the three galaxies, where these clusters tend to be more compact (CI$<$1.7 mag). To illustrate this, Figure \[fig:cimagangst\] shows the absolute $V-$band magnitude versus CI for the young clusters in both ANGST and LEGUS. We use the LEGUS class and CI values for the ANGST clusters. The majority of the LEGUS clusters missed by ANGST tend to be fainter and more compact, which could be difficult to separate from stars in dense regions via visual inspection.
Figure \[fig:cimagangst\] also shows that the total flux is overestimated in the ANGST catalog. This is due to the existence of only two high-resolution HST images at the time of the ANGST cluster study [@cook12]. Thus, ground-based imaging was used to fill in the wavelength gaps in the cluster SEDs, and the HST imaging was smoothed to match the ground-based seeing. The photometric aperture used by @cook12 was 2$\farcs$5, which is $\sim$10 times the size of the LEGUS photometric aperture. Consequently, there can be considerable contamination from nearby sources in the ANGST photometric aperture, as evidenced by several ANGST clusters appearing $>$1 mag brighter than the LEGUS CI-based photometry. For the clusters in common to both the ANGST and LEGUS catalogs, the ages show good agreement. However, the ANGST masses can be significantly larger since derived masses scale with the measured brightness; on average the mass ratio is a factor of a few. We note that the clusters with the largest magnitude difference ($\sim3$ mag) have mass estimates in ANGST that are larger by factors of $\sim10-50$.
![The absolute $V-$band magnitude versus CI for the young ($<$100 Myr) ANGST and LEGUS clusters in the three galaxies, where open symbols represent class 3 sources and closed symbols represent class 1 and 2 clusters. The vertical lines connect the $V-$band magnitude for the same cluster in ANGST and LEGUS. []{data-label="fig:cimagangst"}](figures/CImag_angstcomp.pdf)
To put the cluster statistics of both ANGST and LEGUS into perspective, Figure \[fig:Nsfr\] shows the total number of young clusters ($<$100 Myr) found in LEGUS and ANGST given an absolute magnitude cut of $M_V=-6$ mag (i.e., the LEGUS pipeline cut). We note that both studies used the same HST images to identify clusters, thus applying the same magnitude cut to ANGST is reasonable. The ANGST clusters show a consistent dearth of clusters at nearly all SFRs compared to LEGUS. For a non-dwarf comparison, we overplot the number of clusters found in a sample of spiral galaxies brighter than the adopted brightness limits of each galaxy [typically $-8$ mag; @whitmore14], and find that the LEGUS cluster numbers smoothly extend the spiral relationship between the number of clusters and global SFR.
![The number of clusters versus the global galaxy SFR for the LEGUS clusters (blue circles), ANGST clusters (open squares), and a uniformly identified catalog of clusters in several spiral galaxies [crosses; @whitmore14]. The dashed line is a bisector fit to the LEGUS and spiral sample with an RMS scatter of 0.24 dex.[]{data-label="fig:Nsfr"}](figures/Nclust_sfr.pdf)
Trends With Global Galaxy Properties \[sec:trends\]
---------------------------------------------------
### Binned Age, Luminosity, and Mass Distributions
Here we explore how the age, luminosity, and mass distributions of clusters in the LEGUS dwarf galaxies change as a function of global galaxy SFR. While most of the LEGUS dwarf galaxies do not have enough young clusters to provide well-behaved distribution functions, we can combine all of the clusters from these galaxies to make a composite dwarf and improve our cluster statistics. This approach has been used by several previous cluster studies [@cook12; @whitmore14; @cook16].
![The luminosity functions (LFs) for the clusters in all LEGUS dwarf galaxies binned by global SFR. We find no trend between the LF slope and binned SFR. []{data-label="fig:binLF"}](figures/binLFabs_sfr_eqnum.pdf)
Figure \[fig:binLF\] shows the LFs of all young clusters (age$<$100 Myr) in the LEGUS dwarfs binned by their host-galaxy’s SFR. The SFR bins were chosen to ensure good number statistics and so that at least two galaxies fall in each bin. We find no trend between the binned LF slope and global SFR, where the median LF slope is –1.95$\pm$0.06. We also tested various SFR bin definitions and the use of a color cut ($U-B<-0.5$ mag) to approximate a 100 Myr age cut, and found no significant differences in the LF slopes. We note that the luminosity of the brightest luminosity bin increases with the binned SFR, as would be expected since the brightest cluster in a galaxy scales with the global SFR [@whitmore00; @larsen02; @bastian08; @cook12; @whitmore14].
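For readers who want to reproduce this kind of measurement, the sketch below shows one way to estimate a binned luminosity-function slope with equal-number bins. It is not the LEGUS pipeline; the magnitude limit, bin occupancy, and mock data are placeholder assumptions.

```python
import numpy as np

def lf_slope(abs_mag, m_lim=-6.0, n_per_bin=10):
    """Estimate alpha in dN/dL ~ L^alpha from absolute magnitudes, using
    bins that each contain roughly n_per_bin clusters.

    Since L ~ 10^(-0.4 M), dN/dM ~ 10^(-0.4 (alpha + 1) M), so a straight
    line fitted to log10(dN/dM) versus M has slope -0.4 (alpha + 1)."""
    m = np.sort(np.asarray(abs_mag))
    m = m[m <= m_lim]                                   # keep clusters brighter than the cut
    edges = np.unique(np.append(m[::n_per_bin], m[-1]))  # equal-number bin edges
    counts, edges = np.histogram(m, bins=edges)
    dndm = counts / np.diff(edges)                       # clusters per magnitude
    centers = 0.5 * (edges[1:] + edges[:-1])
    good = dndm > 0
    slope = np.polyfit(centers[good], np.log10(dndm[good]), 1)[0]
    return -2.5 * slope - 1.0                            # convert d(logN)/dM back to alpha

# Mock check: a pure alpha = -2 population should return a slope near -2.
rng = np.random.default_rng(0)
mock_L = rng.pareto(1.0, 500) + 1.0                      # dN/dL ~ L^-2 for L >= 1
mock_mag = -6.0 - 2.5 * np.log10(mock_L)
print(lf_slope(mock_mag))                                # ~ -2
```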
![The mass functions (MFs) for the clusters in all LEGUS dwarf galaxies binned by global SFR. A single limiting log(mass) of 3.7 is used to fit the power-laws for all three SFR bins (see Figure \[fig:MFci\]). We find no trend between the MF slope and the binned SFR. []{data-label="fig:binMF"}](figures/binMF_sfr_eqnum.pdf)
Figure \[fig:binMF\] shows the MFs of all young clusters (age$<$100 Myr) in the LEGUS dwarfs binned by their host-galaxy’s SFR. Similar to our findings for the LFs, we find no statistical difference between the MF slopes across the SFR bins, where the median MF slope is –1.9$\pm$0.1. We find no differences in the MF slopes when using various SFR bin definitions or a color cut to approximate a 100 Myr age cut. We also note that we find no trend between the LF and MF slopes when using higher age cuts (up to 1 Gyr) to increase the cluster number statistics.
![The age distributions for the clusters in all LEGUS dwarf galaxies binned by global SFR with a mass cut above log(mass) of 3.7. Following @adamo17 and @messa18a, we exclude the youngest age bin in the fits (although the results are similar if this age bin is included). We find a constant slope of $-0.8\pm0.15$ and no trend in the age distributions across SFR bins.[]{data-label="fig:bindndt"}](figures/bindNdt_sfr_eqnum.pdf)
Figure \[fig:bindndt\] shows the age distributions of all clusters with log(mass)$>$3.7 in the LEGUS dwarfs, binned by global SFR. We find a constant slope of $-0.8\pm0.15$ and no trend in the age distributions across SFR bins. We find similar results when using various age bin sizes, and we find slightly steeper slopes of $-1.0\pm0.1$ across the SFR bins when fitting to age bins above 10 Myr. Additionally, if we use a more conservative completeness mass cut of log(M/M$_{\odot}$)=4, we find similar results with a slightly flatter average slope of –0.75$\pm$0.15.
The lack of any trends in the LFs, MFs, and age distributions across SFR bins suggests that clusters in different SFR environments exhibit similar mass (and similarly luminosity) distributions and similar disruption rates over time. We discuss this topic further in §\[sec:disc\]. We also note that an upcoming LEGUS paper will explore the luminosity, mass, and age distributions across different local environments in NGC4449 (see Whitmore, in preparation).
### MF Truncation {#sec:mftrunc}
We use two different methods to test whether or not there is a truncation at the high end of the composite mass function of the LEGUS dwarfs. Following Mok et al. (2018), we construct the mass function in three different age intervals: $1-10$, $10-100$, and $100-400$ Myr, applying log(M) completeness cuts of 3.4, 3.7, and 3.7, respectively, based on the turnover in the MFs for each age interval (see Figure \[fig:MFci\]).
For method 1, we follow @messa18a and fit a truncated power law distribution to the cumulative mass distributions using the <span style="font-variant:small-caps;">mspecfit</span> software [@rosolowsky2005]. The best fit cutoff mass is characterized by $M_0$, and the significance of the fit can be determined from the accompanying value of $N_0$. The best fit results for clusters in the $1-10$, $10-100$, and $100-400$ Myr intervals are shown in the upper panels of Figure \[fig:mftrunc\]. Here, the two youngest intervals return values for Log $M_0\sim5.5-5.7$, but with low significance (only $\approx1\sigma$). The oldest $100-400$ Myr age range has too few clusters above the completeness limit to give a meaningful fit.
For method 2, we perform a standard maximum likelihood analysis [as described in Chapter 15.2 of @mo10] by assuming that the cluster masses have an underlying Schechter form. This method returns the best fit values of the characteristic mass $M_0$ and power-law index $\beta$. We plot the resulting 1-, 2-, and 3-$\sigma$ confidence contours in the bottom panels of Figure \[fig:mftrunc\] for each of the three age intervals. The 2 younger age intervals do not show statistically significant evidence for an upper mass cutoff in the composite LEGUS dwarf sample since the 2 and 3$\sigma$ contours do not close (i.e., remain open to the right edge of each diagram). The oldest age interval contains too few clusters (N=23) for a robust measurement, and clearly demonstrates that the size of the contours is a strong function of the number of clusters in the sample when compared to the younger 2 age intervals. We find similar results when the larger age intervals of 1-200 and 1-400 Myr are used.
Given the shape of the 3$\sigma$ contours at the low mass end in the 2 younger age intervals, we can rule out a truncation mass of $\approx10^5~$M$_{\odot}$ and below, but cannot rule out that a truncation exists above this. We note that we find similar results in the 3 age ranges when removing the 3 highest SFR dwarfs (NGC3738, NGC4656, and NGC4449), except that the lower limit on the mass truncation is smaller at $\approx$10$^{4.5}~M_{\odot}$ in the 1-10 Myr age range.
The overall results from these two independent methods are similar: neither one finds statistically significant evidence for a truncation at the upper end of the cluster mass function of young clusters in our composite LEGUS dwarf sample. In addition, the maximum likelihood results indicate that any truncation mass must be higher than $\approx10^{4.5}~M_{\odot}$.
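As a rough illustration of the maximum likelihood approach (method 2 above; the actual analysis follows @mo10), the sketch below evaluates a truncated-Schechter log-likelihood on a grid of $(\log M_0, \beta)$ and converts it to a confidence surface. The completeness cut, grid ranges, and mock masses are placeholder assumptions, not the LEGUS values.

```python
import numpy as np

def schechter_loglike(masses, m_lim, log_m0, beta):
    """Log-likelihood of cluster masses above m_lim for a Schechter mass
    function dN/dM ~ M^beta exp(-M/M0), normalised numerically over the
    observable range [m_lim, ~1000 M0]."""
    masses = np.asarray(masses, dtype=float)
    m0 = 10.0 ** log_m0
    grid = np.logspace(np.log10(m_lim), log_m0 + 3.0, 2000)
    pdf = grid ** beta * np.exp(-grid / m0)
    norm = np.sum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(grid))   # trapezoid rule
    return np.sum(beta * np.log(masses) - masses / m0) - masses.size * np.log(norm)

def delta_loglike_surface(masses, m_lim, log_m0_grid, beta_grid):
    """Delta(lnL) over a parameter grid.  For two free parameters, contours
    of 2*(Delta lnL) at 2.30, 6.18, and 11.8 mark the 1/2/3-sigma regions."""
    ll = np.array([[schechter_loglike(masses, m_lim, lm, b)
                    for lm in log_m0_grid] for b in beta_grid])
    return ll.max() - ll

# Hypothetical masses above a log(M) = 3.7 completeness cut:
rng = np.random.default_rng(1)
masses = 10.0 ** (3.7 + rng.exponential(0.4, size=120))
surface = delta_loglike_surface(masses, 10.0 ** 3.7,
                                np.linspace(4.0, 7.0, 40),
                                np.linspace(-2.8, -1.2, 40))
```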
![image](figures/MFtruncV5.png)
DISCUSSION {#sec:disc}
===========
The luminosity, mass, and age distributions of the star clusters in dwarf galaxies provide important clues to their formation and disruption. In this section, we discuss our results in this context.
In §\[sec:distfunc\] and §\[sec:trends\], we found that the luminosity functions of clusters in the LEGUS dwarf galaxies can be described by a simple power-law, $dN/dL\propto L^{\alpha}$ with $\alpha\approx-2$. This is similar to the cluster populations in two LEGUS spiral galaxies, which have higher SFRs. NGC628 has a SFR$_{FUV+24\mu m}$=6.8 M$_{\odot}\rm{yr}^{-1}$ and M51 has a SFR$_{FUV+24\mu m}$=2.9 M$_{\odot}\rm{yr}^{-1}$ [@lee09b; @cook14c]. The luminosity functions for the clusters in these galaxies were derived using the same methodology as used here, resulting in a slope of –2.09$\pm$0.02 [@adamo17] and –2.02$\pm$0.03 [@messa18a] for NGC628 and M51, respectively. Both the spiral and dwarf galaxy luminosity function slopes are consistent with each other, and show no evidence of a trend between luminosity slope and galaxy SFR.
We also found that the cluster mass functions in the LEGUS dwarf galaxies can be described by a single power-law with an index of $\beta\approx-2$ in different age intervals for clusters with ages up to $\approx400$ Myr. A consistent MF slope over different age ranges can provide clues into the disruption of clusters over time. We see no evidence for flattening at the low end of the cluster mass functions (above the completeness limits), which means that mass-dependent disruption (i.e., where lower mass clusters disrupt faster than higher mass ones) does not have a strong impact on the observable mass and age ranges of our cluster population. We also do not find a correlation between the power-law indices of our composite cluster mass distributions and the overall SFR of the host galaxies, although the masses of the most massive clusters increase with SFR as expected from sampling statistics.
Several previous studies have found truncations at the upper end of the cluster mass function for individual spiral and interacting galaxies [$10^4<M_0~(M_{\odot})<10^6$; @gieles06b; @jordan07; @larsen09; @bastian12a; @adamo15; @johnson17; @adamo17; @messa18a]. Most of the dwarf galaxies in our sample have fairly low SFRs and contain very few clusters, making it difficult to statistically test for a truncation in an individual galaxy. Therefore, in §\[sec:mftrunc\], we tested a composite dwarf galaxy cluster mass function for a Schechter-like downturn at the high-mass end. Two different methods found little evidence for an upper mass cutoff in two out of three age ranges (the third, $100-400$ Myr, range has too few clusters for a robust measurement). To put these results into context, and to provide a more direct comparison to other LEGUS studies in the spirals M51 and NGC628 [@adamo17; @messa18a], we test the age interval of 1-200 Myr, which also provides the added benefit of better cluster statistics. The upper-left panel of Figure \[fig:MFtruncSFRbin\] shows the maximum likelihood contours for our composite dwarf sample in the age interval of 1-200 Myr. We find no statistically significant evidence for a downturn at the 2-3$\sigma$ level (i.e., the contours remain open).
In addition, we test whether our MF truncation constraints change with global SFR in the remaining 3 panels of Figure \[fig:MFtruncSFRbin\]. Here, we bin the clusters with ages of 1-200 Myr into the same SFR bin definitions as used in our LF, MF, and age distribution tests of Figures \[fig:binLF\]–\[fig:bindndt\]. We find no significant evidence for an upper mass truncation at the 2-3$\sigma$ level in any SFR bin. We also find that the low-mass end of the 3$\sigma$ contours in the lowest SFR bin extends to lower truncation masses when compared to the higher SFR bins. However, since the number of clusters in each SFR bin scales with the SFR, and the size of the contours depends on the number of clusters, this may give the appearance of trends in parameter constraints with SFR. To draw definitive conclusions, the size of the sample in the lowest SFR bin must be increased by a factor of 3 to 5 in future studies. Taking into account all of our MF truncation tests using different age intervals and SFR binned samples, the 1$\sigma$ contours are closed for some of the age intervals and SFR bins, while the 2-3$\sigma$ contours do not close in any of our tests. This indicates weak evidence ($<2\sigma$ level) for a truncation in some cases.
![image](figures/MFtruncSFRbin.png)
In Section \[sec:trends\], we found that the age distributions of the clusters decline steadily, and can be described as a simple power law, $dN/d\tau \propto \tau^{\gamma}$, with $\gamma=-0.8\pm0.15$. The declining shape of the age distribution in the LEGUS dwarfs is remarkably robust to binning, mass range, age range of the fit, and the specific galaxies that are included.
In order to interpret this result, we need to disentangle the effects of formation versus disruption, since the observed distribution includes both the formation and disruption histories of the clusters: $\gamma_{cl} = \gamma_{\rm form} + \gamma_{\rm disrupt}$. We can do this by assuming that the cluster formation history is proportional to the star formation history, and estimating a composite formation history by summing the SFRs in different age ranges (i.e., the star formation histories; SFHs). Since we are using a composite dwarf galaxy SFH from many independent systems, the combined SFH should presumably be relatively flat over the past few hundred million years, because bursts that occur in any individual galaxy should be uncorrelated.
To test this assumption we utilize SFRs from two independent methods: 1) the H$\alpha$ and FUV SFRs from integrated light measurements corrected for internal dust extinction [@lee09b] and 2) the recent star formation histories from resolved-star CMD analysis [@cignoni18]. The integrated light measurements provide a low-resolution SFH since the SFRs derived from H$\alpha$ and FUV probe $t<10$ Myr and $t<100$ Myr timescales, respectively [@kennicutt98]. The recent SFHs for 3 of the LEGUS dwarfs were presented by @cignoni18, and the others will be presented in an upcoming paper (Cignoni et al. in preparation). While we wait for the final SFHs to become available for all of our galaxies, we can still assess whether the composite SFH is flat, declines, or increases for the 17 dwarf galaxies using preliminary SFHs.
Figure \[fig:superSFH\] shows the average SFRs for the 17 LEGUS dwarfs over time using both the integrated light measurement and the resolved-star SFHs. We fit a power law to these SFRs (i.e., dM/dt vs t) and find a slope ($\gamma_{\rm form}$) in the range of 0.1–0.3, which is consistent with a flat or constant formation history over the age ranges studied here. This is similar to the results found by @mcquinn10 for 18 nearby dwarf galaxies also using multi-band $HST$ observations. Since $\gamma_{\rm form} \approx 0$, then $\gamma_{\rm cl} \approx \gamma_{\rm disrupt}$ which means that the observed cluster age distribution is dominated by the disruption of clusters rather than their formation.
![The total SFR versus age of the LEGUS dwarf galaxies using two independent SFR measurements. The red squares represent the summed H$\alpha$ and FUV SFRs corrected for dust extinction from @lee09b; we have updated the SFR conversion using the prescription of @murphy11 with a Kroupa IMF. The green circles represent the summed SFRs derived from the resolved-star SFHs of @cignoni18. The SFHs are preliminary and are in the process of being updated. The blue solid line represents the cluster age distribution in the LEGUS dwarf sample that has been scaled to fit on this graph for purposes of comparing the slopes. The total SFRs from both methods show a constant star formation in the composite dwarf sample indicating that the decline in the cluster age distribution is dominated by cluster disruption. []{data-label="fig:superSFH"}](figures/SuperSFH.pdf)
The best fit power law ($\gamma$=–0.8$\pm$0.15) found in the LEGUS dwarfs, when compared with the SFHs and binned by different parameters, indicates that approximately 70-90% of the clusters disrupt every decade in age, independent of cluster mass and SFR environment. These age distributions are similar to that found individually for a number of more massive galaxies [@fall05; @villaLarsen11; @fall12; @silvavilla14; @mulia16], where a median $\gamma$ calculated in @chandar17 is –0.7$\pm$0.3. This is consistent with the ‘quasi-universal’ model of cluster formation and disruption [@whitmore03; @whitmore07; @fall12]. However, several studies have found age distributions significantly flatter ($\gamma$=–0.2 to –0.4) than that found in our dwarfs [@villaLarsen11; @johnson17; @adamo17; @messa18a], and several works have found evidence that cluster disruption may occur at different rates across different environments within the same galaxy [@bastian12a; @silvavilla14; @adamo17; @messa18b]. For instance, @messa18b found a trend between the cluster disruption rate and gas surface density variations across the M51 disk. In a future work, we will investigate whether other parameters (e.g., SFR/area $\equiv\Sigma_{\rm SFR}$) may have a more pronounced effect on the formation and disruption of clusters in our dwarf galaxies.
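One way to see how the fitted slope maps onto the quoted disruption fractions (assuming, as argued above, an approximately constant formation rate so that $\gamma_{\rm cl}\approx\gamma_{\rm disrupt}$) is to compare the surviving number of clusters per unit age at $10\tau$ with that at $\tau$:
$$\frac{(dN/d\tau)_{10\tau}}{(dN/d\tau)_{\tau}} = 10^{\gamma} = 10^{-0.8} \approx 0.16,$$
so roughly 84% of clusters are lost per decade in age. The $\pm0.15$ uncertainty on $\gamma$ corresponds to surviving fractions of $10^{-0.95}\approx0.11$ to $10^{-0.65}\approx0.22$, consistent with the $\sim$70–90% range quoted above.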
SUMMARY
=======
This study has uniformly identified and examined the star clusters in a large sample of dwarf galaxies (N=17) with high resolution $HST$ imaging in 5 filters. The nearly uniform data have facilitated: 1) a detailed comparison of different cluster identification and photometry methods commonly used in the literature, and 2) an examination of cluster properties in low-SFR environments with better number statistics than previously available. The main conclusions are listed below.
- An examination of two widely used aperture corrections (average-based and concentration index (CI)-based) shows that both methods provide largely consistent colors and ages, but that roughly half of the clusters show CI-based aperture corrections that are inconsistent with the average correction given the measured errors. The median total flux difference derived from the two aperture corrections is 0.2 mag suggesting that many of the clusters are more compact than the average training cluster. This median total flux difference translates into a mass offset of 0.1–0.2 dex between the two aperture correction methods. However, the ensemble luminosity, mass, and age distributions derived from both aperture corrections are consistent with each other within the errors.
- Comparing the LEGUS cluster catalog with that of a previous large sample of dwarf galaxies [@cook12] shows that the LEGUS catalog is more complete and provides more accurate total fluxes. The differences in the total fluxes are attributed to the low resolution of the ground-based imaging used to augment the HST imaging in @cook12. For clusters found in common to both catalogs, we find overall agreement in the ages, but the @cook12 masses can be considerably different given the large total flux differences.
- The luminosity and mass functions observed for clusters in the LEGUS dwarfs can be described by a power-law, with an index of $\approx-2$. The mass function appears to be independent of cluster age up to the $\approx400$ Myr studied here, and does not vary with star formation rate.
- The composite cluster mass function shows little evidence for an upper mass truncation at the 2-3$\sigma$ level. The lack of significant evidence holds for different age intervals and cluster samples binned by global SFR. The extent of the 3$\sigma$ contours in the maximum likelihood fits rules out a truncation below $\approx$10$^{4.5}$ M$_{\odot}$, but cannot rule out a truncation at higher masses.
- The observed age distribution for the composite cluster population in the LEGUS dwarf galaxies can be described by a power law, $dN/d\tau \propto \tau^{\gamma}$, with $\gamma = -0.8\pm0.15$, over the age range $\approx10-400$ Myr. This distribution appears to be independent of the mass of the clusters, and does not vary with star formation rate.
- The composite star formation histories for our dwarf galaxies from both integrated light measurements (H$\alpha$ and FUV) and preliminary resolved-star CMD SFHs are quite flat, with a best fit power law index of $\gamma_{\rm form}=0.1-0.3$. This indicates that disruption dominates the observed cluster age distribution, with $\approx$80% of the clusters being disrupted every decade in age.
In a future work, we will use updated star formation rates for the LEGUS dwarf galaxies to determine the star formation rate density, $\Sigma_{\rm SFR}$ in a consistent way. We will also extrapolate the mass functions presented here to determine the fraction of stars found in clusters ($\Gamma$) in the LEGUS dwarf galaxies, and compare the results with the more massive galaxies in the LEGUS sample. Finally, we will explore if the age distributions change as a function of $\Sigma_{\rm SFR}$.
Acknowledgements {#acknowledgements .unnumbered}
================
We thank the referee for their helpful comments. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with program \# 13364. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA. A.A. acknowledges the support of the Swedish Research Council (Vetenskapsradet) and the Swedish National Space Board (SNSB). D.A.G acknowledges support by the German Aerospace Center (DLR) and the Federal Ministry for Economic Affairs and Energy (BMWi) through program 50OR1801 “MYSST: Mapping Young Stars in Space and Time”.
[^1]: https://ned.ipac.caltech.edu
[^2]: For users of the LEGUS cluster catalogs, the clusters missed by the LEGUS pipeline are indicated with a ’manflag’ value equal to one.
---
abstract: |
In this thesis we focus on studying the physics of cosmological recombination and how the details of recombination affect the Cosmic Microwave Background (CMB) anisotropies. We present a detailed calculation of the spectral line distortions on the CMB spectrum arising from the Ly$\,\alpha$ and two-photon transitions in the recombination of hydrogen (H), as well as the corresponding lines from helium (He). The peak of these distortions mainly comes from the Ly$\alpha$ transition and occurs at about $170\,\mu$m, which lies in the Wien part of the CMB spectrum. The detection of this distortion would provide the most direct supporting evidence that the Universe was indeed once a plasma.
The major theoretical limitation for extracting cosmological parameters from the CMB sky lies in the precision with which we can calculate the cosmological recombination process. Uncertainty in the details of hydrogen and helium recombination could effectively increase the errors or bias the values of the cosmological parameters derived from microwave anisotropy experiments. With this motivation, we perform a multi-level calculation of the recombination of H and He with the addition of the spin-forbidden transition for neutral helium (He[i]{}), plus the higher order two-photon transitions for H and among singlet states of He[i]{}. Here, we relax the thermal equilibrium assumption among the higher excited states to investigate the effect of these extra forbidden transitions on the ionization fraction $x_{\rm e}$ and the CMB angular power spectrum $C_\ell$. We find that the inclusion of the spin-forbidden transition results in more than a percent change in $x_{\rm e}$, while the higher order non-resonance two-photon transitions give much smaller effects compared with previous studies.
Lastly we modify the cosmological recombination code [recfast]{} by introducing one more parameter to reproduce recent numerical results for the speed-up of helium recombination. Together with the existing hydrogen ‘fudge factor’, we vary these two parameters to account for the remaining dominant uncertainties in cosmological recombination. By using a Markov Chain Monte Carlo method with [*Planck*]{} forecast data, we find that we need to determine the parameters to better than 10% for He[i]{} and 1% for H, in order to obtain negligible effects on the cosmological parameters.
author:
- Wan Yan Wong
title: Cosmological Recombination
---
---
abstract: 'The lexicalist approach to Machine Translation offers significant advantages in the development of linguistic descriptions. However, the Shake-and-Bake generation algorithm of Whitelock is NP-complete. We present a polynomial time algorithm for lexicalist MT generation provided that sufficient information can be transferred to ensure more determinism.'
author:
- |
Victor Poznański, John L. Beaven & Pete Whitelock [^1]\
SHARP Laboratories of Europe Ltd.\
Oxford Science Park, Oxford OX4 4GA\
United Kingdom\
{vp,jlb,pete}@sharp.co.uk
title: An Efficient Generation Algorithm for Lexicalist MT
---
Introduction
============
Lexicalist approaches to MT, particularly those incorporating the technique of [*Shake-and-Bake*]{} generation, combine the linguistic advantages of transfer [@Arnold:Relaxed; @Allegranza:Eurotra] and interlingual [@Nirenburg:MTKB; @Dorr:MT] approaches. Unfortunately, the generation algorithms described to date have been intractable. In this paper, we describe an alternative generation component which has polynomial time complexity.
Shake-and-Bake translation assumes a source grammar, a target grammar and a bilingual dictionary which relates translationally equivalent sets of lexical signs, carrying across the semantic dependencies established by the source language analysis stage into the target language generation stage.
The translation process consists of three phases:
1. A [*parsing phase*]{}, which outputs a multiset, or [*bag*]{}, of source language signs instantiated with sufficiently rich linguistic information established by the parse to ensure adequate translations.
2. A [*lexical-semantic transfer phase*]{} which employs the bilingual dictionary to map the bag of instantiated source signs onto a bag of target language signs.
3. A [*generation phase*]{} which imposes an order on the bag of target signs which is guaranteed grammatical according to the monolingual target grammar. This ordering must respect the linguistic constraints which have been transferred into the target signs.
The [*Shake-and-Bake*]{} generation algorithm of Whitelock combines target language signs using the technique known as [*generate-and-test*]{}. In effect, an arbitrary permutation of signs is input to a shift-reduce parser which tests them for grammatical well-formedness. If they are well-formed, the system halts indicating success. If not, another permutation is tried and the process repeated. The complexity of this algorithm is $O(n!)$ because all permutations ($n! $ for an input of size $n$) may have to be explored to find the correct answer, and indeed [*must*]{} be explored in order to verify that there is no answer.
Proponents of the Shake-and-Bake approach have employed various techniques to improve generation efficiency. For example, [@Beaven:Lexicalist] employs a chart to avoid recalculating the same combinations of signs more than once during testing, and [@Popowich:Efficiency] proposes a more general technique for storing which rule applications have been attempted; [@Brew:Cat] avoids certain pathological cases by employing global constraints on the solution space; researchers such as [@Brown:Statistical] and [@Chen:BagGen] provide a system for bag generation that is heuristically guided by probabilities. However, none of these approaches is guaranteed to avoid protracted search times if an exact answer is required, because bag generation is NP-complete [@Brew:Cat].
Our novel generation algorithm has polynomial complexity ($O(n^4)$). The reduction in theoretical complexity is achieved by placing constraints on the power of the target grammar when operating on instantiated signs, and by using a more restrictive data structure than a bag, which we call a [*target language normalised commutative bracketing (TNCB)*]{}. A TNCB records dominance information from derivations and is amenable to incremental updates. This allows us to employ a greedy algorithm to refine the structure progressively until either a target constituent is found and generation has succeeded or no more changes can be made and generation has failed.
In the following sections, we will sketch the basic algorithm, consider how to provide it with an initial guess, and provide an informal proof of its efficiency.
A Greedy Incremental Generation Algorithm
=========================================
We begin by describing the fundamentals of a greedy incremental generation algorithm. The crucial data structure that it employs is the [*TNCB*]{}. We give some definitions, state some key assumptions about suitable TNCBs for generation, and then describe the algorithm itself.
TNCBs
-----
We assume a sign-based grammar with binary rules, each of which may be used to [*combine*]{} two signs by unifying them with the daughter categories and returning the mother. Combination is the commutative equivalent of rule application; the linear ordering of the daughters that leads to successful rule application determines the orthography of the mother.
Whitelock’s Shake-and-Bake generation algorithm attempts to arrange the bag of target signs until a grammatical ordering (an ordering which allows all of the signs to combine to yield a single sign) is found. However, the target [*derivation*]{} information itself is not used to assist the algorithm. Even in [@Beaven:Lexicalist], the derivation information is used simply to cache previous results to avoid exact recomputation at a later stage, not to improve on previous guesses. The reason why we believe such improvement is possible is that, given adequate information from the previous stages, two target signs cannot combine by accident; they must do so because the underlying semantics within the signs licenses it.
If the linguistic data that two signs contain allows them to combine, it is because they are providing a semantics which might later become more specified. For example, consider the bag of signs that have been derived through the Shake-and-Bake process which represent the phrase:\
\
(1) The big brown dog\
\
Now, since the determiner and adjectives all modify the same noun, most grammars will allow us to construct the phrases:\
\
(2) The dog\
(3) The big dog\
(4) The brown dog\
\
as well as the ‘correct’ one. Generation will fail if all signs in the bag are not eventually incorporated in the final result, but in the naïve algorithm, the intervening computation may be intractable.
In the algorithm presented here, we start from the observation that the phrases (2) to (4) are not incorrect semantically; they are simply under-specifications of (1). We take advantage of this by recording the constituents that have combined within the TNCB, which is designed to allow further constituents to be incorporated with minimal recomputation.
A TNCB is composed of a sign, and a history of how it was derived from its children. The structure is essentially a binary derivation tree whose children are unordered. Concretely, it is either NIL, or a triple:
$$\begin{aligned}
\mbox{TNCB} & = & \mbox{NIL}\, | \, \mbox{Value} \times \mbox{TNCB} \times
\mbox{TNCB} \\
\mbox{Value} & = & \mbox{Sign}\, | \\
& & \mbox{INCONSISTENT}\, | \\
& & \mbox{UNDETERMINED}\end{aligned}$$
The second and third items of the TNCB triple are the [*child TNCBs*]{}. The [*value*]{} of a TNCB is the sign that is formed from the combination of its children, or [*INCONSISTENT*]{}, representing the fact that they cannot grammatically combine, or [*UNDETERMINED*]{}, i.e. it has not yet been established whether the signs combine.
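For concreteness, a minimal Python sketch of this data structure is given below; the contents of a Sign and the grammar's combination rule are left abstract, and the class and field names are our own for illustration rather than part of the original implementation. `None` plays the role of NIL.

```python
from dataclasses import dataclass
from typing import Any, Optional

INCONSISTENT = "INCONSISTENT"
UNDETERMINED = "UNDETERMINED"

@dataclass
class TNCB:
    """A target-language normalised commutative bracketing: a value plus two
    unordered child TNCBs (both None for a leaf holding a lexical sign)."""
    value: Any                       # a grammar Sign, INCONSISTENT, or UNDETERMINED
    left: Optional["TNCB"] = None
    right: Optional["TNCB"] = None

    def is_leaf(self) -> bool:
        return self.left is None and self.right is None

    def well_formed(self) -> bool:
        return self.value not in (INCONSISTENT, UNDETERMINED)
```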
Undetermined TNCBs are commutative, e.g. they do not distinguish between the structures shown in Figure \[equivalences\].
(Figure \[equivalences\]: four equivalent commutative bracketings of the signs S, O, and V.)
In section \[initialisation\] we will see that this property is important when starting up the generation process.
Let us introduce some terminology.
A TNCB is
- [*well-formed*]{} iff its value is a sign,
- [*ill-formed*]{} iff its value is INCONSISTENT,
- [*undetermined*]{} (and its value is UNDETERMINED) iff it has not been demonstrated whether it is well-formed or ill-formed.
- [*maximal*]{} iff it is well-formed and its parent (if it has one) is ill-formed. In other words, a maximal TNCB is a largest well-formed component of a TNCB.
Since TNCBs are tree-like structures, if a TNCB is undetermined or ill-formed then so are all of its ancestors (the TNCBs that contain it).
We define five operations on a TNCB. The first three are used to define the fourth transformation ([*move*]{}) which improves ill-formed TNCBs. The fifth is used to establish the well-formedness of undetermined nodes. In the diagrams, we use a cross to represent ill-formed nodes and a black circle to represent undetermined ones.
- [**Deletion:**]{} A maximal TNCB can be deleted from its current position. The structure above it must be adjusted in order to maintain binary branching. In figure \[deletion\], we see that when node 4 is deleted, so is its parent node 3. The new node 6, representing the combination of 2 and 5, is marked undetermined.
(Figure \[deletion\]: the maximal TNCB at node 4 is deleted together with its parent node 3; a new undetermined node 6 now combines nodes 2 and 5.)
- [**Conjunction:**]{} A maximal TNCB can be conjoined with another maximal TNCB if they may be combined by rule. In figure \[conjunction\], it can be seen how the maximal TNCB composed of nodes 1, 2, and 3 is conjoined with the maximal TNCB composed of nodes 4, 5 and 6 giving the TNCB made up of nodes 1 to 7. The new node, 7, is well-formed.
(Figure \[conjunction\]: the maximal TNCBs made up of nodes 1–3 and nodes 4–6 are conjoined under a new well-formed node 7.)
- [**Adjunction:**]{} A maximal TNCB can be inserted inside a maximal TNCB, i.e. conjoined with a non-maximal TNCB, where the combination is licensed by rule. In figure \[adjunction\], the TNCB composed of nodes 1, 2, and 3 is inserted inside the TNCB composed of nodes 4, 5 and 6. All nodes (only 8 in figure \[adjunction\]) which dominate the node corresponding to the new combination (node 7) must be marked undetermined — such nodes are said to be disrupted.
(Figure \[adjunction\]: the maximal TNCB of nodes 1–3 is adjoined inside the TNCB of nodes 4–6; the new combination is node 7, and the disrupted dominating node 8 is marked undetermined.)
- [**Movement:**]{} This is a combination of a deletion with a subsequent conjunction or adjunction. In figure \[movement\], we illustrate a move via conjunction. In the left-hand figure, we assume we wish to move the maximal TNCB 4 next to the maximal TNCB 7. This first involves deleting TNCB 4 (noting it), and raising node 3 to replace node 2. We then introduce node 8 above node 7, and make both nodes 7 and 4 its children. Note that during deletion, we remove a surplus node (node 2 in this case) and during conjunction or adjunction we introduce a new one (node 8 in this case) thus maintaining the same number of nodes in the tree.
(Figure \[movement\]: a move via conjunction; the maximal TNCB 4 is deleted, removing node 2 and raising node 3, and is then conjoined with node 7 under a new node 8, so the total number of nodes is unchanged.)
- [**Evaluation:**]{} After a movement, the TNCB is undetermined as demonstrated in figure \[movement\]. The signs of the affected parts must be recalculated by combining the recursively evaluated child TNCBs.
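Continuing the Python sketch above, the evaluation step can be written as a bottom-up recursion; `combine` stands in for a grammar-supplied function that returns the mother sign of two signs (trying both orders internally, as precedence monotonicity allows) or `None` if they cannot combine.

```python
def evaluate(node, combine):
    """Recompute the values of undetermined nodes bottom-up; leaves and
    already-determined nodes are reused unchanged."""
    if node.is_leaf() or node.value != UNDETERMINED:
        return node.value
    left = evaluate(node.left, combine)
    right = evaluate(node.right, combine)
    if left == INCONSISTENT or right == INCONSISTENT:
        node.value = INCONSISTENT
    else:
        mother = combine(left, right)
        node.value = mother if mother is not None else INCONSISTENT
    return node.value
```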
Suitable Grammars
-----------------
The Shake-and-Bake system of Whitelock employs a bag generation algorithm because it is assumed that the input to the generator is no more than a collection of instantiated signs. Full-scale bag generation is not necessary because sufficient information can be transferred from the source language to severely constrain the subsequent search during generation.
The two properties required of TNCBs (and hence the target grammars with instantiated lexical signs) are:
1. [**Precedence Monotonicity.**]{} The order of the orthographies of two combining signs in the orthography of the result must be determinate — it must not depend on any subsequent combination that the result may undergo. This constraint says that if one constituent fails to combine with another, no permutation of the elements making up either would render the combination possible. This allows bottom-up evaluation to occur in linear time. In practice, this restriction requires that sufficiently rich information be transferred from the previous translation stages to ensure that sign combination is deterministic.
2. [**Dominance Monotonicity.**]{} If a maximal TNCB is adjoined at the highest possible place inside another TNCB, the result will be well-formed after it is re-evaluated. Adjunction is only attempted if conjunction fails (in fact conjunction is merely a special case of adjunction in which no nodes are disrupted); an adjunction which disrupts $i$ nodes is attempted before one which disrupts $i+1$ nodes. Dominance monotonicity merely requires all nodes that are disrupted under this top-down control regime to be well-formed when re-evaluated. We will see that this will ensure the termination of the generation algorithm within $n-1$ steps, where $n$ is the number of lexical signs input to the process.
We are currently investigating the mathematical characterisation of grammars and instantiated signs that obey these constraints. So far, we have not found these restrictions particularly problematic.
The Generation Algorithm
------------------------
The generator cycles through two phases: a [*test*]{} phase and a [*rewrite*]{} phase. Imagine a bag of signs, corresponding to “the big brown dog barked”, has been passed to the generation phase. The first step in the generation process is to convert it into some arbitrary TNCB structure, say the one in figure \[bad\_initial\_guess\]. In order to verify whether this structure is valid, we evaluate the TNCB. This is the test phase. If the TNCB evaluates successfully, the orthography of its value is the desired result. If not, we enter the rewrite phase.
(Figure \[bad\_initial\_guess\]: an arbitrary initial TNCB over the leaves [*PAST*]{}, [*dog*]{}, [*bark*]{}, [*the*]{}, [*brown*]{}, and [*big*]{}, in which every interior node is ill-formed.)
If we were continuing in the spirit of the original Shake-and-Bake generation process, we would now form some arbitrary mutation of the TNCB and retest, repeating this test-rewrite cycle until we either found a well-formed TNCB or failed. However, this would also be intractable due to the undirectedness of the search through the vast number of possibilities. Given the added derivation information contained within TNCBs and the properties mentioned above, we can direct this search by incrementally improving on previously evaluated results.
We enter the rewrite phase, then, with an ill-formed TNCB. Each move operation must improve it. Let us see why this is so.
The [*move*]{} operation maintains the same number of nodes in the tree. The deletion of a maximal TNCB removes two ill-formed nodes (figure \[deletion\]). At the deletion site, a new undetermined node is created, which may or may not be ill-formed. At the destination site of the movement (whether conjunction or adjunction), a new well-formed node is created.
The ancestors of the new well-formed node will be at least as well-formed as they were prior to the movement. We can verify this by case:
1. When two maximal TNCBs are conjoined, nodes dominating the new node, which were previously ill-formed, become undetermined. When re-evaluated, they may remain ill-formed or some may now become well-formed.
2. When we adjoin a maximal TNCB within another TNCB, nodes dominating the new well-formed node are disrupted. By dominance monotonicity, all nodes which were disrupted by the adjunction must become well-formed after re-evaluation. And nodes dominating the maximal disrupted node, which were previously ill-formed, may become well-formed after re-evaluation.
We thus see that rewriting and re-evaluating must improve the TNCB.
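Putting the two phases together, the overall cycle can be sketched as below; `find_move`, which scans top-down for a maximal TNCB and a licensed conjunction or adjunction site and returns an improved TNCB (or `None` if no move exists), is our own abstraction rather than part of the original system.

```python
def generate(root, combine, find_move):
    """Greedy incremental generation: alternate test (evaluate) and rewrite
    (one improving move) until the root is a single well-formed sign or no
    move is possible.  At most n - 1 rewrites are needed for n lexical signs."""
    while True:
        evaluate(root, combine)                 # test phase
        if root.well_formed():
            return root.value                   # success: this sign's orthography is the output
        improved = find_move(root)              # rewrite phase: delete + conjoin/adjoin,
        if improved is None:                    # marking disrupted ancestors UNDETERMINED
            return None                         # failure: maximal fragments of root remain usable
        root = improved
```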
Let us further consider the contrived worst-case starting point provided in figure \[bad\_initial\_guess\]. After the test phase, we discover that every single interior node is ill-formed. We then scan the TNCB, say top-down from left to right, looking for a maximal TNCB to move. In this case, the first move will be [*PAST*]{} to [*bark*]{}, by conjunction (figure \[generation\_step0\]).
(Figure \[generation\_step0\]: the initial TNCB with the first move indicated: [*PAST*]{} is to be conjoined with [*bark*]{}.)
Once again, the test phase fails to provide a well-formed TNCB, so we repeat the rewrite phase, this time finding [*dog*]{} to conjoin with [*the*]{} (figure \[generation\_step1\] shows the state just after the second pass through the test phase).
(Figure \[generation\_step1\]: the state after the second pass through the test phase; [*PAST*]{} and [*bark*]{} have combined into [*barked*]{}, and [*dog*]{} is about to be conjoined with [*the*]{}.)
After further testing, we again re-enter the rewrite phase and this time note that [*brown*]{} can be inserted in the maximal TNCB [*the dog barked*]{} adjoined with [*dog*]{} (figure \[generation\_step2\]). Note how, after combining [*dog*]{} and [*the*]{}, the parent sign reflects the correct orthography even though they did not have the correct linear precedence.
(Figure \[generation\_step2\]: the state after [*dog*]{} and [*the*]{} have combined; the maximal TNCB [*the dog barked*]{} has formed and [*brown*]{} is about to be adjoined next to [*dog*]{}.)
After finding that [*big*]{} may not be conjoined with [*the brown dog*]{}, we try to adjoin it within the latter. Since it will combine with [*brown dog*]{}, no adjunction to a lower TNCB is attempted.
(Figure \[generation\_step3\]: the state after [*brown*]{} has been incorporated; the maximal TNCB [*the brown dog barked*]{} has formed and [*big*]{} is to be adjoined so that it combines with [*brown dog*]{}.)
The final result is the TNCB in figure \[generation\_step4\], whose orthography is “the big brown dog barked”.
(Figure \[generation\_step4\]: the final well-formed TNCB, whose interior nodes carry the orthographies [*barked*]{}, [*brown dog*]{}, [*big brown dog*]{}, [*the big brown dog*]{}, and [*the big brown dog barked*]{}.)
We thus see that during generation, we formed a basic constituent, [*the dog*]{}, and incrementally refined it by adjoining the modifiers in place. At the heart of this approach is that, once well-formed, constituents can only grow; they can never be dismantled.
Even if generation ultimately fails, maximal well-formed fragments will have been built; the latter may be presented to the user, allowing graceful degradation of output quality.
Initialising the Generator {#initialisation}
==========================
Considering the algorithm described above, we note that the number of rewrites necessary to repair the initial guess is no more than the number of ill-formed TNCBs. This can never exceed the number of interior nodes of the TNCB formed from $n$ lexical signs (i.e. $n-1$). Consequently, the better formed the initial TNCB used by the generator, the fewer the number of rewrites required to complete generation. In the last section, we deliberately illustrated an initial guess which was as bad as possible. In this section, we consider a heuristic for producing a motivated guess for the initial TNCB.
Consider the TNCBs in figure \[equivalences\]. If we interpret the S, O and V as Subject, Object and Verb we can observe an equivalence between the structures with the bracketings: (S (V O)), (S (O V)), ((V O) S), and ((O V) S). The implication of this equivalence is that if, say, we are translating into a (S (V O)) language from a head-final language and have isomorphic dominance structures between the source and target parses, then simply mirroring the source parse structure in the initial target TNCB will provide a correct initial guess. For example, the English sentence (5):\
\
(5) the book is red\
\
has a corresponding Japanese equivalent (6):\
\
----- -------- ------ ------- --------
(6) ((hon wa) (akai desu))
((book TOP) (red is))
----- -------- ------ ------- --------
\
If we mirror the Japanese bracketing structure in English to form the initial TNCB, we obtain: ((book the) (red is)). This will produce the correct answer in the test phase of generation without the need to rewrite at all.
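A sketch of this mirroring heuristic, assuming the source derivation is available as a binary tree with `left`, `right`, `value`, and `is_leaf()` members and that `transfer` maps a source lexical sign to its target sign (both assumptions of ours for illustration, since bilingual entries in general relate sets of signs):

```python
def mirror(source_node, transfer):
    """Build the initial TNCB by copying the dominance structure of the
    source derivation: leaves get their transferred target signs, while
    interior values start as UNDETERMINED and are settled in the first
    test phase."""
    if source_node.is_leaf():
        return TNCB(value=transfer(source_node.value))
    return TNCB(value=UNDETERMINED,
                left=mirror(source_node.left, transfer),
                right=mirror(source_node.right, transfer))
```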
Even if there is not an exact isomorphism between the source and target commutative bracketings, the first guess is still reasonable as long as the majority of child commutative bracketings in the target language are isomorphic with their equivalents in the source language. Consider the French sentence:\
\
----- ------- --------- -------- --------- ---------
(7) ((le ((grand chien) brun)) aboya)
(8) ((the ((big dog) brown)) barked)
----- ------- --------- -------- --------- ---------
\
The TNCB implied by the bracketing in (8) is equivalent to that in figure \[generation\_step3\] and requires just one rewrite in order to make it well-formed. We thus see how the TNCBs can mirror the dominance information in the source language parse in order to furnish the generator with a good initial guess. On the other hand, no matter how the SL and TL structures differ, the algorithm will still operate correctly with polynomial complexity. Structural transfer can be incorporated to improve the efficiency of generation, but it is never necessary for correctness or even tractability.
The Complexity of the Generator
===============================
The theoretical complexity of the generator is $O(n^4)$, where $n$ is the size of the input. We give an informal argument for this. The complexity of the test phase is the number of evaluations that have to be made. Each node must be tested no more than twice in the worst case (due to precedence monotonicity), as one might have to try to combine its children in either direction according to the grammar rules. There are always exactly $n-1$ non-leaf nodes, so the complexity of the test phase is $O(n)$. The complexity of the rewrite phase is that of locating the two TNCBs to be combined. In the worst case, we can imagine picking an arbitrary child TNCB ($O(n)$) and then trying to find another one with which it combines ($O(n)$). The complexity of this phase is therefore the product of the picking and combining complexities, i.e. $O(n^2)$. The combined complexity of the test-rewrite cycle is thus $O(n^3)$. Now, in section \[initialisation\], we argued that no more than $n-1$ rewrites would ever be necessary, thus the overall complexity of generation (even when no solution is found) is $O(n^4)$.
Average case complexity is dependent on the quality of the first guess, how rapidly the TNCB structure is actually improved, and to what extent the TNCB must be re-evaluated after rewriting. In the SLEMaT system [@Poznanski:SLEMaT], we have tried to form a good initial guess by mirroring the source structure in the target TNCB, and allowing some local structural modifications in the bilingual equivalences.
Structural transfer operations only affect the efficiency and not the functionality of generation. Transfer specifications may be incrementally refined and empirically tested for efficiency. Since complete specification of transfer operations is not required for correct generation of grammatical target text, the version of Shake-and-Bake translation presented here maintains its advantage over traditional transfer models, in this respect.
The monotonicity constraints, on the other hand, might constitute a dilution of the Shake-and-Bake ideal of independent grammars. For instance, precedence monotonicity requires that the status of a clause (strictly, its lexical head) as main or subordinate has to be transferred into German. It is not that the transfer of information [*per se*]{} compromises the ideal — such information must often appear in transfer entries to avoid grammatical but incorrect translation (e.g. [*a great man*]{} translated as [*un homme grand*]{}). The problem is justifying the main/subordinate distinction in every language that we might wish to translate into German. This distinction can be justified monolingually for the other languages that we treat (English, French, and Japanese). Whether the constraints will ultimately require monolingual grammars to be enriched with entirely unmotivated features will only become clear as translation coverage is extended and new language pairs are added.
Conclusion
==========
We have presented a polynomial complexity generation algorithm which can form part of any Shake-and-Bake style MT system with suitable grammars and information transfer. The transfer module is free to attempt structural transfer in order to produce the best possible first guess. We tested a TNCB-based generator in the SLEMaT MT system with the pathological cases described in [@Brew:Cat] against Whitelock’s original generation algorithm, and have obtained speed improvements of several orders of magnitude. Somewhat more surprisingly, even for short sentences which were not problematic for Whitelock’s system, the generation component has performed consistently better.
V. Allegranza, P. Bennett, J. Durand, F. van Eynde, L. Humphreys, P. Schmidt, and E. Steiner. 1991. Linguistics for [M]{}achine [T]{}ranslation: The [E]{}urotra [L]{}inguistic [S]{}pecifications. In C. Copeland, J. Durand, S. Krauwer, and B. Maegaard, editors, [ *The Eurotra Formal Specifications. Studies in Machine Translation and Natural Language Processing 2*]{}, pages 15–124. Office for Official Publications of the European Communities.
D. Arnold, S. Krauwer, L. des Tombe, and L. Sadler. 1988. ‘[R]{}elaxed’ [C]{}ompositionality in [M]{}achine [T]{}ranslation. In [*Second International Conference on Theoretical and Methodological Issues in Machine Translation of Natural Languages*]{}, Carnegie Mellon Univ, Pittsburgh.
John L. Beaven. 1992a. . thesis, University of Edinburgh, Edinburgh.
John L. Beaven. 1992b. Shake-and-[B]{}ake [M]{}achine [T]{}ranslation. In [*Proceedings of COLING 92*]{}, pages 602–609, Nantes, France.
Chris Brew. 1992. Letting the [C]{}at out of the [B]{}ag: [G]{}eneration for [S]{}hake-and-[B]{}ake [MT]{}. In [*Proceedings of COLING 92*]{}, pages 29–34, Nantes, France.
Peter F. Brown, John Cocke, A Della Pietra, Vincent J. Della Pietra, Fredrick Jelinek, John D. Lafferty, Robert L. Mercer, and Paul S. Roossin. 1990. A [S]{}tatistical [A]{}pproach to [M]{}achine [T]{}ranslation. [*Computational Linguistics*]{}, 16(2):79–85, June.
Hsin-Hsi Chen and Yue-Shi Lee. 1994. A [C]{}orrective [T]{}raining [A]{}lgorithm for [A]{}daptive [L]{}earning in [B]{}ag [G]{}eneration. In [*International Conference on New Methods in Language Processing (NeMLaP)*]{}, pages 248–254, Manchester, UK. UMIST.
Bonnie Jean Dorr. 1993. . Artificial Intelligence Series. The MIT Press, Cambridge, Mass.
Sergei Nirenburg, Jaime Carbonell, Masaru Tomita, and Kenneth Goodman. 1992. . Morgan Kaufmann, San Mateo, CA.
Fred Popowich. 1994. Improving the [E]{}fficiency of a [G]{}eneration [A]{}lgorithm for [S]{}hake and [B]{}ake [M]{}achine [T]{}ranslation using [H]{}ead-[D]{}riven [P]{}hrase [S]{}tructure [G]{}rammar. Technical Report CMPT-TR 94-07, School of Computing Science, Simon Fraser University, Burnaby, British Columbia, CANADA V5A 1S6.
V. Pozna[ń]{}ski, John L. Beaven, and P. Whitelock. 1993. The [D]{}esign of [SLEMaT Mk II]{}. Technical Report IT-1993-19, Sharp Laboratories of Europe, LTD, Edmund Halley Road, Oxford Science Park, Oxford OX4 4GA, July.
P. Whitelock. 1992. Shake and [B]{}ake [T]{}ranslation. In [*Proceedings of COLING 92*]{}, pages 610–616, Nantes, France.
P. Whitelock. 1994. Shake-and-[B]{}ake [T]{}ranslation. In C. J. Rupp, M. A. Rosner, and R. L. Johnson, editors, [ *Constraints, Language and Computation*]{}, pages 339–359. Academic Press, London.
[^1]: We wish to thank our colleagues Kerima Benkerimi, David Elworthy, Peter Gibbins, Ian Johnson, Andrew Kay and Antonio Sanfilippo at SLE, and our anonymous reviewers for useful feedback and discussions on the research reported here and on earlier drafts of this paper.
|
---
abstract: 'We report evidence for a quasi-periodic oscillation (QPO) in the optical light curve of KIC 9650712, a narrow-line Seyfert 1 galaxy in the original *Kepler* field. After the development and application of a pipeline for *Kepler* data specific to active galactic nuclei (AGN), one of our sample of 21 AGN selected by infrared photometry and X-ray flux demonstrates a peak in the power spectrum at log $\nu=-6.58$ Hz, corresponding to a temporal period of $t=44$ days. We note that although the power spectrum is well-fit by a model consisting of a Lorentzian and a single power law, alternative continuum models cannot be ruled out. From optical spectroscopy, we measure the black hole mass of this AGN as log ($M_{\mathrm{BH}}/M_\odot) = 8.17$. We find that this frequency lies along a correlation between low-frequency QPOs and black hole mass from stellar and intermediate mass black holes to AGN, similar to the known correlation in high-frequency QPOs.'
author:
- 'Krista Lynne Smith, Richard F. Mushotzky, Patricia T. Boyd & Robert V. Wagoner'
title: 'Evidence for an Optical Low-frequency Quasi-Periodic Oscillation in the *Kepler* Light Curve of an Active Galaxy'
---
Introduction {#sec:intro}
============
Quasi-periodic oscillations (QPOs) have been seen in the X-ray power spectra of the majority of stellar mass black hole candidates in X-ray binaries [@Remillard2006]. These oscillations belong to two main types: low- and high-frequency QPOs. High-frequency QPOs are the rarer type. They occur in the range of tens to hundreds of Hz, and have often been found to manifest in pairs with a 3:2 frequency ratio. Low-frequency QPOs are stronger and more ubiquitous than high-frequency QPOs. They occur in the frequency range of mHz to $\sim30$ Hz, and can drift in centroid frequency. More details on these properties can be found in the reviews by @Remillard2006 and @Motta2016.
There are many physical origin theories for QPOs. The behavior underlying these phenomena is believed to occur very near to the black hole itself, perhaps within a few gravitational radii. High-frequency QPOs have been proposed as consequences of periastron and orbital disk precession [@Stella1999], warped accretion disks [@Kato2005], global disk oscillations [@Titarchuk2000], magnetic reconnection [@Huang2013], magnetically-choked accretion flows [@McKinney2012] and diskoseismology [@Wagoner2001]. The origin of low-frequency QPOs varies depending on their detailed type [A, B, or C; see @Motta2016], and include unstable spiral density waves [@Tagger1999], viscous magneto-acoustic oscillations in a spherical transition layer near the compact object [@Titarchuk2004], and Lense-Thirring precession [e.g., @Ingram2009]. Regardless of the mechanism responsible for these rapid variations, their origin in accretion-related structures very near the black hole makes them a rare and valuable probe of strong gravity and the effect of black holes on their immediate environments.
The first QPO in an AGN was discovered in the X-ray light curve of RE J1034+396 by @Gierlinski2008 and robustly confirmed by @Alston2014. Recently, X-ray QPOs have been detected in two intermediate-mass black hole (IMBH) candidates and a handful of additional active galactic nuclei (AGN). A remarkable linear correlation of the frequency of the QPO and the black hole mass seems to hold over many orders of magnitude, from stellar mass black holes with $M\sim10~M_{\odot}$, to supermassive black holes of $M\sim10^6~M_{\odot}$ [@Abramowicz2004; @Zhou2015]. The universality of this correlation links accretion processes across vast scales, and indicates that QPO frequency may act as a very accurate probe of black hole mass. As previous authors have indicated, such a $1/M$ scaling is indeed expected if the oscillations are in any way dependent on the characteristic length scale of strong gravity. Interestingly, all of the AGN QPO candidates are in a spectroscopic subclass known as Narrow-Line Seyfert 1s (NLS1). These objects are characterized by relatively narrow broad emission lines (FWHM$\leq2000~\mathrm{km~s}^{-1}$), strong Fe II emission, and weak \[O III\] emission compared to H$\beta$. Such objects may have very high accretion rates; see the review by @Komossa2007.
![image](qpo_lc.pdf){width="\textwidth"}
So far, no optical QPO has been reported in an AGN. This is perhaps because ground-based, sporadically sampled optical light curves have never been directly comparable to the high-precision X-ray light curves generated by space telescopes. Fortunately, the unparalleled precision and regular sampling of the *Kepler* exoplanet satellite has lately produced precise, evenly-sampled space-based optical light curves. Our team has developed a sophisticated pipeline for the treatment of *Kepler* AGN light curves [@Smith2018]. Among the phenomena revealed by our approach is the first optical QPO candidate in an AGN. Searching for periodicities with the sparse and uneven sampling of ground based telescopes is problematic, since the red noise nature of AGN variability can easily mimic periodic signals [@Vaughan2016]. The *Kepler* light curves can be analyzed with Fourier techniques that enable period detection in the frequency domain. While more robust than time domain detection, there is still a risk of red noise mimicking our periodic signal; however, our candidate is detected as a peak in the Fourier power spectrum of an AGN matching the spectroscopic sub-type of all current X-ray AGN QPOs and agrees very well with the extrapolation of an existing correlation between QPO frequency and black hole mass.
Power Spectrum Modeling {#sec2}
=======================
This object is part of a sample of 21 Type 1 AGN monitored by the *Kepler* spacecraft during its 3.5 year mission with 30-minute cadence, selected using a combination of infrared photometric techniques [@Edelson2012] and X-ray detection [@Smith2015]. While summarized here, the full details of this sample, the special methods necessary for analyzing *Kepler* AGN data, and the reduction methods used can be found in @Smith2018. The resulting light curve for our QPO candidate, KIC 9650712, is shown in Figure \[fig:qpo\_lc\]. We obtained an optical spectrum of this target from Lick Observatory, and calculated the redshift to be $z=0.128$. The light curve of KIC 9650712 spans 950 days in the object’s rest frame. We have also used the FWHM of the H$\beta$ emission line to calculate the black hole mass, a method that is very commonly used for AGN and for QPO X-ray candidates in particular. Based on the accepted formulae from @Wang2009, we estimate a mass of log ($M_{\mathrm{BH}}/M_\odot) = 8.17\pm0.20$, two orders of magnitude larger than the most massive object in the small number of known AGN QPOs. We conservatively assume the larger error estimate on this method found by @Vestergaard2006 of $\sim0.5$ dex. Although this mass is higher than the usual mass for the NLS1 class, the FWHM of H$\beta$ is only 2270 km s$^{-1}$, much lower than that of the other Type 1 AGN in our sample. In order to calculate the Eddington ratio, we first estimate the bolometric luminosity using the *Swift* survey value of $L_X = 1.62\times10^{44}~\mathrm{erg~s}^{-1}$ and the hard X-ray bolometric correction of @Vasudevan2007. To be consistent with many of the other optical studies used here, we also perform the calculation using the correction on $L_{5100}$ from @Runnoe2012. The two measurements of $L / L_\mathrm{Edd}$ are 0.14 and 0.23, respectively. The latter estimate makes it the highest accretion rate object in our sample of *Kepler* AGN. Additionally, the spectrum shows very strong [\[Fe [ii]{}\]]{} emission, and we measure the \[O III\]/H$\beta$ line ratio to be 0.14, a value comfortably less than the definition threshold for NLS1s of \[O III\]/H$\beta < 3$. We mention this because all of the current X-ray QPO candidates happen to be in NLS1 galaxies.
![Power spectrum of KIC 9650712 both raw (grey dots) and binned (solid black). Error bars correspond to the rms-spread of simulated light curves. The top panel shows the fit and residuals for a broken power law, the middle for a bending power law, and the bottom panel for a single power law plus Lorentzian. The horizontal dashed line represents the value for the expected Poisson noise; fitting is only performed for frequencies below this value. The arrows denote the location where the corresponding high-frequency QPO would be; see discussion at the end of Section \[sec:corr\].[]{data-label="fig:qpo_powspec"}](brokenfit_wresid_dots_arrow.pdf "fig:"){width="50.00000%"}
![](betterbentfit_for_revision.pdf "fig:"){width="50.00000%"}
![](qpofit_wresid_dots_arrow.pdf "fig:"){width="50.00000%"}
Our aim in @Smith2018 was to detect characteristic timescales in AGN and determine the power spectral slope at high frequencies. The red-noise power spectra of AGN are typically well-fit by a power law, where the spectral density $S$ varies with the frequency as $S \propto f^{-\alpha}$. In order to determine whether our objects were well-fit by a single power law, we followed the PSRESP process, described in @Uttley2002. Briefly, this consists of simulating a very long light curve from a given power spectral slope using the @Timmer1995 algorithm, which allows 500 light curves of the observed length to be drawn from it without overlap. The same gaps and interpolation techniques are introduced and used on the simulated light curves as on the original, and 500 power spectra are created from the simulated light curves. The rms spread of these power spectra becomes the error bars on the observed power spectrum. The observed spectrum is then fit above the Poisson cutoff value by a single power law, generating a $\chi^2$ value. The goodness-of-fit is then measured by calculating the percentile value above which the observed $\chi^{2}$ exceeds the simulated distribution. This percentage is the confidence with which we can reject the model. In a few cases, single power-law models were always rejected with high confidence at all slopes ranging from $\alpha=1.5-3.5$. In the case of KIC 9650712, a single power law model with the observed slope can be rejected with 83% confidence: an acceptable fit, but one that could perhaps be improved upon. In our initial analysis, a broken power law model provides an acceptable fit to the data with a high-frequency slope of $\alpha=2.9$. However, we found that the lowest $\chi^{2}$ for this object was achieved by a fit consisting of a single power law with a slope of $\alpha=1.9$ plus a Lorentzian component.
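A minimal sketch of this simulation step is given below (illustrative Python with NumPy, not the pipeline of @Smith2018; variable names are ours, and the gap insertion and interpolation described above are omitted):

```python
import numpy as np

def timmer_koenig(n, dt, alpha, rng):
    """Draw one evenly sampled, zero-mean light curve of length n from a
    power-law PSD P(f) ~ f**(-alpha) (Timmer & Koenig 1995)."""
    f = np.fft.rfftfreq(n, d=dt)[1:]              # positive Fourier frequencies
    amp = np.sqrt(0.5 * f**(-alpha))
    re = rng.normal(size=f.size) * amp
    im = rng.normal(size=f.size) * amp
    if n % 2 == 0:
        im[-1] = 0.0                              # the Nyquist component must be real
    spec = np.concatenate(([0.0], re + 1j * im))  # zero power at f = 0 (zero mean)
    return np.fft.irfft(spec, n=n)

def psresp_spread(n, dt, alpha, n_sim=500, seed=0):
    """rms spread of n_sim simulated periodograms, used as error bars on the observed PSD."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, d=dt)[1:]
    sims = np.array([np.abs(np.fft.rfft(timmer_koenig(n, dt, alpha, rng))[1:])**2
                     for _ in range(n_sim)])
    return freqs, sims.mean(axis=0), sims.std(axis=0)
```

In the full PSRESP prescription described above, a single much longer realization is generated and cut into 500 non-overlapping segments of the observed length, and the gaps and interpolation of the real light curve are imposed on each segment before its periodogram is computed.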
In order to explore the goodness-of-fit of other possible models in more detail, we compare a broken power law, bending power law, and single power law plus a Lorentzian component to the single power law case. Our broken power law modeling procedure can be found in @Smith2018, our bending power law model corresponds to Equation 3 of @Gonzalez2012, and our periodic model consists of the sum of a linear component (the underlying single power law) and a standard Lorentzian. The power spectra and residuals for the broken power law, bending power law, and quasi-periodic Lorentzian are shown in Figure \[fig:qpo\_powspec\]. We have normalized the power spectra by a constant $A_{\mathrm{rms}}^{2} = 2\Delta T_{\mathrm{samp}}/\bar{x}^{2}N$, where $\Delta T_{\mathrm{samp}}$ is the sampling interval, $\bar{x}$ is the mean count rate in cts s$^{-1}$, and $N$ is the total number of data points [@VanderKlis1997].
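As an illustration of this normalization (again NumPy; `flux` and `dt_samp` are hypothetical inputs), the raw periodogram is simply rescaled by $A_{\mathrm{rms}}^{2}$:

```python
import numpy as np

def rms_normalized_psd(flux, dt_samp):
    """Periodogram in fractional-rms^2 units: A_rms^2 * |FFT|^2, with
    A_rms^2 = 2*dt_samp / (mean(flux)^2 * N), as in van der Klis (1997)."""
    n = flux.size
    freqs = np.fft.rfftfreq(n, d=dt_samp)[1:]
    raw = np.abs(np.fft.rfft(flux - flux.mean())[1:])**2
    a_rms2 = 2.0 * dt_samp / (flux.mean()**2 * n)
    return freqs, a_rms2 * raw
```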
To determine whether or not these models provide better fits than a single power law in reality or simply because they have more free parameters, we follow the method of @Summons2007. Using the 500 light curves simulated from the best-fitting single power law slope, we fit each simulated light curve with the best-fitting broken power law parameters, bending power law parameters, and single power law plus Lorentzian parameters, just as was done with the real data. We then compare histograms of these fit probabilities to the fiducial $\chi^2$ distribution calculated as described in @Smith2018. In Figure \[fig:chisqs\] we show the three comparisons. Note that the $\chi^{2}$ values given here differ slightly from those in @Smith2018; this is because each time the PSRESP process is run, a slightly different ensemble of simulated light curves is generated, resulting in slightly different error bars. In all cases, no given simulation has a better fit to the more complex model than the observed power spectrum. Each of these models provides a better fit than a single power law model, but all are acceptable fits to the data. A single power law plus a periodic component is then just one of several complex models that provide good fits. As a consistency check on our periodic model, we have also computed the Lomb-Scargle periodogram on the processed light curve without including any linear interpolation, since this method is capable of handling unevenly sampled data [@Scargle1982]. The periodogram also shows a peak at $\sim45$ days.
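The Lomb-Scargle cross-check can be sketched with astropy's implementation; the synthetic `time_days` and `flux` arrays below merely stand in for the processed, unevenly sampled light curve:

```python
import numpy as np
from astropy.timeseries import LombScargle

# Stand-in for the processed, gap-containing light curve: a ~44-day sinusoid plus noise.
rng = np.random.default_rng(1)
time_days = np.sort(rng.uniform(0.0, 950.0, size=2000))
flux = np.sin(2 * np.pi * time_days / 44.0) + 0.5 * rng.normal(size=time_days.size)

frequency, power = LombScargle(time_days, flux).autopower()
best_period_days = 1.0 / frequency[np.argmax(power)]  # should recover the injected ~44-day period
```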
The best-fitting Lorentzian model has a central frequency of log $\nu=-6.58$ Hz, corresponding to a temporal period of $t=44$ days. The $Q$-value of coherence of the feature, defined as $\nu_0 / \mathrm{FWHM}$ [@Nowak1999], is $Q=1.69$, somewhat lower than that of low-frequency QPOs in the literature. Possible reasons for this can be found in the conclusion. Papers reporting X-ray QPOs frequently calculate the fractional rms of the periodic component. In our case, such a measurement would be misleading, since the host galaxy contributes a large constant flux to the *Kepler* bandpass that cannot be determined from the present data. The quoted X-ray values do not suffer from a contaminating constant term, and so cannot be directly compared with our value.
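For concreteness, the periodic model and the quoted coherence correspond to the following purely illustrative parameterization (the names are ours, not those of the fitting code):

```python
import numpy as np

def powerlaw_plus_lorentzian(f, norm, alpha, k, f0, hwhm):
    """Red-noise continuum norm*f**(-alpha) plus a standard Lorentzian centred at f0."""
    return norm * f**(-alpha) + (k / np.pi) * hwhm / ((f - f0)**2 + hwhm**2)

f0 = 10**(-6.58)      # best-fitting centroid frequency in Hz (log nu = -6.58, i.e. ~44 days)
fwhm = f0 / 1.69      # full width implied by the quoted coherence Q = nu_0 / FWHM = 1.69
hwhm = fwhm / 2.0     # half-width at half-maximum used by the model above
```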
A Correlation of Black Hole Mass and Low-Frequency QPOs {#sec:corr}
=======================================================
There is now considerable evidence that the correlation between black hole mass and the central frequency of the 2$\times \nu_0$ peak in high-frequency QPOs extends from stellar masses to supermassive black holes with $M_\mathrm{BH}\sim10^{6}~M_\odot$ [@Remillard2006; @Abramowicz2004; @Zhou2015]. In stellar and intermediate mass black holes, low-frequency QPOs can be used in tandem with spectral variations to predict the black hole mass [@Fiorito2004; @Casella2008]. When both the high- and low-frequency QPOs have been detected in a given object, the two methods provide independent checks on the mass. This is the case for several stellar mass black holes and the intermediate mass ULX in M 82. In the case of AGN, independent mass checks are provided by other measurement methods. The most frequently used is the H$\beta$ width method employed here, but Mrk 766 has a very well-determined mass from reverberation mapping [@Bentz2010]. When the independent mass measurement agrees with the prediction from the high-frequency correlation, and especially if the observed QPO has the well-known 3:2 ratio of high-frequency QPOs, one can confidently claim the detected QPO is high-frequency. In Figure \[fig:qpombh\], we reproduce the plot from @Zhou2015 and @Abramowicz2004 for the high-frequency QPOs, and add the known low-frequency QPOs. The references for the high-frequency points can be found in those papers, except for the more recent detections described in the figure caption. We note that MS $2254.9-3712$ and Mrk 766 may exhibit QPOs in a 3:2 frequency ratio, strengthening their high-frequency classification. We note also that @Alston2014 has pointed out that searching for transient QPO phenomena via data mining, as has been done for some of the AGN QPOs, is potentially problematic; however, we report them here for completeness. The lines in the plot correspond to the previously-observed relation and the resonance models of @Aschenbach2004a, which translate vertically depending on black hole spin.
![Distributions of the fiducial $\chi^{2}_{\mathrm{dist}}$ (green) and the $\chi^{2}_{\mathrm{dist}}$ values from comparing each simulated realization of a single power law to the broken, bending, and single + Lorentzian models (orange). The $\chi^{2}_{\mathrm{dist}}$ value of the observed data compared to the model is shown by a black line.[]{data-label="fig:chisqs"}](chisqs_all3.pdf){width="50.00000%"}
![image](qpo_types_more.pdf){width="80.00000%"}
The current roster of low-frequency QPOs with independent mass estimates includes the stellar mass black holes XTE J1550-564 [@Vignarca2003], GRS 1915+105 [@Vignarca2003], XTE J1859+226 [@Casella2004], and H 1743-322 [@McClintock2009], the intermediate mass black hole M82 X1 [@Strohmayer2003; @Dewangan2006], and the lone low-frequency QPO detection in an AGN, 2XMM J123103.2+110648 [@Lin2013]. We note that the mass of this last object is uncertain. It was determined by [@Ho2012] to be very low for an AGN ($\sim10^{5} M_\odot$), using the M-$\sigma$ relation since the object is a Type 2 AGN (i.e., it does not exhibit Doppler-broadened emission lines, precluding the H$\beta$ method). The validity of the M-$\sigma$ relation is in doubt for AGN and for NLS1s in particular. The mass was later determined by @Lin2013 to be $2\times10^{6} M_\odot$ based on X-ray and UV spectral fitting. For plotting, we use the average of these two estimates with the error bar encompassing the full range of both.
The mass of M82 X-1, while previously quite uncertain, has now been calculated using both the high-frequency QPO extrapolation and a relativistic precession model with consistent results at $\sim420 M_\odot$ [@Pasham2014]. Nevertheless, we show a large error bar on the measurement to encompass the range of the mass estimated from the low-frequency QPO scaling with spectral index done by @Casella2008, $95-1260 M_\odot$, so that the reader may see the results of both methods.
The case of the intermediate mass ULX NGC 5408 X-1 is not as well determined. A QPO has been detected robustly at $20$ mHz [@Strohmayer2009], but whether it is a low- or high-frequency QPO is not known, and there is no independent mass measurement. @Huang2013 quote the mass as $10^{5} M_\odot$ based on the assumption of the object as a high-frequency QPO and fitting of the X-ray spectrum, while the original analysis by @Strohmayer2009 claim it is a low-frequency QPO, and use the previously mentioned @Fiorito2004 method to calculate a mass of $2000-5000 M_\odot$. We have placed the object among the high-frequency QPOs in the plot because the @Huang2013 method used X-ray spectral fitting to back up their mass estimate. It does not affect the veracity of the plot one way or the other, since the mass estimates arise only from the assumption of the QPO type and so would fall on either correlation by design.
Given the mass of log ($M_{\mathrm{BH}}/M_\odot) = 8.17\pm0.20$ measured using the FWHM of the H$\beta$ line, the observed QPO frequency lies approximately a factor of 40 below the prediction of the high-frequency QPO correlation. However, a factor of 40 is approximately the difference between the central frequencies of low- and high-frequency QPOs in X-ray binaries. Motivated by this coincidence, we plotted our candidate QPO with the low-frequency QPOs of X-ray binaries and intermediate mass black hole candidates. If we perform a linear regression through the stellar and intermediate mass points only, preserving the $1/M$ dependence, we obtain:
$$f\mathrm{(Hz)} = 51.9~(M_\mathrm{BH} / M_\odot)^{-1}.$$
If we extrapolate this all the way to our mass, we find that our object is in excellent agreement with such a relation, as can be seen in Figure \[fig:qpombh\].
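As a rough numerical check, inserting the measured mass into this relation gives $$f \simeq \frac{51.9}{10^{8.17}}~\mathrm{Hz} \approx 3.5\times10^{-7}~\mathrm{Hz}\qquad (\mathrm{log}~\nu\approx-6.45),$$ within a factor of $\sim1.3$ (about 0.1 dex) of the observed log $\nu=-6.58$, well inside the quoted mass uncertainty.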
One might naturally ask whether we can detect the corresponding high-frequency QPO in our data. The predicted period for the high-frequency QPO is $\sim22$ hours based on the extrapolated high-frequency relation. Although our 30-minute sampling is theoretically sensitive to such a period, it occurs in a region where the light curve is dominated by Poisson noise. In Figure \[fig:qpo\_powspec\] we denote the location where this feature would be with an arrow. There is no detection. High-frequency QPOs can also be weaker than their low-frequency counterparts by at least a factor of ten. We therefore do not consider the lack of a 22 hour timescale to be particularly surprising.
Concluding Remarks {#sec:conclusion}
==================
Detecting a QPO in an optical light curve may have implications for some QPO models. There are two possibilities. First, the oscillations could be occurring in the optically-emitting region of the disk as well as in the inner regions assumed in X-ray studies. Second, the optical disk region may simply be reprocessing the nuclear X-ray oscillations. If the optical region of the disk is also producing oscillations, this may favor models such as density waves. In the reprocessing scenario, re-radiation from a wide range of optical disk radii may contribute to the reduced coherence of optical QPO features, as arrival times at the observer would be effectively smeared out compared with the compact X-ray region. However, the low coherence value could also be due to a wandering central frequency throughout the observation or to turbulence in the disk. Many more optical QPOs will need to be detected before they can inform interpretations of X-ray QPOs. This goal will soon be attainable if we pursue AGN science with upcoming high-cadence, long-duration timing facilities, including exoplanet-hunting satellites like TESS and PLATO.
KLS is grateful for support from the NASA Earth and Space Sciences Fellowship (NESSF). Support for this work was also provided by the National Aeronautics and Space Administration through Einstein Postdoctoral Fellowship Award Number PF7-180168, issued by the Chandra X-ray Observatory Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of the National Aeronautics Space Administration under contract NAS8-03060. We gratefully acknowledge Tod Strohmayer for helpful discussions and the referee for remarks that significantly improved the manuscript.
Abramowicz, M.A., Kluźniak, W., McClintock, J.E. & Remillard, R.A. 2004, , 609, 63
Alston, W.N., Markevičiūtė, J., Kara, E., Fabian, A.C. & Middleton, M. 2014, , 445, 16
Alston, W.N., Parker, M.L., Markevičiūtė, J., Fabian, A.C., Middleton, M., Lohfink, A., Kara, E. & Pinto, C. 2015, , 449, 467
Aschenbach, B. 2004, , 425, 1075
Bentz, M. et al. 2010, , 716, 993
Casella, P., Belloni, T., Homan, J. & Stella, L. 2004, , 426, 587
Casella, P., Ponti, G., Patruno, A., Belloni, T., Miniutti, G. & Zampieri, L. 2008, , 387, 1707
Chakrabarti, S.K. & Manickam, S.G. 2000, , 531, 41
Dewangan, G.C., Titarchuk, L. & Griffiths, R.E. 2006, , 637, L21
Edelson, R. & Malkan, M. 2012, , 751, 52
Filippenko, A.V. & Chornock, R. 2001, IAU Circ., 7644, 2
Fiorito, R. & Titarchuk, L. 2004, , 614, 113
Gierliński, M., Middleton, M., Ward, M. & Done, C. 2008, Nature, 455, 369
González-Martín, O. & Vaughan, S. 2012, , 544, 80
Ho, L.C., Kim., M. & Terashima, Y. 2012, ApJL, 759, L16
Huang, C.-Y., Wang, D.-X., Wang, J.-Z. & Wang, Z.-Y. 2013, RAA, 13, 705
Ingram, A., Done, C. & Fragile, C.P. 2009, , 397, 101
Kato, S. 2005, PASJ, 57, 699
Komossa, S. “Narrow-line Seyfert 1 Galaxies,” in Revista Mexicana de Astronomia y Astrofisica Conference Series, Revista Mexicana de Astronomia y Astrofisica, Vol. 32 (2008), pp. 86–92, arXiv:0710.3326
Lin, D., Irwin, J.A., Godet, O., Webb, N.A. & Barret, D. 2013, ApJL, 776, L10
McClintock, J.E., Remillard, R.A., Rupen, M.P., Torres, M.A.P., Steeghs, D., Levine, A.M. & Orosz, J.A. 2009, , 698, 1398
McKinney, J.C., Tchekhovskoy, A. & Blandford, R.D. 2012, , 423, 3083
Motta, S.E., Muñoz-Darias, T., Sanna, A., Fender, R., Belloni, T. & Stella, L. 2014, , 439, 65
Motta, S.E. 2016, Astronomische Nachrichten, 337, 398
Nowak, M.A., Wilms, J. & Dove, J.B. 1999, , 517, 355
Pan, H.-W. et al. 2016, , 819, 19
Pasham, D.R., Strohmayer, T.E. & Mushotzky, R.F. 2014, Nature, 513, 74
Remillard, R.A. & McClintock, J.E. 2006, , 44, 49
Reis, R.C., Miller, J.M., Reynolds, M.T., Gültekin, K., Maitra, D., King, A.L. & Strohmayer, T.E. 2012, Science, 337, 949
Runnoe, J.C., Brotherton, M.S. & Shang, Z. 2012, , 427, 1800
Tagger, M. & Pellat, R. 1999, , 349, 1003
Titarchuk, L. & Osherovich, V. 2000, , 542, 111
Titarchuk, L. & Fiorito, R. 2004, , 612, 988
Scargle, J.D. 1982, , 263, 835
Smith, K.L., Boyd, P.T., Mushotzky, R.F., Gehrels, N., Edelson, R., Howell, S.B., Gelino, D.M., Brown, A. & Young, S. 2015, , 150, 126
Smith, K.L., Mushotzky, R.F., Boyd, P.T., Malkan, M., Howell, S.B. & Gelino, D.M. 2018, arXiv:1803.06436
Stella, L., Vietri, M. & Morsink, S.M. 1999, , 524, 63
Strohmayer, T.E. & Mushotzky, R.F. 2003, , 586, 61
Strohmayer, T.E. & Mushotzky, R.F. 2009, , 703, 1386
Summons, D.P., Arevalo, P., McHardy, I.M., Uttley, P. & Bhaskar, A. 2007, , 378, 649
Timmer, J. & König, M. 1995, , 300, 707
Uttley, P., McHardy, I.M. & Papadakis, I.E. 2002, , 332, 231
van der Klis, M., 1997, in Babu G.J., Feigelson E.D., eds, Statistical Challenges in Modern Astronomy, Vol. II. Springer-Verlag, New York, p. 321
Vasudevan, R.V. & Fabian, A.C. 2007, , 381, 1235
Vaughan, S., Uttley, P., Markowitz,A.G., Huppenkothen, D., Middleton, M.J., Alston, W.N., Scargle, J.D. & Farr, W.M. 2016, , 461, 3145
Vestergaard, M. & Peterson, B.M. 2006, , 641, 689
Vignarca, F., Migliari, S., Belloni, T., Psaltis, D. & van der Klis, M. 2003, , 397, 729
Wagoner, R.V., Silbergleit, A.S. & Ortega-Rodríguez, M. 2001, , 559, L25
Wang, J.-G. et al. 2009, , 707, 1334
Zhang, P., Zhang, P.-F., Yan, J.-Z., Fan, Y.-Z. & Liu, Q.-Z. 2017, , 849, 9
Zhou, X.-L., Yuan, W., Pan, H.-W & Liu, Z. 2015, , 798, L5
|
---
abstract: 'We investigate the initial value problem for some defocusing coupled nonlinear fourth-order Schrödinger equations. Global well-posedness and scattering in the energy space are obtained.'
address: 'University Tunis El Manar, Faculty of Sciences of Tunis, LR03ES04 partial differential equations and applications, 2092 Tunis, Tunisia.'
author:
- 'R. Ghanmi and T. Saanouni'
title: 'On defocusing fourth-order coupled nonlinear Schrödinger equations'
---
Introduction
============
We consider the initial value problem for a defocusing fourth-order Schrödinger system with power-type nonlinearities $$\left\{
\begin{array}{ll}
i\frac{\partial }{\partial t}u_j +\Delta^2 u_j+ \displaystyle\sum_{k=1}^{m}a_{jk}|u_k|^p|u_j|^{p-2}u_j=0 ;\\
u_j(0,x)= \psi_{j}(x),
\label{S}
\end{array}
\right.$$ where $u_j: {\mathbb{R}}\times {\mathbb{R}}^N \to {\mathbb{C}}$ for $j\in[1,m]$ and $a_{jk} =a_{kj}$ are positive real numbers.\
Fourth-order Schrödinger equations have been introduced by Karpman [@Karpman] and Karpman-Shagalov [@Karpman; @1] to take into account the role of small fourth-order dispersion terms in the propagation of intense laser beams in a bulk medium with Kerr nonlinearity.\
The m-component coupled nonlinear Schrödinger system with power-type nonlinearities $$\begin{aligned}
i\frac{\partial }{\partial t}u_j +\Delta u_j= \pm \displaystyle\sum_{k=1}^{m}a_{jk}|u_k|^p|u_j|^{p-2}u_j ,\end{aligned}$$ arises in many physical problems. This models physical systems in which the field has more than one component. For example, in optical fibers and waveguides, the propagating electric field has two components that are transverse to the direction of propagation. Readers are referred to various other works [@Hasegawa; @Zakharov] for the derivation and applications of this system.\
A solution ${\bf u}:= (u_1,...,u_m)$ to the system formally satisfies, respectively, conservation of the mass and the energy $$\begin{gathered}
M(u_j):= \displaystyle\int_{{\mathbb{R}}^N}|u_j(x,t)|^2\,dx = M(\psi_{j});\\
E({\bf u}(t)):= \frac{1}{2}\displaystyle \sum_{j=1}^{m}\displaystyle\int_{{\mathbb{R}}^N}|\Delta u_j|^2\,dx + \frac{1}{2p}\displaystyle \sum_{j,k=1}^{m}a_{jk}\displaystyle \int_{{\mathbb{R}}^N} |u_j(x,t)|^p |u_k(x,t)|^p\,dx = E({\bf u}(0)).\end{gathered}$$ Before going further let us recall some historic facts about this problem. The model case given by a pure power nonlinearity is of particular interest. The question of well-posedness in the energy space $H^2$ was widely investigated. We denote for $p>1$ the fourth-order Schrödinger problem $$(NLS)_p\quad i\partial_t u+\Delta^2u\pm u|u|^{p-1}=0,\quad u:{\mathbb R}\times{\mathbb R}^N\rightarrow{\mathbb C}.$$ This equation satisfies a scaling invariance. In fact, if $u$ is a solution to $(NLS)_p$ with data $u_0$, then $ u_\lambda:=\lambda^{\frac4{p-1}}u(\lambda^4\, .\,,\lambda\, .\,)$ is a solution to $(NLS)_p$ with data $\lambda^{\frac4{p-1}}u_0(\lambda\,.\,).$ For $s_c:=\frac N2-\frac4{p-1}$, the space $\dot H^{s_c}$ whose norm is invariant under the dilatation $u\mapsto u_{\lambda}$ is relevant in this theory. When $s_c=2$ which is the energy critical case, the critical power is $p_c:=\frac{N+4}{N-4}$, $N\geq 5$. Pausader [@Pausader] established global well-posedness in the defocusing subcritical case, namely $1< p < p_c$. Moreover, he established global well-posedness and scattering for radial data in the defocusing critical case, namely $p=p_c$. The same result in the critical case without radial condition was obtained by Miao, Xu and Zhao [@Miao], for $N\geq 9$. The focusing case was treated by the last authors in [@Miao; @1]. They obtained results similar to one proved by Kenig and Merle [@Merle] in the classical Schrödinger case. See [@ts] in the case of exponential nonlinearity.\
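For the reader's convenience, the value of $s_c$ quoted above follows from a one-line computation: since $u_\lambda(0,x)=\lambda^{\frac4{p-1}}u_0(\lambda x)$, $$\|u_\lambda(0,\cdot)\|_{\dot H^{s}}=\lambda^{s+\frac4{p-1}-\frac N2}\|u_0\|_{\dot H^{s}},$$ so that the $\dot H^{s}$ norm is invariant under this scaling exactly when $s=\frac N2-\frac4{p-1}=s_c$; requiring $s_c=2$ then gives $p_c=\frac{N+4}{N-4}$ for $N\geq5$.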
In this note, we combine in some sense the two problems $(NLS)_p$ and $(CNLS)_p.$ Thus, we have to overcome two difficulties. The first one is the presence of the bilaplacian in the Schrödinger operator and the second is the coupling of the nonlinearities. We claim that the critical exponent for local well-posedness of the system in the energy space is $p=\frac{N}{N - 4}.$ But some technical difficulties yield the condition $4\leq N \leq 6$.\
It is the purpose of this manuscript to obtain global well-posedness and scattering for the system via a Morawetz estimate.\
The rest of the paper is organized as follows. The next section contains the main results and some technical tools needed in the sequel. The third and fourth sections are devoted to proving well-posedness of the system. In Section five, scattering is established. In the appendix, we give a proof of the Morawetz estimate and a blow-up criterion.\
We define the product space $$H:={H^2({{\mathbb{R}}^N})\times...\times H^2({{\mathbb{R}}^N})}=[H^2({{\mathbb{R}}^N})]^m$$ where $H^2({\mathbb{R}}^N)$ is the usual Sobolev space endowed with the complete norm $$\|u\|_{H^2({\mathbb{R}}^N)} := \Big(\|u\|_{L^2({\mathbb{R}}^N)}^2 + \|\Delta u\|_{L^2({\mathbb{R}}^N)}^2\Big)^\frac12.$$ We denote the real numbers $$p_*:=1+\frac4N\quad\mbox{ and }\quad p^*:=\left\{
\begin{array}{ll}
\frac{N}{N-4}\quad\mbox{if}\quad N>4;\\
\infty\quad\mbox{if}\quad N=4.
\end{array}
\right.$$ We mention that $C$ will denote a constant which may vary from line to line and if $A$ and $B$ are nonnegative real numbers, $A\lesssim B$ means that $A\leq CB$. For $1\leq r\leq\infty$ and $(s,T)\in [1,\infty)\times (0,\infty)$, we denote the Lebesgue space $L^r:=L^r({\mathbb R}^N)$ with the usual norm $\|\,.\,\|_r:=\|\,.\,\|_{L^r}$, $\|\,.\,\|:=\|\,.\,\|_2$ and $$\|u\|_{L_T^s(L^r)}:=\Big(\int_{0}^{T}\|u(t)\|_r^s\,dt\Big)^{\frac{1}{s}},\quad \|u\|_{L^s(L^r)}:=\Big(\int_{0}^{+\infty}\|u(t)\|_r^s\,dt\Big)^{\frac{1}{s}}.$$ For simplicity, we denote the usual Sobolev space $W^{s,p}:=W^{s,p}({\mathbb R}^N)$ and $H^s:=W^{s,2}$. If $X$ is an abstract space, $C_T(X):=C([0,T],X)$ stands for the set of continuous functions valued in $X$ and $X_{rd}$ is the set of radial elements in $X$; moreover, for a solution to the system, we denote by $T^*>0$ its lifespan.
Background Material
===================
In what follows, we give the main results and collect some estimates needed in the sequel.
Main results
------------
First, local well-posedness of the fourth-order Schrödinger problem is obtained.
\[existence\] Let $4\leq N\leq 6$, $ 1< p \leq p^*$ and $ \Psi \in H$. Then, there exist $T^*>0$ and a unique maximal solution to the system, $${\bf u} \in C ([0, T^*), H).$$ Moreover,
1. ${\bf u}\in \big(L^{\frac{8p}{N(p-1)}}([0, T^*], W^{2,2p})\big)^{(m)};$
2. ${\bf u}$ satisfies conservation of the energy and the mass;
3. $T^*=\infty$ in the subcritical case $(1<p<p^*)$.
In the critical case, global existence for small data holds in the energy space.
\[glb\] Let $4<N\leq6$ and $p=p^*.$ There exists $\epsilon_0>0$ such that if $\Psi:=(\psi_1,...,\psi_m) \in H$ satisfies $\xi(\Psi):= \displaystyle \sum_{j=1}^m\displaystyle\int_{{\mathbb{R}}^N}|\Delta \psi_j|^2\,dx\leq \epsilon_0$, the system possesses a unique solution ${\bf u}\in C({\mathbb{R}}, H)$.
Second, the system scatters in the energy space. Indeed, we show that every global solution of the system is asymptotic, as $t\to\pm\infty,$ to a solution of the associated linear Schrödinger system.
Let $4\leq N\leq 6$ and $ p_*< p< p^*.$ Take ${\bf u}\in C({\mathbb{R}}, H),$ a global solution to the system. Then $${\bf u}\in \big(L^{\frac{8p}{N(p - 1)}}({\mathbb{R}}, W^{2, 2p})\big)^{(m)}$$ and there exists $\Psi:=(\psi_1,...,\psi_m)\in H$ such that $$\lim_{t\longrightarrow\pm\infty}\|{\bf u}(t)-(e^{it\Delta^2}\psi_1,...,e^{it\Delta^2}\psi_m)\|_{H^2}=0.$$
In the next subsection, we give some standard estimates needed in the paper.
Tools
-----
We start with some properties of the free fourth-order Schrödinger kernel.
\[fre\] Denoting the free operator associated to the fourth-order Schrödinger equation by $$e^{it\Delta^2}u_0:=\mathcal F^{-1}(e^{it|y|^{4}})*u_0,$$ the following properties hold:
1. $e^{it\Delta^2}u_0$ is the solution to the linear problem associated to $(NLS)_p$;
2. $e^{it\Delta^2}u_0 \mp i\int_0^te^{i(t-s)\Delta^2}u|u|^{p-1}\,ds$ is the solution to the problem $(NLS)_p$;
3. $(e^{it\Delta^2})^*=e^{-it\Delta^2}$;
4. $e^{it\Delta^2}$ is an isometry of $L^2$.
Now, we give the so-called Strichartz estimate [@Pausader].
A pair $(q,r)$ of positive real numbers is said to be admissible if $$2\leq q,r\leq \infty,\quad (q,r,N) \neq(2, \infty, 4)\quad \mbox{and} \quad \frac{4}{q} = N\Big(\frac{1}{2} - \frac{1}{r}\Big).$$
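For instance, the pair $(q,r)=\big(\frac{8p}{N(p-1)},2p\big)$ used repeatedly below is admissible: indeed $$\frac4q=\frac{N(p-1)}{2p}=N\Big(\frac12-\frac1{2p}\Big),$$ and the constraint $q\geq2$ amounts to $4p\geq N(p-1)$, that is $p\leq\frac N{N-4}=p^*$ when $N>4$ (and no restriction when $N=4$).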
Let $(q,r)$ and $(a,b)$ be two admissible pairs and $T>0.$ Then, there exists a positive real number $C$ such that $$\begin{gathered}
\|u\|_{L_T^q(W^{2,r})}\leq C \Big( \|u_0\|_{H^2} + \|i\frac{\partial}{\partial t} u+ \Delta^2 u \|_{L_T^{ a^\prime}(W^{2,b^\prime})}\Big);\label{S1}\\ \|\Delta u\|_{L^q_T(L^r)}\leq C \Big(\|\Delta u_0\|_{L^2} + \|i\frac{\partial}{\partial t} u+ \Delta^2 u\|_{L^2_T(\dot W^{1,\frac{2N}{N +2}})}\Big)\label{S2}.\end{gathered}$$
The following Morawetz estimate is essential in proving scattering.
\[prop2”\] Let $4\leq N\leq6$, $1<p\leq p^*$ and ${\bf u}\in C(I,H)$ be the solution to the system. Then,
1. if $N>5$, $$\label{mrwtz1}
\sum_{j=1}^m\int_I\int_{{\mathbb{R}}^N\times{\mathbb{R}}^N}\frac{|u_j(t,x)|^2|u_j(t,y)|^2}{|x-y|^5}dxdydt\lesssim_u1;$$
2. if $N=5$, $$\label{mrwtz2}
\sum_{j=1}^m\int_I\int_{{\mathbb{R}}^5}|u_j(t,x)|^4dxdt\lesssim_u1.$$
For the reader's convenience, a proof, which follows as in [@Miao; @1; @Miao; @2], is given in the appendix. Let us gather some useful Sobolev embeddings [@Adams].
\[injection\] The continuous injections hold
1. $ W^{s,p}({\mathbb{R}}^N)\hookrightarrow L^q({\mathbb{R}}^N)$ whenever $1<p<q<\infty, \quad s>0\quad \mbox{and}\quad \frac{1}{p}\leq \frac{1}{q} + \frac {s}{N};$
2. $W^{s,p_1}({\mathbb{R}}^N)\hookrightarrow W^{s - N(\frac{1}{p_1} - \frac{1}{p_2}),p_2}({\mathbb{R}}^N)$ whenever $1\leq p_1\leq p_2 \leq \infty.$
We close this subsection with an absorption result [@Tao].
\[Bootstrap\][([Bootstrap Lemma]{})]{} Let $T>0$ and $X\in C([0, T], {\mathbb{R}}_+)$ such that $$X\leq a + b X^{\theta}\quad on \quad [0,T],$$ where $a,\, b>0,\, \theta>1,\, a<(1 - \frac{1}{\theta})\frac{1}{(\theta b)^{\frac{1}{\theta}}}$ and $X(0)\leq \frac{1}{(\theta b)^{\frac{1}{\theta -1}}}.$ Then $$X\leq \frac{\theta}{\theta - 1}a\quad on \quad [0, T].$$
Local well-posedness
====================
This section is devoted to proving Theorem \[existence\]. The proof contains three steps. First, we prove existence of a local solution to the system; second, we show uniqueness; finally, we establish global existence in the subcritical case.
Local existence
---------------
We use a standard fixed point argument. For $T>0,$ we denote the space $$E_T:=\big(C([0,T],H^2)\cap L^{\frac{8p}{N(p-1)}}([0, T], W^{2,2p})\big)^{(m)}$$ with the complete norm $$\|{\bf u}\|_T:=\displaystyle\sum_{j=1}^m\Big(\|u_j\|_{L_T^\infty(H^2)}+\| u_j\|_{L^{\frac{8p}{N(p-1)}}_T( W^{2,2p})}\Big).$$ Define the function $$\phi({\bf u})(t) := T(t){\Psi} - i \displaystyle\sum_{j,k=1}^{m}\displaystyle\int_0^tT(t-s)\big(|u_k|^p|u_j|^{p-2}u_j(s)\big)\,ds,$$ where $T(t){\Psi} := (e^{it\Delta^2}\psi_{1},...,e^{it\Delta^2}\psi_{m}).$ We prove the existence of some small $T, R >0$ such that $\phi$ is a contraction on the ball $ B_T(R)$ with center zero and radius $R.$ Take ${\bf u}, {\bf v}\in E_T$; applying the Strichartz estimate, we get $$\|\phi({\bf u}) - \phi({\bf v})\|_T\lesssim \displaystyle\sum_{j, k=1}^{m}\Big\||u_k|^p|u_j|^{p-2}u_j - |v_k|^p |v_j|^{p-2}v_j\Big\|_{L^{\frac{8p}{p(8-N) + N}}(W^{2,{\frac{2p}{2p-1}}})}.$$ To derive the contraction, consider the function $$f_{j,k}: {\mathbb{C}}^m\rightarrow {\mathbb{C}},\, (u_1,...,u_m)\mapsto |u_k|^p|u_j|^{p-2}u_j.$$ By the mean value theorem, $$\label{H1}
|f_{j,k}({\bf u})-f_{j,k}({\bf v})|\lesssim\max\{ |u_k|^{p - 1}|u_j|^{p - 1}+{|u_k|^{p}|u_j|^{p-2}}, |v_k|^p|v_j|^{p - 2}+{|v_k|^{p - 1}|v_j|^{p - 1}}\}|{\bf u} - { \bf v}|.$$ Using Hölder inequality, Sobolev embedding and denoting the quantity $$(\mathcal{I}):=\| f_{j,k}({\bf u})-f_{j,k}({\bf v})\|_{L_T^{\frac{8p}{p(8-N) + N}}(L^{\frac{2p}{2p-1}})},$$ we compute via a symmetry argument $$\begin{aligned}
(\mathcal{I})
&\lesssim &\big\| \big(|u_k|^{p - 1}|u_j|^{p - 1} +|u_k|^p|u_j|^{p - 2}\big)|{\bf u} - { \bf v}|\big\|_{L_T^{\frac{8p}{p(8-N) + N}}(L^{\frac{2p}{2p-1}})} \\
&\lesssim&\|{\bf u} - { \bf v}\|_{L_T^{\frac{8p}{N(p-1)}}(L^{2p})} \big\| |u_k|^{p-1}|u_j|^{p -1} + |u_k|^{p}|u_j|^{p-2} \big\|_{L_T^{\frac{8p}{8p - 2N(p-1)}}(L^{\frac{p}{p-1}})}\\
&\lesssim&T^{\frac{8p - 2N(p-1)}{8p}} \|{\bf u} - { \bf v}\|_{L_T^{\frac{8p}{N(p-1)}}(L^{2p})}\big\| |u_k|^{p-1}|u_j|^{p-1}
+ |u_k|^{p}|u_j|^{p-2} \big\|_{L_T^\infty(L^{\frac{p}{p-1}})} \\
&\lesssim& T^{\frac{8p - 2N(p-1)}{8p}} \|{\bf u} - { \bf v}\|_{L_T^{\frac{8p}{N(p-1)}}(L^{2p})}\Big(\|u_k^{p-1}\|_{L_T^\infty(L^{\frac{2p}{p-1}})}\|u_j^{p-1}\|_{L_T^\infty(L^{\frac{2p}{p-1}})}\\
&+& \|u_k^{p}\|_{L_T^\infty(L^2)}\|u_j^{p-2}\|_{L_T^\infty(L^{\frac{2p}{p-2}})} \Big)\\
&\lesssim& T^{\frac{8p - 2N(p-1)}{8p}} \|{\bf u} - { \bf v}\|_{L_T^{\frac{8p}{N(p-1)}}(L^{2p})}\Big(\|u_k\|_{L_T^\infty(L^{2p})}^{p-1}\|u_j\|_{L_T^\infty(L^{2p})}^{p-1} + \|u_k\|_{L_T^\infty(L^{2p})}^p\|u_j\|_{L_T^\infty(L^{2p})}^{p-2} \Big)\\
&\lesssim& T^{\frac{8p - 2N(p-1)}{8p}} \|{\bf u} - { \bf v}\|_{L_T^{\frac{8p}{N(p-1)}}(L^{2p})} \Big(\|u_k\|_{L_T^\infty(H^2)}^{p-1}\|u_j\|_{L_T^\infty(H^2)}^{p-1}
+ \|u_k\|_{L_T^\infty(H^2)}^p\|u_j\|_{L_T^\infty(H^2)}^{p-2} \Big).\end{aligned}$$ Then $$\begin{aligned}
\displaystyle\sum_{k,j=1}^m\| f_{j,k}({\bf u})-f_{j,k}({\bf v})\|_{L_T^{\frac{8p}{p(8-N) + N}}(L^{\frac{2p}{2p-1}})}
&\lesssim & T^{\frac{8p - 2N(p-1)}{8p}} R^{2p-2}\|{\bf u} - {\bf v}\|_{T}.\end{aligned}$$ It remains to estimate the quantity $$\big\|\Delta \big(f_{j,k}({\bf u}) - f_{j,k}({\bf v})\big)\big\|_{L_T^{\frac{8p}{p(8-N) + N}}(L^{\frac{2p}{2p-1}})}.$$ Write $$\begin{aligned}
\partial_i^2\Big((f_{j,k}({\bf u}) - f_{j,k}({\bf v})\Big)
&=&\partial_i \Big({u}_i (f_{j,k})_i({\bf u}) - {v}_i(f_{j,k})_i({\bf v})\Big)\\
&=&{\bf u}_{ii}(f_{j,k})_i({\bf u}) - {\bf v}_{ii}(f_{j,k})_i({\bf v}) + {u}_i ^2(f_{j,k})_{ii}({\bf u}) - {v}_i ^2(f_{j,k})_{ii}({\bf v})\\
& =&({\bf u} - {\bf v})_{ii}(f_{j,k})_i({\bf u}) + {\bf v}_{ii}\Big((f_{j,k})_i({\bf u}) - (f_{j,k})_i({\bf v})\Big) \\
&+&\Big({u}_i^2 - {v}_i^2\Big)(f_{j,k})_{ii}({\bf u}) + {v}_i^2\Big( (f_{j,k})_{ii}({\bf u}) - f_{ii}({\bf v})\Big).\end{aligned}$$ Thus $$\begin{aligned}
\big\|\Delta \Big(f_{j,k}({\bf u}) - f_{j,k}({\bf v})\Big)\big\|_{L_T^{\frac{8p}{p(8-N) + N}}(L^{\frac{2p}{2p-1}})}&\leq&\big\| \displaystyle\sum_i({\bf u} - {\bf v})_{ii}(f_{j,k})_i({\bf u}) \big\|_{L_T^{\frac{8p}{p(8-N) + N}}(L^{\frac{2p}{2p-1}})}\\& +& \big\| \displaystyle\sum_i {\bf v}_{ii}\Big((f_{j,k})_i({\bf u}) - (f_{j,k})_i({\bf v})\Big)\big\|_{L_T^{\frac{8p}{p(8-N) + N}}(L^{\frac{2p}{2p-1}})} \\&+& \big\| \displaystyle\sum_i\Big(u_i^2 - v_i^2\Big)(f_{j,k})_{ii}({\bf u}) \big\|_{L_T^{\frac{8p}{p(8-N) + N}}(L^{\frac{2p}{2p-1}})} \\ &+ &\big\|\displaystyle\sum_i |{v}_i|^2\Big( (f_{j,k})_{ii}({\bf u}) - (f_{j,k})_{ii}({\bf v})\Big) \big\|_{L_T^{\frac{8p}{p(8-N) + N}}(L^{\frac{2p}{2p-1}})}\\
&\leq&(\mathcal{I}_1) + (\mathcal{I}_2) +(\mathcal{I}_3) + (\mathcal{I}_4).\end{aligned}$$ Via Hölder inequality and Sobolev embedding, we obtain $$\begin{aligned}
(\mathcal{I}_1)
&\lesssim&\|\Delta({\bf u} - {\bf v})\|_{L_T^{\frac{8p}{N(p-1)}}(L^{2p})} \big\| |u_k|^{p-1}|u_j|^{p -1}+ {|u_k|^{p}|u_j|^{p-2}}\big\|_{L_T^{\frac{8p}{8p - 2N(p-1)}}(L^{\frac{p}{p-1}})}\\
&\lesssim&T^{\frac{8p - 2N(p-1)}{8p}} \|\Delta({\bf u} - {\bf v})\|_{L_T^{\frac{8p}{N(p-1)}}(L^{2p})}\big\| |u_k|^{p-1}|u_j|^{p-1} + |u_k|^{p}|u_j|^{p-2} \big\|_{L_T^\infty(L^{\frac{p}{p-1}})} \\
&\lesssim& T^{\frac{8p - 2N(p-1)}{8p}} \|{\bf u} - {\bf v}\|_T\Big(\|u_k\|_{L_T^\infty(L^{2p})}^{p-1}\|u_j\|_{L_T^\infty(L^{2p})}^{p-1}
+ \|u_k\|_{L_T^\infty(L^{2p})}^p\|u_j\|_{L_T^\infty(L^{2p})}^{p-2} \Big)\\
&\lesssim& T^{\frac{8p - 2N(p-1)}{8p}} \|{\bf u} - {\bf v}\|_T\Big(\|u_k\|_{L_T^\infty(H^2)}^{p-1}\|u_j\|_{L_T^\infty(H^2)}^{p-1}
+ \|u_k\|_{L_T^\infty(H^2)}^p\|u_j\|_{L_T^\infty(H^2)}^{p-2} \Big).\end{aligned}$$ In the same way, $$\begin{aligned}
(\mathcal{I}_2)
&\lesssim& \| \Delta {\bf v}\|_{L_T^{\frac{8p}{N(p-1)}}(L^{2p})} \| {\bf u} - {\bf v}\|_{L^\infty_T(L^{2p})}\big\||u_k|^{p-2}|u_j|^{p-1}+|u_k|^{p}|u_j|^{p-3}\big\|_{L_T^{\frac{8p}{8p - 2N(p-1)}}(L^{\frac{2p}{2p-3}})}\\
&\lesssim&T^{\frac{8p - 2N(p-1)}{8p}}\| \Delta {\bf v}\|_{L_T^{\frac{8p}{N(p-1)}}(L^{2p})} \| {\bf u} - {\bf v}\|_{L^\infty_T(L^{2p})}\big\||u_k|^{p-2}|u_j|^{p-1} + |u_k|^{p}|u_j|^{p-3}\big\|_{L_T^\infty(L^{\frac{2p}{2p-3}})}\\
&\lesssim&T^{\frac{8p - 2N(p-1)}{8p}}\| \Delta {\bf v}\|_{L_T^{\frac{8p}{N(p-1)}}(L^{2p})} \| {\bf u} - {\bf v}\|_{L^\infty_T(L^{2p})}\Big(\|u_k\|_{L_T^\infty(L^{2p})}^{p-2}\|u_j\|_{L_T^\infty(L^{2p})}^{p-1}
+ \|u_k\|_{L_T^\infty(L^{2p})}^p\|u_j\|_{L_T^\infty(L^{2p})}^{p-3} \Big)\\
&\lesssim&T^{\frac{8p - 2N(p-1)}{8p}}\| \Delta {\bf v}\|_{L_T^{\frac{8p}{N(p-1)}}(L^{2p})} \| {\bf u} - {\bf v}\|_{L^\infty({H^2})}\Big(\|u_k\|_{L_T^\infty({H^2})}^{p-2}\|u_j\|_{L_T^\infty({H^2})}^{p-1}
+ \|u_k\|_{L_T^\infty({H^2})}^p\|u_j\|_{L_T^\infty({H^2})}^{p-3} \Big).\end{aligned}$$ Arguing as previously, $$\begin{aligned}
(\mathcal{I}_3)
&\lesssim&\big\|\displaystyle\sum_i|{u}_i - { v}_i|\Big( |{u}_i| + |{v}_i|\Big)(f_{j,k})_{ii}({\bf u}) \big\|_{L_T^{\frac{8p}{p(8-N) + N}}(L^{\frac{2p}{2p-1}})}\\
&\lesssim&\sum_i\|{u_i} - {v_i}\|_{L_T^{\frac{8p}{N(p-1)}}(L^{2p})}\| |{u}_i| +|{v}_i|\|_{L_T^\infty(L^{2p})}T^{\frac{N(p-1)}{8p}}\big\||u_k|^{p-2}|u_j|^{p-1} + |u_k|^p|u_j|^{p-3}\big\|_{L_T^{\frac{8p}{8p- 3N(p-1)}}(L^{\frac{2p}{2p -3}})}\\
&\lesssim&\sum_i\|{ u_i} - { v_i}\|_{L_T^{\frac{8p}{N(p-1)}}(L^{2p})}\| |{u}_i| +|{v}_i|\|_{L_T^{\infty}(L^{2p})}T^{\frac{N(p-1)}{8p}}\Big(\|u_k^{p-2}\|_{L_T^\infty(L^{\frac{2p}{p -2}})}\|u_j^{p-1}\|_{L_T^\infty(L^{\frac{2p}{p -1}})} \\
&+& \|u_k^p\|_{L_T^\infty(L^2)}\|u_j^{p-3}\|_{L_T^\infty(L^{\frac{4p}{2p-6}})}\Big)T^{\frac{8p- 3N(p-1)}{8p}}\\
&\lesssim&\|{\bf u} - {\bf v}\|_{L_T^{\frac{8p}{N(p-1)}}(L^{2p})}\|{\bf v}\|_{L_T^\infty(H^2)}\Big(\|u_k\|_{L_T^\infty(H^2)}^{p-2}\|u_j\|_{L_T^\infty(H^2)}^{p-1}+ \|u_k\|_{L_T^\infty(H^2)}^p\|u_j\|_{L_T^\infty(H^2)}^{p-3}\Big)T^{\frac{8p- 2N(p-1)}{8p}}.\end{aligned}$$ In the same way, $$\begin{aligned}
(\mathcal{I}_4)
&\lesssim& \|{\bf u} - {\bf v}\|_{L_T^{\frac{8p}{N(p-1)}}(L^{2p})}\|{\bf v}\|_{L_T^{\infty}(L^{2p})}^2T^{\frac{N(p-1)}{4p}} \big\||u_k|^{p-3}|u_j|^{p-1}+ |u_k|^p |u_j|^{p-4}\big\|_{L_T^{\frac{2p}{2p - N(p-1)}}(L^{\frac{p}{p-2}})} \\
&\lesssim& T^{\frac{2p - N(p-1)}{2p}}\|{\bf u} - {\bf v}\|_{L_T^{\frac{8p}{N(p-1)}}(L^{2p})}\|{\bf v}\|_{L_T^{\infty}(L^{2p})}^2T^{\frac{N(p-1)}{4p}} \big\||u_k|^{p-3}|u_j|^{p-1}+ |u_k|^p |u_j|^{p-4}\big\|_{L_T^\infty(L^{\frac{p}{p-2}})} \\
&\lesssim& T^{\frac{4p - N(p-1)}{4p}}\|{\bf u} - {\bf v}\|_{L_T^{\frac{8p}{N(p-1)}}(L^{2p})}\|{\bf v}\|_{L_T^\infty(H^2)}^2\Big(\|u_k\|_{L_T^\infty(L^{2p})}^{p-3} \|u_j\|_{L_T^\infty(L^{2p})}^{p-1} \\
&+& \|u_k\|_{L_T^\infty(L^{2p})}^{p}\|u_j\|_{L_T^\infty(L^{2p})}^{p-4} \Big) \\
&\lesssim& T^{\frac{4p - N(p-1)}{4p}}\|{\bf u} - {\bf v}\|_{L_T^{\frac{8p}{N(p-1)}}(L^{2p})}\|{\bf v}\|_{L_T^{\infty}(H^2)}^2\Big(\|u_k\|_{L_T^\infty(H^2)}^{p-3} \|u_j\|_{L_T^\infty(H^2)}^{p-1}+\|u_k\|_{L_T^\infty(H^2)}^{p}\|u_j\|_{L_T^\infty(H^2)}^{p-4} \Big).\end{aligned}$$ Thus, for $T>0$ small enough, $\phi$ is a contraction satisfying $$\|\phi({\bf u}) - \phi({\bf v})\|_T\lesssim T^{\frac{4p - N(p-1)}{4p}}R^{2p-2}\|{\bf u} - {\bf v}\|_T .$$ Taking ${\bf v}=0$ in the last inequality yields $$\begin{aligned}
\|\phi({\bf u})\|_T
&\lesssim& T^{\frac{4p - N(p-1)}{4p}}R^{2p-1}+ \|\phi(0)\|_T\\
&\lesssim& T^{\frac{4p - N(p-1)}{4p}}R^{2p-1}+ TR .\end{aligned}$$ Since $1<p\leq p^*$, $\phi$ is a contraction of $ B_T(R)$ for some $R,T>0$ small enough.
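Note that the exponent of $T$ above is nonnegative precisely in this range: for $N>4$, $$\frac{4p-N(p-1)}{4p}\geq0\iff4p\geq N(p-1)\iff p\leq\frac N{N-4}=p^*,$$ with strict positivity when $p<p^*$, so that in the subcritical case the contraction factor is made small by taking $T$ small, while in the critical case $p=p^*$ it reduces to $R^{2p-2}$, which is small for small $R$.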
Uniqueness
----------
In what follows, we prove uniqueness of the solution to the Cauchy problem. Let $T>0$ be a positive time, ${\bf u},{\bf v}\in C_T(H)$ be two solutions to the system and ${\bf w} := {\bf u} - {\bf v}.$ Then $$i\frac{\partial }{\partial t}w_j +\Delta ^2 w_j = \displaystyle \sum_{k=1}^{m}\big( |u_k|^p|u_j|^{p - 2 }u_j - |v_k|^p|v_j|^{p - 2 }v_j\big),\quad w_j(0,.)= 0.$$ Applying the Strichartz estimate with the admissible pair $(q,r) = (\frac{8p}{N(p-1)}, 2p)$, we have $$\begin{aligned}
\|{\bf u} - {\bf v}\|_{(L_T^q(L^r))^{(m)}}\lesssim \displaystyle\sum_{j=1}^{m}\displaystyle\sum_{k=1}^{m}\big\|f_{j,k}({\bf u}) - f_{j,k}({\bf v})\big\|_{L_T^{q^\prime}(L^{r^\prime})}.\end{aligned}$$ Taking $T>0$ small enough, with a continuity argument, we may assume that $$\max_{j=1,...,m}\|u_j\|_{L_T^\infty(H^2)}\leq 1.$$ Using the previous computation with $$(\mathcal{I}) :=\big\|f_{j,k}({\bf u}) - f_{j,k}({\bf v})\big\|_{L_T^{q^\prime}(L^{r^\prime})}= \big\||u_k|^p|u_j|^{p-2}u_j - |v_k|^p|v_j|^{p-2}v_j\big\|_{L_T^{q^\prime}(L^{r^\prime})},$$ we have
(\mathcal{I})&\lesssim&\big\|\Big(|u_k|^{p-1}|u_j|^{p-1} + |u_k|^p|u_j|^{p-2} \Big)|{\bf u} - {\bf v}|\big\|_{L_T^{\frac{8p}{p(8-N) + N}}(L^{\frac{2p}{2p-1}})}\\
&\lesssim&\|{\bf u} - {\bf v}\|_{L_T^{\frac{8p}{p(8-N) + N}}(L^{2p})}\big\| |u_k|^{p-1}|u_j|^{p-1} + |u_k|^p|u_j|^{p-2} \big\|_{L_T^\infty(L^{\frac{p}{p-1}})}\\
&\lesssim&\|{\bf u} - {\bf v}\|_{L_T^{\frac{8p}{p(8-N) + N}}(L^{2p})}\Big(\|u_k\|_{L_T^\infty(L^{2p})}^{p-1} \|u_j\|_{L_T^\infty(L^{2p})}^{p-1} + \|u_k\|_{L_T^\infty(L^{2p})}^{p}\|u_j\|_{L_T^\infty(L^{2p})}^{p-2} \Big)\\
&\lesssim& T^{\frac{(4 - N)p + N}{4 p}}\|{\bf u} - {\bf v}\|_{L_T^{\frac{8p}{N(p - 1)}}(L^{2p})}\Big(\|u_k\|_{L_T^\infty(H^2)}^{p-1} \|u_j\|_{L_T^\infty(H^2)}^{p-1}+ \|u_k\|_{L_T^\infty(H^2)}^{p}\|u_j\|_{L_T^\infty(H^2)}^{p-2} \Big).\end{aligned}$$ Then $$\|{\bf w}\|_{(L_T^q(L^r))^{(m)}}\lesssim T^{\frac{(4 - N)p + N}{4 p}}\|{\bf w} \|_{(L_T^q(L^r))^{(m)}}.$$ Uniqueness follows for small time and then for all time with a translation argument.
Global existence in the subcritical case
----------------------------------------
We prove that the maximal solution of the system is global in the defocusing case. The global existence is a consequence of energy conservation and the previous calculations. Let ${\bf u} \in C([0, T^*), H)$ be the unique maximal solution of the system. We prove that ${\bf u}$ is global. By contradiction, suppose that $T^*<\infty.$ Consider, for $0< s <T^*,$ the problem $$(\mathcal{P}_s)\label{P1}
\left\{
\begin{array}{ll}
i\frac{\partial }{\partial t}v_j +\Delta ^2v_j = \displaystyle \sum_{k=1}^{m} |v_k|^p|v_j|^{p - 2 }v_j;\\
v_j(s,.) = u_j(s,.).
\end{array}
\right.$$ Using the same arguments as in the local existence proof, we can establish the existence of a real $\tau>0$ and a solution ${\bf v} = (v_1,...,v_m)$ to $(\mathcal{P}_s)$ in $C\big([s, s+\tau], H).$ Using the conservation of energy, we see that $\tau$ does not depend on $s.$ Thus, if we let $s$ be close enough to $T^*$ that $T^*< s + \tau,$ this contradicts the maximality of $T^*.$
Global existence in the critical case
=====================================
We establish global existence of a solution to the system in the critical case $p=p^*$ for small data, as claimed in Theorem \[glb\].\
Several norms have to be considered in the analysis of the critical case. Letting $I\subset {\mathbb{R}}$ be a time slab, we define the norms $$\begin{aligned}
\|u\|_{M(I)} &:= &\|\Delta u\|_{L^{\frac{2(N + 4)}{N-4}}(I, L^{\frac{2N(N + 4)}{N^2 + 16}})};\\
\|u\|_{W(I)}& :=&\|\nabla u\|_{L^{\frac{2(N + 4)}{N-4}}(I, L^{\frac{2N(N + 4)}{N^2 -2N + 8}})}; \\
\|u\|_{Z(I)}& :=&\| u\|_{L^{\frac{2(N + 4)}{N-4}}(I, L^{\frac{2(N + 4)}{N - 4}})};\\
\|u\|_{N(I)}&: =&\|\nabla u\|_{L^2(I, L^{\frac{2N}{N+2}})}.\end{aligned}$$ Let $M({\mathbb{R}})$ be the completion of $C_c^\infty({\mathbb{R}}^{N+1})$ with the norm $\|.\|_{M({\mathbb{R}})},$ and $M(I)$ be the set consisting of the restrictions to $I$ of functions in $M({\mathbb{R}}).$ We adopt similar definitions for $W$ and $N.$ An important quantity closely related to the mass and the energy, is the functional $\xi$ defined for ${\bf u}\in H $ by $$\xi({\bf u}) = \displaystyle \sum_{j=1}^m\displaystyle\int_{{\mathbb{R}}^N}|\Delta u_j|^2\,dx.$$ We give an auxiliary result.
\[proposition 1\] Let $4<N\leq6$ and $p= p^*.$ There exists $\delta>0$ such that for any initial data $\Psi \in H$ and any interval $I=[0, T],$ if $$\displaystyle\sum_{j=1}^{m}\|e^{it\Delta^2}\psi_{j}\|_{W(I)}< \delta,$$ then there exists a unique solution ${\bf u}\in C(I, H)$ of the system which satisfies ${\bf u}\in \big(M(I)\cap L^{\frac{2(N+4)}{N}}(I\times {\mathbb{R}}^N)\big)^{(m)}.$ Moreover, $$\begin{gathered}
\displaystyle\sum_{j=1}^{m}\|u_j\|_{W(I)}\leq 2\delta;\\
\displaystyle\sum_{j=1}^{m}\|u_j\|_{M(I)} + \displaystyle\sum_{j=1}^{m}\|u_j\|_{L^\infty(I, H^2)}\leq C(\|\Psi\|_{H^2}+\delta^{\frac{N+4}{N-4}}).\end{gathered}$$ Besides, the solution depends continuously on the initial data in the sense that there exists $\delta_0$ depending on $\delta,$ such that for any $\delta_1\in (0,\delta_0),$ if $\displaystyle\sum_{j=1}^{m}\|\psi_{j} - \varphi_{j}\|_{H^2}\leq \delta_1$ and ${\bf v}$ is the local solution of the system with initial data $\varphi:=(\varphi_{1},...,\varphi_{m}),$ then ${\bf v}$ is defined on $I$ and for any admissible couple $(q,r)$, $$\|{\bf u} - {\bf v}\|_{(L^q(I, L^r))^{(m)}}\leq C\delta_1.$$
The proposition follows from a contraction mapping argument. For ${\bf u}\in( W(I))^{(m)}$, we let $\phi({\bf u})$ be given by $$\phi({\bf u})(t) := T(t){\Psi} -i \displaystyle\sum_{j,k =1}^{m}\displaystyle\int_0^tT(t-s)\Big(|u_k|^{\frac{N}{N-4}}|u_j|^{\frac{8-N}{N-4}}u_j(s)\Big)\,ds.$$ Define the set $$X_{M,\delta} := \{ {\bf u}\in (M(I))^{(m)};\, \displaystyle\sum_{j=1}^{m}\|u_j\|_{W(I)}\leq 2\delta, \, \displaystyle\sum_{j=1}^{m}\|u_j\|_{L^{\frac{2(N+4)}{N}}(I,L^{\frac{2(N+4)}{N}})}\leq 2M\}$$ where $M := C \|\Psi\|_{(L^2)^{(m)}}$ and $\delta>0$ is sufficiently small. Using the Strichartz estimate, we get $$\begin{aligned}
\|\phi({\bf u}) - \phi({\bf v})\|_{\big({L^{\frac{2(N+4)}{N}}(I,L^{\frac{2(N+4)}{N}})}\big)^{(m)}}&\lesssim& \displaystyle\sum_{j,k=1}^{m}\big\|f_{j,k}({\bf u}) - f_{j,k}({\bf v})\big\|_{L^{\frac{2(N+4)}{N+8}}(I,L^{\frac{2(N+4)}{N+8}})}.\end{aligned}$$ Using Hölder inequality and denoting the quantity $ (\mathcal{J}):= \big\|f_{j,k}({\bf u}) - f_{j,k}({\bf v})\big\|_{L^{\frac{2(N+4)}{N+8}}(I,L^{\frac{2(N+4)}{N+8}})}$, we obtain $$\begin{aligned}
(\mathcal{J})
&\lesssim&\big\|\Big(|u_k|^{\frac{4}{N-4}}|u_j|^{\frac{4}{N-4}} + |u_k|^{\frac{N}{N-4}}|u_j|^{\frac{8-N}{N-4}}\Big)|{\bf u} - {\bf v}|\big\|_{L_T^{\frac{2(N+4)}{N+8}}(L^{\frac{2(N+4)}{N+8}})}\\
&\lesssim&\|{\bf u} - {\bf v}\|_{L_T^{\frac{2(N+4)}{N}}(L^{\frac{2(N+4)}{N}})}\Big(\big\||u_k|^{\frac{4}{N-4}}|u_j|^{\frac{4}{N-4}}\big\|_{L_T^{\frac{N+4}{4}}(L^{\frac{N+4}{4}})}+ \big\||u_k|^{\frac{N}{N-4}}|u_j|^{\frac{8-N}{N-4}}\big\|_{L_T^{\frac{N+4}{4}}(L^{\frac{N+4}{4}})}\Big)\\
&\lesssim&\|{\bf u} - {\bf v}\|_{L_T^{\frac{2(N+4)}{N}}(L^{\frac{2(N+4)}{N}})}\Big(\|u_k\|_{L_T^{\frac{2(N+4)}{N- 4}}(L^{\frac{2(N+4)}{N - 4}})}^{\frac{4}{N-4}} \|u_j\|_{L_T^{\frac{2(N+4)}{N - 4}}(L^{\frac{2(N+4)}{N- 4}})}^{\frac{4}{N-4}}\\ &+& \|u_k\| _{L_T^{\frac{2(N+4)}{N - 4}}(L^{\frac{2(N+4)}{N - 4}})} ^{\frac{N}{N-4}}\|u_j\|_{L_T^{\frac{2(N+4)}{N - 4}}(L^{\frac{2(N+4)}{N - 4}})}^{\frac{8-N}{N-4}}\Big).\end{aligned}$$ By Proposition \[injection\], we have the Sobolev embedding $$\| u\|_{L^{\frac{2(N+4)}{N-4}}(I, L^{\frac{2(N + 4)}{N - 4}})}\lesssim \|\nabla u\|_{L^{\frac{2(N + 4)}{N-4}}(I, L^{\frac{2N(N + 4)}{N^2 -2N + 8}})},$$ hence $$\begin{aligned}
(\mathcal{J})
&\lesssim&\|{\bf u} - {\bf v}\|_{L_T^{\frac{2(N+4)}{N}}(L^{\frac{2(N+4)}{N}})}\Big(\|u_k\|_{W(I)}^{\frac{4}{N-4}} \|u_j\|_{W(I)}^{\frac{4}{N-4}}+\|u_k\| _{W(I)} ^{\frac{N}{N-4}}\|u_j\|_{W(I)}^{\frac{8-N}{N-4}}\Big)\\
&\lesssim& \delta^{\frac{8}{N-4}}\|{\bf u} - {\bf v}\|_{L_T^{\frac{2(N+4)}{N}}(L^{\frac{2(N+4)}{N}})}.\end{aligned}$$ Then $$\|\phi({\bf u}) - \phi({\bf v})\|_{\big({L^{\frac{2(N+4)}{N}}(I,L^{\frac{2(N+4)}{N}})}\big)^{(m)}}\lesssim \delta^{\frac{8}{N-4}} \|{\bf u} - {\bf v}\|_{\big({L^{\frac{2(N+4)}{N}}(I,L^{\frac{2(N+4)}{N}})}\big)^{(m)}}.$$ Moreover, taking in the previous inequality ${\bf v=0}$, we get for small $\delta>0$, $$\begin{aligned}
\|\phi({\bf u})\|_{\big({L^{\frac{2(N+4)}{N}}(I,L^{\frac{2(N+4)}{N}})}\big)^{(m)}}
&\lesssim&C\|\Psi\|_{(L^2)^m}+ \delta^{\frac{8}{N-4}} M\\
&\lesssim&(1+ \delta^{\frac{8}{N-4}}) M\\
&\leq&2M.\end{aligned}$$ By a classical Picard argument, there exists a solution ${\bf u}\in L^{\frac{2(N+4)}{N}}(I,L^{\frac{2(N+4)}{N}})$ satisfying $$\|{\bf u}\|_{\big(L^{\frac{2(N+4)}{N}}(I,L^{\frac{2(N+4)}{N}})\big)^{(m)}}\leq 2M.$$ Taking account of the Strichartz estimate, we get $$\begin{aligned}
\| {\bf u}\|_{(M(I))^{(m)}}
&\lesssim& \|\Delta \Psi\|_{({L^2})^{(m)}} +\displaystyle\sum_{j,k=1}^{m} \| \nabla f_{j,k}({\bf u})\|_{L_T^2(L^{\frac{2N}{N +2}})}.\end{aligned}$$ Let $(\mathcal{J}_1):= \| \nabla f_{j,k}({\bf u})\|_{L_T^2(L^{\frac{2N}{N +2}})} $. Using the Hölder inequality and the Sobolev embedding, we obtain $$\begin{aligned}
(\mathcal{J}_1)
&\lesssim& \big\| |\nabla {\bf u}| \Big( |u_k|^{\frac{4}{N-4}}|u_j|^{\frac{4}{N-4}} + |u_k|^{\frac{N}{N-4}}|u_j|^{\frac{8-N}{N-4}}\Big)\big\|_{L_T^2(L^{\frac{2N}{N +2}})}\\
&\lesssim&\|\nabla{\bf u}\|_{L_T^{\frac{2(N+4)}{N - 4}}(L^{\frac{2N(N+4)}{N^2 - 2N +8}})}\big\||u_k|^{\frac{4}{N-4}}|u_j|^{\frac{4}{N-4}} + |u_k|^{\frac{N}{N-4}}|u_j|^{\frac{8-N}{N-4}}\big\|_{L_T^{\frac{N+4}{4}}(L^{\frac{N+4}{4}})}\\
&\lesssim&\|\nabla{\bf u}\|_{L_T^{\frac{2(N+4)}{N - 4}}(L^{\frac{2N(N+4)}{N^2 - 2N +8}})}\Big(\|u_k\|_{L_T^{\frac{2(N+4)}{N- 4}}(L^{\frac{2(N+4)}{N - 4}})}^{\frac{4}{N-4}} \|u_j\|_{L_T^{\frac{2(N+4)}{N - 4}}(L^{\frac{2(N+4)}{N- 4}})}^{\frac{4}{N-4}}\\ &+& \|u_k\| _{L_T^{\frac{2(N+4)}{N - 4}}(L^{\frac{2(N+4)}{N - 4}})} ^{\frac{N}{N-4}}\|u_j\|_{L_T^{\frac{2(N+4)}{N - 4}}(L^{\frac{2(N+4)}{N - 4}})}^{\frac{8-N}{N-4}}\Big)\\
&\lesssim&\|{\bf u}\|_{(W(I))^{(m)}}\Big(\|u_k\|_{W(I)}^{\frac{4}{N-4}} \|u_j\|_{W(I)}^{\frac{4}{N-4}}+\|u_k\| _{W(I)} ^{\frac{N}{N-4}}\|u_j\|_{W(I)}^{\frac{8-N}{N-4}}\Big).\end{aligned}$$ Then $$\begin{aligned}
\|{\bf u}\|_{(M(I))^{(m)}}&\lesssim& \|\Psi\|_H +\displaystyle\sum_{j,k=1}^m\|{\bf u}\|_{(W(I))^{(m)}}\Big(\|u_k\|_{W(I)}^{\frac{4}{N-4}} \|u_j\|_{W(I)}^{\frac{4}{N-4}}+\|u_k\| _{W(I)} ^{\frac{N}{N-4}}\|u_j\|_{W(I)}^{\frac{8-N}{N-4}}\Big) \\
&\lesssim& \|\Psi\|_H + \delta^{\frac{N + 4}{N - 4}}.\end{aligned}$$ By Proposition \[injection\], we have the continuous Sobolev embedding $$W^{2, \frac{2N(N + 4)}{N^2 + 16}} \hookrightarrow W^{1, \frac{2N(N + 4)}{N^2 - 2N + 8}}.$$ So, it follows that $$\label{*}\| {\bf u}\|_{(W(I))^{(m)}}\lesssim\| {\bf u}\|_{(M(I))^{(m)}}.$$ Thanks to Strichartz estimates \[S1\], we have $$\begin{aligned}
\| {\bf u}\|_{(W(I))^{(m)}}
&\lesssim& \delta +\|\int_0^tT(t-s)f_{j,k}(u)\,ds\|_{(W(I))^{(m)}}\\
&\lesssim& \delta +\|\int_0^tT(t-s)f_{j,k}(u)\,ds\|_{(M(I))^{(m)}}\\
&\lesssim& \delta +\| {\bf u}\|_{(W(I))^{(m)}}^{\frac{N + 4}{N - 4}}\end{aligned}$$ so, by Lemma \[Bootstrap\], $$\| {\bf u}\|_{(W(I))^{(m)}}\leq 2\delta.$$ Taking an admissible couple $(q,r)$, we now return to the Lipschitz bound $ (\mathcal{J}_2):=\|{\bf u} - {\bf v}\|_{(L^q(I, L^r))^{(m)}}\leq C\delta_1.$ By the Hölder inequality and the Strichartz estimate, we have [$$\begin{aligned}
(\mathcal{J}_2)&\lesssim& \|\Psi - \varphi\|_{(L^2)^{(m)}} + \displaystyle\sum_{j,k=1}^{m}\|f_{j,k}({\bf u}) - f_{j,k}({\bf v})\|_{L^{\frac{2(N+4)}{N+8}}(I,L^{\frac{2(N+4)}{N+8}})}\\
&\lesssim&\|\Psi - \varphi\|_{(L^2)^{(m)}} + \displaystyle\sum_{j,k=1}^{m}\big\|\Big(|u_k|^{\frac{4}{N-4}}|u_j|^{\frac{4}{N-4}} + |u_k|^{\frac{N}{N-4}}|u_j|^{\frac{8-N}{N-4}}\Big)|{\bf u} - {\bf v}|\big\|_{L^{\frac{2(N+4)}{N+8}}(I,L^{\frac{2(N+4)}{N+8}})}\\
&\lesssim&\|\Psi - \varphi\|_{(L^2)^{(m)}}+ \displaystyle\sum_{j,k=1}^{m}\|{\bf u} - {\bf v}\|_{L^{\frac{2(N+4)}{N}}(I,L^{\frac{2(N+4)}{N}})}\Big(\|u_k\|_{L^{\frac{2(N+4)}{N- 4}}(I,L^{\frac{2(N+4)}{N - 4}})}^{\frac{4}{N-4}} \|u_j\|_{L^{\frac{2(N+4)}{N - 4}}(I,L^{\frac{2(N+4)}{N- 4}})}^{\frac{4}{N-4}}\\ &+& \|u_k\| _{L^{\frac{2(N+4)}{N - 4}}(I,L^{\frac{2(N+4)}{N - 4}})} ^{\frac{N}{N-4}}\|u_j\|_{L^{\frac{2(N+4)}{N - 4}}(I,L^{\frac{2(N+4)}{N - 4}})}^{\frac{8-N}{N-4}}\Big)\\
&\lesssim&\|\Psi - \varphi\|_{(L^2)^{(m)}} + \delta^{\frac{8}{N - 4}}\| {\bf u} - {\bf v}\|_{\big({L^{\frac{2(N+4)}{N}}(I,L^{\frac{2(N+4)}{N}})}\big)^{(m)}}.\end{aligned}$$]{} The proof is completed by taking $\delta$ small enough.
We are ready to prove Theorem \[glb\].
Denote the homogeneous Sobolev space ${\bf H}=(\dot{H}^2)^{(m)}$. Using the previous proposition via , it suffices to prove that $\|{\bf u}\|_{\bf H}$ remains small on the whole interval of existence of ${\bf u}.$ Using conservation of the energy and Sobolev’s inequality, we write $$\begin{aligned}
\|{\bf u}\|_{\bf H}^2&\leq& 2E(\Psi) +\frac{N -4}{N}\displaystyle \sum_{j,k=1}^{m}\displaystyle \int_{{\mathbb{R}}^N} |u_j(x,t)|^{\frac{N}{N - 4}} |u_k(x,t)|^{\frac{N}{N - 4}}\,dx \\
&\leq& C\big( \xi(\Psi) + \xi(\Psi)^{\frac{N}{N - 4}}\big) + C \big(\displaystyle\sum_{j=1}^{m}\|\Delta u_j\|_2^2\big)^{\frac{N}{N - 4}}\\
&\leq& C\big( \xi(\Psi) + \xi(\Psi)^{\frac{N}{N - 4}}\big) +C\|{\bf u}\|_{\bf H} ^{\frac{2N}{N - 4}}.\end{aligned}$$ So by Lemma \[Bootstrap\], if $\xi(\Psi)$ is sufficiently small, then ${\bf u}$ stays small in the ${\bf H}$ norm.
Scattering
==========
For any time slab $I,$ take the Strichartz space $$S(I):=C(I, H^2)\cap{L^{\frac{8p}{N(p -1)}}(I, W^{2, 2p})}$$ endowed with the complete norm $$\|u\|_{S(I)}:= \|u\|_{L^\infty(I, H^2)} + \|u\|_{L^{\frac{8p}{N(p -1)}}(I, W^{2, 2p})}.$$ The first intermediate result is the following.
For any time slab $I,$ we have $$\| {\bf u}(t) - e^{it\Delta^2}\Psi\|_{(S(I))^{(m)}}\lesssim\|{\bf u}\|_{\big(L^\infty(I, L^{2p})\big)^{(m)}}^{\frac{2pN(p-1)-8p}{N(p-1)}}\|{\bf u}\|_{\big(L^{\frac{8p}{N(p-1)}}(I, W^{2,2p})\big)^{(m)}}^{\frac{8p - N(p-1)}{N(p-1)}}.$$
Using Strichartz estimate, we have $$\| {\bf u}(t) - e^{it\Delta^2}\Psi\|_{(S(I))^{(m)}}\lesssim \displaystyle\sum_{j,k=1}^m \|f_{j,k}({\bf u})\|_{L^{\frac{8p}{p(8 - N) + N}}(I, W^{2,\frac{2p}{2p -1}})}.$$ Thanks to Hölder inequality, we get $$\begin{aligned}
\|f_{j,k}\|_{L^\frac{2p}{2p -1}}&\lesssim&\big\||u_k|^p|u_j|^{p - 1}\big\|_{L^\frac{2p}{2p -1}}
\lesssim\|u_k\|_{L^{2p}}^p\|u_j\|_{L^{2p}}^{p -1}.\end{aligned}$$ Letting $\mu =\theta:= \frac{8p - N(p - 1)}{2N(p - 1) }$, we get $p - \theta ={\frac{N(p -1)(2p +1) - 8p}{2N(p -1)}}$ and $ p - 1 -\mu={\frac{N(p -1)(2p -1) - 8p}{2N(p -1)}}$. Moreover, $$\begin{aligned}
\|f_{j,k}\|_{L^{\frac{8p}{p(8 - N) + N}}(I, L^{\frac{2p}{2p -1}})}&\lesssim& \big\|\|u_k\|_{L^{2p}}^p\|u_j\|_{L^{2p}}^{p -1} \big\|_{L^{\frac{8p}{p(8 - N) + N}}(I)}\\
&\lesssim&\|u_k\|_{L^\infty(I,L^{2p})}^{p -\theta}\|u_j\|_{L^\infty(I,L^{2p})}^{p -1-\mu}\big\|\|u_k\|_{L^{2p}}^\theta\|u_j\|_{L^{2p}}^{\mu} \big\|_{L^{\frac{8p}{p(8 - N) + N}}(I)}\\
&\lesssim&\|u_k\|_{L^\infty(I, L^{2p})}^{p -\theta}\|u_j\|_{L^\infty(I, L^{2p})}^{p -1-\mu}\|u_k\|_{L^{\frac{8p}{N(p -1)}}(I, L^{2p})}^{\theta}\|u_j\|_{L^{\frac{8p}{N(p -1)}}(I, L^{2p})}^{\mu}.\end{aligned}$$ Then, $$\begin{aligned}
\displaystyle\sum_{j,k=1}^m\|f_{j,k}\|_{L^{\frac{8p}{p(8 - N) + N}}(I, L^{\frac{2p}{2p -1}})}&\lesssim&\displaystyle\sum_{j,k=1}^m\|u_k\|_{L^\infty(I, L^{2p})}^{p -\theta}\|u_j\|_{L^\infty(I, L^{2p})}^{p -1-\mu}\|u_k\|_{L^{\frac{8p}{N(p -1)}}(I, L^{2p})}^{\theta}\|u_j\|_{L^{\frac{8p}{N(p -1)}}(I, L^{2p})}^{\mu}\nonumber\\
&\lesssim&\displaystyle\sum_{k=1}^m\|u_k\|_{L^\infty(I, L^{2p})}^{p -\theta}\|u_k\|_{L^{\frac{8p}{N(p -1)}}(I, L^{2p})}^{\theta}\displaystyle\sum_{j=1}^m\|u_j\|_{L^\infty(I, L^{2p})}^{p -1 - \mu}\|u_j\|_{L^{\frac{8p}{N(p -1)}}(I, L^{2p})}^{\mu}\nonumber\\
&\lesssim&\Big(\displaystyle\sum_{k=1}^m\big(\|u_k\|_{L^\infty(I, L^{2p})}^{p -\theta}\big)^2\Big)^{\frac{1}{2}}\Big(\displaystyle\sum_{k=1}^m\big( \|u_k\|_{L^{\frac{8p}{N(p -1)}}(I, L^{2p})}^{\theta}\big)^2\Big)^{\frac{1}{2}}\nonumber\\
&\times&\Big(\displaystyle\sum_{j=1}^m\big(\|u_j\|_{L^\infty(I, L^{2p})}^{p - 1- \mu}\big)^2\Big)^{\frac{1}{2}}\Big(\displaystyle\sum_{j=1}^m\big( \|u_j\|_{L^{\frac{8p}{N(p -1)}}(I, L^{2p})}^{\mu}\big)^2\Big)^{\frac{1}{2}}\nonumber\\
&\lesssim&\|{\bf u}\|_{\big(L^\infty(I, L^{2p})\big)^{(m)}}^{\frac{2pN(p -1) - 8p}{N(p - 1)}}\|{\bf u}\|_{\big(L^{\frac{8p}{N(p -1)}}(I, L^{2p})\big)^{(m)}}^{\frac{8p - N(p - 1)}{N(p -1)}}\label{sct1}.\end{aligned}$$ It remains to estimate the quantity $(\mathcal{I}):=\|\Delta (f_{j,k}({\bf u}))\|_{{L^{\frac{8p}{p(8 - N) + N}}(I, L^{\frac{2p}{2p -1}})}}.$ Write $$\begin{aligned}
(\mathcal{I})&\lesssim&
\sum_{i=1}^m \|\Delta{\bf u} (f_{j,k})_i({\bf u})\|_{{L^{\frac{8p}{p(8 - N) + N}}(I, L^{\frac{2p}{2p -1}})}} + \||\nabla {\bf u}|^2(f_{j,k})_{ii}({\bf u})\|_{{L^{\frac{8p}{p(8 - N) + N}}(I, L^{\frac{2p}{2p -1}})}}\\
&\lesssim& (\mathcal{I}_1) + (\mathcal{I}_2) .\end{aligned}$$ Using Hölder inequality, we obtain $$\begin{aligned}
\|\Delta{\bf u} (f_{j,k})_i({\bf u})\|_{L^{\frac{2p}{2p -1}}}&\lesssim&\big\| \Delta{\bf u}\big(|u_k|^{p-1}|u_j|^{p-1} + |u_k|^p|u_j|^{p-2}\big)\big\|_{L^{\frac{2p}{2p -1}}}\\
&\lesssim&\|\Delta{\bf u}\|_{(L^{2p})^m}\Big(\big\||u_k|^{p-1}|u_j|^{p - 1}\big\|_{L^\frac{p}{p - 1}} + \big\||u_k|^{p }|u_j|^{p - 2}\big\|_{L^\frac{p}{p - 1}} \Big)\\
&\lesssim&\|\Delta{\bf u}\|_{(L^{2p})^m}\Big(\|u_k\|_{L^{2p}}^{p-1} \|u_j\|_{L^{2p}}^{p-1} + \|u_k\|_{L^{2p}}^{p}\|u_j\|_{L^{2p}}^{p-2} \Big).\end{aligned}$$ Letting $\theta = \mu =\alpha = \beta =:\frac{4p - N(p -1)}{N(p - 1)},$ we get $p -1 -\theta=\frac{N(p -1)p - 4p}{N(p - 1)}$ and $$\begin{aligned}
(\mathcal{I}_1)
&\lesssim& \Big\|\|\Delta{\bf u}\|_{(L^{2p})^m}\Big(\|u_k\|_{L^{2p}}^{p-1} \|u_j\|_{L^{2p}}^{p-1} + \|u_k\|_{L^{2p}}^{p}\|u_j\|_{L^{2p}}^{p-2} \Big)\Big\|_{L^{\frac{8p}{p(8 - N) +N}}}\\
&\lesssim&\|\Delta {\bf u}\|_{\big(L^{\frac{8p}{N(p -1)}}(I, L^{2p})\big)^{(m)}}\Big(\big\|\|u_k\|_{L^{2p}}^{p-1} \|u_j\|_{L^{2p}}^{p-1}\big\|_{L^{\frac{8p}{8p - 2N(p - 1)}}} + \big\|\|u_k\|_{L^{2p}}^{p}\|u_j\|_{L^{2p}}^{p-2}\big\|_{L^{\frac{8p}{8p - 2N(p - 1)}}}\Big)\\
&\lesssim&\|\Delta {\bf u}\|_{\big(L^{\frac{8p}{N(p -1)}}(I, L^{2p})\big)^{(m)}}\Big( \|u_k\|_{L^\infty(I, L^{2p})}^{p -1 -\theta}\|u_j\|_{L^\infty(I, L^{2p})}^{p -1- \mu}\big\|\|u_k\|_{L^{2p}}^{\theta} \|u_j\|_{L^{2p}}^{\mu}\big\|_{L^{\frac{8p}{8p - 2N(p - 1)}}} \\
&+&\|u_k\|_{L^\infty(I, L^{2p})}^{p - \alpha}\|u_j\|_{L^\infty(I, L^{2p})}^{p -2-\beta}\big\|\|u_k\|_{L^{2p}}^{\alpha} \|u_j\|_{L^{2p}}^{\beta}\big\|_{L^{\frac{8p}{8p - 2N(p - 1)}}} \Big)\\
&\lesssim& \|\Delta {\bf u}\|_{\big(L^{\frac{8p}{N(p -1)}}(I, L^{2p})\big)^{(m)}}\Big( \|u_k\|_{L^\infty(I, L^{2p})}^{p -1 -\theta}\|u_j\|_{L^\infty(I, L^{2p})}^{p -1- \mu}\|u_k\|_{L^{\frac{8p}{N(p -1)}}(I, L^{2p})}^{\theta} \|u_j\|_{L^{\frac{8p}{N(p -1)}}(I, L^{2p})}^{\mu} \\
&+&\|u_k\|_{L^\infty(I, L^{2p})}^{p - \alpha}\|u_j\|_{L^\infty(I, L^{2p})}^{p -2-\beta}\|u_k\|_{L^{\frac{8p}{N(p -1)}}(I, L^{2p})}^{\alpha} \|u_j\|_{L^{\frac{8p}{N(p -1)}}(I, L^{2p})}^{\beta} \Big).\end{aligned}$$ Then, with $\mathcal{A}:= \displaystyle\sum_{i,j,k=1}^m\|\Delta{\bf u} (f_{j,k})_i({\bf u})\|_{{L^{\frac{8p}{p(8 - N) + N}}(I, L^{\frac{2p}{2p -1}})}},$ we have $$\begin{aligned}
\mathcal{A}
&\lesssim&\|\Delta {\bf u}\|_{\big(L^{\frac{8p}{N(p -1)}}(I, L^{2p})\big)^{(m)}}\Big(\displaystyle\sum_{k=1}^m \|u_k\|_{L^\infty(I, L^{2p})}^{p -1 -\theta}\|u_k\|_{L^{\frac{8p}{N(p -1)}}(I, L^{2p})}^{\theta}\nonumber\\
&\times&\displaystyle\sum_{j=1}^m \|u_j\|_{L^\infty(I, L^{2p})}^{p -1- \mu}\|u_j\|_{L^{\frac{8p}{N(p -1)}}(I, L^{2p})}^{\mu} +\displaystyle\sum_{k=1}^m\|u_k\|_{L^\infty(I, L^{2p})}^{p - \alpha}\|u_k\|_{L^{\frac{8p}{N(p -1)}}(I, L^{2p})}^{\alpha}\nonumber\\
&\times&\displaystyle\sum_{j=1}^m \|u_j\|_{L^\infty(I, L^{2p})}^{p -2-\beta}\|u_j\|_{L^{\frac{8p}{N(p -1)}}(I, L^{2p})}^{\beta} \Big)\nonumber.\end{aligned}$$ This implies that [$$\begin{aligned}
\mathcal{A}
&\lesssim&\|\Delta {\bf u}\|_{\big(L^{\frac{8p}{N(p -1)}}(I, L^{2p})\big)^{(m)}}\Big(\Big(\displaystyle\sum_{k=1}^m \big(\|u_k\|_{L^\infty(I, L^{2p})}^{p -1 -\theta}\big)^2\Big)^{\frac{1}{2}}\Big(\displaystyle\sum_{k=1}^m\big(\|u_k\|_{L^{\frac{8p}{N(p -1)}}(I, L^{2p})}^{\theta}\big)^2\Big)^{\frac{1}{2}}\nonumber\\
&\times&\Big(\sum_{j=1}^m \big(\|u_j\|_{L^\infty(I, L^{2p})}^{p -1- \mu}\big)^2\Big)^{\frac{1}{2}}\Big(\sum_{j=1}^m\big(\|u_j\|_{L^{\frac{8p}{N(p -1)}}(I, L^{2p})}^{\mu}\big)^2\Big)^{\frac{1}{2}}\nonumber\\
&+&\Big(\displaystyle\sum_{k=1}^m\big(\|u_k\|_{L^\infty(I, L^{2p})}^{p - \alpha}\big)^2\Big)^{\frac{1}{2}}\Big(\displaystyle\sum_{k=1}^m\big(\|u_k\|_{L^{\frac{8p}{N(p -1)}}(I, L^{2p})}^{\alpha}\big)^2\Big)^{\frac{1}{2}}\nonumber\\
&\times&\Big(\displaystyle\sum_{j=1}^m\big(\|u_j\|_{L^\infty(I, L^{2p})}^{p -2-\beta} \big)^2\Big)^{\frac{1}{2}}\Big(\displaystyle\sum_{j=1}^m\big(\|u_j\|_{L^{\frac{8p}{N(p -1)}}(I, L^{2p})}^{\beta}\big)^2\Big)^{\frac{1}{2}} \Big)\nonumber\\
&\lesssim&\|{\bf u}\|_{\big(L^{\frac{8p}{N(p -1)}}(I, W^{2,2p})\big)^{(m)}}\|{\bf u}\|_{\big({L^\infty(I, L^{2p})}\big)^{(m)}}^{\frac{2pN(p-1)-8p}{N(p-1)}}\|{\bf u}\|_{\big({L^{\frac{8p}{N(p -1)}}(I, L^{2p})}\big)^{(m)}}^{\frac{8p-2N(p-1)}{N(p-1)}}.\label{sct2}\end{aligned}$$]{} Similarly, with $ \mathcal{B}:=\displaystyle\sum_{j,k=1}^m\||\nabla {\bf u}|^2(f_{j,k})_{ii}({\bf u})\|_{L^{\frac{8p}{p(8-N)+N}}(I, L^{\frac{2p}{2p - 1}})},$ we obtain $$\begin{aligned}
\mathcal{B}
&\lesssim&\displaystyle\sum_{j,k=1}^m\big\||\nabla {\bf u}|^2\Big(|u_k|^{p-2}|u_j|^{p-1} + |u_k|^{p}|u_j|^{p-3}\Big)\big\|_{L^{\frac{8p}{p(8-N)+N}}(I, L^{\frac{2p}{2p - 1}})}\nonumber\\
&\lesssim&\|{\bf u}\|_{\big({L^{\frac{8p}{N(p-1)}}}(I, W^{2,2p})\big)^{(m)}}^2
\|{\bf u}\|_{\big({L^\infty(I, L^{2p})}\big)^{(m)}}^{\frac{2pN(p-1)-8p}{N(p-1)}}\|{\bf u}\|_{\big({L^{\frac{8p}{N(p-1)}}}(I, L^{2p})\big)^{(m)}}^{\frac{8p-3N(p-1)}{N(p-1)}}\label{sct3}.\end{aligned}$$ Finally, thanks to --, it follows that $$\|{\bf u}(t) - e^{it\Delta^2}\Psi\|_{(S(I))^{(m)}}\lesssim\|{\bf u}\|_{\big(L^\infty(I, L^{2p})\big)^{(m)}}^{\frac{2pN(p-1)-8p}{N(p-1)}}\|{\bf u}\|_{\big(L^{\frac{8p}{N(p-1)}}(I, W^{2,2p})\big)^{(m)}}^{\frac{8p - N(p-1)}{N(p-1)}}.$$
The next auxiliary result concerns the decay of the solution.
\[t1\] For any $2<r<\frac{2N}{N - 4},$ we have $$\displaystyle\lim_{t\to \infty}\|{\bf u}(t)\|_{(L^r)^{(m)}}= 0.$$
Let $\chi \in C^\infty_0({\mathbb{R}}^N)$ be a cut-off function and $\varphi_n:=(\varphi_1^n,...,\varphi_m^n)$ be a sequence in $H$ satisfying $\displaystyle\sup_{n}\|\varphi_n\|_{H}<\infty$ and $$\varphi_n\rightharpoonup \varphi := (\varphi_1,...,\varphi_m)\in H.$$ Let ${\bf u}_n:=(u_1^n,...,u_m^n)\; (\mbox{respectively}\; {\bf u}:=(u_1,...,u_m))$ be the solution in $C({\mathbb{R}}, H)$ to with initial data $\varphi_n\, (\mbox{respectively}\; \varphi).$ In what follows, we prove a claim.\
[**Claim.**]{}\
For every $\epsilon>0,$ there exist $T_\epsilon>0$ and $ n_\epsilon\in {\mathbb{N}}$ such that $$\label{chi} \|\chi({\bf u}_n - {\bf u})\|_{(L_{T_\epsilon}^\infty (L^2))^{(m)}}<\epsilon \quad \mbox{for all}\; n>n_\epsilon.$$ In fact, we introduce the functions ${\bf v}_n:= \chi {\bf u}_n$ and ${\bf v} :=\chi {\bf u}.$ We compute, $v_j^n(0) = \chi \varphi_j^n$ and $$\begin{aligned}
i\partial_t v_j^n + \Delta^2 v_j^n &=& \Delta^2\chi u_j^n + 2 \nabla \Delta\chi \nabla u_j^n + \Delta\chi\Delta u_j^n + 2 \nabla \chi \nabla\Delta u_j^n \\
&+&2\big(\nabla\Delta\chi\nabla u_j^n + \nabla\chi\nabla\Delta u_j^n + 2\displaystyle\sum_{i=1}^N \nabla\partial_i \chi\nabla\partial_i u_j^n\big) + \chi\big(\displaystyle\sum_{k=1}^m|u_k^n|^p|u_j^n|^{p - 2}u_j^n\big).\end{aligned}$$ Similarly, $v_j(0)=\chi\phi_j$ and $$\begin{aligned}
i\partial_t v_j + \Delta^2 v_j &=& \Delta^2\chi u_j + 2 \nabla \Delta\chi \nabla u_j + \Delta\chi\Delta u_j + 2 \nabla \chi \nabla\Delta u_j \\
&+&2\big(\nabla\Delta\chi\nabla u_j + \nabla\chi\nabla\Delta u_j + 2\displaystyle\sum_{i=1}^N \nabla\partial_i \chi\nabla\partial_i u_j\big) + \chi\big(\displaystyle\sum_{k=1}^m|u_k|^p|u_j|^{p - 2}u_j\big).\end{aligned}$$ Denoting ${\bf w}_n:= {\bf v}_n - {\bf v}$ and ${\bf z}_n:= {\bf u}_n - {\bf u},$ we have $$\begin{aligned}
i\partial_t w_j^n + \Delta^2 w_j^n &=& \Delta^2\chi z_j^n+ 4 \nabla \Delta\chi \nabla z_j^n + \Delta\chi\Delta z_j^n + 4 \nabla \chi \nabla\Delta z_j^n \\
&+& 4\displaystyle\sum_{i=1}^N \nabla\partial_i \chi\nabla\partial_i z_j^n + \chi\big(\displaystyle\sum_{k=1}^m|u_k^n|^p|u_j^n|^{p - 2}u_j^n - \displaystyle\sum_{k=1}^m|u_k|^p|u_j|^{p - 2}u_j\big).\end{aligned}$$ By Strichartz estimate, we obtain $$\begin{aligned}
\|{\bf w}_n\|_{\big(L_T^\infty(L^2) \cap L^{\frac{8p}{N(p-1)}}_T(L^{2p})\big)^{(m)}}&\lesssim& \|\chi(\varphi_n - \varphi)\|_{(L^2)^{(m)}} + \|\Delta^2\chi {\bf z}_n\|_{(L^1_T(L^2))^{(m)}}+ 4 \|\nabla \Delta\chi \nabla {\bf z}_n\|_{(L^1_T(L^2))^{(m)}}\\
&+& 4 \|\nabla \chi \nabla\Delta {\bf z}_n\|_{(L^1(L^2))^{(m)}} + 4\| \nabla\partial_i \chi\nabla\partial_i {\bf z}_n\|_{(L^1(L^2))^{(m)}} \\
&+&\displaystyle\sum_{j,k=1}^m\big\|\chi\big(|u_k^n|^p|u_j^n|^{p - 2}u_j^n - |u_k|^p|u_j|^{p - 2}u_j\big)\big\|_{L^{\frac{8p}{p(8-N) + N}}_T(L^{\frac{2p}{2p-1}})}.\end{aligned}$$ Thanks to the Rellich Theorem, up to subsequence extraction, we have $$\epsilon:=\|\chi(\varphi_n - \varphi)\|\longrightarrow0\quad\mbox{as}\quad n\longrightarrow\infty.$$ Moreover, by the conservation laws via properties of $\chi$, $$\begin{aligned}
\mathcal{I}_1
&:=&\|\Delta^2\chi {\bf z}_n\|_{(L^1_T(L^2))^{(m)}}+ 4 \|\nabla \Delta\chi \nabla {\bf z}_n\|_{(L^1_T(L^2))^{(m)}}+ 4 \|\nabla \chi \nabla\Delta {\bf z}_n\|_{(L^1_T(L^2))^{(m)}} + 4\| \nabla\partial_i \chi\nabla\partial_i {\bf z}_n\|_{(L^1_T(L^2))^{(m)}}\\
&\lesssim&\| {\bf z}_n\|_{(L^1_T(L^2))^{(m)}}+ \| \nabla {\bf z}_n\|_{(L^1_T(L^2))^{(m)}}+ \| \nabla\Delta{\bf z}_n\|_{(L^1_T(L^2))^{(m)}} + \|\nabla\partial_i {\bf z}_n\|_{(L^1_T(L^2))^{(m)}}\\
&\lesssim& CT,\end{aligned}$$ where $$C:= \|{\bf u}\|_{(L^\infty({\mathbb{R}},H^2))^{(m)}} + \|{\bf u}_n\|_{(L^\infty({\mathbb{R}},H^2))^{(m)}} .$$ Arguing as previously, we have $$\begin{aligned}
\mathcal{I}_2&:=&\|\chi(|u_k^n|^p|u_j^n|^{p-2}u_j^n - |u_k|^p|u_j|^{p-2}u_j)\|_{L^{\frac{8p}{p(8 - N) + N}}_T(L^{\frac{2p}{2p -1}})}\\
&\lesssim&\|\chi(|u_k^n|^{p -1}|u_j^n|^{p-1} - |u_k|^p|u_j|^{p-2})|{\bf u}_n - {\bf u}|\|_{L^{\frac{8p}{p(8 - N) + N}}_T(L^{\frac{2p}{2p -1}})}\\
&\lesssim&\|\chi({\bf u}_n - {\bf u})\|_{L^{\frac{8p}{p(8 - N) + N}}_T((L^{2p})^{(m)})}\Big( \|u_k^n\|_{L^\infty_T(L^{2p})} ^{p-1}\|u_j^n\|_{L^\infty_T(L^{2p})} ^{p-1} + \|u_k\|_{L^\infty_T(L^{2p})} ^{p}\|u_j\|_{L^\infty_T(L^{2p})} ^{p-2}\Big)\\
&\lesssim&T^{\frac{8p - 2N(p-1)}{8p}}\|{\bf w}_n \|_{L^{\frac{8p}{N(p -1)}}_T((L^{2p})^{(m)})}\Big( \|u_k^n\|_{L^\infty_T(H^2)} ^{p-1}\|u_j^n\|_{L^\infty_T(H^2)} ^{p-1} + \|u_k\|_{L^\infty_T(H^2)} ^{p}\|u_j\|_{L^\infty_T(H^2)} ^{p-2}\Big)\\
&\lesssim&T^{\frac{8p - 2N(p-1)}{8p}}\|{\bf w}_n \|_{L^{\frac{8p}{N(p -1)}}_T((L^{2p})^{(m)})}\Big( \|u_k^n\|_{L^\infty_T(H^2)} ^{2(p-1)} + \|u_j^n\|_{L^\infty_T(H^2)} ^{2(p-1)} + \|u_k\|_{L^\infty_T(H^2)} ^{2p} + \|u_j\|_{L^\infty_T(H^2)} ^{2(p-2)}\Big)\\
&\lesssim&T^{\frac{8p - 2N(p-1)}{8p}}\|{\bf w}_n \|_{L^{\frac{8p}{N(p -1)}}_T((L^{2p})^{(m)})}.\end{aligned}$$ As a consequence $$\begin{aligned}
\|{\bf w}_n\|_{\big(L_T^\infty(L^2) \cap L^{\frac{8p}{N(p-1)}}_T(L^{2p})\big)^{(m)}}
&\lesssim& \epsilon + CT + T^{\frac{8p - 2N(p-1)}{8p}}\|{\bf w}_n \|_{L^{\frac{8p}{N(p -1)}}((L^{2p})^{(m)})}\\
&\lesssim& \frac{\epsilon + T}{1 - T^{\frac{8p - 2N(p-1)}{8p}}}.\end{aligned}$$ The claim is proved.\
By an interpolation argument it is sufficient to prove the decay for $r:= 2 +\frac{4}{N}.$ We recall the following Gagliardo-Nirenberg inequality $$\label{GN}
\|u_j(t)\|_{2 + \frac{4}{N}}^{2 + \frac{4}{N}}\leq C \|u_j(t)\|_{H^2}^2
\Big(\displaystyle\sup_x \|u_j(t)\|_{L^2(Q_1(x))}\Big)^{\frac{4}{N}},$$ where $Q_a(x)$ denotes the cube centered at $x$ whose edge has length $a$. We proceed by contradiction. Assume that there exist a sequence $(t_n)$ of positive real numbers and $\epsilon >0$ such that $\displaystyle\lim_{n\to \infty}t_n =\infty$ and $$\label{IN}\|u_j(t_n)\|_{L^{2 + \frac{4}{N}}}>\epsilon\quad \mbox{for all}\; n\in {\mathbb{N}}.$$ By and , there exist a sequence $(x_n)$ in ${\mathbb{R}}^N$ and a positive real number denoted also by $\epsilon>0$ such that $$\label{IN1}\|u_j(t_n)\|_{L^2(Q_1(x_n))}\geq\epsilon,\quad \mbox{for all}\; n\in {\mathbb{N}}.$$ Let $\phi_j^n(x):=u_j(t_n,x +x_n).$ Using the conservation laws, we obtain $$\sup_n\|\phi_j^n\|_{H^2}<\infty.$$ Then, up to a subsequence extraction, there exists $\phi_j\in H^2$ such that $\phi_j^n$ converges weakly to $\phi_j$ in $H^2.$ By the Rellich Theorem, we have $$\displaystyle\lim_{n\to \infty}\|\phi_j^n - \phi_j\|_{L^2(Q_1(0))}=0.$$ Moreover, thanks to , we have $\|\phi_j^n\|_{L^2(Q_1(0))}\geq \epsilon.$ So, we obtain $$\|\phi_j\|_{L^2(Q_1(0))}\geq \epsilon.$$ We denote by $\bar{u}_j\in C({\mathbb{R}}, H^2)$ the solution of with data $\phi_j$ and ${u}_j^n\in C({\mathbb{R}}, H^2)$ the solution of with data $\phi_j^n.$ Take a cut-off function $\chi \in C_0^\infty({\mathbb{R}}^N)$ which satisfies $0\leq \chi\leq1,\; \chi=1$ on $Q_1(0)$ and $supp(\chi)\subset Q_2(0).$ Using a continuity argument, there exists $T>0$ such that $$\displaystyle\inf_{t\in[0, T]}\|\chi \bar{u}_j(t) \|_{L^2({\mathbb{R}}^N)}\geq \frac{\epsilon}{2}.$$ Now, taking account of the claim , there is a positive time, denoted also by $T$, and $n_\epsilon\in {\mathbb{N}}$ such that $$\|\chi(u_j^n - \bar{u}_j)\|_{L_T^\infty(L^2)}\leq \frac{\epsilon}{4}\quad \mbox{for all}\; n\geq n_\epsilon.$$ Hence, for all $t\in [0, T]$ and $n\geq n_\epsilon,$ $$\|\chi u_j^n(t)\|_{L^2}\geq \|\chi \bar{u}_j(t)\|_{L^2} - \|\chi(u_j^n - \bar{u}_j)(t)\|_{L^2}\geq \frac{\epsilon}{4}.$$ Using a uniqueness argument, it follows that $u^n_j(t,x)=u_j(t+t_n,x+x_n)$. Moreover, by the properties of $\chi$ and the last inequality, for all $t\in[0, T]$ and $n\geq n_\epsilon,$ $$\|u_j(t+t_n)\|_{L^2(Q_2(x_n))}\geq \frac{\epsilon}{4}.$$ This implies that $$\|u_j(t)\|_{L^2(Q_2(x_n))}\geq \frac{\epsilon}{4},\quad \mbox{for all}\; t\in [t_n, t_n + T]\;\mbox{and all}\; n\geq n_\epsilon.$$ Moreover, as $\displaystyle\lim_{n\to \infty}t_n=\infty,$ we can suppose that $t_{n +1}- t_n>T$ for $n\geq n_\epsilon.$ Therefore, thanks to Morawetz estimates , we get for $N>5,$ the contradiction $$\begin{aligned}
1 &\gtrsim&\displaystyle\int_0^\infty\displaystyle\int_{{\mathbb{R}}^N\times{\mathbb{R}}^N}\frac{|u_j(t,x)|^2|u_j(t,y)|^2}{|x - y|^5}\,dxdydt\\
&\gtrsim&\displaystyle\sum_n\displaystyle\int_{t_n}^{t_{n} +T}\displaystyle\int_{Q_2(x_n)\times Q_2(x_n)}|u_j(t,x)|^2|u_j(t,y)|^2\,dxdydt\\
&\gtrsim& \displaystyle\sum_nT\big(\frac{\epsilon}{4}\big)^4 = \infty.\end{aligned}$$ Using , for $N=5$, write $$\begin{aligned}
1
&\gtrsim&\int_0^{\infty}\|u_j(t)\|_{L^4({\mathbb{R}}^5)}^4dt\\
&\gtrsim&\sum_n\int_{t_n}^{t_n+T}\|u_j(t)\|_{L^4(Q_2(x_n))}^4dt\\
&\gtrsim&\sum_n\int_{t_n}^{t_n+T}\|u_j(t)\|_{L^2(Q_2(x_n))}^4dt\\
&\gtrsim&\sum_n(\frac\varepsilon4)^4T=\infty.\end{aligned}$$ This completes the proof of Lemma \[t1\].\
Finally, we are ready to prove scattering. By the two previous lemmas we have $$\|{\bf u}\|_{(S(t,\infty))^{(m)}}\lesssim \|\Psi\|_{H} + \epsilon(t) \|{\bf u}\|_{(S(t,\infty))^{(m)}}^{\frac{8p - N(p-1)}{N(p-1)}},$$ where $ \epsilon(t)\to 0 \; \mbox{as}\; t\to \infty.$ It follows from Lemma \[Bootstrap\] that $${\bf u} \in (S({\mathbb{R}}))^{(m)}.$$ Now, let ${\bf v}(t)= e^{-it\Delta^2}{\bf u}(t).$ Taking account of the Duhamel formula, $${\bf v}(t)= \Psi + i\displaystyle\sum_{j,k=1}^m\displaystyle\int_0^t e^{-is\Delta^2}\big(|u_k|^p|u_j|^{p-2}u_j(s) \big)\, ds.$$ Thanks to , and , $$f_{j,k}({\bf u})\in L^{\frac{8p}{p(8-N)+N}}({\mathbb{R}}, W^{2, \frac{2p}{2p -1}}),$$ so, applying the Strichartz estimate, we get for $0<t<\tau,$ $$\begin{aligned}
\|{\bf v}(t) - {\bf v}(\tau)\|_{H}
&\lesssim&\displaystyle\sum_{j,k=1}^m\big\||u_k|^p|u_j|^{p-2}u_j \big\|_{L^{\frac{8p}{p(8-N)+N}}((t,\tau), W^{2, \frac{2p}{2p -1}})}\stackrel{t,\tau\rightarrow\infty}{\longrightarrow}0.\end{aligned}$$ Taking ${\bf u}_\pm:=\lim_{t\rightarrow\pm\infty}{\bf v}(t)$, we get $$\lim_{t\rightarrow\pm\infty}\|{\bf u}(t)-e^{it\Delta^2}{\bf u}_{\pm}\|_{H^2}=0.$$ Scattering is proved.
Appendix
========
Blow-up criterion
-----------------
We give a useful criterion for global existence in the critical case.
Let $p= \frac{N}{N-4}$ and ${\bf u} \in C([0, T), H)$ be a solution of satisfying $\| {\bf u}\|_{(Z([0, T]))^{(m)}}<+\infty.$ Then, there exists $K:= K ( \| \Psi\|_{H},\, \| {\bf u}\|_{(Z([0, T]))^{(m)}}),$ such that $$\label{G}
\| {\bf u}\|_{\big({L^{\frac{2(N+4)}{N}}([0, T], L^{\frac{2(N+4)}{N}})}\big)^{(m)}} + \| {\bf u}\|_{\big({L^\infty([0, T], {\bf H})}\big)^{(m)}}+ \| {\bf u}\|_{(M([0, T]))^{(m)}}\leq K$$ and ${\bf u}$ can be extended to a solution $ \tilde{{\bf u}} \in C([0, T^\prime), H)$ of for some $T^\prime > T.$
Let $\eta>0$ be a small real number and $ M:=\| {\bf u}\|_{(Z([0, T]))^{(m)}}$. The first step is to establish . In order to do so, we subdivide $[0, T]$ into $n$ slabs $I_j$ such that $$n \sim (1 + \frac{M}{\eta})^{\frac{2(N+4)}{N - 4}}\quad\mbox{and}\quad \| {\bf u}\|_{(Z(I_j))^{(m)}} \leq \eta.$$ Denote $ (\mathcal{A}):=\| {\bf u}\|_{(M([t_j, t]))^{(m)}}$ and $ I_j = [t_j, t_{j+1}]$. For $t\in I_j,$ by the Strichartz estimate and arguing as previously, $$\begin{aligned}
(\mathcal{A})- \|{\bf u}(t_j)\|_{\bf H}
&\lesssim& \| \nabla f_{j,k}({\bf u})\|_{\big(L^2([t_j, t],L^{\frac{2N}{N +2}})\big)^{(m)}}\\
&\lesssim&\displaystyle\sum_{j,k=1}^{m}\|\nabla{\bf u}\|_{L^{\frac{2(N+4)}{N - 4}}([t_j, t],L^{\frac{2N(N+4)}{N^2 - 2N +8}})}\Big(\|u_k\|_{L^{\frac{2(N+4)}{N- 4}}([t_j, t],L^{\frac{2(N+4)}{N - 4}})}^{\frac{4}{N-4}} \|u_j\|_{L^{\frac{2(N+4)}{N - 4}}([t_j, t],L^{\frac{2(N+4)}{N- 4}})}^{\frac{4}{N-4}}\\ &+& \|u_k\| _{L^{\frac{2(N+4)}{N - 4}}([t_j, t],L^{\frac{2(N+4)}{N - 4}})} ^{\frac{N}{N-4}}\|u_j\|_{L^{\frac{2(N+4)}{N - 4}}([t_j, t],L^{\frac{2(N+4)}{N - 4}})}^{\frac{8-N}{N-4}}\Big)\\
&\lesssim& \|{\bf u}\|_{(W([t_j, t]))^{(m)}}\|{\bf u}\|_{(Z([t_j, t]))^{(m)}}^{\frac{8}{N-4}}\\
&\lesssim& \|{\bf u}\|_{(M([t_j, t]))^{(m)}}\|{\bf u}\|_{(Z([t_j, t]))^{(m)}}^{\frac{8}{N-4}}\lesssim \eta^{\frac{8}{N-4}}\|{\bf u}\|_{(M([t_j, t]))^{(m)}}.\end{aligned}$$ Take $( \mathcal{B}):=\|{\bf u}\|_{\big({L^{\frac{2(N+4)}{N}}([t_j, t],L^{\frac{2(N+4)}{N}})}\big)^{(m)}}$. Applying Strichartz estimates, we get $$\begin{aligned}
(\mathcal{B})-C\|{\bf u}(t_j)\|_{(L^2)^{(m)}}
&\leq& C\displaystyle\sum_{j,k=1}^{m}\||u_k|^{\frac{N}{N - 4}}|u_j|^{\frac{8 - N}{N - 4}}u_j\|_{L^{\frac{2(N + 4)}{N + 8}}([t_j, t], L^{\frac{2(N + 4)}{N + 8}})}\\
&\leq& C\displaystyle\sum_{j,k=1}^{m}\displaystyle\big\||u_k|^{\frac{N}{N - 4}}|u_j|^{\frac{8 - N}{N - 4}}\big\| _{L^{\frac{N + 4}{N }}([t_j, t], L^{\frac{N + 4}{N }})}\|u_j\|_{L^{\frac{2(N + 4)}{N }}([t_j, t], L^{\frac{2(N + 4)}{N}})}\\
&\leq& C\displaystyle\sum_{j,k=1}^{m}\|u_k\| _{L^{\frac{2(N + 4)}{N - 4 }}([t_j, t], L^{\frac{2(N + 4)}{N - 4}})}^{\frac{N}{N - 4}} \|u_j\| _{L^{\frac{2(N + 4)}{N - 4 }}([t_j, t], L^{\frac{2(N + 4)}{N - 4}})}^{\frac{8 - N}{N - 4}}\|u_j\|_{L^{\frac{2(N + 4)}{N }}([t_j, t], L^{\frac{2(N + 4)}{N}})}\\
&\leq& C \|{\bf u}\| _{\big(L^{\frac{2(N + 4)}{N - 4 }}([t_j, t], L^{\frac{2(N + 4)}{N - 4}})\big)^{(m)}}^{\frac{8 }{N - 4}}\|{\bf u}\|_{\big(L^{\frac{2(N + 4)}{N }}([t_j, t], L^{\frac{2(N + 4)}{N}})\big)^{(m)}}\\
&\leq& C \|{\bf u}\| _{(Z([t_j, t]))^{(m)}}^{\frac{8 }{N - 4}}\|{\bf u}\|_{\big(L^{\frac{2(N + 4)}{N }}([t_j, t], L^{\frac{2(N + 4)}{N}})\big)^{(m)}}\\
&\leq& C \eta^{\frac{8 }{N - 4}}\|{\bf u}\|_{\big(L^{\frac{2(N + 4)}{N }}([t_j, t], L^{\frac{2(N + 4)}{N}})\big)^{(m)}}.\end{aligned}$$ If $\eta$ is sufficiently small, conservation of the mass yields $$\|{\bf u}\|_{\big({L^{\frac{2(N+4)}{N}}([t_j, t],L^{\frac{2(N+4)}{N}})}\big)^{(m)}} \leq C\|\Psi\|_{(L^2)^{(m)}}$$ and $$\| {\bf u}\|_{(M([t_j, t]))^{(m)}} \leq C \|{\bf u}(t_j)\|_{\bf H}.$$ Applying Strichartz estimates again yields $$\| {\bf u}\|_{\big( L^\infty([t_j, t], {\bf H})\big)^{(m)}} \leq C \|{\bf u}(t_j)\|_{\bf H} .$$ In particular, $\|{\bf u}(t_{j +1})\|_{\bf H} \leq C \|{\bf u}(t_j)\|_{\bf H}$. Finally, $$\| {\bf u}\|_{\big( L^\infty([t_j, t], {\bf H})\big)^{(m)}}+\| {\bf u}\|_{(M([t_j, t]))^{(m)}} \leq2 C^n\|\Psi\|_{\bf H}<+\infty.$$ The first step is done. Choosing $t_0\in I_n,$ Duhamel’s formula gives $$\begin{aligned}
{\bf u}(t) = e^{i(t - t_0)\Delta^2}{\bf u}(t_0) - i \displaystyle\sum_{j,k=1}^{m}\int_{t_0}^te^{i(t - s)\Delta^2}\Big(|u_k|^{\frac{N}{N - 4}}|u_j|^{\frac{8 - N}{N - 4}}u_j(s)\Big)\,ds.\end{aligned}$$ Thanks to Sobolev inequality and Strichartz estimate, $$\begin{aligned}
\|e^{i(t - t_0)\Delta^2}{\bf u}(t_0)\|_{(W([t_0, t]))^{m}}
&\leq &\|{\bf u}\|_{(W([t_0, t]))^{m}} +C \displaystyle\sum_{j,k=1}^{m}\big\||u_k|^{\frac{N}{N - 4}}|u_j|^{\frac{8 - N}{N - 4}}u_j \big\|_{N([t_0, t])}\\
&\leq &\|{\bf u}\|_{(W([t_0, t]))^{m}} +C\|{\bf u}\|_{(W([t_0, t]))^{m}}^{\frac{N + 4}{N - 4}}.\end{aligned}$$ Dominated convergence ensures that $\|{\bf u}\|_{(W([t_0, T]))^{m}}$ can be made arbitrarily small as $t_0\to T$; then $$\|e^{i(t - t_0)\Delta^2}{\bf u}(t_0)\|_{(W([t_0, T]))^{m}}\leq \delta,$$ where $\delta$ is as in Proposition \[proposition 1\]. In particular, we can find $t_1\in (0, T)$ and $T^{\prime }>T$ such that $$\|e^{i(t - t_0)\Delta^2}{\bf u}(t_0)\|_{(W([t_1, T^{\prime}]))^{m}}\leq \delta.$$ Now, it follows from Proposition \[proposition 1\] that there exists ${\bf v}\in C([t_1, T'], H)$ such that ${\bf v}$ solves with $ p = \frac{N}{N -4}$ and ${\bf u}(t_1) = {\bf v}(t_1).$ By uniqueness, ${\bf u} = {\bf v}$ in $[t_1, T)$ and ${\bf u}$ can be extended to $[0, T'].$
Morawetz estimate
------------------
In what follows we give a classical proof, inspired by [@Colliander; @Miao; @2], of Morawetz estimates. Let ${\bf u}:=(u_1,...,u_m)\in H$ be a solution to $$i\partial_t u_j +\Delta^2 u_j+ \displaystyle\sum_{k=1}^{m}a_{jk}|u_k|^p|u_j|^{p-2}u_j=0$$ in $N_1$-spatial dimensions and ${\bf v}:=(v_1,...,v_m)\in H$ be a solution to $$i\partial_t v_j +\Delta^2 v_j+ \displaystyle\sum_{k=1}^{m}a_{jk}|v_k|^p|v_j|^{p-2}v_j =0$$ in $N_2$-spatial dimensions. Define the tensor product ${\bf w}:= ({\bf u}\otimes{\bf v})(t,z)$ for $z$ in $${\mathbb{R}}^{N_1 +N_2}:= \{ (x,y)\quad\mbox{s.t.}\quad x\in {\mathbb{R}}^{N_1}, y\in {\mathbb{R}}^{N_2}\}$$ by the formula $$({\bf u}\otimes{\bf v})(t,z) = {\bf u}(t,x){\bf v}(t,y) .$$ Denote $F({\bf u}):= \displaystyle\sum_{k=1}^{m}a_{jk}|u_k|^p|u_j|^{p-2}u_j.$ A direct computation shows that ${\bf w}:=(w_1,...,w_m)= {\bf u}\otimes{\bf v}$ solves the equation $$\label{tensor1}
i\partial_t w_j +\Delta^2 w_j+ F({\bf u})\otimes v_j + F({\bf v})\otimes u_j:=i\partial_t w_j +\Delta^2 w_j+ h=0$$ where $\Delta^2:= \Delta_x^2 + \Delta_y^2.$ Define the Morawetz action corresponding to ${\bf w}$ by $$\begin{aligned}
M_a^{\otimes_2}
&:=& 2\displaystyle\sum_{j=1}^m\displaystyle\int_{{\mathbb{R}}^{N_1}\times {\mathbb{R}}^{N_2}}\nabla a(z).\Im(\overline{u_j\otimes v_j(z)}\nabla (u_j\otimes v_j)(z))\,dz\\
&=& 2\displaystyle\int_{{\mathbb{R}}^{N_1}\times {\mathbb{R}}^{N_2}}\nabla a(z).\Im({\bf \bar{w}}(z)\nabla {\bf w}(z))\,dz,\end{aligned}$$ where $\nabla:=(\nabla_x,\nabla_y).$ It follows from equation that $$\begin{gathered}
\Im(\partial_t \bar{w}_j\partial_i w_j) =\Re (-i\partial_t \bar{w}_j\partial_i w_j)= - \Re \big((\Delta^2 \bar{w}_j +\displaystyle\sum_{k=1}^{m}a_{jk}|\bar{u}_k|^p|\bar{u}_j|^{p-2}\bar{u}_j \bar{v}_j +\displaystyle\sum_{k=1}^{m}a_{jk}|\bar{v}_k|^p|\bar{v}_j|^{p-2}\bar{v}_j \bar{u}_j)\partial_i w_j\big);\\
\Im( \bar{w}_j\partial_i\partial_t w_j) =\Re (-i \bar{w}_j\partial_i\partial_t w_j)=\Re \big(\partial_i(\Delta^2 w_j +\displaystyle\sum_{k=1}^{m}a_{jk}|u_k|^p|u_j|^{p-2}u_j v_j +\displaystyle\sum_{k=1}^{m}a_{jk}|v_k|^p|v_j|^{p-2}v_j u_j) \bar{w}_j\big). \end{gathered}$$ Moreover, denoting the quantity $ \big\{ h,w_j\big\}_p:=\Re \big(h\nabla\bar{w}_j - w_j\nabla\bar{h} \big)$, we compute $$\begin{aligned}
\big\{ h,w_j\big\}_p^i
& = &\partial_i\Big(\displaystyle\sum_{k=1}^{m}a_{jk}|\bar{u}_k|^p|\bar{u}_j|^{p-2}\bar{u}_j \bar{v}_j +\displaystyle\sum_{k=1}^{m}a_{jk}|\bar{v}_k|^p|\bar{v}_j|^{p-2}\bar{v}_j \bar{u}_j\Big) w_j\\
& -& \Big(\displaystyle\sum_{k=1}^{m}a_{jk}|u_k|^p|u_j|^{p-2}u_j v_j +\displaystyle\sum_{k=1}^{m}a_{jk}|v_k|^p|v_j|^{p-2}v_j u_j\Big) \partial_i\bar{w}_j.\end{aligned}$$ It follows that $$\begin{aligned}
\partial_t M_a^{\otimes_2}&=& 2\displaystyle\sum_{j=1}^m\displaystyle\int_{{\mathbb{R}}^{N_1}\times {\mathbb{R}}^{N_2}}\partial_i a\Re \big(\bar{w}_j\partial_i \Delta^2 w_j - \partial_iw_j \Delta^2\bar{w}_j\big)\,dz - 2\displaystyle\sum_{j=1}^m \displaystyle\int_{{\mathbb{R}}^{N_1}\times {\mathbb{R}}^{N_2}}\partial_i a \big\{h,w_j\big\}_p^i\,dz\\
&=&-2\displaystyle\sum_{j=1}^m\displaystyle\int_{{\mathbb{R}}^{N_1}\times {\mathbb{R}}^{N_2}}\big[\Delta a\Re(\bar{w}_j \Delta^2 w_j) +2\Re( \partial_i a\partial_i\bar{w}_j \Delta^2 w_j)\big] \,dz - \displaystyle\sum_{j=1}^m2 \displaystyle\int_{{\mathbb{R}}^{N_1}\times {\mathbb{R}}^{N_2}}\partial_i a \big\{h,w_j\big\}_p^i\,dz\\
&:=& \mathcal{I}_1+ \mathcal{I}_2 - 2\displaystyle\sum_{j=1}^m \displaystyle\int_{{\mathbb{R}}^{N_1}\times {\mathbb{R}}^{N_2}}\partial_i a \big\{h,w_j\big\}_p^i\,dz.\end{aligned}$$ Similar computations done in [@Miao; @2], give $$\begin{aligned}
\mathcal{I}_1 + \mathcal{I}_2 &=& 2 \displaystyle\sum_{j=1}^m \Re \displaystyle\int_{{\mathbb{R}}^{N_1}\times {\mathbb{R}}^{N_2}}\Big\{2\big(\partial_{ik}^x\Delta_x a \partial_i\bar{u}_j\partial_k u_j|v_j|^2 + \partial_{ik}^y\Delta_y a \partial_i\bar{v}_j\partial_k v_j|u_j|^2 \big) - \frac{1}{2}(\Delta_x^3 + \Delta_y^3) a |u_jv_j|^2 \\
&+& \big(\Delta_x^2 a |\nabla u_j|^2|v_j|^2 + \Delta_y^2 a |\nabla v_j|^2|u_j|^2\big) - 4\big(\partial_{ik}^x a \partial_{i_1 i}\bar{u}_j\partial_{i_1k}u_j|v_j|^2 +\partial_{ik}^y a \partial_{i_1 i}\bar{v}_j\partial_{i_1k}v_j|u_j|^2 \big)\Big\}\,dz.\end{aligned}$$ Now we take $a(z):=a(x,y) = |x - y|$ where $(x,y)\in{\mathbb{R}}^{N}\times {\mathbb{R}}^{N}.$ Then calculation done in [@Miao; @2], yield $$\partial_tM_a^{\otimes_2}\leq2 \displaystyle\sum_{j=1}^m \Re \displaystyle\int_{{\mathbb{R}}^{N_1}\times {\mathbb{R}}^{N_2}}\Big(- \frac{1}{2}(\Delta_x^3 + \Delta_y^3)a|u_jv_j|^2 - 2\partial_i a\{h,w_j\}_p^i \Big)\,dz.$$ Hence, we get $$\displaystyle\sum_{j=1}^m \displaystyle\int_0^T \displaystyle\int_{{\mathbb{R}}^{N_1}\times {\mathbb{R}}^{N_2}}\Big((\Delta_x^3 + \Delta_y^3)a|u_jv_j|^2 +4\partial_i a\{h,w_j\}_p^i \Big)\,dz\,dt \leq\displaystyle\sup_{[0,T]}|M_a^{\otimes_2}|.$$ Then $$\begin{aligned}
& &\displaystyle\sum_{j=1}^m \displaystyle\int_0^T \displaystyle\int_{{\mathbb{R}}^{N_1}\times {\mathbb{R}}^{N_2}}\Big((\Delta_x^3 + \Delta_y^3)a|u_jv_j|^2 +4(1 - \frac{1}{p})\Delta_x a\displaystyle\sum_{k=1}^ma_{jk}|u_k|^p|u_j|^p|v_j|^2 \\
&+& 4(1 - \frac{1}{p})\Delta_y a\displaystyle\sum_{k=1}^ma_{jk}|v_k|^p|v_j|^p|u_j|^2 \Big)\,dz\,dt \leq\displaystyle\sup_{[0,T]}|M_a^{\otimes_2}|.\end{aligned}$$ Taking account of the equalities $\Delta_x a = \Delta_ya=(N-1)|x-y|^{-1}$ and $$\Delta_x^3 a = \Delta_y^3a =\left\{\begin{array}{ll}
C\delta(x-y),&\mbox{if}\quad N=5;\\
3(N -1)(N - 3)(N - 5)|x-y|^{-5},&\mbox{if}\quad N> 5,
\end{array}\right.$$ when $N =5$, choosing $u_j = v_j,$ we get $$\displaystyle\sum_{j=1}^m \displaystyle\int_0^T \displaystyle\int_{{\mathbb{R}}^5} |u_j(x,t)|^4\,dx\,dt\lesssim \displaystyle\sup_{[0,T]}|M_a^{\otimes_2}|.$$ If $N>5$, it follows that $$\displaystyle\sum_{j=1}^m \displaystyle\int_0^T\displaystyle\int_{{\mathbb{R}}^{N}\times {\mathbb{R}}^{N}}\frac{|u_j(x,t)|^2|u_j(y,t)|^2}{|x - y|^5}\,dx\,dy\,dt\lesssim \displaystyle\sup_{[0,T]}|M_a^{\otimes_2}|.$$ This finishes the proof.
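For the reader's convenience, the identities for $a(x,y)=|x-y|$ used above can be checked with the elementary radial formula $\Delta |x|^{k} = k(k+N-2)|x|^{k-2}$ (valid away from the origin); this verification is ours and is not part of the original argument. Applying it three times in the $x$ variable, with $y$ fixed, gives $$\Delta_x a = (N-1)|x-y|^{-1},\qquad \Delta_x^2 a = -(N-1)(N-3)|x-y|^{-3},\qquad \Delta_x^3 a = 3(N-1)(N-3)(N-5)|x-y|^{-5}$$ for $N>5$, while for $N=5$ the last step acts on $|x-y|^{-3}=|x-y|^{2-N}$, whose Laplacian is a multiple of the Dirac mass; this accounts for the term $C\delta(x-y)$.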
[99]{}
, [*Sobolev spaces*]{}, Academic. New York, (1975).
, [*Dispersion estimates for fourth order Schrödinger equations*]{}, C. R. Math. Acad. Sci. Sér. 1. Vol. 330, 87-92, (2000).
, [*Semilinear Schrödinger Equations*]{}, Courant Lect. Notes Math., Vol. 10, Univ. Pierre et Marie Curie, (2003).
, [*An Introduction to Nonlinear Schrödinger Equations*]{}, Textos Met. Mat. 26, Instituto de Matematica UFRJ, Rio de Janeiro, (1996).
, [*Tensor products and correlation estimates with applications to nonlinear Schrödinger equations*]{}, Comm. Pure Appl. Math. Vol. 62, no.1, 920-968, (2009).
, [*Self-focusing with fourth-order dispersion*]{}, SIAM J. Appl. Math, Vol. 62, no. 4, 1437-1462, (2002).
, [*The global Cauchy problem and scattering of solutions for nonlinear Schrödinger equations in $H^s$*]{}, Differential Integral Equations, Vol. 15, no. 9, 1073-1083, (2002).
, [*Global well-posedness, scattering and blow-up for the energy-critical focusing non-linear wave equation*]{}, Acta Math. Vol. 201, no. 2, 147-212, (2008).
,[*Global wellposedness, scattering and blow up for the energy critical, focusing, nonlinear Schrödinger equation in the radial case*]{}. Invent. Math. Vol. 166, 645-675, (2006).
,[*Transmission of stationary nonlinear optical pulses in dispersive dielectric fibers II. Normal dispersion*]{}, Appl. Phys. Lett. Vol. 23, 171-172, (1973).
, [*Stabilization of soliton instabilities by higher-order dispersion: fourth-order nonlinear Schrödinger equation*]{}, Phys. Rev. E. Vol. 53, no. 2, 1336-1339, (1996).
, [*Stability of soliton described by nonlinear Schrödinger type equations with higher-order dispersion*]{}, Phys D. Vol. 144, 194-210, (2000).
,[*Time decay for the nonlinear Beam equation*]{}, Meth. Appl. Anal. Vol. 7, 479-488, (2000).
, [*Symétrie et compacité dans les espaces de Sobolev*]{}, J. Funct. Anal. Vol. 49, no. 3, 315-334, (1982).
, [*Global existence for a coupled system of Schrödinger equations with power-type nonlinearities*]{}, J. Math. Phys. Vol. 54, 011503, (2013).
, [*Global well-posedness and scattering for the focusing energy-critical nonlinear Schrödinger equations of fourth-order in the radial case*]{}, J. D. E. Vol. 246, 3715-3749, (2009).
, [*Global well-posedness and scattering for the defocusing energy-critical nonlinear Schrödinger equations of fourth-order in dimensions $d \geq 9$*]{}, J. D. E. Vol. 251, no. 12, 15, 3381-3402, (2011).
, [*Scattering theory below energy for the cubic fourth-order Schrödinger equation*]{}, published online in J. Math. Nachr. doi: 10.1002/mana.201400012.
, [*Global well-posedness for energy critical fourth-order Schrödinger equations in the radial case*]{}, Dyn. Partial Differ. Equ. Vol. 4, no. 3, 197-225, (2007).
, [*The cubic fourth-order Schrödinger equation*]{}, J. Funct. Anal., Vol. 256, 2473-2517, (2009).
, [*The focusing energy-critical fourth-order Schrödinger equation with radial data*]{}, Discrete Contin. Dyn. Syst. Ser. A, Vol. 24, no. 4, 1275-1292, (2009).
, [*A note on fourth-order nonlinear Schrödinger equation*]{}, Ann. Funct. Anal. Vol. 6, no. 1, 249-266, (2015).
, [*Nonlinear wave equations*]{}, CBMS. Lect. A. M. S, [73]{}, (1989).
, [*The best Sobolev constant*]{}, Appl. Anal. Vol. 47, 227-239, (1992).
, [*Nonlinear dispersive equations: local and global analysis*]{}, CBMS Reg. Ser. Math. (2006).
, [*Stability of periodic waves of finite amplitude on the surface of a deep fluid*]{}. Sov. Phys. J. Appl. Mech. Tech. Phys. Vol. 4, 190-194, (1968).
---
abstract: 'We find evidence for the semileptonic baryonic decay $B^-\to p\bar p\ell^-\bar\nu_\ell$ ($\ell=e,\mu$), based on a data sample of 772 million $B\bar B$ pairs collected at the $\Upsilon(4S)$ resonance with the Belle detector at the KEKB asymmetric-energy electron-positron collider. A neural-network based hadronic $B$-meson tagging method is used in this study. The branching fraction of $B^-\to p\bar p\ell^-\bar\nu_\ell$ is measured to be $(5.8^{+2.4}_{-2.1}\textrm{(stat.)}\pm 0.9\textrm{(syst.)})\times 10^{-6}$ with a significance of 3.2$\sigma$, where lepton universality is assumed. We also estimate the corresponding upper limit: $\mathcal{B}(B^-\to p\bar p\ell^-\bar\nu_\ell) < 9.6\times 10^{-6}$ at the $90\%$ confidence level. This measurement helps constrain the baryonic transition form factor in $B$ decays.'
author:
- 'K.-J. Tien'
- 'M.-Z. Wang'
- 'I. Adachi'
- 'H. Aihara'
- 'D. M. Asner'
- 'V. Aulchenko'
- 'T. Aushev'
- 'A. M. Bakich'
- 'A. Bala'
- 'B. Bhuyan'
- 'A. Bozek'
- 'M. Bračko'
- 'T. E. Browder'
- 'P. Chang'
- 'V. Chekelian'
- 'A. Chen'
- 'P. Chen'
- 'B. G. Cheon'
- 'K. Chilikin'
- 'R. Chistov'
- 'I.-S. Cho'
- 'K. Cho'
- 'V. Chobanova'
- 'Y. Choi'
- 'D. Cinabro'
- 'J. Dalseno'
- 'M. Danilov'
- 'Z. Doležal'
- 'Z. Drásal'
- 'D. Dutta'
- 'S. Eidelman'
- 'H. Farhat'
- 'J. E. Fast'
- 'T. Ferber'
- 'V. Gaur'
- 'S. Ganguly'
- 'R. Gillard'
- 'Y. M. Goh'
- 'B. Golob'
- 'J. Haba'
- 'H. Hayashii'
- 'Y. Horii'
- 'Y. Hoshi'
- 'W.-S. Hou'
- 'Y. B. Hsiung'
- 'M. Huschle'
- 'H. J. Hyun'
- 'T. Iijima'
- 'A. Ishikawa'
- 'R. Itoh'
- 'Y. Iwasaki'
- 'T. Julius'
- 'D. H. Kah'
- 'J. H. Kang'
- 'E. Kato'
- 'T. Kawasaki'
- 'H. Kichimi'
- 'C. Kiesling'
- 'D. Y. Kim'
- 'H. J. Kim'
- 'J. B. Kim'
- 'J. H. Kim'
- 'Y. J. Kim'
- 'J. Klucar'
- 'B. R. Ko'
- 'P. Kodyš'
- 'S. Korpar'
- 'P. Križan'
- 'P. Krokovny'
- 'B. Kronenbitter'
- 'T. Kuhr'
- 'T. Kumita'
- 'A. Kuzmin'
- 'Y.-J. Kwon'
- 'S.-H. Lee'
- 'J. Li'
- 'Y. Li'
- 'J. Libby'
- 'C. Liu'
- 'Y. Liu'
- 'D. Liventsev'
- 'P. Lukin'
- 'K. Miyabayashi'
- 'H. Miyata'
- 'G. B. Mohanty'
- 'A. Moll'
- 'R. Mussa'
- 'E. Nakano'
- 'M. Nakao'
- 'Z. Natkaniec'
- 'M. Nayak'
- 'E. Nedelkovska'
- 'C. Ng'
- 'N. K. Nisar'
- 'S. Nishida'
- 'O. Nitoh'
- 'S. Ogawa'
- 'S. Okuno'
- 'S. L. Olsen'
- 'W. Ostrowicz'
- 'C. Oswald'
- 'C. W. Park'
- 'H. Park'
- 'H. K. Park'
- 'T. K. Pedlar'
- 'R. Pestotnik'
- 'M. Petrič'
- 'L. E. Piilonen'
- 'M. Ritter'
- 'M. Röhrken'
- 'A. Rostomyan'
- 'H. Sahoo'
- 'T. Saito'
- 'Y. Sakai'
- 'S. Sandilya'
- 'D. Santel'
- 'L. Santelj'
- 'T. Sanuki'
- 'Y. Sato'
- 'V. Savinov'
- 'O. Schneider'
- 'G. Schnell'
- 'C. Schwanda'
- 'D. Semmler'
- 'K. Senyo'
- 'O. Seon'
- 'M. E. Sevior'
- 'M. Shapkin'
- 'C. P. Shen'
- 'T.-A. Shibata'
- 'J.-G. Shiu'
- 'A. Sibidanov'
- 'Y.-S. Sohn'
- 'A. Sokolov'
- 'S. Stanič'
- 'M. Starič'
- 'M. Steder'
- 'M. Sumihama'
- 'T. Sumiyoshi'
- 'K. Tanida'
- 'G. Tatishvili'
- 'Y. Teramoto'
- 'M. Uchida'
- 'S. Uehara'
- 'T. Uglov'
- 'Y. Unno'
- 'S. Uno'
- 'P. Urquijo'
- 'S. E. Vahsen'
- 'C. Van Hulse'
- 'P. Vanhoefer'
- 'G. Varner'
- 'K. E. Varvell'
- 'A. Vinokurova'
- 'V. Vorobyev'
- 'M. N. Wagner'
- 'C. H. Wang'
- 'P. Wang'
- 'M. Watanabe'
- 'Y. Watanabe'
- 'K. M. Williams'
- 'E. Won'
- 'J. Yamaoka'
- 'Y. Yamashita'
- 'S. Yashchenko'
- 'Z. P. Zhang'
- 'V. Zhilich'
- 'V. Zhulanov'
- 'A. Zupanc'
title: |
\
Evidence for Semileptonic $ {B^- \to p\bar{p}\ell^- \bar{\nu}_\ell}$ Decays
---
Measurements of charmless semileptonic $B$ decays play an important role in the determination of the fundamental parameter $|V_{ub}|$ of the Cabibbo-Kobayashi-Maskawa (CKM) matrix [@ref:KM] in the Standard Model. However, all previous efforts have mainly been focused on $\bar{B} \to M l \bar{\nu_l}$ [@ref:Mlnu; @ref:CC], where $M$ stands for a charmless meson. There are no observations to date of semileptonic $B$ decays with a charmless baryon-antibaryon pair in the final state. The most stringent upper limit to date has been set by the CLEO collaboration with $\mathcal{B}(B^-\to p\bar p e^-\bar\nu_e) < 5.2\times 10^{-3}$ [@ref:CLEO]. The corresponding decay diagram is shown in Fig. \[fig:ppln\_fd\].
A theoretical investigation based on phenomenological arguments suggests that the branching fraction of exclusive semileptonic $B$ decays to a baryon-antibaryon pair is only about $10^{-5} - 10^{-6}$ [@ref:HouSoni], so sensitivity to such decays with the current data sets accumulated at the $B$-factories is marginal. In fact, even semileptonic $B$ decays with charmed baryons in the final state have not been observed to date. The <span style="font-variant:small-caps;">BaBar</span> collaboration only reported an upper limit of $\mathcal{B}(\bar{B} \to \Lambda^+_c X \ell^-\bar\nu_\ell)/\mathcal{B}(\bar{B} \to \Lambda^+_c X) < 3.5\%$ [@ref:BABAR] at the 90% confidence level (C.L.).
A recent paper [@ref:GengHsiao] used experimental inputs \[8-12\] to estimate the $B$ to baryon-antibaryon transition form factors and predicted an unexpectedly large branching fraction, $(1.04 \pm 0.38) \times 10^{-4}$, for $B^-\to p\bar p\ell^-\bar\nu_\ell$ ($\ell=e,\mu$). This is at the same level as many known $\bar{B} \to M l \bar{\nu}_l$ decays such as $\bar{B} \to \pi l \bar{\nu}_l$ [@ref:PDG]. This meta-analysis triggered our direct experimental search, whose results could be used to improve the theoretical understanding of baryonic $B$ decays. If the predicted branching fraction is confirmed, many similar decays will become accessible and, with improved theoretical understanding, they will be helpful in determining $|V_{ub}|$ in the future.
![Leading diagram for $B^-\to p\bar p\ell^-\bar\nu_\ell$ decay.[]{data-label="fig:ppln_fd"}](ppln_fd.eps){width="35.00000%"}
In this study, we use the full data set of $772 \times 10^6\ B\bar{B}$ pairs collected at the $\Upsilon(4S)$ resonance with the Belle detector [@ref:Belle] at the KEKB asymmetric-energy $e^+e^-$ (3.5 on 8 GeV) collider [@ref:KEKB]. The Belle detector is a large-solid-angle magnetic spectrometer that consists of a silicon vertex detector (SVD), a 50-layer central drift chamber (CDC), an array of aerogel threshold Cherenkov counters (ACC), a barrel-like arrangement of time-of-flight scintillation counters (TOF), and an electromagnetic calorimeter comprised of CsI(Tl) crystals (ECL) located inside a superconducting solenoid coil that provides a 1.5 T magnetic field. An iron flux return located outside of the coil is instrumented to detect $K_L^0$ mesons and to identify muons (KLM). The detector is described in detail elsewhere [@ref:Belle]. Monte Carlo (MC) event samples are simulated to evaluate signal efficiency, optimize selection criteria and determine the shapes for signal and background distributions in our analysis. For the signal decays, three million events are generated for each final state lepton flavour of electron or muon. The MC simulation takes into account the experimental conditions pertaining to different running periods of the Belle experiment and the accumulated integrated luminosity for each period. Several MC samples are used to estimate four categories of background: continuum $(e^+e^-\rightarrow q\bar{q}$, where $q=u,d,s,c) $, $B\bar{B}$ (modelling $b \rightarrow c$ transitions), rare $B$ decays and charmless semileptonic $B$ decays ($b \rightarrow u\ell \nu$ transitions), corresponding to 5, 5, 50 and 20 times the integrated luminosity of data, respectively. All MC samples are generated using the EvtGen [@ref:EvtGen] package, and detector simulation is performed using GEANT [@ref:GEANT]. Previous studies of similar baryonic $B$ decays, *viz.* $B^- \rightarrow p \bar{p} \pi^-$ [@ref:B_ppkpi], $B^- \rightarrow p \bar{p} K^-$ [@ref:B_ppkpi; @ref:B_ppkBABAR], and $B^- \rightarrow p \bar{p} K^{*-}$ [@ref:B_ppkst], found that the proton-antiproton mass distributions have low mass enhancements near threshold. We therefore assume that the $p\bar{p}$ pairs have an invariant mass distribution centred at 2.2 GeV$/c^2$ with a width of about 0.2 GeV$/c^2$.
We use the hadronic-tag $B$ reconstruction method to study $B$ decays with a neutrino in the final state. Since the $\Upsilon(4S)$ decays predominantly into $B\bar{B}$ [@ref:PDG], we fully reconstruct one $B$ meson with selected fully-hadronic charmed final states, called $B_\textrm{tag}$. The NeuroBayes algorithm [@ref:fullrecon] is used to provide an assessment of the quality of $B_\textrm{tag}$ reconstruction. A total of 615 exclusive charged $B$ hadronic decay channels are considered in the NeuroBayes neural network to reconstruct $B_\textrm{tag}$ candidates. We reconstruct signal $B$ candidates, called $B_\textrm{cand}$, from the remaining particles in the event. These candidates are reconstructed using final states consisting of three charged particles: one proton, one antiproton and one electron or muon. To identify the neutrino, we define the missing mass squared as $$M^{2}_\text{miss} = E^{2}_\text{miss}/c^4 - |{\vec{p}}_\text{miss}/c|^{2},$$ where $E_\text{miss}$ and ${\vec{p}}_\text{miss}$ are the energy and momentum components of the four-vector $\textit{P}_\text{miss} = \textit{P}_{e^+} + \textit{P}_{e^-}- \textit{P}_{B\textrm{tag}} - \textit{P}_{B\textrm{cand}}$ in the laboratory frame. In this study, we accept events whose missing mass squared is in the range $-1$ GeV$^2/c^4 <M^2_\text{miss}<3 $ GeV$^2/c^4$.
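As a minimal illustration of this definition (a Python sketch with made-up four-vectors, not the analysis code; all names and numbers below are ours), $M^2_\text{miss}$ is obtained from the beam, $B_\textrm{tag}$ and $B_\textrm{cand}$ four-momenta as follows:

``` python
import numpy as np

def missing_mass_squared(p_ee, p_btag, p_bcand):
    """Missing mass squared in GeV^2/c^4 (natural units, c = 1).

    Each argument is a laboratory-frame four-vector (E, px, py, pz) in GeV;
    p_ee is the summed four-momentum of the colliding e+ and e- beams.
    """
    p_miss = np.asarray(p_ee, float) - np.asarray(p_btag, float) - np.asarray(p_bcand, float)
    e_miss, p3_miss = p_miss[0], p_miss[1:]
    return e_miss**2 - p3_miss @ p3_miss

# Hypothetical event (placeholder numbers): keep it only if -1 < M^2_miss < 3 GeV^2/c^4.
m2 = missing_mass_squared([11.5, 0.0, 0.0, 4.5],
                          [5.80, 0.12, -0.31, 2.05],
                          [5.10, -0.08, 0.27, 2.10])
keep_event = -1.0 < m2 < 3.0
```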
We ensure that tracks used for $B_\textrm{cand}$ reconstruction have not been used in the $B_\textrm{tag}$ reconstruction. In order to remove the secondary tracks generated by hadronic interactions with the detector material, we require $|dz| < 2.0\,\textrm{cm}$ and $dr < 0.4\,\textrm{cm}$, where $dz$ and $dr$ denote the distances at the point of closest approach to the interaction point (IP) along the positron beam and in the plane transverse to this axis, respectively. To identify charged particles, all relevant information provided by the CDC, TOF and ACC is taken into account. For lepton identification, additional information is provided by the ECL and KLM. We define $\mathcal{L}_{p}$, $\mathcal{L}_{K}$, $\mathcal{L}_{\pi}$, $\mathcal{L}_{e}$ and $\mathcal{L}_{\mu}$ as likelihoods for a particle to be identified as a proton, kaon, pion, electron, and muon, respectively, and the likelihood ratios: $\mathcal{R}_{p/K}=\mathcal{L}_{p}/(\mathcal{L}_p+\mathcal{L}_K)$, $\mathcal{R}_{p/\pi}=\mathcal{L}_{p}/(\mathcal{L}_p+\mathcal{L}_\pi)$, $\mathcal{R}_e = \mathcal{L}_e/(\mathcal{L}_e+\mathcal{L}_\textrm{other})$ and $\mathcal{R}_\mu = \mathcal{L}_\mu/(\mathcal{L}_\mu+\mathcal{L}_\textrm{other})$. For a track to be identified as a proton, it is required to satisfy the condition $\mathcal{R}_{p/K}>0.6$ and $\mathcal{R}_{p/\pi}>0.6$, and $\mathcal{R}_e$ and $\mathcal{R}_\mu$ must be less than $0.95$ for lepton rejection. To identify lepton candidates, tracks with $\mathcal{R}_e >0.6$, $\mathcal{R}_\mu <0.95$ are regarded as electrons and those with $\mathcal{R}_\mu>0.9$, $\mathcal{R}_e<0.95$ as muons. In the kinematic region of interest, charged leptons are identified with an efficiency of about $90\%$, while the probability of misidentifying a pion as an electron (muon) is $0.25\%$ $(1.4\%)$. The proton identification efficiency is about $95\%$, while the probability of misidentifying a kaon or a pion as a proton is less than $10\%$. The momentum of an electron (muon) candidate in the laboratory frame must be greater than $300$ ($600$) MeV$/c$. The lepton charge must be opposite that of the $B_\text{tag}$.
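In code form, the identification requirements above reduce to simple cuts on these likelihood ratios; the following sketch (function and variable names are ours) is only meant to make the selection logic explicit:

``` python
def is_proton(L_p, L_K, L_pi, R_e, R_mu):
    """Proton selection from the PID likelihood ratios defined in the text."""
    R_pK = L_p / (L_p + L_K)
    R_ppi = L_p / (L_p + L_pi)
    return R_pK > 0.6 and R_ppi > 0.6 and R_e < 0.95 and R_mu < 0.95

def lepton_flavour(R_e, R_mu):
    """Return 'e', 'mu', or None according to the lepton criteria quoted above."""
    if R_e > 0.6 and R_mu < 0.95:
        return "e"
    if R_mu > 0.9 and R_e < 0.95:
        return "mu"
    return None
```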
Tag-side $B$ mesons are identified using the beam-energy-constrained mass, $M_{\rm bc} \equiv \sqrt{E^{*2}_\text{beam}/c^4 - |\vec{p}^*_B/c|^2}$, and the energy difference, $\Delta E \equiv E^*_B - E^*_\text{beam}$, where $E^*_\text{beam}$ is the run-dependent beam energy, and $E^*_B$ and $\vec{p}^*_B$ are the reconstructed energy and momentum, respectively, of the $B_\textrm{tag}$ in the rest frame of the $\Upsilon(4S)$. We require that $M_{\rm bc}>5.27$ GeV$/c^2$ and $-0.15$ GeV$<\Delta E<0.1$ GeV to reject poorly reconstructed $B_\textrm{tag}$ candidates. The difference in event topology between the more spherical $B\bar{B}$ events and the dominant jet-like continuum background is used to suppress the latter. Here, the ratio of the second to zeroth Fox-Wolfram moments [@ref:SFW], the angle between the $B_\textrm{tag}$ direction and the thrust axis, and the angle between the $B_\textrm{tag}$ direction and the beam direction in the $\Upsilon (4S)$ rest frame are used to construct a NeuroBayes output value for continuum suppression ${o}^\textrm{cs}_\textrm{tag}$. The $B_\textrm{tag}$ with the largest value of ${o}^\textrm{cs}_\textrm{tag}$ within a given event is retained; we accept events satisfying $ \ln ({o}^\textrm{cs}_\textrm{tag})>-7 $ for $B^-\to p\bar pe^-\bar\nu_e$ and $ \ln({o}^\textrm{cs}_\textrm{tag})>-6 $ for $B^-\to p\bar p\mu^-\bar\nu_\mu$, according to the MC-determined selection optimization.
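For concreteness, $M_{\rm bc}$ and $\Delta E$ can be computed from the reconstructed $B_\textrm{tag}$ four-momentum in the $\Upsilon(4S)$ frame as in the short sketch below (our illustration; the numbers are placeholders, not data):

``` python
import numpy as np

def mbc_delta_e(e_beam_cms, p_btag_cms):
    """Beam-energy-constrained mass and energy difference (GeV, natural units, c = 1).

    e_beam_cms : beam energy E*_beam in the Upsilon(4S) rest frame
    p_btag_cms : reconstructed B_tag four-vector (E*, px*, py*, pz*) in the same frame
    """
    e_b = p_btag_cms[0]
    p3 = np.asarray(p_btag_cms[1:], float)
    m_bc = np.sqrt(max(e_beam_cms**2 - p3 @ p3, 0.0))
    return m_bc, e_b - e_beam_cms

# A tag candidate is kept if M_bc > 5.27 GeV/c^2 and -0.15 GeV < Delta E < 0.1 GeV.
m_bc, de = mbc_delta_e(5.289, [5.28, 0.10, -0.15, 0.25])
keep_tag = m_bc > 5.27 and -0.15 < de < 0.1
```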
Since there can be more than one $B_\textrm{cand}$ in an event, we select the candidate with the smallest $\chi^2$ value obtained from a fit to the $B$ vertex. The fraction of events with multiple candidates is estimated from MC to be $0.21\%$ for $B^-\to p\bar pe^-\bar\nu_e$ and $0.17\%$ for $B^-\to p\bar p\mu^-\bar\nu_\mu$. The overall signal efficiency obtained is $0.279\%$ for $B^-\to p\bar pe^-\bar\nu_e$ and $0.222\%$ for $B^-\to p\bar p\mu^-\bar\nu_\mu$. Since the reconstruction efficiency may differ between data and MC, we correct these efficiency estimates based on control sample studies. For proton and lepton identification, we use $\Lambda \rightarrow p\pi^-$ and $\gamma\gamma\rightarrow\ell^+\ell^-$ samples, respectively. The corrections are about $-4.4\%$ and $-3.1\%$ for $B^-\to p\bar pe^-\bar\nu_e$ and $-5.7\%$ and $-1.7\%$ for $B^-\to p\bar p\mu^-\bar\nu_\mu$. For the $B_\textrm{tag}$ reconstruction efficiency, we use $B^-\rightarrow X^0_c\ell^- \bar{\nu}_\ell$ samples, where $X^0_c$ denotes a meson containing a $c$ quark, and estimate correction factors of $-14.8\%$ for $B^-\to p\bar pe^-\bar\nu_e$ and $-16.4\%$ for $B^-\to p\bar p\mu^-\bar\nu_\mu$. Applying these corrections, the signal efficiency in data is estimated to be $(0.220\pm 0.011)\%$ for $B^-\to p\bar pe^-\bar\nu_e$ and $(0.172\pm 0.008)\%$ for $B^-\to p\bar p\mu^-\bar\nu_\mu$.
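As a quick arithmetic cross-check (our reading of the numbers quoted above is that the corrections combine multiplicatively), the corrected efficiencies follow from:

``` python
# Multiplicative application of the quoted PID and tag-calibration corrections
# to the MC signal efficiencies; reproduces roughly 0.220% and 0.172%.
eff_e_data = 0.279e-2 * (1 - 0.044) * (1 - 0.031) * (1 - 0.148)
eff_mu_data = 0.222e-2 * (1 - 0.057) * (1 - 0.017) * (1 - 0.164)
```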
We perform a one-dimensional extended unbinned likelihood fit that maximizes the function $$\begin{aligned}
\mathcal{L} &=&\frac{e^{-(N_\textrm{sig}+N_\textrm{bkg})}}{N!} \prod_{i=1}^N [N_\textrm{sig} P_\textrm{sig}({M^2_\textrm{miss}}^i)+N_\textrm{bkg} P_\textrm{bkg}({M^2_\textrm{miss}}^i)],\end{aligned}$$ where $i$ is the event index, $N_\textrm{sig}$ and $N_\textrm{bkg}$ denote the fitted yields of signal and background, and $P_\textrm{sig}$ and $P_\textrm{bkg}$ denote the probability density functions (PDFs) in our signal extraction model. We use three Gaussian functions to describe $P_\textrm{sig}$ for $B^-\rightarrow p\bar{p}e^- \bar{\nu}_e$ and for $B^-\rightarrow p\bar{p}\mu^- \bar{\nu}_\mu$. For background, since no peak is present near the signal region, we combine both the continuum and $B$-decay backgrounds to form one PDF. We use a normalized second-order Chebyshev polynomial function to represent $P_\textrm{bkg}$ for each mode. The shape of the signal PDF is determined from the MC simulation, while the shape of the background is floated. The rare $B$ decay and $b \rightarrow u\ell \nu$ backgrounds are not included in the fit, because less than 0.1 events are expected to be found on average in the fitting region.
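The structure of this fit can be sketched numerically as follows (a minimal NumPy/SciPy illustration, not the analysis code; the triple-Gaussian parameters, starting values and toy data below are invented placeholders, whereas the real fit uses the MC-derived signal shape described above):

``` python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Triple-Gaussian signal shape; placeholder parameters, not the paper's values.
SIG_FRAC  = np.array([0.6, 0.3, 0.1])
SIG_MEAN  = np.array([0.0, 0.1, -0.1])    # GeV^2/c^4
SIG_SIGMA = np.array([0.3, 0.8, 1.5])     # GeV^2/c^4

def pdf_sig(x):
    return sum(f * norm.pdf(x, m, s) for f, m, s in zip(SIG_FRAC, SIG_MEAN, SIG_SIGMA))

def pdf_bkg(x, c1, c2, lo=-1.0, hi=3.0):
    """Second-order Chebyshev polynomial, normalized on the fit range [lo, hi]."""
    t = (2.0 * x - (hi + lo)) / (hi - lo)
    shape = 1.0 + c1 * t + c2 * (2.0 * t**2 - 1.0)
    grid = np.linspace(lo, hi, 2001)
    tg = (2.0 * grid - (hi + lo)) / (hi - lo)
    vals = 1.0 + c1 * tg + c2 * (2.0 * tg**2 - 1.0)
    norm_int = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(grid))   # trapezoid rule
    return shape / norm_int

def neg_log_likelihood(params, x):
    n_sig, n_bkg, c1, c2 = params
    dens = n_sig * pdf_sig(x) + n_bkg * pdf_bkg(x, c1, c2)
    # Extended likelihood: Poisson constraint on the total yield plus event densities.
    return (n_sig + n_bkg) - np.sum(np.log(np.clip(dens, 1e-300, None)))

# x_data would hold the measured M^2_miss values of the selected events.
x_data = np.random.uniform(-1.0, 3.0, size=50)            # stand-in toy data
fit = minimize(neg_log_likelihood, x0=[5.0, 45.0, 0.0, 0.0],
               args=(x_data,), method="Nelder-Mead")
n_sig_fit, n_bkg_fit = fit.x[:2]
```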
The fit results are shown in Fig. \[fig:fitresult\]. We determine the fit significance in terms of $\sigma$, the standard deviation of a normal distribution, with $\sqrt{-2\ln\left(\mathcal{L}_{0}/\mathcal{L}_\textrm{max}\right)}$, where $\mathcal{L}_0$ and $\mathcal{L}_\textrm{max}$ represent the maximum likelihood values from the fit with $N_\textrm{sig}$ set to zero, and with all parameters allowed to float, respectively. We also take into account the systematic effects from the signal decay model and PDF shape. The significance is $3.0\sigma$ for $B^-\rightarrow p\bar{p}e^- \bar{\nu}_e$ and $1.3\sigma$ for $B^-\rightarrow p\bar{p}\mu^- \bar{\nu}_\mu$. Assuming lepton universality and equal branching fractions for $B^-\rightarrow p\bar{p}e^- \bar{\nu}_e$ and $B^-\rightarrow p\bar{p}\mu^- \bar{\nu}_\mu$, we obtain a combined fit result with a significance of $3.2\sigma$.
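In terms of the fitted likelihoods, this definition amounts to the following one-liner (the NLL values are placeholders, chosen only to show the arithmetic):

``` python
import numpy as np

nll_float = 112.3   # minimum NLL with all parameters floating (placeholder)
nll_null = 117.4    # minimum NLL with N_sig fixed to zero (placeholder)
significance = np.sqrt(2.0 * (nll_null - nll_float))   # ~3.2 sigma for these numbers
```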
![Fitted missing mass squared distributions for (a) $B^-\to p\bar pe^-\bar\nu_e$, (b) $B^-\to p\bar p\mu^-\bar\nu_\mu$ and (c) the combined fit. Points with error bars represent data, while the curves denote various components of the fit: signal (solid red), total background (dashed blue), and the sum of all components (solid black). The hatched green area denotes the signal fit component from $B^-\to p\bar pe^-\bar\nu_e$ and the dashed purple curve that from $B^-\to p\bar p\mu^-\bar\nu_\mu$.[]{data-label="fig:fitresult"}](mm2result_e.eps "fig:"){width="30.00000%"} ![Fitted missing mass squared distributions for (a) $B^-\to p\bar pe^-\bar\nu_e$, (b) $B^-\to p\bar p\mu^-\bar\nu_\mu$ and (c) the combined fit. Points with error bars represent data, while the curves denote various components of the fit: signal (solid red), total background (dashed blue), and the sum of all components (solid black). The hatched green area denotes the signal fit component from $B^-\to p\bar pe^-\bar\nu_e$ and the dashed purple curve that from $B^-\to p\bar p\mu^-\bar\nu_\mu$.[]{data-label="fig:fitresult"}](mm2result_mu.eps "fig:"){width="30.00000%"} ![Fitted missing mass squared distributions for (a) $B^-\to p\bar pe^-\bar\nu_e$, (b) $B^-\to p\bar p\mu^-\bar\nu_\mu$ and (c) the combined fit. Points with error bars represent data, while the curves denote various components of the fit: signal (solid red), total background (dashed blue), and the sum of all components (solid black). The hatched green area denotes the signal fit component from $B^-\to p\bar pe^-\bar\nu_e$ and the dashed purple curve that from $B^-\to p\bar p\mu^-\bar\nu_\mu$.[]{data-label="fig:fitresult"}](mm2result_com.eps "fig:"){width="30.00000%"}
The systematic uncertainties on the branching fractions are summarised in Table \[table:sys\] and described below. Correlated (uncorrelated) errors are added linearly (in quadrature). Each systematic uncertainty for the combined fit is conservatively considered to be the larger of the uncertainties for $B^-\to p\bar pe^-\bar\nu_e$ and $B^-\to p\bar p\mu^-\bar\nu_\mu$, except for the fitting region uncertainty.
  ----------------------------- ------------------------------- ---------------------------------- --------------
  Source                        $p\bar{p}e^- \bar{\nu}_e$ (%)   $p\bar{p}\mu^-\bar{\nu}_\mu$ (%)   Combined (%)
  Track reconstruction          1.1                             1.1                                1.1
  Proton identification         0.7                             0.8                                0.8
  Lepton identification         2.3                             1.1                                2.3
  Tag calibration               4.3                             4.3                                4.3
  Number of $B\bar{B}$ events   1.4                             1.4                                1.4
  Signal decay model            3.6                             12                                 12
  PDF shape                     2.1                             2.8                                2.8
  Fitting region                3.9                             18                                 6.1
  Summary                       6.7                             23                                 15
  ----------------------------- ------------------------------- ---------------------------------- --------------
The systematic uncertainty due to charged-track reconstruction is estimated to be $0.35\%$ per track, using partially reconstructed $D^{*+}\rightarrow D^0(\pi^+\pi^-\pi^0)\pi^+$ decays. We estimate the uncertainty due to proton and lepton identification using the $\Lambda \rightarrow p\pi^-$ and $\gamma\gamma\rightarrow\ell^+\ell^-$ samples, respectively. For tag calibration, the uncertainties are estimated to be $4.3\%$ for each of the two modes, using the $B^-\rightarrow X^0_c\ell^- \bar{\nu}_\ell$ sample. The uncertainty due to the error on the total number of $B\bar{B}$ pairs is $1.4\%$. The uncertainty due to the signal MC modeling of the $p\bar{p}$ mass threshold enhancement is obtained from the efficiency difference between the signal MC and a phase-space decay model. The uncertainties due to the signal PDF shape are studied by varying each Gaussian parameter by $\pm 1\sigma$ and observing the yield difference. Finally, the upper bound chosen for the fitting region, which has a large effect on the fit results, is varied from 2 to 4 GeV$^2/c^4$ with a step size of 0.2 GeV$^2/c^4$; we take one standard deviation of the ensemble of obtained fit results to estimate the uncertainty. These are conservative estimates, as the statistical uncertainty is also included.
In addition to quoting branching fractions, we also estimate the corresponding upper limits at the $90\%$ confidence level by finding the value of $N$ that satisfies: $$\int^N_0 \mathcal{L}(n)dn = 0.9 \int^\infty_0\mathcal{L}(n)dn,$$where $\mathcal{L}(n)$ denotes the likelihood of the fit result and $n$ is the number of signal events. The systematic uncertainties are taken into account by replacing $\mathcal{L}(n)$ with a smeared likelihood function: $$\mathcal{L}(n)=
\int^\infty_{-\infty}\mathcal{L}(n')
\frac{e^{-(n-n')^2/2\sigma^2_\textrm{syst.}}}{\sqrt{2\pi}\sigma_\textrm{syst.}}dn',$$where $\sigma_\textrm{syst.}$ is the systematic uncertainty of the associated signal yield $n'$.
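A minimal numerical sketch of this limit-setting procedure is given below; it assumes the likelihood has already been scanned on a grid of signal yields (here a toy Gaussian-shaped curve is used) and approximates both the Gaussian smearing and the integrals by simple sums.

```python
import numpy as np

def upper_limit(n_grid, like, sigma_syst, cl=0.9):
    """Upper limit on the signal yield at confidence level cl from a scanned likelihood."""
    # Smear L(n') with a Gaussian of width sigma_syst (systematic uncertainty)
    diff = n_grid[:, None] - n_grid[None, :]
    smeared = np.exp(-0.5 * (diff / sigma_syst) ** 2) @ like
    # Find N such that the integral from 0 to N reaches cl of the total integral
    cdf = np.cumsum(smeared)
    cdf /= cdf[-1]
    return np.interp(cl, cdf, n_grid)

# Toy likelihood curve: Gaussian-shaped, centred at 6 signal events with width 2.5
n = np.linspace(0.0, 40.0, 2001)
L = np.exp(-0.5 * ((n - 6.0) / 2.5) ** 2)
print(upper_limit(n, L, sigma_syst=1.0))
```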
Table \[table:BRestimate\] summarizes our results. The upper limits include systematic uncertainties.
  ---------------------------------------------- ------------------------------- ---------------------------------
  Mode                                           $\mathcal{B}$ $(10^{-6})$       U.L. at 90% C.L. $(10^{-6})$
  $B^-\rightarrow p\bar{p}e^- \bar{\nu}_e$       $8.2\,^{+3.7}_{-3.2}\pm 0.6$    $13.8$
  $B^-\rightarrow p\bar{p}\mu^- \bar{\nu}_\mu$   $3.1\,^{+3.1}_{-2.4}\pm 0.7$    $8.5$
  Combined sample                                $5.8\,^{+2.4}_{-2.1}\pm 0.9$    $9.6$
  ---------------------------------------------- ------------------------------- ---------------------------------
In conclusion, we have performed a search for the four-body semileptonic baryonic $B$ decay $B^-\to p\bar p\ell^-\bar\nu_\ell$ ($\ell=e,\mu$) using a neural-network-based hadronic $B$ tagging method. We find evidence for a signal with a significance of $3.2\sigma$ and a branching fraction of $(5.8^{+2.4}_{-2.1}\textrm{(stat.)}\pm 0.9\textrm{(syst.)})\times 10^{-6}$. This measurement is consistent with the theoretical investigation in Ref. [@ref:HouSoni]. As the statistical significance of our reported evidence is marginal, we also set an upper limit on the branching fraction: $\mathcal{B}(B^-\to p\bar p\ell^-\bar\nu_\ell) < 9.6\times 10^{-6}$ ($90\%$ C.L.). Our result is clearly lower than the recent theoretical expectation of $\sim 10^{-4}$ [@ref:GengHsiao]. It will be interesting to investigate the theoretical modeling of the baryonic transition form factors in $B$ decays in light of this new information. With the proposed next-generation $B$-factories, such semileptonic baryonic $B$ decays can be studied precisely, and future results may be useful in further constraining the corresponding CKM matrix elements.
We thank the KEKB group for excellent operation of the accelerator; the KEK cryogenics group for efficient solenoid operations; and the KEK computer group, the NII, and PNNL/EMSL for valuable computing and SINET4 network support. We acknowledge support from MEXT, JSPS and Nagoya’s TLPRC (Japan); ARC and DIISR (Australia); NSFC (China); MSMT (Czechia); CZF, DFG, and VS (Germany); DST (India); INFN (Italy); MEST, NRF, GSDC of KISTI, and WCU (Korea); MNiSW and NCN (Poland); MES and RFAAE (Russia); ARRS (Slovenia); IKERBASQUE and UPV/EHU (Spain); SNSF (Switzerland); NSC and MOE (Taiwan); and DOE and NSF (USA).
[99]{}
M. Kobayashi and T. Maskawa, Prog. Theor. Phys. [**49**]{}, 652 (1973).
For example, H. Ha [*et al.*]{} (Belle Collaboration), Phys. Rev. D [**83**]{}, 071101 (2011); P. del Amo Sanchez [*et al.*]{} (BABAR Collaboration), Phys. Rev. D [**83**]{}, 032007 (2011); P. Urquijo [*et al.*]{} (Belle Collaboration), Phys. Rev. Lett. [**104**]{}, 021801 (2010).
Throughout this paper, the inclusion of the charge-conjugate mode decay is implied unless otherwise stated.
N.E. Adam [*et al.*]{} (CLEO Collaboration), Phys. Rev. D [**68**]{}, 012004 (2003).
W.-S. Hou and A. Soni, Phys. Rev. Lett. [**86**]{}, 4247 (2001).
J.P. Lees [*et al.*]{} (<span style="font-variant:small-caps;">BaBar</span> Collaboration), Phys. Rev. D [**85**]{}, 011102 (2012).
C.Q. Geng and Y.K. Hsiao, Phys. Lett. B [**704**]{}, 495 (2011).
M.Z. Wang [*et al.*]{} (Belle Collaboration), Phys. Rev. Lett. [**92**]{}, 131801 (2004).
M.Z. Wang [*et al.*]{} (Belle Collaboration), Phys. Lett. B [**617**]{}, 141 (2005).
J.-H. Chen [*et al.*]{} (Belle Collaboration), Phys. Rev. Lett. [**100**]{}, 251801 (2008).
K. Abe [*et al.*]{} (Belle Collaboration), Phys. Rev. Lett. [**89**]{}, 151802 (2002).
B. Aubert [*et al.*]{} (<span style="font-variant:small-caps;">BaBar</span> Collaboration), Phys. Rev. D [**74**]{}, 051101 (2006).
J. Beringer [*et al.*]{} (Particle Data Group), Phys. Rev. D [**86**]{}, 010001 (2012).
D. J. Lange, Nucl. Instrum. Methods Phys. Res. Sect. A [**462**]{}, 152 (2001).
R. Brun [*et al.*]{}, GEANT 3.21, CERN Report DD/EE/84-1, 1984.
J.-T. Wei [*et al.*]{} (Belle Collaboration), Phys. Lett. B [**659**]{}, 80 (2008).
B. Aubert [*et al.*]{} (<span style="font-variant:small-caps;">BaBar</span> Collaboration), Phys. Rev. D [**72**]{}, 051101 (2005).
M. Feindt [*et al.*]{}, Nucl. Instrum. Methods Phys. Res. Sect. A [**654**]{}, 432 (2011).
G.C. Fox and S. Wolfram, Phys. Rev. Lett. [**41**]{}, 1581 (1978).
|
---
abstract: 'In this paper we will show that for any map $f$ on an infra-nilmanifold, the Nielsen number $N(f)$ of this map is either equal to $|L(f)|$, where $L(f)$ is the Lefschetz number of that map, or equal to the expression $|L(f)-L(f_+)|$, where $f_+$ is a lift of $f$ to a 2-fold covering of that infra-nilmanifold. By exploiting the exact nature of this relationship for all powers of $f$, we prove that the Nielsen dynamical zeta function for a map on an infra-nilmanifold is always a rational function.'
author:
- |
Karel Dekimpe and Gert-Jan Dugardein\
KULeuven Kulak, E. Sabbelaan 53, B-8500 Kortrijk
title: '**Nielsen zeta functions for maps on infra-nilmanifolds are rational**'
---
Introduction
============
Let $X$ be a compact polyhedron and $f:X{\rightarrow}X$ be a self-map. We can attach two different numbers to this map $f$, each one providing information on the number of fixed points of $f$. The first one is the Lefschetz number $L(f)$ which is defined as $$L(f)= \sum_{i=0}^{{\rm dim}\;X} (-1)^i {\rm Tr} \left( f_{\ast,i} : H_i(X,{{\mathbb Q}}) {\rightarrow}H_i(X,{{\mathbb Q}})\right).$$ The main result about the Lefschetz number is that any map homotopic to $f$ has at least one fixed point, when $L(f)\neq 0$. The Nielsen number $N(f)$, on the other hand, is harder to define. It will always be a nonnegative integer which, in general, will be a lot harder to compute than $L(f)$. It gives more information about the self-map $f$, though, since any map homotopic to $f$ will have at least $N(f)$ fixed points. We refer the reader to [@brow71-1] for more information on both the Lefschetz and the Nielsen number.
In discrete dynamical systems, both numbers are used to define a so-called dynamical zeta function ([@fels00-2]). The Lefschetz zeta function of $f$, which was introduced by S. Smale ([@smal67-1]), is given by $$L_f(z)=\exp\left( \sum_{k=1}^{+\infty}\frac{L(f^k)}{k}z^k\right).$$ Analogously, A. Fel’shtyn [@fels88-1; @fp85-1] introduced the Nielsen zeta function, which is given by $$N_f(z)=\exp\left( \sum_{k=1}^{+\infty}\frac{N(f^k)}{k}z^k\right)$$
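The mechanism behind rationality is the elementary identity $\exp\left(\sum_{k\geq 1}\lambda^k z^k/k\right)=(1-\lambda z)^{-1}$: a Lefschetz sequence built from finitely many eigenvalues therefore yields a rational zeta function. The following small sympy sketch (an illustration of ours) checks this identity on truncated series.

```python
import sympy as sp

z, lam = sp.symbols('z lambda')

# exp(sum_{k=1}^{6} lambda^k z^k / k) agrees with 1/(1 - lambda z) up to order z^6
truncated = sp.series(sp.exp(sum(lam**k * z**k / k for k in range(1, 7))), z, 0, 7)
target = sp.series(1 / (1 - lam * z), z, 0, 7)
print(sp.expand(truncated.removeO() - target.removeO()))  # 0
```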
In [@smal67-1] the following theorem was obtained (although Smale only considered diffeomorphisms on compact manifolds in his paper, the Lefschetz dynamical zeta function is defined for all maps on all compact polyhedra and his result also holds for all of these maps):
\[Smale\] The Lefschetz zeta function for self-maps on compact polyhedra is rational.
It has been shown that the Nielsen zeta function has a positive radius of convergence ([@fp85-1]), but unlike the Lefschetz zeta function, the Nielsen zeta function does not have to be rational in general. The question of when the Nielsen zeta function is rational has been studied in several papers, e.g. [@fels00-2; @fels00-1; @fh99-1; @li94-1; @fp85-1; @wong01-1].
In this paper we treat this problem for maps on infra-nilmanifolds. As it is known that the Nielsen zeta function on nilmanifolds is always rational, it was very natural to ask the same question for infra-nilmanifolds. Until now, only a very partial result on this problem was available: in [@wong01-1 Theorem 4] a (rather technical) condition is given under which the rationality of the Nielsen zeta function is guaranteed.
The main result of this paper is that the Nielsen zeta function of any map on any infra-nilmanifold is a rational function (Corollary \[mainresult\]). In order to obtain this result we show that for any map $f$ on an infra–nilmanifold, we either have $N(f)=|L(f)|$ or $N(f)=|L(f) - L(f_+)|$, where $f_+$ is a lift of $f$ to a 2-fold covering of the given infra-nilmanifold. In fact, it was already known that many maps on infra-nilmanifolds satisfy the Anosov relation ([@anos85-1; @ddm04-1; @ddm05-1; @ddp11-1; @fh86-1; @km95-1]) and these are exactly the maps for which the first condition holds. The second condition now clearly shows what happens for those maps that do not satisfy the Anosov relation.
Using these relations we are able to describe the Nielsen zeta function of $f$ in terms of the Lefschetz zeta function of $f$ and $f_+$ from which the rationality then easily follows using Smale’s result.
Infra-nilmanifolds
==================
Let us now describe the class of infra-nilmanifolds in some detail. Any infra-nilmanifold is modeled on a connected and simply connected nilpotent Lie group $G$. Given such a Lie group $G$, we consider its affine group which is the semi-direct product ${{\rm Aff}}(G)= G{{\mathbb o}}{{\rm Aut}}(G)$. The group ${{\rm Aff}}(G)$ acts on $G$ in the following way: $$\forall (g,\alpha)\in {{\rm Aff}}(G),\, \forall h \in G: \;\;^{(g,\alpha)}h= g \alpha(h).$$ Note that when $G={{\mathbb R}}^n$, ${{\rm Aff}}({{\mathbb R}}^n)$ is the usual affine group, acting in the usual way on ${{\mathbb R}}^n$. Note also that since ${{\mathbb R}}^n$ is simply connected and abelian (hence a fortiori nilpotent), this case is included in our discussion. We will use $p:{{\rm Aff}}(G)=G{{\mathbb o}}{{\rm Aut}}(G) {\rightarrow}{{\rm Aut}}(G)$ to denote the natural projection on the second factor.
Let $G$ be a connected and simply connected nilpotent Lie group. A subgroup $\Gamma \subseteq {{\rm Aff}}(G)$ is called an almost–crystallographic group (modeled on $G$) if and only if $p(\Gamma)$ is finite and $\Gamma\cap G$ is a uniform and discrete subgroup of $G$. The finite group $F=p(\Gamma)$ is called the holonomy group of $\Gamma$.
Being a subgroup of ${{\rm Aff}}(G)$, any almost–crystallographic group $\Gamma$ acts on $G$. This action is properly discontinuous and cocompact. In case $\Gamma$ is torsion-free, this action is free and the quotient space $\Gamma\backslash G$ is a manifold (with universal covering space $G$ and fundamental group $\Gamma$). These manifolds are exactly the ones called “infra-nilmanifolds”.
A torsion-free almost–crystallographic group $\Gamma\subseteq {{\rm Aff}}(G) $ is called an almost–Bieberbach group, and the corresponding manifold $\Gamma\backslash G$ is said to be an infra–nilmanifold (modeled on $G$). When $\Gamma \subseteq G$, i.e. when the holonomy group $p(\Gamma)$ is trivial, the corresponding manifold $\Gamma\backslash G$ is a nilmanifold.
For any almost–Bieberbach group $\Gamma$ modeled on a Lie group $G$, we have that $N=G\cap \Gamma$ is of finite index in $\Gamma$ and hence, the infra-nilmanifold $\Gamma\backslash G$ is finitely covered by the nilmanifold $N\backslash G$, explaining the name “[*infra*]{}”–nilmanifold. In case $G={{\mathbb R}}^n$, we talk about crystallographic groups and Bieberbach groups. In this case, the infra–nilmanifolds are the compact flat manifolds and any such manifold is covered by a torus $T^n$ (because $N\cong {{\mathbb Z}}^n$).
In order to study the Nielsen theory of an infra–nilmanifold, we need to understand all maps on such a manifold up to homotopy. A complete description of these maps, is given by the work of K.B. Lee [@lee95-2]. Here we formulate the results for maps between two infra-nilmanifolds modeled on the same nilpotent Lie group, but this result has a straightforward extension to infra-nilmanifolds modeled on different Lie groups. In order to formulate this result, we extend the affine group of $G$ to the semigroup of affine endomorphisms ${{\rm aff}}(G)$ of $G$. Here ${{\rm aff}}(G)=G{{\mathbb o}}{{\rm Endo}}(G)$, where ${{\rm Endo}}(G)$ is the semigroup of endomorphisms of $G$. An element of ${{\rm aff}}(G)$ is a pair $({\delta},{{\mathfrak D}})$, where ${\delta}\in G$ and ${{\mathfrak D}}\in {{\rm Endo}}(G)$. Such an element should be seen as an “affine map” on the Lie group $G$ $$({\delta},{{\mathfrak D}}): \; G \rightarrow G:\; h \mapsto {\delta}{{\mathfrak D}}(h).$$ Note that in this way ${{\rm Aff}}(G)\subseteq {{\rm aff}}(G)$.
\[leemaps\] Let $G$ be a connected and simply connected nilpotent Lie group and suppose that $\Gamma, \Gamma'\subseteq {{\rm Aff}}(G)$ are two almost-crystallographic groups modeled on $G$. Then for any homomorphism $\varphi: \Gamma\rightarrow \Gamma'$ there exists an element $ ({\delta}, {{\mathfrak D}})\in {{\rm aff}}(G)$ such that $$\forall \gamma \in \Gamma: \; \varphi(\gamma) ({\delta}, {{\mathfrak D}}) = ({\delta}, {{\mathfrak D}}) \gamma.$$
Note that the equality $ \varphi(\gamma) ({\delta}, {{\mathfrak D}}) = ({\delta}, {{\mathfrak D}}) \gamma$ makes sense, because it involves three elements of ${{\rm aff}}(G)$. From this equality one can see that the affine map $({\delta},{{\mathfrak D}})$ descends to a map $$\overline{({\delta},{{\mathfrak D}})}: \Gamma \backslash G \rightarrow \Gamma' \backslash G: \; \Gamma h \rightarrow \Gamma' {\delta}{{\mathfrak D}}(h)$$ which exactly induces the morphism $\varphi$ on the level of the fundamental groups. We will say that $\overline{({\delta},{{\mathfrak D}})}$ is induced from an affine map.
Now, let $f:\Gamma\backslash G{\rightarrow}\Gamma'\backslash G$ be any map between two infra-nilmanifolds and let $\tilde{f}:G {\rightarrow}G$ be a lift of $f$. Then $\tilde{f}$ induces a morphism $\varphi:\Gamma{\rightarrow}\Gamma'$ determined by $\varphi(\gamma) \circ \tilde{f} = \tilde{f}\circ \gamma$, for all $\gamma\in \Gamma$. From Theorem \[leemaps\] it follows that there also exists an affine map $({\delta},{{\mathfrak D}})\in {{\rm aff}}(G)$ satisfying $\varphi(\gamma) \circ ({\delta},{{\mathfrak D}})= ({\delta},{{\mathfrak D}})\circ \gamma$ for all $\gamma\in \Gamma$. Therefore, the induced map $\overline{({\delta},{{\mathfrak D}})}$ and $f$ are homotopic.
With the notations above, we will say that $({\delta},{{\mathfrak D}})$ is an affine homotopy lift of $f$.
As the Nielsen and Lefschetz numbers are homotopy invariants, it will suffice to study maps induced by an affine map. For those maps there are very convenient formulae to compute the Lefschetz and the Nielsen numbers. Let us fix an infra-nilmanifold $\Gamma\backslash G$ which is determined by an almost–Bieberbach group $\Gamma\subseteq {{\rm Aff}}(G)$ and let $F\subseteq {{\rm Aut}}(G)$ be the holonomy group of $\Gamma$. We will denote the Lie algebra of $G$ by ${{\mathfrak g}}$. Recall that there is an isomorphism between ${{\rm Aut}}(G)$ and ${{\rm Aut}}({{\mathfrak g}})$ which associates to each automorphism ${{\mathfrak A}}\in {{\rm Aut}}(G)$ its differential ${{\mathfrak A}}_\ast\in {{\rm Aut}}({{\mathfrak g}})$ at the identity element of $G$.
\[LeeForm\]Let $\Gamma\subseteq {{\rm Aff}}(G)$ be an almost-Bieberbach group with holonomy group $F\subseteq {{\rm Aut}}(G)$. Let $M=\Gamma\backslash G$ be the associated infra-nilmanifold. If $f:M{\rightarrow}M$ is a map with affine homotopy lift $({\delta}, {{\mathfrak D}})$, then $$L(f)=\frac{1}{\# F}\sum_{{{\mathfrak A}}\in F}\det(I-{{\mathfrak A}}_\ast{{\mathfrak D}}_\ast)$$ and $$N(f)=\frac{1}{\# F}\sum_{{{\mathfrak A}}\in F}|\det(I-{{\mathfrak A}}_\ast{{\mathfrak D}}_\ast)|.$$ (Here $I$ is the identity matrix).
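These averaging formulas are straightforward to evaluate once the differentials ${{\mathfrak A}}_\ast$ and ${{\mathfrak D}}_\ast$ are written as matrices; the sketch below simply implements the two sums for a holonomy group given as an explicit list of matrices and evaluates them on a small hypothetical flat example.

```python
import numpy as np

def lefschetz_and_nielsen(holonomy, D, k=1):
    """Evaluate the averaging formulas for L(f^k) and N(f^k).

    holonomy : list of the matrices A_* for the elements A of the holonomy group F
    D        : matrix of D_*, the linear part of the affine homotopy lift
    """
    I = np.eye(D.shape[0])
    Dk = np.linalg.matrix_power(D, k)
    dets = [np.linalg.det(I - A @ Dk) for A in holonomy]
    return sum(dets) / len(dets), sum(abs(d) for d in dets) / len(dets)

# Hypothetical 2-dimensional flat example: holonomy Z/2 acting by diag(1, -1)
F = [np.eye(2), np.diag([1.0, -1.0])]
D = np.diag([2.0, 3.0])
print(lefschetz_and_nielsen(F, D, k=2))  # L(f^2) = 1 - 2^2 = -3 and N(f^2) = (2^2 - 1) 3^2 = 27
```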
The holonomy representation and maps
====================================
To any almost–crystallographic group, and hence also to any infra–nilmanifold, we can associate its holonomy representation.
Let $\Gamma\subseteq {{\rm Aff}}(G)$ be an almost–crystallographic group modeled on $G$ and with holonomy group $F=p(\Gamma) $. The holonomy representation of $\Gamma$ is the representation $$\rho: F \rightarrow {{\rm GL}}({{\mathfrak g}}): {{\mathfrak A}}\mapsto {{\mathfrak A}}_\ast$$
By choosing a basis of ${{\mathfrak g}}$, we can identify ${{\mathfrak g}}$ with ${{\mathbb R}}^n$ for some $n$ and therefore we can view the holonomy representation $\rho$ as being a real representation $\rho: F \rightarrow {{\rm GL}}_n({{\mathbb R}})$.
There is a strong connection between this holonomy representation and the affine homotopy lift of a map on the corresponding infra–nilmanifold.
\[semi-affconj\] Let $\Gamma\subseteq {{\rm Aff}}(G)$ be an almost–Bieberbach group and let $M=\Gamma\backslash G$ be the corresponding infra–nilmanifold. Let $\rho:F {\rightarrow}{{\rm GL}}({{\mathfrak g}})$ be the associated holonomy representation. If $f:M{\rightarrow}M$ is a map with affine homotopy lift $({\delta},{{\mathfrak D}})$, there exists a function $\phi:F {\rightarrow}F$ such that $$\forall x \in F:\; \rho(\phi(x)) {{\mathfrak D}}_\ast = {{\mathfrak D}}_\ast \rho(x) .$$
It is tempting to believe that the function $\phi$ will be a morphism of groups; however, as an example in [@ddp11-1] shows, this need not be the case.
In the following proposition, which was mainly proved in [@ddp11-1], we show how a map $f$ on an infra–nilmanifold induces a decomposition of the holonomy representation into two subrepresentations.
\[decomprho\] Let $\rho:F\to {{\rm GL}}_n({{\mathbb R}})$ be a representation of a finite group $F$ and $\phi: F\to F$ be any function. Let $D$ be a linear transformation of ${{\mathbb R}}^n$ (which we view as an $n\times n$ matrix w.r.t. the standard basis). Suppose that $\rho(\phi(x))D=D\rho(x)$ for all $x\in F$. Then we can choose a basis of ${{\mathbb R}}^n$, such that $\rho=\rho_{\leq 1}\oplus \rho_{> 1}$, for representations $\rho_{\leq 1}:F\to {{\rm GL}}_{n_{\leq 1}}({{\mathbb R}})$ and $\rho_{> 1}:F\to {{\rm GL}}_{n_{> 1}}({{\mathbb R}})$ and such that $D$ can be written in block triangular form$$\left(\begin{array}{cc}
D_{\leq 1} & \ast \\
0 & D_{>1}
\end{array}\right),$$where $D_{\leq 1}$ and $D_{>1}$ only have eigenvalues of modulus $\leq 1$ and $>1$, respectively.
The proof of this proposition can be extracted from the more general proof one can find in [@ddp11-1 page 545]. For the reader’s convenience, we will recall those steps of the original proof that suffice for the proof of our proposition. One first shows that the generalized eigenspace of $D$ with respect to the eigenvalue 0 is an $F$–subspace of ${{\mathbb R}}^n$ and so one obtains a decomposition ${{\mathbb R}}^n= V_0 \oplus V_1$ such that $D$ takes up the form $$D= \left(\begin{array}{cc}
D_0 & \ast \\
0 & D_1
\end{array}\right),$$with respect to a basis of ${{\mathbb R}}^n$ consisting of a basis of $V_0$ complemented with a basis of $V_1$. Note that $D_0$ only has $0$ as an eigenvalue, while $D_1$ only has non-zero eigenvalues. Also, the representation $\rho$ decomposes as $\rho=\rho_0\oplus \rho_1$. Then the space ${{\mathbb R}}^n/V_0\cong V_1$ together with the representation $\rho_1$ and the linear transformation $D_1$ is considered and it is shown that this space has a direct decomposition $V_1= W_{\leq 1} \oplus W_{>1}$ as $F$–spaces such that $D_1$ is of the form $$D_1= \left(\begin{array}{cc}
D'_{\leq 1} & 0 \\
0 & D'_{>1}
\end{array}\right),$$ where $D'_{\leq 1}$ only has (non-zero) eigenvalues $\lambda$ of modulus $\leq1 $, while $D'_{>1}$ only has eigenvalues $\lambda$ of modulus $>1$. The proof now finishes by taking $V_{\leq 1}=V_0 \oplus W_{\leq 1}$ and $V_{>1}= W_{>1}$, and so $$D_{\leq 1}= \left(\begin{array}{cc}
D_0 & \ast \\
0 & D'_{\leq 1}
\end{array}\right)\mbox{ and } D_{> 1} = D'_{>1}.$$
Let $M=\Gamma\backslash G$ be an infra–nilmanifold, whose fundamental group is the almost–Bieberbach group $\Gamma\subseteq {{\rm Aff}}(G)$, having $F$ as its holonomy group and $\rho:F\rightarrow {{\rm GL}}({{\mathfrak g}})$ as its holonomy representation. Given a self-map $f:M\rightarrow M$ with affine homotopy lift $({\delta},{{\mathfrak D}})$, Proposition \[semi-affconj\] shows that there exists a map $\phi$ such that $\rho:F{\rightarrow}{{\rm GL}}({{\mathfrak g}})$, $\phi$ and ${{\mathfrak D}}_\ast$ satisfy the conditions of Proposition \[decomprho\].
In this specific case, we will refer to the decomposition $\rho=\rho_{\leq 1}\oplus \rho_{>1}$ obtained from Proposition \[decomprho\] as the decomposition of $\rho$ induced by ${{\mathfrak D}}$.
As already indicated in the introduction, it is our aim to have a good understanding of the exact relationship between the Nielsen and the Lefschetz number of a given map $f$ on an infra-nilmanifold. From Theorem \[LeeForm\] it is clear that the terms $\det(I-\rho(x) {{\mathfrak D}}_\ast)$ and especially their signs (in order to obtain the modulus of these terms) will play a crucial role in understanding this relationship. In the following lemma and proposition, we will therefore deduce how these signs behave.
\[expanding\] Let $\rho:F\to {{\rm GL}}_n({{\mathbb R}})$ be a representation of a finite group $F$ and $\phi: F\to F$ be any function. Let $D$ be a linear transformation of ${{\mathbb R}}^n$. Suppose that $\rho(\phi(x))D=D\rho(x)$ for all $x\in F$. Suppose that $|\lambda|>1$ for all eigenvalues $\lambda$ of $D$. Then the following statement holds: $$\forall x \in F: \ \det(\rho(x))\det(I-D)\det(I-\rho(x)D)>0.$$
This proof is largely based on the proof of [@ddm05-1 Theorem 3.2]. Choose an arbitrary $x \in F$. We can define a sequence $(x_i)_{i\in {{\mathbb N}}}$ of elements in $F$ by taking $x_1=x$ and such that $x_{i+1}= \phi(x_i)$. Since $F$ is finite, this sequence will become periodic from a certain point onwards. By [@ddm05-1 Lemma 3.1], we know that $$\label{induct1}\forall i \in {{\mathbb N}}: \;\det(I-\rho(x_i) D) =\det(I-\rho(x_{i+1}) D).$$ Also, by the same lemma, there exists an $l\in {{\mathbb N}}$ and an element $x_j$ in our sequence such that $(\rho(x_j)D)^l=D^l$. As every eigenvalue of $D$ has modulus $>1$, we know that every eigenvalue of $\rho(x_j)D$ will also have modulus $>1$. Let us call those eigenvalues $\lambda_1,\dots ,\lambda_n$, then $$\det(I-\rho(x_j) D)=(1-\lambda_1)\dots(1-\lambda_n).$$ Note that the complex eigenvalues, which always come in conjugate pairs, together with the negative real eigenvalues of $\rho(x_j) D$ can only give a positive contribution to this product. So the sign of $\det(I-\rho(x_j)D)$ is completely determined by the parity of the number of positive real eigenvalues. Analogously, the sign of $\det(I-D)$ is completely determined by the parity of the number of real positive eigenvalues of $D$.
As $\rho(x_j)$ is a real matrix of finite order, we know that $\det(\rho(x_j))$ equals $1$ or $-1$. If $\det(\rho(x_j))=1$, then $\det(\rho(x_j)D)=\det(D)$. A fortiori, $\det(\rho(x_j)D)$ and $\det(D)$ have the same sign (and are both non-zero). Hence, the parity of the number of negative real eigenvalues of $\rho(x_j)D$ and $D$ is the same, and therefore also the parity of the number of positive real eigenvalues of both matrices is the same (since complex eigenvalues come in conjugate pairs). It follows that in this case $\det(I-D)$ and $\det(I-\rho(x_j)D)$ have the same sign and, hence $$\label{inequality1}
\det(\rho(x_j))\det(I-D)\det(I-\rho(x_j)D)=\det(I-D)\det(I-\rho(x_j)D)>0.$$
When $\det(\rho(x_j))=-1$, we deduce in a similar way that the parity of the number of positive eigenvalues of $\rho(x_j)D$ and $D$ is different, hence $\det(I-D)$ and $\det(I-\rho(x_j)D)$ have an opposite sign and we find $$\label{inequality2}
\det(\rho(x_j))\det(I-D)\det(I-\rho(x_j)D)=-\det(I-D)\det(I-\rho(x_j)D)>0.$$
Note that for every $i \in {{\mathbb N}}$, we have$$\det(\rho(x_{i+1}))\det(D)=\det(\rho(x_{i+1})D)=\det(D\rho(x_{i}))=\det(D)\det(\rho(x_{i})).$$Because $\det(D)\neq 0$, this means that $$\det(\rho(x_{i+1}))=\det(\rho(x_{i})).$$By using an inductive argument on this expression and on expression (\[induct1\]), we find that $$\det(\rho(x))=\det(\rho(x_{j})) \textrm { and } \det(I-\rho(x) D)
=\det(I-\rho(x_{j}) D).$$ This, together with the two inequalities (\[inequality1\]) and (\[inequality2\]), proves this lemma.
\[Sign\] Let $\Gamma\subseteq {{\rm Aff}}(G)$ be an almost–Bieberbach group and let $M=\Gamma\backslash G$ be the corresponding infra–nilmanifold. Let $F$ be the holonomy group of $\Gamma$ and $\rho:F\to {{\rm GL}}_n({{\mathfrak g}})$ be the associated holonomy representation. Choose an arbitrary self-map $f:M\to M$ and let $({\delta},{{\mathfrak D}})\in {{\rm aff}}(G)$ be an affine homotopy lift of $f$. Let $\rho=\rho_{\leq 1}\oplus \rho_{>1}$ be the decomposition of $\rho$ induced by ${{\mathfrak D}}$. For every $x \in F$, the following statements hold: $$\det(\rho_{>1}(x))=1\Rightarrow \det(I-{{\mathfrak D}}_\ast)\det(I-\rho(x){{\mathfrak D}}_\ast)\geq 0$$and $$\det(\rho_{>1}(x))=-1\Rightarrow \det(I-{{\mathfrak D}}_\ast)\det(I-\rho(x){{\mathfrak D}}_\ast)\leq 0.$$
From Propositions \[semi-affconj\] and \[decomprho\], we know that there exists a function $\phi:F\to F$, such that $\rho(\phi(x)) {{\mathfrak D}}_\ast = {{\mathfrak D}}_\ast \rho(x)$, for all $x\in F$ and there exists a decomposition of ${{\mathfrak g}}$ into two subspaces leading to the decomposition $\rho=\rho_{\leq 1}\oplus \rho_{> 1}$, while ${{\mathfrak D}}_\ast$ can be written in block triangular form $$\left(\begin{array}{cc}
D_{\leq 1} & \ast \\
0 & D_{>1}
\end{array}\right),$$ where $D_{\leq 1}$ and $D_{>1}$ only have eigenvalues of modulus $\leq 1$ and $>1$, respectively.
For every $x\in F$ we have that $$\det(I-\rho(x){{\mathfrak D}}_\ast)=\det(I-\rho_{>1}(x)D_{>1})\det(I-\rho_{\leq 1}(x)D_{\leq 1}).$$ Analogously as in the proof of Lemma \[expanding\], we can show that $\rho_{\leq 1}(x)D_{\leq 1}$ only has eigenvalues of modulus $\leq 1$, from which it follows that $\det(I-\rho_{\leq 1}(x)D_{\leq 1})\geq 0$ (see also [@ddm05-1 Theorem 4.6]) and so the second factor in the equality above does not influence the sign of $\det(I-\rho(x){{\mathfrak D}}_\ast)$.
Therefore, for every $x\in F$, the following holds: $$\det(I-\rho(x){{\mathfrak D}}_\ast)\det(I-\rho_{>1}(x)D_{>1})\geq 0.$$ In particular, this inequality is true for the identity element of $F$: $$\det(I-{{\mathfrak D}}_\ast)\det(I-D_{>1})\geq 0.$$ By using both inequalities and because of Lemma \[expanding\] (for $D_{>1}$ and $\rho_{> 1}$), we deduce that $$\det(I-\rho(x){{\mathfrak D}}_\ast)\det(I-{{\mathfrak D}}_\ast)\det(\rho_{>1}(x))\geq 0,$$ which concludes this proposition.
Nielsen numbers and the positive part of a map
==============================================
In this section we will prove the main results of this paper. In the next theorem, we show how certain maps on a given infra-nilmanifold give rise to a specific 2-fold covering of that infra-nilmanifold, in such a way that the map under consideration lifts to that covering.
\[positivepart\] Let $M=\Gamma\backslash G$ be an infra-nilmanifold modeled on a connected, simply connected nilpotent Lie group $G$, with fundamental group the almost-Bieberbach group $\Gamma\subseteq {{\rm Aff}}(G)$. Let $p:\Gamma{\rightarrow}F$ denote the projection of $\Gamma$ onto its holonomy group and denote the holonomy representation by $\rho:F\to {{\rm GL}}_n({{\mathfrak g}})$. Choose an arbitrary self-map $f:M\to M$ and let $({\delta},{{\mathfrak D}})\in {{\rm aff}}(G)$ be an affine homotopy lift of $f$. Let $\rho=\rho_{\leq 1} \oplus \rho_{>1}$ be the decomposition of $\rho$ induced by ${{\mathfrak D}}$. Then $$\Gamma_+ = \{ \gamma \in \Gamma\;|\; \det(\rho_{>1}(p(\gamma))) =1 \}$$ is a normal subgroup of $\Gamma$ of index 1 or 2. It follows that $\Gamma_+$ is also an almost–Bieberbach group and that the corresponding infra-nilmanifold $M_+= \Gamma_+\backslash G$ is either equal to $M$ or a 2-fold covering of $M$. In the latter case, the map $f$ lifts to a map $f_+:M_+{\rightarrow}M_+$ which has the same affine homotopy lift $({\delta},{{\mathfrak D}})$ as $f$.
Remark: when $\rho=\rho_{\leq 1}$ (so when ${{\mathfrak D}}_\ast$ has no eigenvalues of modulus $>1$), we will take $\Gamma_+=\Gamma$.
We may assume that ${{\mathfrak D}}_\ast$ has at least one eigenvalue of modulus $>1$, otherwise the theorem is trivially true. Note that $\Gamma_+$ is the kernel of the morphism $$\Gamma {\rightarrow}\{-1,+1\}: \gamma \mapsto \det( \rho_{>1} (p(\gamma)))$$ and hence $\Gamma_+$ is either equal to $\Gamma$ (in case $ \det( \rho_{>1} (p(\gamma)))=1$ for all $\gamma$) or $[\Gamma:\Gamma_+]=2$ (in case $ \exists \gamma \in \Gamma: \det( \rho_{>1} (p(\gamma)))=-1$). In any of these two cases, $\Gamma_+$ is still an almost–Bieberbach group and when $[\Gamma:\Gamma_+]=2$, we have that $M_+$ is a 2-fold covering of $M$.
There is a lift $\tilde{f}: G {\rightarrow}G$ of $f$ and a morphism $\varphi:\Gamma {\rightarrow}\Gamma$ such that $$\forall \gamma \in \Gamma:\; \varphi(\gamma) \tilde{f} = \tilde{f} \gamma$$ and also $$\label{voorsubiet}
\forall \gamma \in \Gamma:\; \varphi(\gamma) ({\delta}, {{\mathfrak D}}) = ({\delta},{{\mathfrak D}}) \gamma.$$
We need to show that $\tilde{f}$ also induces a map on $M_+=\Gamma_+\backslash G$. For this, we need to prove that $\varphi(\Gamma_+) \subseteq \Gamma_+$. As before, we can assume that $${{\mathfrak D}}_\ast= \left( \begin{array}{cc}
D_{\leq 1} & \ast \\
0 & D_{>1}
\end{array}\right).$$ Let $\gamma=(a,{{\mathfrak A}})\in \Gamma_+$ and assume that $\varphi(\gamma)= (b,{{\mathfrak B}})$. As $\gamma \in \Gamma_+$, we have that $\det(\rho_{>1}({{\mathfrak A}})) = 1$. Equation (\[voorsubiet\]) implies that $${{\mathfrak B}}{{\mathfrak D}}= {{\mathfrak D}}{{\mathfrak A}}\Rightarrow {{\mathfrak B}}_\ast {{\mathfrak D}}_\ast = {{\mathfrak D}}_\ast {{\mathfrak A}}_\ast \Rightarrow
\rho_{>1}({{\mathfrak B}}) D_{>1} = D_{>1} \rho_{>1}({{\mathfrak A}})$$ As $\det(D_{>1})\neq 0$, this last equality implies that $\det(\rho_{>1}({{\mathfrak B}}))=\det( \rho_{>1}({{\mathfrak A}}))$, from which it follows that $\varphi(\gamma)=(b,{{\mathfrak B}})\in \Gamma_+$.
With the notations from Theorem \[positivepart\], we will call $\Gamma_+$ the positive part of $\Gamma$ with respect to $f$. We will say that $f_+$ is the positive part of $f$.
Note that for any $k\in {{\mathbb N}}$ we can take $({\delta},{{\mathfrak D}})^k$ as an affine homotopy lift of $f^k$. Therefore, the decomposition of $\rho$ into a direct sum $\rho=\rho_{\leq 1} \oplus \rho_{>1}$ is independent of $k$. It follows that the positive part $\Gamma_+$ of $\Gamma$ with respect to $f$ is the same as the positive part of $\Gamma$ with respect to $f^k$ for any $k\in {{\mathbb N}}$. We also have that $(f_+)^k=(f^k)_+$.
The proof of the following lemma can be left to the reader.
\[LemmaSign\] Let $D\in {{\mathbb R}}^{n\times n}$ be an arbitrary matrix. Let $p$ denote the number of real positive eigenvalues of $D$ which are strictly bigger than 1 and let $n$ denote the number of negative real eigenvalues of $D$ which are strictly smaller than $-1$, then for all $k\in {{\mathbb N}}$: $$\left\{\begin{array}{ll}
(-1)^p \det(I-D^k)\geq 0 & \textrm{if $k$ is odd,}\\
(-1)^{p+n} \det(I-D^k)\geq 0 & \textrm{if $k$ is even.}
\end{array}\right.$$It follows that one of the following holds$$\forall k\in {{\mathbb N}}:\; \det(I-D^k)\det(I-D^{k+1})\geq 0$$or$$\forall k\in {{\mathbb N}}:\; \det(I-D^k)\det(I-D^{k+1})\leq 0.$$
We are now ready to show the exact relationship between the Nielsen number of (any power of) a map $f$ and the Lefschetz number of (any power of) that map $f$ and its positive part $f_+$.
\[NielsenLefschetz\] Let $G$ be a connected, simply connected, nilpotent Lie group, $\Gamma\subseteq {{\rm Aff}}(G)$ an almost–Bieberbach group, $M=\Gamma\backslash G$ the corresponding infra-nilmanifold and $f:M\to M$ a map with affine homotopy lift $({\delta},{{\mathfrak D}})$. Let $p$ denote the number of positive real eigenvalues of ${{\mathfrak D}}_\ast$ which are strictly bigger than $1$ and let $n$ denote the number of negative real eigenvalues of ${{\mathfrak D}}_\ast$ which are strictly smaller than $-1$. Then we can express $N(f^k)$, for $k\in {{\mathbb N}}$, in terms of $L(f^k)$ and $L(f_+^k)$, where $f_+$ is the positive part of $f$, as follows:
  ------------ ------------------------------- -----------------------------------------
               $\Gamma=\Gamma_+$               $\Gamma\neq\Gamma_+$
  $k$ odd      $N(f^k)=(-1)^p L(f^k)$          $N(f^k)=(-1)^{p} (L(f_+^k)-L(f^k))$
  $k$ even     $N(f^k)=(-1)^{p+n} L(f^k)$      $N(f^k)=(-1)^{p+n} (L(f_+^k)-L(f^k))$
  ------------ ------------------------------- -----------------------------------------
Theorem \[LeeForm\] gave us the following formulas: $$\label{LefschetzFormula} L(f^k)=\frac{1}{\# F}\sum_{{{\mathfrak A}}\in F}\det(I-{{\mathfrak A}}_\ast{{\mathfrak D}}_\ast^k)$$ and $$\label{NielsenFormula} N(f^k)=\frac{1}{\# F}\sum_{{{\mathfrak A}}\in F}|\det(I-{{\mathfrak A}}_\ast{{\mathfrak D}}_\ast^k)|.$$ Due to Proposition \[Sign\] and Theorem \[positivepart\], we know that all elements of the form $\det(I-{{\mathfrak A}}_\ast{{\mathfrak D}}_\ast^k)$ have the same sign as $\det(I-{{\mathfrak D}}_\ast^k)$ if and only if $\Gamma=\Gamma_+$. If $\Gamma\neq \Gamma_+$, on the other hand, then only half of these elements will have the same sign, since $\Gamma_+$ is an index-two-subgroup of $\Gamma$.
First, suppose $\Gamma=\Gamma_+$ and $k$ is odd. By using Lemma \[LemmaSign\], we find that $$|\det(I-{{\mathfrak D}}_\ast^k)|=(-1)^p\det(I-{{\mathfrak D}}_\ast^k)\geq 0.$$Since all terms in equation (\[LefschetzFormula\]) have the same sign, we can replace the absolute values in equation (\[NielsenFormula\]) by multiplying with $(-1)^p$. Hence, we get$$N(f^k)=(-1)^p L(f^k).$$If $k$ is even, a similar argument shows that $$N(f^k)=(-1)^{p+n} L(f^k).$$
Now, suppose $\Gamma\neq\Gamma_+$ and $k$ is odd. Let us denote the holonomy group of $\Gamma_+$ by $F_+$. Note that $[F:F_+]=2$. In an obvious way, we can rewrite equation (\[NielsenFormula\]) as follows$$N(f^k)=\frac{1}{\# F}\left(\sum_{{{\mathfrak A}}\in F_+}|\det(I-{{\mathfrak A}}_\ast{{\mathfrak D}}_\ast^k)|+\sum_{{{\mathfrak A}}\in F\setminus F_+}|\det(I-{{\mathfrak A}}_\ast{{\mathfrak D}}_\ast^k)|\right).$$ By using some of the previous arguments, this gives us$$N(f^k)=\frac{(-1)^p}{\# F}\left(\sum_{{{\mathfrak A}}\in F_+}\det(I-{{\mathfrak A}}_\ast{{\mathfrak D}}_\ast^k)-\sum_{{{\mathfrak A}}\in F\setminus F_+}\det(I-{{\mathfrak A}}_\ast{{\mathfrak D}}_\ast^k)\right).$$Finally, this can be rewritten as$$N(f^k)=(-1)^p\frac{1}{\# F}\left(-\sum_{{{\mathfrak A}}\in F}\det(I-{{\mathfrak A}}_\ast{{\mathfrak D}}_\ast^k)\right)+(-1)^p\frac{2}{\# F}\left(\sum_{{{\mathfrak A}}\in F_+}\det(I-{{\mathfrak A}}_\ast{{\mathfrak D}}_\ast^k)\right).$$ Since $[F:F_+]=2$, we have $$N(f^k)=(-1)^{p}(-L(f^k)+L(f_+^k)).$$ If $k$ is even, we can deduce the following formula in a similar manner:$$N(f^k)=(-1)^{p+n} (L(f_+^k)-L(f^k)).$$
Finally, we can use the results above to describe the Nielsen zeta function of $f$ in terms of the Lefschetz zeta functions of $f$ and $f_+$.
\[Nielsenzeta\] Let $G$ be a connected, simply connected, nilpotent Lie group, $\Gamma\subseteq {{\rm Aff}}(G)$ an almost–Bieberbach group, $M=\Gamma\backslash G$ the corresponding infra-nilmanifold and $f:M\to M$ a map with affine homotopy lift $({\delta},{{\mathfrak D}})$. Let $p$ denote the number of positive real eigenvalues of ${{\mathfrak D}}_\ast$ which are strictly bigger than $1$ and let $n$ denote the number of negative real eigenvalues of ${{\mathfrak D}}_\ast$ which are strictly smaller than $-1$. Then $N_f(z)$ can be expressed in terms of $L_f(z)$ and $L_{f_+}(z)$ by the corresponding entry in the following table:
  --------------------- ----------------------------------------------- --------------------------------------------------------
                        $\Gamma=\Gamma_+$                               $\Gamma\neq\Gamma_+$
  $p$ even, $n$ even    $N_f(z)= L_f(z) $                               $N_f(z)= {\displaystyle}\frac{L_{f_+}(z)}{L_{f}(z)}$
  $p$ even, $n$ odd     $N_f(z)= {\displaystyle}\frac{1}{L_{f}(-z)} $   $N_f(z)= {\displaystyle}\frac{L_{f}(-z)}{L_{f_+}(-z)}$
  $p$ odd, $n$ even     $N_f(z)= {\displaystyle}\frac{1}{L_{f}(z)} $    $N_f(z)= {\displaystyle}\frac{L_{f}(z)}{L_{f_+}(z)}$
  $p$ odd, $n$ odd      $N_f(z)= L_f(-z) $                              $N_f(z)= {\displaystyle}\frac{L_{f_+}(-z)}{L_{f}(-z)}$
  --------------------- ----------------------------------------------- --------------------------------------------------------
Let us first consider the case where $n$ is even. By Theorem \[NielsenLefschetz\], we find, $\forall k\in {{\mathbb N}}$, $$N(f^k)=(-1)^p L(f^k) \textrm{ or } N(f^k)=(-1)^p (L(f_+^k)-L(f^k)),$$depending on whether $\Gamma$ equals $\Gamma_+$ or not. It is then straightforward to see that$$N_f(z)=L_f(z)^{(-1)^p} \textrm{ or } N_f(z)=\left(\frac{L_{f_+}(z)}{L_f(z)}\right)^{(-1)^p}.$$Now, suppose $n$ is odd and $\Gamma=\Gamma_+$. Then $$N(f^k)=\left\{\begin{array}{ll}
(-1)^p L(f^k) & \textrm{if $k$ is odd,}\\
-(-1)^{p} L(f^k) & \textrm{if $k$ is even.}
\end{array}\right.$$Therefore,$$N_f(z)=\exp\left(-(-1)^{p}\sum_{k=1}^{\infty}\frac{L(f^k)}{k}(-z)^k\right),$$which means that$$N_f(z)=\left(\frac{1}{L_f(-z)}\right)^{(-1)^p}.$$When $\Gamma\neq \Gamma_+$, a similar argument gives us the two remaining expressions.
As an immediate consequence of the above theorem, we can conclude that the Nielsen zeta function for maps on infra-nilmanifolds is indeed rational.
\[mainresult\] Let $G$ be a connected, simply connected, nilpotent Lie group. Let $M$ be an infra-nilmanifold modeled on $G$. Choose an arbitrary continuous self-map $f:M\to M$. Let $N_f(z)$ be the Nielsen zeta function of $f$, then $N_f(z)$ is a rational function.
This follows easily from Theorem \[Smale\] and Theorem \[Nielsenzeta\].
The class of infra-solvmanifolds of type $(R)$ is a class of manifolds which contains the class of infra-nilmanifolds and which still shares a lot of the good (algebraic) properties of the class of infra-nilmanifolds (see [@hlp10-1; @ll09-1]). Although we formulated the theory in terms of infra-nilmanifolds in this paper, the reader who is familiar with the class of infra-solvmanifolds of type $(R)$ will have noticed that all results and proofs directly generalize to this class of manifolds. Therefore the Nielsen zeta function for maps on infra-solvmanifolds of type $(R)$ will always be a rational function. We have chosen to formulate everything in terms of infra–nilmanifolds because this class of manifolds is much more widely known and because the original rationality question was formulated in terms of these manifolds.
Some examples
=============
In this section, we will illustrate our results by considering maps on a 3–dimensional flat manifold, so we are considering the case $G={{\mathbb R}}^3$. This situation is notationally much simpler than the general case, because we can identify the Lie algebra of ${{\mathbb R}}^3$ with ${{\mathbb R}}^3$ itself and so, for example, we will have that ${{\mathfrak D}}_\ast={{\mathfrak D}}$, etc. On the other hand, this situation is general enough to illustrate all possibilities of the formulas above.
Take $\{e_1,e_2,e_3\}$ as the standard basis of ${{\mathbb R}}^3$. Let $\Gamma$ be the $3$-dimensional Bieberbach group generated by the elements $(e_1,I), (e_2,I),(a,{{\mathfrak A}})$, with $${{\mathfrak A}}= \left(\begin{array}{ccc}
-1 & 0 & 0 \\
0& -1 & 0 \\
0& 0 & 1
\end{array}\right) \textrm{ and } a= \left(\begin{array}{c}
0 \\
0 \\
\frac{1}{2}
\end{array}\right).$$Note that $(a,{{\mathfrak A}})^2=(e_3,I)$, hence $F\cong {{\mathbb Z}}_2$ and $\Gamma\cap {{\mathbb R}}^3={{\mathbb Z}}^3$ is a lattice of ${{\mathbb R}}^3$.
Consider the affine map $({\delta},{{\mathfrak D}}):{{\mathbb R}}^3\to {{\mathbb R}}^3$ with $${{\mathfrak D}}= \left(\begin{array}{ccc}
4 & 2 & 0 \\
-1 & 1 & 0 \\
0& 0 & 5
\end{array}\right) \textrm{ and } {\delta}= \left(\begin{array}{c}
0 \\
0 \\
0
\end{array}\right).$$
One can check that $$(e_3,I)^2 (a,{{\mathfrak A}})({\delta},{{\mathfrak D}}) = ({\delta},{{\mathfrak D}})(a,{{\mathfrak A}})$$ from which it now easily follows that $({\delta},{{\mathfrak D}}) \Gamma \subseteq \Gamma({\delta},{{\mathfrak D}})$ and hence there is a morphism $\varphi:\Gamma \rightarrow \Gamma$ such that $$\forall \gamma\in \Gamma:\; \varphi(\gamma) ({\delta},{{\mathfrak D}}) = ({\delta},{{\mathfrak D}}) \gamma,$$ showing that $({\delta},{{\mathfrak D}})$ induces a map $f:\Gamma\backslash {{\mathbb R}}^3 \rightarrow \Gamma\backslash {{\mathbb R}}^3$.
The eigenvalues of ${{\mathfrak D}}$ are $2,\;3$ and $5$. By using the formulas from Theorem \[LeeForm\], we find that $$L(f^k)=\frac{(1-5^k)((1-2^k)(1-3^k)+(1+2^k)(1+3^k))}{2}=(1-5^k)(1+6^k)=1^k-5^k+6^k-30^k.$$By using the fact that $$\sum_{k=1}^{\infty}\frac{\lambda^k z^k}{k}=-\ln(1-\lambda z),$$we find that $$L_f(z)=\frac{(1-5z)(1-30z)}{(1-z)(1-6z)}.$$Because every eigenvalue of ${{\mathfrak D}}$ is strictly larger than $1$, we have that $D_{>1}={{\mathfrak D}}$ and because $\det({{\mathfrak A}})=1$, it follows immediately that $\Gamma=\Gamma_+$. With the same notation as above, we see that $p=3$ and $n=0$. Therefore, by Theorem \[NielsenLefschetz\] and Theorem \[Nielsenzeta\], we find that $$N(f^k)=-L(f^k) \textrm{ and } N_f(z)=L_f(z)^{-1}= \frac{(1-z)(1-6z)}{(1-5z)(1-30z)}.$$
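As a sanity check of the computations above, the same numbers can be reproduced directly from the averaging formulas of Theorem \[LeeForm\]; the following short numerical sketch does this for the first few powers of $f$.

```python
import numpy as np

A = np.diag([-1.0, -1.0, 1.0])            # the holonomy generator
D = np.array([[4.0, 2.0, 0.0],
              [-1.0, 1.0, 0.0],
              [0.0, 0.0, 5.0]])           # eigenvalues 2, 3 and 5
I3 = np.eye(3)

for k in range(1, 6):
    Dk = np.linalg.matrix_power(D, k)
    terms = [np.linalg.det(I3 - Dk), np.linalg.det(I3 - A @ Dk)]
    L = sum(terms) / 2                    # Lefschetz number L(f^k)
    N = sum(abs(t) for t in terms) / 2    # Nielsen number N(f^k)
    assert np.isclose(L, 1 - 5**k + 6**k - 30**k)
    assert np.isclose(N, -L)              # N(f^k) = -L(f^k), since p = 3 and n = 0
print("L(f^k) = 1 - 5^k + 6^k - 30^k and N(f^k) = -L(f^k) hold for k = 1,...,5")
```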
Now consider the map $g:\Gamma\backslash {{\mathbb R}}^3\to \Gamma\backslash {{\mathbb R}}^3$, induced by the affine map $({\delta}',{{\mathfrak D}}'):{{\mathbb R}}^3\to {{\mathbb R}}^3$ with $${{\mathfrak D}}'= \left(\begin{array}{ccc}
-2 & 8 & 0 \\
-1 & 4 & 0 \\
0& 0 & -3
\end{array}\right) \textrm{ and } {\delta}'= \left(\begin{array}{c}
0 \\
0 \\
0
\end{array}\right).$$ The fact that $({\delta}',{{\mathfrak D}}')$ induces a map on $\Gamma \backslash {{\mathbb R}}^3$, follows from the fact that $$(e_3,I)^{-2}(a,{{\mathfrak A}})({\delta}',{{\mathfrak D}}') = ({\delta}',{{\mathfrak D}}')(a,{{\mathfrak A}}).$$ Again, a straightforward calculation shows that $0$, $2$ and $-3$ are the eigenvalues of ${{\mathfrak D}}'$. In a similar way as before, one can check that $$L(g^k)=1-(-3)^k \textrm{ and } L_g(z)=\frac{1+3z}{1-z}.$$Note that ${{\mathfrak A}}$ and ${{\mathfrak D}}'$ are simultaneously diagonalizable. Using this diagonalization, we have that $$D_{>1}=\left(\begin{array}{cc}
2 & 0 \\ 0 & -3
\end{array}\right)\mbox{ and }
\rho_{>1} ({{\mathfrak A}})=\left( \begin{array}{cc}
-1 & 0 \\0 & 1
\end{array}\right) .$$ Since $\det(\rho_{>1}({{\mathfrak A}}))=-1$ we know $\Gamma\neq \Gamma_+$. In fact $\Gamma_+=\Gamma\cap {{\mathbb R}}^3$, from which it follows that $g_+$ is a map on the $3$-dimensional torus $T^3$, such that $$L(g_+^k)=\det(I-{{\mathfrak D}})=(1-2^k)(1-(-3)^k)=1^k-2^k-(-3)^k+(-6)^k.$$We deduce that $$L_{g_+}(z)=\frac{(1-2z)(1+3z)}{(1-z)(1+6z)}.$$Because $p=n=1$ and because of Theorem \[Nielsenzeta\], we find $$N_g(z)=\frac{L_{g_+}(-z)}{L_{g}(-z)}=\frac{1+2z}{1-6z}.$$Note that this expression for the Nielsen zeta function allows us to say that $$N(g^k)=6^k-(-2)^k,$$which could also be computed by using Theorem \[NielsenLefschetz\].
[10]{}
Anosov, D. . Uspekhi. Mat. Nauk, 1985, 40 4(224), pp. 133–134. English transl.: Russian Math. Surveys, 40 (no. 4), 1985, pp. 149–150.
Brown, R. F. . Scott, Foresman and Company, 1971.
Dekimpe, K., De Rock, B., and Malfait, W. . J. Geom. Phys., 2004, 52, pp. 174–185.
Dekimpe, K., De Rock, B., and Malfait, W. . Monatschefte für Mathematik, 2007, 150, pp. 1–10.
Dekimpe, K., De Rock, B., and Penninckx, P. . Asian J. Math., 2011, 15 4, 539–548.
Fadell, E. and Husseini, S. On a theorem of [A]{}nosov on [N]{}ielsen numbers for nilmanifolds. In [*Nonlinear functional analysis and its applications ([M]{}aratea, 1985)*]{}, volume 173 of [*NATO Adv. Sci. Inst. Ser. C Math. Phys. Sci.*]{}, pages 47–53. Reidel, Dordrecht, 1986.
Fel’shtyn, A. . Mem. Amer. Math. Soc., 2000, 147 699, xii+146.
Fel’shtyn, A. . Zap. Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov. (POMI), 2000, 266 Teor. Predst. Din. Sist. Komb. i Algoritm. Metody. 5, 312–329.
Fel’shtyn, A. and Hill, R. Dynamical zeta functions, congruences in [N]{}ielsen theory and [R]{}eidemeister torsion. In [*Nielsen theory and [R]{}eidemeister torsion ([W]{}arsaw, 1996)*]{}, volume 49 of [*Banach Center Publ.*]{}, pages 77–116. Polish Acad. Sci., Warsaw, 1999.
Fel’shtyn, A. L. New zeta functions for dynamical systems and [N]{}ielsen fixed point theory. In [*Topology and geometry—[R]{}ohlin [S]{}eminar*]{}, volume 1346 of [*Lecture Notes in Math.*]{}, pages 33–55. Springer, Berlin, 1988.
Ha, K. Y., Lee, J. B., and Penninckx, P. . P. Am. Math. Soc., 2010. To be published.
Keppelmann, E. C. and McCord, C. K. . Pacific J. Math., 1995, 170, No. 1, pp. 143–159.
Lee, J. B. and Lee, K. B. . Nagoya Math. J., 2009, 196 117–134.
Lee, K. B. . Pacific J. Math., 1995, 168, 1, pp. 157–166.
Li, L. . Adv. in Math. (China), 1994, 23 3, 251–256.
Pilyugina, V. B. and Fel’shtyn, A. L. . Funktsional. Anal. i Prilozhen., 1985, 19 4, 61–67, 96.
Smale, S. . Bull. Amer. Math. Soc., 1967, 73, pp. 747–817.
Wong, P. . J. Korean Math. Soc., 2001, 38 6, 1107–1116.
|
---
address: |
$^\star$ Politecnico di Torino, Italy\
$^\dagger$ INRIA, France
bibliography:
- 'discos\_spt.bib'
title: Exact Performance Analysis of the Oracle Receiver for Compressed Sensing Reconstruction
---
A sparse or compressible signal can be recovered from a certain number of noisy random projections, smaller than that dictated by classic Shannon/Nyquist theory. In this paper, we derive the *closed–form* expression of the mean square error performance of the *oracle* receiver, which knows the sparsity pattern of the signal. With respect to existing bounds, our result is *exact* and does not depend on a particular realization of the sensing matrix. Moreover, our result holds irrespective of whether the noise affecting the measurements is white or correlated. Numerical results show a perfect match between equations and simulations, confirming the validity of the result.
Compressed Sensing, Oracle Receiver, Wishart Matrix
Introduction {#sec:intro}
============
Compressed sensing (CS) [@donoho2006cs; @candes2006nos] has emerged in past years as an efficient technique for sensing a signal with fewer coefficients than dictated by classic Shannon/Nyquist theory. The hypothesis underlying this approach is that the signal to be sensed must have a sparse – or at least compressible – representation in a convenient basis. In CS, sensing is performed by taking a number of linear projections of the signal onto pseudorandom sequences. Therefore, the acquisition presents appealing properties. First, it requires *low encoding complexity*, since no sorting of the sparse signal representation is required. Second, the choice of the sensing matrix distribution is blind to the source distribution.
Several different techniques can be used to reconstruct a signal from CS measurements. Often, for performance assessment, the ideal *oracle* receiver, *i.e.,* a receiver with perfect knowledge of the signal sparsity support, is considered as a benchmark. But even for this ideal receiver, only upper and lower performance bounds are available. For example, in [@eldar2012compressed] a bound depending on a particular realization of the sensing matrix was derived. This bound represents a worst–case scenario since it depends on the maximum norm of the noise vector. An average (over noise) bound was presented in [@DBLP:journals/corr/abs-1104-4842] for white noise and in [@laska2012regime] for correlated noise. Both bounds depend on the Restricted Isometry Property (RIP) constant of the sensing matrix, a parameter taking different values from realization to realization of the sensing matrix and whose evaluation represents a combinatorial complexity problem. Even if there exist classes of matrices satisfying the RIP with a certain constant with high probability, this would give a probabilistic result, restricted to a specific class of sensing matrices. Moreover, note that [@laska2012regime] overestimates the reconstruction error, giving a result which depends on the maximum eigenvalue of the noise covariance matrix. Other results can be found in [@arias2013fundamental] and [@candes2013well].
In this paper, we present the *exact* average performance of the oracle receiver. The average is taken over both the noise distribution and the sensing matrix distribution, and does not depend on the RIP constant of a specific sensing matrix (or family of sensing matrices), but only on system or signal parameters. Using some recent results about Wishart random matrix theory [@Cook2011], we show that, apart from system parameters, the performance depends only on the variance of the noise, and not on its covariance. Hence, our result applies to systems where the measurements are corrupted by either white or correlated noise.
Background {#sec:background}
==========
Compressed Sensing {#sec:background_CS}
------------------
In the standard CS framework, introduced in [@donoho2006cs; @candes2006nos], the signal $\x\in\Ri^{N\times 1}$, having a $K$–sparse representation in some basis $\Ps\in\Ri^{N\times N}$, *i.e.*: $\x = \Ps \bm{\theta},\quad \lzeronorm{\bm{\theta}} = K,\quad K\ll N$, can be recovered from a smaller vector of noisy linear measurements $\y = \Ph\x+\z$, with $\y\in\Ri^{M\times 1}$ and $K<M<N$, where $\Ph\in\Ri^{M\times N}$ is the *sensing matrix* and $\mat{z}\in\Ri^{M\times 1}$ is the vector representing additive noise such that $\ltwonorm{\mat{z}} < \varepsilon$, by solving the $\ell_1$ minimization with inequality constraints $$\label{eq:CS_recovery_relaxed}
\widehat{\bm{\theta}}=\arg\min_{\bm{\theta}}\lonenorm{\bm{\theta}}\ \quad \text{s.t.}\quad \ltwonorm{\Ph\Ps\bm{\theta} - \y} < \varepsilon~$$ and $\widehat{\x}=\Ps\widehat{\bm{\theta}}$, known as basis pursuit denoising, provided that $M = O(K\log(N/K))$ and that each submatrix consisting of $K$ columns of $\Ph\Ps$ is (almost) distance preserving [@eldar2012compressed Definition 1.3]. The latter condition is the *Restricted Isometry Property* (RIP). Formally, the matrix $\Ph\Ps$ satisfies the RIP of order $K$ if $\exists \delta_K \in (0,1]$ such that, for any $\bm{\theta}$ with $\lzeronorm{\bm{\theta}} \le K$: $$(1-\delta_K)\ltwonorm{\bm{\theta}}^2\le\ltwonorm{\Ph\Ps\bm{\theta}}^2\le(1+\delta_K)\ltwonorm{\bm{\theta}}^2,
\label{eq:RIP}$$ where $\delta_K$ is the RIP constant of order $K$. It has been shown in [@baraniuk2008spr] that when $\Ph$ is an i.i.d. random matrix drawn from any subgaussian distribution and $\Ps$ is an orthogonal matrix, $\Ph\Ps$ satisfies the RIP with overwhelming probability.
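For concreteness, problem (\[eq:CS\_recovery\_relaxed\]) can be prototyped with a generic convex solver. The sketch below uses the cvxpy package on randomly generated data; the package and all numerical values are our own illustrative choices, not tools or settings used in this paper.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
N, M, K, eps = 128, 48, 5, 0.1

Psi = np.eye(N)                                   # sparsity basis (identity for simplicity)
theta = np.zeros(N)
theta[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)    # Gaussian sensing matrix
y = Phi @ Psi @ theta + 0.01 * rng.standard_normal(M)

# Basis pursuit denoising: minimize the l1 norm subject to the data-fidelity constraint
t = cp.Variable(N)
prob = cp.Problem(cp.Minimize(cp.norm(t, 1)),
                  [cp.norm(Phi @ Psi @ t - y, 2) <= eps])
prob.solve()
x_hat = Psi @ t.value
```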
Wishart Matrices {#sec:background_wishart}
----------------
Let $\x_i$ be a zero–mean Gaussian random vector with covariance matrix $\bm{\Sigma}$. Collect $n$ realizations of $\x_i$ as rows of the $n\times p$ matrix $\X$. Hence, $\trasp{\X}\X$ is distributed as a $p$-dimensional Wishart matrix with scale matrix $\bm{\Sigma}$ and $n$ degrees of freedom [@press1982applied]: $$\mat{W} = \trasp{\X}\X\sim \W_p\left(\bm{\Sigma},n\right)~.$$ When $n>p$, $\mat{W}$ can be inverted. The distribution of $\inv{\mat{W}}$ is the *Inverse Wishart*, whose distribution and moments were derived in [@von1988moments]: $$\inv{\mat{W}} \sim \inv{\W}_p\left(\inv{\bm{\Sigma}},n\right)~.$$ On the other hand, when $n<p$, $\mat{W}$ is rank–deficient, hence not invertible. Its Moore–Penrose pseudoinverse $\mat{W}^\dagger$ follows a generalized inverse Wishart distribution, whose distribution is given in [@DiazGarcia2006] and mean and variance were recently derived in [@Cook2011 Theorem 2.1], under the assumptions that $p>n+3$ and $\mat{\Sigma} = \mat{I}$.
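The moment that matters in the sequel is the mean of this pseudo-inverse: for $\bm{\Sigma}=\mat{I}$ it equals $\frac{n}{p(p-n-1)}\mat{I}$, which is the value used in the proof of the next section. A quick Monte Carlo sketch, with hypothetical sizes, is the following.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n, trials = 12, 5, 20000              # hypothetical sizes, with p > n + 3

acc = np.zeros((p, p))
for _ in range(trials):
    X = rng.standard_normal((n, p))      # n rows drawn from N(0, I_p)
    acc += np.linalg.pinv(X.T @ X)       # pseudo-inverse of a singular Wishart matrix
print(np.mean(np.diag(acc / trials)))    # Monte Carlo estimate of the diagonal entries
print(n / (p * (p - n - 1)))             # n / (p (p - n - 1)) = 5/72, about 0.069
```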
Performance of the Oracle Receiver {#sec:oracle_performance}
==================================
System model {#sec:system_model}
------------
Consider the vector $\x = \Ps \bm{\theta} \in \R^N$. The nonzero components of the $K$–sparse vector $\thet$ are modeled as i.i.d. centred random variables with variance $\var_{\theta}$.
The vector $\x$ is observed through a smaller vector of noisy Gaussian measurements defined as the vector $\y\in\Ri^{M}$ such that $\y = \Ph\x +\z$, where the sensing matrix $\Ph\in\Ri^{M \times N}$, with $M<N$, is a random matrix with i.i.d. entries drawn from a zero–mean Gaussian distribution with variance $\var_{\Phi}$ and $\z\in\Ri^{M\times 1}$, representing the noise, is drawn from a zero–mean multivariate random distribution with covariance matrix $\mat{\Sigma}_z$.
We remark here that in our analysis we consider measurements affected both by white noise, *i.e.,* the case where $\mat{\Sigma}_z = \sigma^2_z\mat{I}$, like thermal noise or quantization noise deriving from a uniform scalar quantizer in the high–rate regime, and by correlated noise, like the one affecting measurements quantized using vector quantization or the noise at the output of a low–pass filter.
Error affecting the oracle reconstruction {#subsec:RD_CS reconstruction}
-----------------------------------------
We now evaluate the performance of CS reconstruction with noisy measurements. The performance depends on the amount of noise affecting the measurements. In particular, the distortion ${\ltwonorm{\widehat{\x}-\x}^2}$ is upper bounded by the noise variance up to a scaling factor [@candes2006ssr; @candes2008restricted] $\ltwonorm{\widehat{\x}-\x}^2 \le c^2\varepsilon^2$, where the constant $c$ depends on the realization of the measurement matrix, since it is a function of the RIP constant. Since we consider the average[^1] performance, we need to consider the worst case $c$ and this upper bound will be very loose [@eldar2012compressed Theorem 1.9].
Here, we consider the *oracle* estimator, which is the estimator knowing exactly the sparsity support $\Omega=\{i|\bm{\theta}_i\neq 0\}$ of the signal $\x$.
Let $\mat{U}_{\Omega}$ be the submatrix of $\U = \Ph\Ps$ obtained by keeping the columns indexed by $\Omega$, and let $\Omega^c$ denote the complementary set of indexes. The optimal reconstruction is then obtained by using the pseudo–inverse of $\mat{U}_{\Omega}$, denoted by $\U_{\Omega}^\dagger$: $$\begin{aligned}
\label{eq:rec_oracle}
&\left\{\begin{array}{ll}
\widehat\thet_{\Omega} & =
\mat{U}^\dagger_{\Omega} \y := \left(\trasp{\mat{U}_{\Omega}}\mat{U}_{\Omega}\right)^{-1}\trasp{\mat{U}_{\Omega}}\y \\
\widehat\thet_{\Omega^c} & = \0
\end{array}\right.\\
&\widehat\x = \Ps \widehat\thet\end{aligned}$$
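In code, the oracle estimator is simply a least-squares solve restricted to the columns selected by $\Omega$; a minimal sketch, assuming the support is known exactly, is given below. Since $M>K$, the selected columns have full column rank with probability one, so the least-squares solution coincides with $\U_{\Omega}^\dagger\y$.

```python
import numpy as np

def oracle_estimate(y, Phi, Psi, support):
    """Oracle reconstruction: least squares restricted to the known support Omega."""
    U = Phi @ Psi
    theta = np.zeros(U.shape[1])
    # U_Omega^dagger y, computed as a least-squares solution for numerical stability
    sol, _, _, _ = np.linalg.lstsq(U[:, support], y, rcond=None)
    theta[support] = sol
    return Psi @ theta
```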
For the oracle estimator, upper and lower bounds depending on the RIP constant can be found, for example in [@DBLP:journals/corr/abs-1104-4842] when the noise affecting the measurements is white and in [@laska2012regime] when the noise is correlated. Unlike [@DBLP:journals/corr/abs-1104-4842; @laska2012regime], in this paper the average performance of the oracle, depending on system parameters only, is derived exactly. Relations with previous work will be thoroughly described in section \[sec:prev\_work\].
As we will show in the following sections, the characterization of the ideal oracle estimator allows one to derive the reconstruction rate–distortion (RD) functions, with results holding also when non-ideal estimators are used.
\[th:RD reconstruction non distributed\] Let $\x$ and $\y$ be defined as in section \[sec:system\_model\]. Assume reconstruction by the oracle estimator, i.e., the support $\Omega$ of $\x$ is available at the receiver. The average reconstruction error of any reconstruction algorithm is lower bounded by that of the oracle estimator, which satisfies $$\label{eq:theo_1}
\mean{\ltwonorm{\widehat\x-\x}^2} = \frac{K}{M (M-K-1)} \frac{\trace{\mat{\Sigma}_z}}{\sigma^2_\Phi}$$
**[Proof.]{}** We derive a lower bound on the achievable distortion by assuming that the sparsity support $\Omega$ of $\x$ is known at the decoder.
Hence, $$\begin{aligned}
\mean{\ltwonorm{\widehat\x-\x}^2} &= \mean{\ltwonorm{\widehat\thet-\thet}^2} = \mean{\ltwonorm{\widehat\thet_{\Omega}-\thet_{\Omega}}^2}
\label{eq:rc 1}\\
&= \mean{\ltwonorm{ \UpO \z}^2}
\label{eq:rc 2}\\
&= \mean{\z^T \mean{(\UO\UO^T)^\dagger} \z }
\label{eq:rc 3}\end{aligned}$$ The first equality in (\[eq:rc 1\]) follows from the orthogonality of the matrix $\Ps$, whereas the second one follows from the assumption that $\Omega$ is the true support of $\thet$. Equality (\[eq:rc 2\]) comes from the definition of the pseudo-inverse, and (\[eq:rc 3\]) follows from the equality $\UpOT \UpO = (\UO\UO^T)^\dagger$ and from the statistical independence of $\UO$ and $\z$. Then, if $M>K+3$, $$\begin{aligned}
\mean{\ltwonorm{\widehat\x-\x}^2} &= \mean{\z^T \frac{K}{M (M-K-1)} \frac{1}{\sigma^2_\Phi} \I \ \z }
\label{eq:rc 4}\\
&= \frac{K}{M (M-K-1)} \frac{\trace{\mat{\Sigma}_z}}{\sigma^2_\Phi}
%\label{eq:rc 5}\end{aligned}$$ where (\[eq:rc 4\]) comes from the fact that, since $M>K$, $\UO \UO^T$ is rank deficient and follows a singular $M$-variate Wishart distribution with $K$ degrees of freedom and scale matrix $\sigma^2_\Phi \I$ [@DiazGarcia2006]. Its pseudo-inverse follows a generalized inverse Wishart distribution, whose distribution is given in [@DiazGarcia2006] and whose mean value is given in [@Cook2011 Theorem 2.1] under the assumption that $M>K+3$. Note that the condition $M>K+3$ is not restrictive, since it holds for all $K$ and $M$ of practical interest. It can also be noticed that the distortion of the oracle depends only on the variances of the elements of $\z$ and not on its covariance matrix. Therefore, our result holds even if the noise is correlated (for instance, if vector quantization is used). As a consequence, we can apply our result to any quantization algorithm or to noise not resulting from quantization. Note that, if the elements of $\z$ have the same variance $\var_z$, (\[eq:theo\_1\]) reduces to $$\label{eq:theo_1_1}
\mean{\ltwonorm{\widehat\x-\x}^2} = \frac{K}{M-K-1} \frac{\sigma_z^2}{\sigma^2_\Phi}$$ $\square$
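For later reference, the prediction of (\[eq:theo\_1\]) can be wrapped in a small helper that takes only the system parameters; this snippet is merely a convenience sketch and is not part of the derivation.

```python
import numpy as np

def oracle_mse(K, M, Sigma_z, var_Phi):
    """Average oracle distortion predicted by the theorem above, valid for M > K + 3."""
    return K / (M * (M - K - 1)) * np.trace(Sigma_z) / var_Phi
```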
### Relations with previous work {#sec:prev_work}
The results obtained in Theorem \[th:RD reconstruction non distributed\] provide a twofold contribution with respect to results already existing in the literature about the oracle reconstruction. First, they are exact and not given as bounds. Second, they do not depend on parameters which cannot be evaluated in practical systems, *e.g.,* the RIP constant of the sensing matrices. For example, in [@eldar2012compressed] the following worst–case upper bound was derived $$\ltwonorm{\widehat{\x}-\x}^2 \le \frac{1}{1-\delta_{2K}}\ltwonorm{\z}^2~,$$ which depends on a particular realization of the sensing matrix, since it depends on its RIP constant $\delta_{2K}$, and is very conservative, since it is a function of the maximum $\ell_2$ norm of the noise vector. An average evaluation (over noise) was given in [@DBLP:journals/corr/abs-1104-4842 Theorem 4.1], where the performance of the oracle receiver with measurements affected by white noise was derived $$\label{eq:oracle_liter}
\frac{K}{1+\delta_{K}}\var_{z}\le \meanover{\z}{\ltwonorm{\widehat{\x}-\x}^2} \le \frac{K}{1-\delta_{K}}\var_{z}$$ but still the equation depends on the RIP constant of the sensing matrix and hence, on a particular realization. The result of was generalized in [@laska2012regime] to correlated noise $$\label{eq:oracle_corr_liter}
\meanover{\z}{\ltwonorm{\widehat{\x}-\x}^2} \le \frac{K}{1-\delta_{K}}\lambda_{\max}(\mat{\Sigma}_z)~,$$ where $\mat{\Sigma}_z$ is the covariance matrix of $\z$ and $\lambda_{\max}(\cdot)$ represents the maximum eigenvalue of the argument. Hence, (\[eq:oracle\_corr\_liter\]) represents an even looser bound, since the contribution of the noise correlation is upper bounded by using its largest eigenvalue.
Finally, the results of Theorem \[th:RD reconstruction non distributed\] can help to generalize related results, *e.g.,* the Rate–Distortion performance of systems based on Compressed Sensing. See for example [@dai2011quantized section III.C], where a lower bound is derived, or [@coluccia2013operational], where the exact RD performance is derived.
Numerical Results {#sec:num_res}
=================
In this section, we show the validity of the results of Theorem \[th:RD reconstruction non distributed\] by comparing the equations to the results of simulations. Here and in the following sections, the signal length is $N=512$ with sparsity $K=16$, and $M=128$ measurements are taken. The nonzero elements of the signal are distributed as $\N(0,1)$. The sparsity basis is the DCT matrix. The sensing matrix is composed of i.i.d. elements distributed as zero–mean Gaussian with variance $1/M$. The noise vector is Gaussian with zero mean, while the covariance matrix depends on the specific test and will be discussed later. The reconstructed signal $\widehat{\x}$ is obtained using the oracle estimator. A different realization of the signal, noise, and sensing matrix is drawn for each trial, and the reconstruction error $\ltwonorm{\widehat\x-\x}^2$ is averaged over 1,000 trials.
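A minimal NumPy sketch of this Monte Carlo experiment for the white noise case is given below; the noise standard deviation and the random seed are illustrative choices of ours, while the remaining parameters follow the setup described above.

```python
import numpy as np

N, K, M, trials, sigma_z = 512, 16, 128, 1000, 0.1
n = np.arange(N)                                   # orthonormal DCT-II basis (columns)
Psi = np.sqrt(2.0 / N) * np.cos(np.pi * (n[:, None] + 0.5) * n[None, :] / N)
Psi[:, 0] /= np.sqrt(2.0)

rng = np.random.default_rng(0)
err = 0.0
for _ in range(trials):
    Phi = rng.normal(0.0, np.sqrt(1.0 / M), size=(M, N))    # var_Phi = 1/M
    Omega = rng.choice(N, size=K, replace=False)
    theta = np.zeros(N)
    theta[Omega] = rng.normal(0.0, 1.0, size=K)             # nonzeros ~ N(0, 1)
    x = Psi @ theta
    y = Phi @ x + rng.normal(0.0, sigma_z, size=M)          # white noise case
    U_Omega = (Phi @ Psi)[:, Omega]                         # oracle reconstruction
    theta_hat = np.zeros(N)
    theta_hat[Omega] = np.linalg.lstsq(U_Omega, y, rcond=None)[0]
    err += np.sum((Psi @ theta_hat - x) ** 2) / trials

print(err, K / (M - K - 1) * sigma_z**2 * M)                # empirical vs. eq. (theo_1_1)
```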
White noise
-----------
In this first experiment, the measurement vector is corrupted by white Gaussian noise, *i.e.,* $\z\sim\N_p(\mat{0}, \var_z\mat{I}_M)$. Fig. \[fig:oracle\_white\] shows the comparison between the simulated reconstruction error and (\[eq:theo\_1\_1\]). It can be easily noticed that the match between the simulated and theoretical curves is perfect. As a term of comparison, we also plot the upper and lower bounds of (\[eq:oracle\_liter\]), for $\delta_K = 0$ (ideal case) and $\delta_K = 0.5$. It can be noticed that for $\delta_K = 0$ the two bounds match and are close to the simulated curve, but even the upper bound lies below the simulated curve. Instead, for $\delta_K = 0.5$ the two bounds are almost symmetric with respect to the simulated curve but quite far from it. The conclusion is that bounds in the form of (\[eq:oracle\_liter\]) are difficult to use due to the lack of knowledge of the RIP constant. Even if the sensing matrix belongs to a class where a probabilistic expression of the RIP constant exists, like the ones in [@vershynin2012nonasymptotic], a specific value depending on system parameters only is usually difficult to obtain since it depends on constants whose value is unknown or hard to compute. Tests with generic diagonal $\mat{\Sigma}_z$ have also been run, confirming a perfect match with (\[eq:theo\_1\]).
![[]{data-label="fig:oracle_white"}](oracle_white_icassp.pdf){width="0.95\columnwidth"}
### Uniform scalar quantization
A practical application of the white noise case is a system where the measurement vector is quantized using a uniform scalar quantizer with step size $\Delta$. In this case, equation (\[eq:theo\_1\_1\]) is very handy, because it is well known that in the high–rate regime the quantization noise can be considered uncorrelated, with variance equal to $\frac{\Delta^2}{12}$. In Fig. \[fig:oracle\_white\_quant\], we plot the reconstruction error of the oracle from quantized measurements vs. the step size $\Delta$. It can be noticed that the match between the simulations and the proposed equation is perfect in the high–rate regime, *i.e.,* when the step size gets small.
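A short sketch of this use of (\[eq:theo\_1\_1\]) is given below: the measurements are passed through a mid-tread uniform scalar quantizer, and the predicted oracle error is obtained by plugging $\sigma_z^2=\Delta^2/12$ into the equation; both helper functions are illustrative.

```python
import numpy as np

def quantize(y, Delta):
    """Mid-tread uniform scalar quantizer with step size Delta."""
    return Delta * np.round(y / Delta)

def predicted_oracle_mse(K, M, Delta, var_Phi):
    """Eq. (theo_1_1) with the high-rate approximation sigma_z^2 = Delta^2 / 12."""
    return K / (M - K - 1) * (Delta**2 / 12.0) / var_Phi
```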
![[]{data-label="fig:oracle_white_quant"}](oracle_white_icassp_quant.pdf){width="0.95\columnwidth"}
Correlated noise
----------------
We also report in Fig. \[fig:oracle\_corr\] the results obtained by reconstructing, with the oracle receiver, measurements corrupted by correlated noise. In particular, the $i,j$-th element of the noise covariance matrix is given by $(\mat{\Sigma}_z)_{i,j} = \var_z\rho^{|i-j|}$. The correlation coefficient takes the values $\rho = 0.9$ and $0.999$. We compare the simulations with (\[eq:theo\_1\]) and with the upper bound of (\[eq:oracle\_corr\_liter\]), for $\delta_K = 0$ (ideal case) and $\delta_K = 0.5$. First, it can be noticed from Fig. \[fig:oracle\_corr\] that the simulations confirm that the performance of the oracle does not depend on the noise covariance but only on its variance. This is shown by the fact that the simulations for $\rho=0.9$ overlap the ones for $\rho = 0.999$, and both match (\[eq:theo\_1\]), confirming the validity of Theorem \[th:RD reconstruction non distributed\] even in the correlated noise scenario. Second, Fig. \[fig:oracle\_corr\] shows that the upper bounds of (\[eq:oracle\_corr\_liter\]) highly overestimate the real reconstruction error of the oracle, even in the ideal $\delta_K = 0$ case. This can be explained by considering that in (\[eq:oracle\_corr\_liter\]), for the chosen correlation model, $\lambda_{\max}$ tends to $\var_zM$ when $ \rho$ tends to $1$.
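For concreteness, the sketch below builds the covariance matrix used in this experiment and contrasts the two quantities that drive, respectively, the exact result and the bound: the trace, which equals $\var_z M$ for every $\rho$, and $\lambda_{\max}$, which approaches $\var_z M$ as $\rho \to 1$; the parameter values are illustrative.

```python
import numpy as np

def exp_covariance(M, var_z, rho):
    """Covariance matrix with entries (Sigma_z)_{ij} = var_z * rho^|i-j|."""
    idx = np.arange(M)
    return var_z * rho ** np.abs(idx[:, None] - idx[None, :])

M, var_z = 128, 1.0
for rho in (0.9, 0.999):
    Sigma = exp_covariance(M, var_z, rho)
    lam_max = np.linalg.eigvalsh(Sigma).max()
    # trace(Sigma) = var_z * M for every rho, so the exact prediction is unchanged,
    # while lam_max grows towards var_z * M as rho -> 1, making the bound looser.
    print(rho, np.trace(Sigma), lam_max)
```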
![[]{data-label="fig:oracle_corr"}](oracle_corr_icassp.pdf){width="0.95\columnwidth"}
Conclusions and future work {#sec:conclusions}
===========================
In this paper, we derived the closed–form expression of the average performance of the *oracle receiver* for Compressed Sensing. Remarkably, this result is exact and does not depend on the RIP constant or on the noise covariance matrix. We showed that the theoretical results perfectly match the ones obtained by numerical simulations. This represents a significant improvement with respect to existing results, which consist of bounds depending on parameters that are hardly available in practice.
As a future activity, this work can be extended to non-ideal receivers with a mismatched knowledge of the signal sparsity pattern. In that case, the performance will depend both on the noise affecting the signal and on the number of misestimated positions in the sparsity pattern.
[^1]: The average performance is obtained averaging over all random variables *i.e.* the measurement matrix, the non-zero components $\thet$ and noise, as for example in [@laska2012regime].
|
[**A Strong threshold for the size of random caps to cover a sphere.**]{}\
by\
\
Faculty of Engineering and Sciences,\
Indian Institute of Information Technology (DM)-Jabalpur, India.\
[**Abstract**]{}
\
[*AMS Subject Classification*]{}: 05C80, 91D30.\
[*Keywords:*]{} Coverage Problem, Random Caps, Threshold Function.
Introduction.
=============
Let $V_1,V_2,\ldots,V_N$ be spherical caps on the surface of a unit sphere with centers $v_1,v_2,\ldots,v_N$, respectively, and let $v_1,v_2,\ldots,v_N$ be independently and uniformly distributed on the surface of the sphere. H. Maehara [@maehara] gives the threshold function $p_0(N) = \frac{\log\:N}{N}$ for the coverage of the surface of a unit sphere. He proves that for $\frac{p(N)\cdot
N}{\log\:N}< 1,$ the probability that the $N$ spherical caps cover the entire surface of the unit sphere converges to $0,$ and for $\frac{p(N)\cdot N}{\log\:N}>1,$ the probability that each point of the sphere is covered converges to $1.$ Since both statements are in the sense of convergence in probability, the threshold $p_0(N)$ is a weak threshold. Moreover, in the article [@maehara], instead of exact bounds the author uses loose approximations. Due to these approximations, the threshold suggested in [@maehara] differs from the threshold suggested in this article.\
Now, using the same model and notation as H. Maehara [@maehara], we give the strong threshold function for the coverage of the surface of a unit sphere.\
Basic Model and Definitions.
=============================
Here, we recall the model as it is given by H. Maehara [@maehara]. We make some modifications to the language to make things clearer.
Let ‘$S$’ be the surface of a unit sphere in $3$-dimensional space. Let $V_1,V_2,\ldots,$ be spherical caps on the surface of the unit sphere with centers $v_1,v_2,\ldots,$ respectively, independently and uniformly distributed on the surface of the unit sphere. The area of a spherical cap of angular distance (angular radius) ‘$a$’ is $2\pi
(1-\cos(a)) = 4\pi \sin^2(a/2).$
Let ‘$p$’ be the probability that a given point on the surface of the unit sphere is covered by a specified spherical cap of angular distance (angular radius) ‘$a$’. Then $$p := \frac{\mbox{Area of a spherical cap of angular distance `$a$'}
}{\mbox{Surface area of unit sphere}} = \frac{4\pi
\sin^2(a/2)}{4\pi} = \sin^2(a/2)\label{p}$$ Let there be ‘$N$’ random caps of angular distance ‘$a$’ on the surface ‘$S$’ of the unit sphere. Let $U_0(N,p)$ be the set of points that remain uncovered by the ‘$N$’ spherical caps and let $u_0(N,p)$ be the proportion of the area of $U_0(N,p),$ i.e., $u_0(N,p)$ is the proportion of the area which remains uncovered by the ‘$N$’ spherical caps: $$u_0(N,p) := \frac{\{\mbox{the area of }
U_0(N,p)\}}{4\pi}\:.\label{u_n}$$ Then $$E(u_0(N,p)) = \frac{1}{4\pi}\int_SP[x \in U_0(N,p)]dx.\label{Eu}$$ Now consider, $$\begin{aligned}
P[x \in U_0(N,p)] & = & P[x \mbox{ remains uncovered }]\nonumber\\
& = & \prod_{i=1}^{N}\left(1-P[x \in V_i]\right)\nonumber\\
& = & (1-p)^N.\label{x_in_U}\end{aligned}$$ Hence, from (\[Eu\]), we have $$E(u_0(N,p)) = (1-p)^N. \label{Eu_n}$$ Similarly, as in H. Maehara [@maehara], we have $$E(u^2_0(N,p)) = \frac{1}{16\pi^2}\int_S\int_SP[x,y \in U_0(N,p)]\,dx\,dy =
\frac{1}{4\pi}\int_SP[x_0,y \in U_0(N,p)]\,dy, \label{Eu_n2}$$ where $x_0$ is a fixed point on $S.$ Let $x_0$ and $y$ subtend an angle ‘$\theta$’ at the center of the sphere. Then $$P[x_0,y \in U_0(N,p)] = \left(1-(2p-q(\theta))\right)^N,$$ where $q(\theta)$ is the (normalized) area of the intersection of two spherical caps of angular distance ‘$a$’ whose centers subtend the angle $\theta$. Substituting the above probability in (\[Eu\_n2\]), we get $$E(u^2_0(N,p)) = \frac{1}{4\pi}\int_S\left(1-(2p-q(\theta))\right)^N\,dy.$$ Integrating over points $y$ that subtend an angle between $\theta$ and $\theta +d\theta$ with $x_0$ at the center of the sphere, we obtain $$E(u^2_0(N,p)) = \int_{0}^{\pi}\left(1-(2p-q(\theta))\right)^N
(1/2)\sin(\theta)d\theta.\label{eu2}$$ Since $q(\theta) = 0$ for $\theta >2a,$ $$\begin{aligned}
E(u^2_0(N,p)) & < & \int_{0}^{2a}(1-p)^N(1/2)\sin(\theta)d\theta
+\int_{2a}^{\pi}(1-2p)^N(1/2)\sin(\theta)d\theta\nonumber\\
& < & (1-p)^N[-(1/2)\cos(\theta)]_{0}^{2a} + (1-2p)^N\nonumber\\
& = & (1-p)^N\frac{1-\cos(2a)}{2} + (1-2p)^N.\label{x1}\end{aligned}$$ Using (\[Eu\_n\]), in (\[x1\]), we have $$\begin{aligned}
E(u^2_0(N,p)) & < & (1-p)^N\frac{1-\cos(2a)}{2} +
(1-2p)^{N}\nonumber\\
& = & 4p(1-p)^{N+1}+(1-2p)^{N},\label{eu2uper}\end{aligned}$$ since $\frac{1-\cos(2a)}{2} = 4p(1-p).$\
Now for the lower bound of $E(u^2_0(N,p)),$ from (\[eu2\]) we have $$E(u^2_0(N,p)) > \int_{2a}^{\pi}\left(1-(2p-q(\theta))\right)^N
(1/2)\sin(\theta)d\theta,$$ since $q(\theta) = 0$ for $\theta >2a.$ $$\begin{aligned}
E(u^2_0(N,p))
& > & \frac{(1-2p)^N}{2}\int_{2a}^{\pi}\sin(\theta)d\theta\nonumber\\
& = & \frac{(1-2p)^N}{2}[-\cos(\theta)]_{2a}^{\pi}\nonumber\\
& = & (1-2p)^N\frac{\cos(2a)+1}{2}\nonumber\\
& = & (1-2p)^N(1-4p(1-p)),\label{eu2lower}\end{aligned}$$ since $\frac{\cos(2a)+1}{2} = 1-4p(1-p).$\
Let $\Theta$ be a fixed monotone property.
A function $\del_{\Theta}(c): Z^+ \rar {R}^+$ is a [*strong threshold*]{} function for $\Theta$ if the following is true for every fixed $\ep>0$:

- $P[\del_{\Theta}(c-\ep) \in \Theta] = 1 - o(1),$ and

- $P[\del_{\Theta}(c+\ep) \in \Theta] = o(1),$
where ‘$c$’ is some constant. $\Box$\
Main Result.
============
Let $p = \frac{c\log\:N}{N}.$ Then for $c> \frac{1}{2},$ the surface of the unit sphere is completely covered by the ‘$N$’ spherical caps, i.e., $$U_0(N,p) = \phi,\qquad \mbox{almost surely},$$ and for $c\leq \frac{1}{2},$ the surface of the unit sphere is not completely covered by the ‘$N$’ spherical caps, i.e., $$U_0(N,p) \neq \phi,\qquad \mbox{almost surely.}$$
**Proof.** For arbitrarily small $\ep,$ we have $$P[U_0(N,p) \neq \phi] \simeq P[\mid u_0(N,p)\mid \geq \ep].
\label{e3}$$ From Markov’s inequality, we have $$\begin{aligned}
P[ \mid u_0(N,p)\mid \geq \ep] & \leq &
\frac{E[u^2_0(N,p)]}{\ep^2}\nonumber\\
& < & \frac{1}{\ep^2}\left(4p(1-p)^{N+1}+(1-2p)^{N}\right),\end{aligned}$$ using the upper bound on $E[u^2_0(N,p)]$ from (\[eu2uper\]). Now take $p = \frac{c\log\:N}{N},$ where $c$ is some constant. Then $$\begin{aligned}
P[ \mid u_0(N,p)\mid \geq \ep] & < &
\frac{1}{\ep^2}\left(\frac{4c\log\:N}{N}\left(1-\frac{c\log\:N}{N}\right)^{N+1}+
\left(1-\frac{2c\log\:N}{N}\right)^N\right)\nonumber\\
& < &
\frac{1}{\ep^2}\left(\frac{4c\log\:N}{N}\left(1-\frac{c\log\:N}{N}\right)e^{-c\log\:N}+
e^{-2c\log\:N}\right)\nonumber\\
& < &
\frac{1}{\ep^2}\left(\frac{4c\log\:N}{N^{1+c}}+\frac{1}{N^{2c}}\right).\label{e1}\end{aligned}$$ If $c > 1/2,$ the above probability is summable, i.e., $$\sum_{N=1}^{\infty} P[\mid u_0(N,p)\mid \geq \ep] < \infty,$$ and hence from (\[e3\]), we have $$\sum_{N=1}^{\infty}P[U_0(N,p) \neq \phi] < \infty.$$ Then, by the Borel–Cantelli lemma, we have $$P[U_0(N,p) \neq \phi, \qquad i.o.] = 0.$$ Thus, the event $U_0(N,p)\neq \phi$ occurs only finitely many times, i.e., eventually $U_0(N,p)= \phi$ with probability $1.$ Hence for $c > \frac{1}{2},$ we have $$U_0(N,p) = \phi,\qquad \mbox{ almost surely.}$$\
Now, by the lower-bound version of Chebyshev’s inequality (Shiryayev [@Shiryayev], p. 55), we have $$P[ u_0(N,p) \geq \ep] \geq \frac{E[ u_0(N,p)^2]-\ep^2}{16\pi^2},$$ since $u_0(N,p)\geq 0$ and $\mid u_0(N,p) \mid \leq 4 \pi.$ Now, using the lower bound on $E[ u_0(N,p)^2]$ from (\[eu2lower\]) and taking $\ep = o\left(\frac{1}{N}\right),$ we have $$\begin{aligned}
P[ u_0(N,p) \geq \ep] & \geq &
\frac{(1-2p)^N(1-4p(1-p))-\ep^2}{16\pi^2}\nonumber\\
& \geq & C_1(1-2p)^N(1-4p(1-p))-C_2,\end{aligned}$$ where $C_1= \frac{1}{16\pi^2}$ and $C_2= \frac{\ep^2}{16\pi^2}.$\
Substituting $p = \frac{c\log\:N}{N},$ in the above expression we get $$\begin{aligned}
P[u_0(N,p) \geq \ep] & \geq &
C_1\left(1-\frac{2c\log\:N}{N}\right)^N\left(1-\frac{4c\log\:N}{N}\left(1-\frac{c\log\:N}{N}\right)\right)-C_2\nonumber\\
& \geq & C_3e^{-2c\log\:N} = \frac{C_3}{N^{2c}},\label{e2}\end{aligned}$$ where $C_3$ is some constant. If we take $c \leq 1/2,$ then the probability (\[e2\]) is not summable with respect to $N,$ i.e., $$\sum_{N=1}^{\infty}P[ u_0(N,p) \geq \ep] = \infty.$$ Then, by the Borel–Cantelli lemma, we have $$P[u_0(N,p) \geq \ep, \qquad i.o.] = 1,$$ since the $u_0(N,p)$ are independent. Thus $u_0(N,p) \geq \ep$ happens infinitely many times with probability $1.$ Hence for $c \leq
\frac{1}{2},$ we have $$u_0(N,p) \geq \ep, \qquad \mbox{almost surely.}$$ This implies for $c\leq 1/2,$ we have $U_0(N,p) \neq \phi$ almost surely.$\Box$\
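As a numerical complement to the proof (not part of the argument itself), the Monte Carlo sketch below estimates the uncovered proportion $u_0(N,p)$ for $p=\frac{c\log\:N}{N}$: cap centers and test points are drawn uniformly on the sphere, and a point is covered by a cap exactly when the inner product with its center exceeds $\cos a = 1-2p$, since $p=\sin^2(a/2)$. The expectation $(1-p)^N$ from (\[Eu\_n\]) serves as a sanity check; all numerical values are illustrative.

```python
import numpy as np

def uncovered_fraction(N, c, n_test=5000, seed=0):
    """Monte Carlo estimate of u_0(N, p) for p = c * log(N) / N."""
    rng = np.random.default_rng(seed)
    p = c * np.log(N) / N
    cos_a = 1.0 - 2.0 * p                        # p = sin^2(a/2) = (1 - cos a) / 2
    centers = rng.normal(size=(N, 3))
    centers /= np.linalg.norm(centers, axis=1, keepdims=True)
    test = rng.normal(size=(n_test, 3))
    test /= np.linalg.norm(test, axis=1, keepdims=True)
    covered = (test @ centers.T > cos_a).any(axis=1)
    return 1.0 - covered.mean()

for c in (0.4, 0.6, 1.2):
    N = 2000
    p = c * np.log(N) / N
    print(c, uncovered_fraction(N, c), (1.0 - p) ** N)   # estimate vs. E[u_0] = (1-p)^N
```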
[99]{} Y.S. Chow, H. Teicher (2004), Probability Theory, third edition, *Springer Texts in Statistics.* H. Maehara (1988), A threshold for the size of random caps to cover a sphere, *Annals of the Institute of Statistical Mathematics*, Vol. 40, No. 4, 665–670. Shiryayev, A.N. (1984), Probability, second edition, *Springer-Verlag, New York Inc.* M. Karonski, E.R. Scheinerman and K.B. Singer-Cohen (1999), On random intersection graphs: the subgraph problem, *Combinatorics, Probability and Computing*, Vol. 8, 131–159.
|
---
abstract: 'In the present paper, some concepts of modern differential geometry are used as a basis to develop an invariant theory of mechanical systems, including systems with gyroscopic forces. An interpretation of systems with gyroscopic forces in the form of flows of a given geodesic curvature is proposed. For illustration, the problem of the motion of a rigid body about a fixed point in an axially symmetric force field is examined. The form of gyroscopic forces of the reduced system is calculated. It is shown that this form is a product of the momentum constant, the volume form of the 2-sphere, and an explicitly written everywhere positive function on the sphere.'
author:
- 'M.P.Kharlamov[^1]'
title: 'Some applications of differential geometry in the theory of mechanical systems[^2]'
---
[**Published: *Mekh. Tverd. Tela*, 1979, No. 11, pp. 37–49**]{}[^3]
[http://www.ams.org (Reference)](http://www.ams.org/mathscinet-getitem?mr=536269)
[http://www.ics.org.ru (Russian)](http://www.ics.org.ru/doc?pdf=1157&dir=r)
[https://www.researchgate.net (Russian)](https://www.researchgate.net/publication/253671029)
Introduction
============
In recent years, the qualitative investigation of problems in classical mechanics has been drawing on an ever widening range of mathematical disciplines. This, in turn, presupposes a high level of formalization in the description of the corresponding mechanical systems. Such a level has already been achieved in Hamiltonian mechanics, and the abstract theory of Hamiltonian systems has produced many excellent results. As far as Lagrangian mechanics is concerned, its contemporary presentations [@Godbil; @Abra] sometimes become too cumbersome when applied to concrete systems. Moreover, in mechanics there exist systems which globally do not have a quadratic Lagrangian [@Kh1976; @KhFuncAn]. Thus we come to the necessity, on the one hand, to simplify and, on the other hand, to generalize the basic notions of the theory. Such an attempt is made in this article. Note that the presence of local coordinates in some proofs is not inevitable; in fact, one can make all the reasoning invariant (without using any coordinates).
Based on the given formalism, we describe the reduction of mechanical systems with symmetry. At the same time, some global relations between the differential geometric objects involved are revealed. With this approach, the reduced system is still interpreted as a mechanical one.
Natural systems
===============
A natural mechanical system is a triple $$\label{eq01}
(M,m,V),$$ where $M$ is a smooth manifold (the configuration space of the system), $m$ is a Riemannian metric on $M$, and $V$ is a function on $M$ (the potential function of the system, or shortly, the potential). The metric $m$ generates on the tangent space $T(M)$ the function $$\label{eq02}
K(w)=\frac{1}{2}{\langle w,w \rangle}, \quad w\in T(M).$$ Here ${\langle , \rangle}$ denotes the scalar product in the metric $m$. The function $K$ is called the kinetic energy of system .
For any manifold $N$ we denote by $p_N:T(N)\to N$ the projection to the base of a tangent bundle. The total energy of is the function $H$ on $T(M)$ defined as $$\label{eq03}
H=K+V\circ p_M.$$
Let $w\in T(M)$. In the tangent space $T_w T(M)$ we define the linear form $\tq_w$ by putting for each $X\in T_w(T(M))$ $$\label{eq04}
\tq_w(X)={\langle w,Tp_M(X) \rangle}.$$
\[prop1\] The map $$\label{eq05}
\tq: w\mapsto \tq_w$$ defines a differential $1$-form on $T(M)$. Its exterior derivative $$\label{eq06}
\sg=\rmd \tq$$ makes $T(M)$ a symplectic manifold.
For natural coordinates [@Abra; @Kh1976] $(\bq, \dot \bq)$ in a neighborhood of $w\in T(M)$ the differential forms $(\rmd \bq,\rmd \dot \bq)$ give a basis in each overlying fiber of the cotangent space $T^*(T(M))$. If $A=\|a_{ij}(\bq)\|$ is the positive definite symmetric matrix of the metric $m$ $$m_{\bq}(\dot \bq_{(1)},\dot \bq_{(2)})=a_{ij}(\bq) \dot q_{(1)}^i \dot q_{(2)}^j,$$ then according to the map has the form $$\label{eq07}
\tq_{(\bq,\dot \bq)} = a_{ij}(\bq)\dot q^i \rmd q^j,$$ and, in particular, is smooth. Hence, is a smooth section of $T^*(T(M))$, i.e., a differential 1-form.
Recall that a symplectic structure on a manifold is a closed non-degenerate 2-form on it. Obviously, is closed. Applying the exterior derivative to , we obtain $$\sg_{(\bq, \dot \bq)}=a_{ij}(\bq)\rmd \dot q^i \wedge \rmd \dot q^j+ \dot q^i\frac{\partial a_{ij}(\bq)}{\partial q^k} \rmd q^k \wedge \rmd q^j,$$ therefore the matrix of the form $\sg$ $$\label{eq08}
S=\begin{Vmatrix} * & -A \\ A & 0 \end{Vmatrix}$$ has a non-zero determinant $\det S =(\det A)^2$. Hence $\sg$ is non-degenerate.
The differential forms $\tq$ and $\sg$ defined by – will be called the Lagrange forms on $T(M)$ generated by the metric $m$. Note that in terms of the book [@Abra], the Lagrange forms generated by the Legendre transformation $\mathbf{F} K$ of the function are $\tq$ and $(-\sg)$.
Let $\rmi_Y\oq$ denote the inner product of a vector field $Y$ and a form $\oq$. For the function and the non-degenerate form there exists a unique vector field $X$ on $T(M)$ such that $$\label{eq09}
\rmi_X\sg =-\rmd H.$$ The field $X$ is a second-order equation [@Abra; @Godbil] $Tp_M\circ X={\mathop{\rm id}\nolimits_{T(M)}}$, so each integral curve $w(t)$ of the field $X$ is the derivative of its projection $$w(t)=(p_M\circ w)'(t).$$ For an integral curve $w(t)$ of the field $X$, we call $x(t)=p_M\circ w(t)$ a **motion** in system .
\[prop2\] The total energy is a first integral of system .
Indeed, according to , $H$ is the Hamilton function for the field $X$ on the symplectic manifold $(T(M),\sg)$.
The Maupertuis principle gives the following geometric interpretation of motions in a natural system.
\[theo1\] Let $h$ be a constant such that the region $M_h=\{x\in M: V(x)<h\}$ is not empty. Define the Riemannian metric $m_h=2(h-V)m$ in $M_h$. Then the motions in the natural system having the energy constant $h$ are geodesics of the metric $m_h$.
Gyroscopic forces
=================
Along with natural systems, in mechanics we often come across systems having forces that do not produce work. The existence of such forces, called gyroscopic, is usually expressed by the terms of the Lagrangian that are linear in the velocities. But in the general case these terms are defined only locally and only up to adding a linear function generated by a closed 1-form. Therefore gyroscopic forces are naturally defined by a 2-form on the configuration space.
A **mechanical system with gyroscopic forces** is a 4-tuple $$\label{eq10}
(M,m,V,\vk),$$ where $M$ is a manifold, $m$ a Riemannian metric on $M$, $V$ a function on $M$ and $\vk$ a closed 2-form on $M$. All objects are supposed smooth. Again $M$ is the configuration space, $V$ the potential. The function is the kinetic energy and the total energy of the system.
Let $\tq$ and $\sg$ be the Lagrangian forms generated by the metric $m$.
\[prop3\] There exists a unique vector field $X$ on $T(M)$ such that $$\label{eq11}
\rmi_X(\sg+p_M^*\vk)=-\rmd(K+V\circ p_M).$$ This field is a second-order equation.
For the proof we note that the image of the form $\vk$ under the pull-back $p_M^*$ does not contain, in local representation, forms of the type $\rmd \dot q^i$, therefore the matrix of the form $\sg+p_M^*\vk$ differs from only in the left upper block and thus has a non-zero determinant. Hence, the form $\sg+p_M^*\vk$ is non-degenerate and the field $X$ exists and is unique. The second assertion is easily checked in natural coordinates on $T(M)$.
The vector field $X$ is the dynamical system corresponding to the mechanical system . The same as above we call $x(t)=p_M\circ w(t)$ a motion in system if $w(t)$ is an integral curve of the field $X$.
\[prop4\] The total energy is a first integral of system .
This fact immediately follows from the definition .
Geometric interpretation of motions in a system with gyroscopic forces can be obtained in the following way.
Let $h\in {\mathbf{R}}$ be the value of the energy integral $$\label{eq12}
K+V\circ p_M=h$$ such that the region of possible motions $M_h=\{x\in M: V(x)<h\}$ is not empty. Define the Riemannian metric $m_h=2(h-V)m$ in $M_h$. Denote by $\Pi_h$ the operator taking 1-forms on $M_h$ to vector fields by the rule $$\label{eq13}
m_h(\Pi_h(\lqq),Y)=\lqq(Y).$$
\[theo2\] Let $x(t)$ be a motion in system satisfying the integral condition . If we denote by ${{\overline x}}(\tau)$ the same curve but parameterized by the arclength $\tau$ of the metric $m_h$, then $$\label{eq14}
\frac{D}{d\tau} \frac{d{{\overline x}}}{d\tau}=-\Pi_h\left(\rmi_{\frac{d{{\overline x}}}{d\tau}} \vk\right),$$ where the covariant derivative is calculated in the metric $m_h$. Inversely, let ${{\overline x}}(\tau)$ be a curve parameterized by the arclength $\tau$ of the metric $m_h$ and satisfying . Then there exists a change of the parameter $\tau=\tau(t)$ such that the curve $x(t)={{\overline x}}(\tau(t))$ is a motion in system on which the condition holds.
Let us start with the second assertion. It obviously has local character, therefore we use some coordinates $\bq=(q^1,\ldots,q^m)$ on $M$. Let $a_{ij},\Gamma^i_{jk}$ and ${{\overline a}}_{ij},{{\overline \Gamma}}^i_{jk}$ be the metric tensor and the Christoffel symbols of $m$ and $m_h$ respectively. Let $\vk=\vk_{ij}\rmd q^i \wedge \rmd q^j$. Denote ${{\overline \vk}}_{ij}=\vk_{ij}-\vk_{ji}$.
Suppose that the curve ${{\overline x}}(\tau)=(q^1(\tau),\ldots,q^m(\tau))$ satisfies . In local representation $$\label{eq15}
\frac{d^2q^i}{d\tau^2}+{{\overline \Gamma}}^i_{jk}\frac{d q^j}{d\tau}
\frac{d q^k}{d\tau}={{\overline a}}^{ik}{{\overline \vk}}_{kj}\frac{d q^j}{d\tau},$$ where ${{\overline a}}^{ij}{{\overline a}}_{jk}=\delta^i_k$. By definition, the following relations hold $$\label{eq16}
{{\overline a}}_{ij}=2(h-V)a_{ij}, \qquad a^{ij}=2(h-V){{\overline a}}^{ij}.$$ Substituting into we obtain $$\label{eq17}
\begin{array}{l}
{\displaystyle}\frac{d^2q^i}{d\tau^2}+\Gamma^i_{jk}
\frac{d q^j}{d\tau}
\frac{d q^k}{d\tau}+\frac{a^{i\ell}}{2(h-V)}\left[ a_{\ell j}\frac{\partial (h-V)}{\partial q^k}+a_{\ell k}\frac{\partial (h-V)}{\partial q^j} - \right.\\
{\displaystyle}\qquad \left. - a_{j k}\frac{\partial (h-V)}{\partial q^\ell}\right]\frac{d q^j}{d\tau}
\frac{d q^k}{d\tau}=\frac{a^{ik}}{2(h-V)} {{\overline \vk}}_{kj}\frac{d q^j}{d\tau}.
\end{array}$$
By assumption ${{\overline x}}(\tau)\in M_h$, then $h-V({{\overline x}}(\tau))>0$ and we can make the following monotonous change of the parameter $$\label{eq18}
{\displaystyle}dt=\frac{d\tau}{2(h-V({{\overline x}}(\tau)))}.$$ Since $\tau$ is the natural parameter on ${{\overline x}}$, we have $$\label{eq19}
2(h-V)a_{ij}\frac{d q^i}{d\tau}
\frac{d q^j}{d\tau}=1.$$ Then applying the change to we obtain the equation $$\frac{d^2q^i}{dt^2}+\Gamma^i_{jk}\frac{d q^j}{d t}
\frac{d q^k}{dt}+a^{ij}\frac{\partial V}{\partial q^j}=a^{ik}{{\overline \vk}}_{kj}\frac{d q^j}{dt},$$ which is a local representation of the fact that $x(t)={{\overline x}}(\tau(t))$ is a solution of the second-order equation $X$ defined according to . The conservation law holds due to the choice of the change and the condition .
The proof of the first assertion can be obtained by making all substitutions in reversed order. The variational proof for the case of 2-dimensional $M$ can be found in [@AnSin].
The flows on iso-energetic manifolds defined by can naturally be called the **flows of given curvature**. If $\vk \equiv 0$, we obtain usual geodesic flows.
The case when $\dim M=2$ is essentially special because in this case the geodesic curvature of trajectories depends only on the point of $M$ rather than on the direction of trajectories. Namely, let $o_h$ be the volume form on $M_h$ corresponding to the metric $m_h$. Then there exists a function $k_h$ on $M_h$ such that $\vk=k_h o_h$. A simple calculation shows that for each vector $v$ tangent to $M_h$ at some point $x$ and having length 1 in the metric $m_h$, the vector $w=-\Pi_h(\rmi_v \vk)$ is orthogonal to $v$ in the metric $m_h$ and its length is $\|w\|_{m_h}=|k_h(x)|$. According to , $$\begin{array}{rcl}
\vk(w,v)& = &\vk(v,\Pi_h(\rmi_v \vk))=(\rmi_v \vk)(\Pi_h(\rmi_v \vk))= \\
{}& = & m_h(\Pi_h(\rmi_v \vk), \Pi_h(\rmi_v \vk))=k_h^2>0.
\end{array}$$ This means that the basis $\{w,v\}$ in $T_x(M)$ defines in $T_x(M)$ the same orientation as the 2-form $\vk$. Therefore, for 2-dimensional systems we proved the following statement.
\[prop5\] Let $\dim M=2$. A curve $x(t)$ satisfying is a motion in the system if and only if, being parameterized by the arclength of the metric $m_h$, it has the geodesic curvature $|\vk/o_h|$ and the basis $\{$*the curvature vector, the tangent vector*$\}$ gives the same orientation of $T_{x(t)}(M)$ as the form $\vk$.
Invariant theory of reduction\
in systems with symmetry
==============================
Let us suppose that a one-parameter group $G=\{g^\tau\}$ acts as diffeomorphisms of the configurational space of the natural mechanical system and this action generates a principal $G$-bundle [@Bishop] $$\label{eq20}
\mB=(M,p,{{\tilde M}}),$$ where ${{\tilde M}}=M/G$ is the quotient manifold and $p:M \to {{\tilde M}}$ the factorization map. Suppose also that all $g^\tau$ preserve the metric $m$ and the potential $V$. Obviously, diffeomorphisms from the group $G_T=\{Tg^\tau: g^\tau \in G\}$ preserve the kinetic energy of and, consequently, the total energy . For the generating vector fields $$\begin{aligned}
v(x) &=& {\left.\frac{d}{d\tau}\right|}_{\tau=0}g^\tau(x), \qquad x\in M, \label{eq21}\\
v_T(w) &=& {\left.\frac{d}{d\tau}\right|}_{\tau=0}Tg^\tau(w), \qquad w\in T(M) \label{eq22}\end{aligned}$$ we have $$\label{eq23}
vV\equiv 0, \qquad v_T K\equiv 0, \qquad v_T H\equiv 0.$$ The group $G$ satisfying is called the symmetry group of system . The theory of natural systems with symmetries was created by S.Smale [@Smale]. The starting point of it is the momentum integral. For a one-parameter group let us use a simpler definition of the momentum [@Ta1973] connected with Noether’s theorem [@Arnold].
The momentum of a mechanical system with symmetry $G$ is the function $J$ on $T(M)$ defined as $$\label{eq24}
J(w)={\langle v,w \rangle},$$ where $v$ is the vector field . It is clear that $J$ is everywhere regular and $G_T$-invariant. Let us show that it is a first integral of system .
\[lem1\] The Lagrange forms $\tq$ and $\sg$ generated by the metric $m$ are preserved by the group $G_T$.
For all $w\in T(M),Y\in T_w(T(M))$ we have $$\label{eq25}
\begin{array}{l}
\tq_{Tg^\tau(w)}(TTg^\tau(Y))={\langle Tg^\tau(w),Tp_M\circ TTg^\tau(Y) \rangle}=\\
\qquad ={\langle Tg^\tau(w),Tg^\tau\circ Tp_M(Y) \rangle}={\langle w,Tp_M(Y) \rangle}.
\end{array}$$ The last equality follows from the fact that $g^\tau$ are isometries of $m$. Equations and yield $(Tg^\tau)^*\tq=\tq$, hence, according to , $(Tg^\tau)^*\sg=\sg$.
\[thecor1\] The field $X$ defining dynamics of system is preserved by the group $G_T$, i.e., for all $g^\tau\in G$ $$TTg^\tau\circ X=X\circ Tg^\tau.$$ The generating field commutes with $X$: $$\label{eq26}
[v_T,X]\equiv 0.$$
The proof follows immediately from the invariance of $H$ and definition .
Now let us note that the fields and satisfy $$\label{eq27}
Tp_M\circ v_T=v.$$ Therefore, using definition , we can calculate the derivative of the momentum along $X$ as $XJ=X\tq(v_T)$. Let us add to the right-hand part the terms $\tq([v_T,X])$ and $-v_T\tq(X)=-2 v_T K$ equal to zero in virtue of , and use the rule for the exterior derivative of a 1-form [@Godbil]. Then we obtain $$XJ=\rmd \tq(X,v_T)=-\rmd H(v_T)=-v_T H \equiv 0.$$ Here we used for $v_T H$ and definition . Thus, the momentum $J$ is a first integral of the field $X$. In particular, for any $k\in {\mathbf{R}}$ the set $J_k=J^{-1}(k)$ is a $G_T$-invariant integral submanifold in $T(M)$ of codimension 1.
It is clear that $J_k(x)=J_k \cap T_x(M)$ is a hyperplane in $T_x(M)$ and it contains zero if and only if $k=0$. The subspace $J_0(x)$ is the orthogonal supplement, in metric $m$, to the line $T_x^v$ spanned by the generating vector . The hyperplane $J_k(x)$ is parallel to $J_0(x)$ and therefore the intersection $T_x^v \cap J_k(x)$ consists of a unique vector $$\label{eq28}
v^k(x)=k v/{\langle v,v \rangle}.$$ The vector field $v^k$ is smooth and $G_T$-invariant.
The set of subspaces $J_0(x)$ generates a connexion [@Bishop] in the principal $G$-bundle . Let $\mh$ be the form of the connexion $J_0$, $$\label{eq29}
\mh(w)={\langle v,w \rangle}/{\langle v,v \rangle},$$ and $\mg$ the corresponding curvature form, $$\label{eq30}
\mg =\rmd \mh$$ (here we have the standard exterior derivative since $G$ is commutative). Let $\U_k: J_0 \to J_k$ be the diffeomorphism defined by $$\label{eq31}
\U_k(w)=w+v^k(x), \qquad w\in J_0(x).$$ Denote by $\tq^k$ and $\sg^k$ the differential forms induced on $J_k$ by the Lagrange forms of the metric $m$ under the embedding $J_k \subset T(M)$.
\[prop6\] The following equalities hold $$\begin{aligned}
\U_k^*\tq^k &=& \tq^0+k\, p_M^*\mh, \label{eq32}\\
\U_k^*\sg^k &=& \sg^0+k\, p_M^*\mg. \label{eq33}\end{aligned}$$
Let $w\in J_0, Y\in T_w(J_0)$. From , $$\label{eq34}
\tq^0_w(Y)={\langle w,Tp_M(Y) \rangle}.$$ On the other hand, $$\begin{array}{l}
( \U_k^*\tq^k)_w(Y)=\tq^k_{\U_k(w)}(T\U_k(Y))= {\langle \U_k(w),Tp_M\circ T\U_k(Y) \rangle} =\\
\qquad = {\langle w+v^k,T(p_M\circ \U_k)(Y) \rangle}={\langle w,Tp_M(Y) \rangle}+{\langle v^k,Tp_M(Y) \rangle}.
\end{array}$$ Here we used the identity $p_M\circ \U_k = p_M$ from . From and we have $${\langle v^k,Tp_M(Y) \rangle}=k\, \mh(Tp_M(Y)),$$ therefore, $$\label{eq35}
( \U_k^*\tq^k)_w(Y)= {\langle w,Tp_M(Y) \rangle}+k \,p_M^*\mh (Y).$$ Comparing with , we obtain . Now follows from and since the exterior derivative commutes with pull-back mappings of forms.
Let us introduce the map $$\ro_k:J_k \to T({{\tilde M}})$$ as the restriction to $J_k$ of the map $Tp:T(M)\to T({{\tilde M}})$. Using an atlas of the bundle $\mB$ one can show that the triple $$\label{eq36}
\mB_k=(J_k,\ro_k,T({{\tilde M}}))$$ is a principal $G_T$-bundle.
\[theo3\] The forms $\tq^k$ and $\sg^k$ are preserved by the group $G_T$. The form $\sg^k$ is horizontal in the sense of the bundle . The form $\tq^k$ is horizontal if and only if $k=0$.
The first assertion follows from Lemma \[lem1\]. The form $\tq^k$ is horizontal if for all $w\in J_k$ $$\tq_w^k(v_T)=0.$$ This in virtue of and means that ${\langle w,v \rangle}=0$ for all $w\in J_k$. But $v^k\in J_k$ and ${\langle v^k,v \rangle}=k$. So the form $\tq^k$ is horizontal only for $k=0$. The fact that $\sg^k$ is horizontal follows from the structural equation for horizontal forms [@Bishop] and the fact that $G_T$ is commutative.
Note that the field $v_T$ is preserved by diffeomorphisms $$T\U_k\circ v_T=v_T \circ \U_k.$$ Hence, in virtue of , $$\label{eq37}
\U_k^*\rmi_{v_T}\sg^k=\rmi_{v_T}\U_k^*\sg^k=\rmi_{v_T}\sg^0+k\,\rmi_{v_T}p_M^*\mg.$$ The first term in the right-hand part is zero because $\sg^0$ is horizontal. For the second term we get $$k\,\rmi_{v_T}p_M^*\mg =k\,p_M^*\rmi _{Tp_M(v_T)}\mg=k\,p_M^*\rmi_v \mg=0,$$ since the curvature form $\mg$ is horizontal in the sense of the bundle . The theorem is proved.
As a corollary of Theorem \[theo3\] we get the existence of differential forms ${{\tilde \tq}}^0$ and ${{\tilde \sg}}^k$ such that $\tq^0=\ro_0^*{{\tilde \tq}}^0, \sg^k=\ro_k^*{{\tilde \sg}}^k$. In turn, for the form we have $\mg=p^*{{\tilde \mg}}$ for some 2-form ${{\tilde \mg}}$ on ${{\tilde M}}$. Then from we obtain $$\U_k^*\ro_k^*{{\tilde \sg}}^k=\ro_0^*{{\tilde \sg}}^0+k\,p_M^* p^* {{\tilde \mg}}.$$ Whence, having the obvious equalities $\ro_k \circ \U_k=\ro_0$ and $p\circ p_M=p_{{{\tilde M}}}\circ Tp$, $$\label{eq38}
{{\tilde \sg}}^k={{\tilde \sg}}^0+k\,p_{{{\tilde M}}}^* {{\tilde \mg}}.$$
Let us define a Riemannian metric ${{\tilde m}}$ on ${{\tilde M}}$ putting for every ${{\tilde w}}_1,{{\tilde w}}_2\in T_{{{\tilde x}}}({{\tilde M}})$ $$\label{eq39}
{{\tilde m}}({{\tilde w}}_1,{{\tilde w}}_2)={\langle w_1,w_2 \rangle},$$ where $w_1,w_2\in J_0(x)$ are chosen to give $\ro_0(w_i)={{\tilde w}}_i$ (in particular, $p(x)={{\tilde x}}$).
\[prop7\] The differential forms ${{\tilde \tq}}^0$ and ${{\tilde \sg}}^0$ are the Lagrange forms on $T({{\tilde M}})$ generated by the metric ${{\tilde m}}$.
According to it is sufficient to show that for all ${{\tilde Y}}\in T_{{{\tilde w}}}(T({{\tilde M}}))$ $$\label{eq40}
{{\tilde \tq}}^0_{{{\tilde w}}}({{\tilde Y}})={{\tilde m}}({{\tilde w}},Tp_{{{\tilde M}}}({{\tilde Y}})).$$ Take $w\in J_0$ and $Y\in T_w(J_0)$ such that $\ro_0(w)={{\tilde w}}$ and $T\ro_0(Y)={{\tilde Y}}$. We can write $$\label{eq41}
Tp_M(Y)=w^0+c \,v,$$ where $w^0\in J_0$ and $v$ is the vector . Since $v$ is orthogonal to $J_0$, we have $$\tq_w^0(Y)={\langle w,Tp_M(Y) \rangle}={\langle w,w^0 \rangle}.$$ But, according to , ${\langle w,w^0 \rangle}={{\tilde m}}(\ro_0(w),\ro_0(w^0))$. Then in virtue of $Tp(v)=0$ we get from $$\ro_0(w^0)=Tp\circ Tp_M(Y)=T(p_{{{\tilde M}}}\circ Tp)(Y)=Tp_{{{\tilde M}}}\circ T\ro_0(Y)=Tp_{{{\tilde M}}}({{\tilde Y}}).$$ This yields .
\[thecor2\] The pair $(T({{\tilde M}}),{{\tilde \sg}}^k)$ is a symplectic manifold.
**Definition**. The **reduced system** corresponding to the momentum value $k$ is a vector field ${{\tilde X}}_k$ on $T({{\tilde M}})$ such that on $J_k$ the following identity holds $$\label{eq42}
T\ro_k \circ X={{\tilde X}}_k\circ \ro_k.$$
According to Corollary \[thecor1\], the field ${{\tilde X}}_k$ exists and is unique. It follows from that the set of its integral curves is the $\ro_k$-image of the set of integral curves of the field $X$ with the momentum $k$.
Denote by $H_k$ the restriction of the total energy $H$ of system to the submanifold $J_k$. Since $H$ is $G_T$-invariant, there exists a unique function ${{\tilde H}}_k$ (**the reduced energy**) satisfying the relation $$\label{eq43}
H_k={{\tilde H}}_k\circ \ro_k.$$ It is easily shown that $$\label{eq44}
{{\tilde H}}_k={{\tilde K}}+{{\tilde V}}_k\circ p_{{{\tilde M}}},$$ where ${{\tilde K}}({{\tilde w}})=\frac{1}{2} {{\tilde m}}({{\tilde w}},{{\tilde w}})$ is the kinetic energy of the reduced metric ${{\tilde m}}$ and the function ${{\tilde V}}_k$ on ${{\tilde M}}$ (called the **amended or effective potential**) is defined by $$\label{eq45}
{{\tilde V}}_k(p(x))=V(x)+\frac{k^2}{2{\langle v(x),v(x) \rangle}}.$$
\[theo4\] The reduced system ${{\tilde X}}_k$ is a Hamiltonian field on the symplectic manifold $(T({{\tilde M}}),{{\tilde \sg}}^k)$ with the Hamilton function equal to the reduced energy.
Indeed, from , , and we obtain $$\label{eq46}
\rmi_{{{\tilde X}}_k} {{\tilde \sg}}^k = - \rmd {{\tilde H}}_k,$$ and this is a definition of the Hamiltonian field for ${{\tilde H}}_k$.
\[theo5\] The reduced system ${{\tilde X}}_k$ is the dynamical system corresponding to the mechanical system with gyroscopic forces $$\label{eq47}
({{\tilde M}},{{\tilde m}},{{\tilde V}}_k,k\,{{\tilde \mg}}),$$ where the 2-form ${{\tilde \mg}}$ is induced by the curvature form of the connexion $J_0$ in the principal bundle $(M,p,{{\tilde M}})$.
According to , , and we have $$\rmi _{{{\tilde X}}_k}({{\tilde \sg}}^0+p_{{{\tilde M}}}^*(k\,{{\tilde \mg}}))=-\rmd({{\tilde K}}+{{\tilde V}}_k \circ p_{{{\tilde M}}}),$$ so the assertion of the theorem follows from Proposition \[prop7\] and definition .
\[thecor3\] The reduced system ${{\tilde X}}_k$ is a second-order equation on ${{\tilde M}}$. A curve ${{\tilde x}}(t)$ in ${{\tilde M}}$ is a motion in system if and only if ${{\tilde x}}(t)=p\circ x(t)$, where $x(t)$ is a motion in system with the momentum $J(x'(t))=k$.
Reduced system in rigid body dynamics
=====================================
The problem of the motion of a rigid body having a fixed point in the axially symmetric force field (e.g. the gravity field or the field of a central Newtonian force) with an appropriate choice of variables has a cyclic coordinate and admits the reduction by Routh method. However, as shown in [@KhFuncAn], this method can be applied only locally, and this fact is not connected with singularities of local coordinate systems, but reflects the essence of the problem as a whole. The above described approach makes it possible to describe the reduced system globally in terms of the redundant variables (direction cosines), which are applicable in the same way everywhere on the reduced configuration space.
Suppose that the body is fixed in its point $O$ at the origin of the cartesian coordinate system $O\bi_1\bi_2\bi_3$ of the inertial space ${\mathbf{R}}^3$. The components of vectors from ${\mathbf{R}}^3$ in the basis $\mathbf{\bi}=\|\bi_1,\bi_2,\bi_3\|$ will be written in a column. Let the unit vectors $\be_1,\be_2,\be_3$ go along the principal inertia axes at $O$ and $I_1,I_2,I_3$ be the corresponding principal moments of inertia. The row $\mathbf{\be}=\|\be_1,\be_2,\be_3\|$ is an orthonormal basis in ${\mathbf{R}}^3$.
To any position $\mathbf{\be}$ of the body we assign the matrix $Q\in SO(3)$ such that $$\label{eq48}
\mathbf{\bi}Q=\mathbf{\be}.$$ It is clear that the map $\mathbf{\be}\mapsto Q$ is one-to-one and the group $SO(3)$ can be considered as the configuration space of a rigid body with a fixed point [@Arnold]. The Lie algebra of $SO(3)$ (the tangent space at the unit) is the 3-dimensional space $\mathfrak{so}(3)$ of skew-symmetric $3{\times}3$ matrices with the standard commutator $$\label{eq49}
[\Omega_1,\Omega_2]=\Omega_1 \Omega_2-\Omega_2 \Omega_1.$$ Obviously, for any $Q\in SO(3)$ $$\label{eq50}
T_Q(SO(3))=Q\,\mathfrak{so}(3) = \mathfrak{so}(3)\,Q.$$
We fix an isomorphism $f$ of the vector spaces $\mathfrak{so}(3)$ and ${\mathbf{R}}^3$. Namely, $$\Omega=\begin{Vmatrix} 0 & -\oq_3&\oq_2\\
\oq_3 & 0 & -\oq_1\\
-\oq_2 & \oq_1 & 0
\end{Vmatrix} \quad \mapsto \quad f(\Omega)=\begin{Vmatrix} \oq_1\\
\oq_2\\
\oq_3 \end{Vmatrix}.$$ It is shown straightforwardly that $f$ takes the commutator to the standard cross product $$\label{eq51}
f([\Omega_1,\Omega_2])=f(\Omega_1)\times f(\Omega_2).$$
The tangent bundle of the Lie group is trivial. One of the possible trivializations of $T(SO(3))$ is given by the map $$\label{eq52}
T(SO(3))\to SO(3)\times {\mathbf{R}}^3: (Q,\dot Q) \mapsto (Q,f(Q^{-1}\dot Q)),$$ which is well defined in virtue of . For the sake of being short, we call the vector $$\label{eq53}
\oq=f(Q^{-1}\dot Q) \in {\mathbf{R}}^3$$ the spin of the rotation velocity $\dot Q$, although it is a slight abuse of terminology. Let us describe the mechanical sense of it. Differentiating , we obtain $$\label{eq54}
\dot {\mathbf{\be}} = \mathbf{\bi}\dot Q =\mathbf{\be}Q^{-1}\dot Q.$$ Denote $$\label{eq55}
\oq=\begin{Vmatrix} \oq_1\\
\oq_2\\
\oq_3
\end{Vmatrix}.$$ Equation in virtue of definition takes the form $$\dot\be_1=\oq_3\be_2-\oq_2\be_3,\qquad \dot\be_2=\oq_1\be_3-\oq_3\be_1,\qquad \dot\be_3=\oq_2\be_1-\oq_1\be_2,$$ i.e., the spin components in the basis $\mathbf{\bi}$ are the projections of the angular velocity vector to the moving axes. The vector defined by is also called the **angular velocity in the body** [@Arnold]. It is clear that the set of all rotation velocities $\dot Q$ with the same spin is a left invariant vector field on $SO(3)$.
Let us consider the one-parameter subgroup $\{Q^\tau\}\subset SO(3)$ consisting of the matrices $$Q^\tau = \begin{Vmatrix} 1 & 0 & 0\\
0 & \cos\tau & -\sin\tau \\
0 & \sin\tau & \cos\tau
\end{Vmatrix}.$$ It acts as a one-parameter group $G=\{g^\tau\}$ of diffeomorphisms of $SO(3)$, $$\label{eq56}
g^\tau(Q)=Q^\tau Q.$$ The generating vector field $$v(Q)=\left. \frac{d}{d\tau}\right|_{\tau=0}g^\tau(Q)=\left. \frac{dQ^\tau}{d\tau}\right|_{\tau=0}Q$$ is right invariant; at the point $$\label{eq57}
Q = \begin{Vmatrix} \aq_1 & \aq_2 & \aq_3\\
\aq'_1 & \aq'_2 & \aq'_3 \\
\aq''_1 & \aq''_2 & \aq''_3
\end{Vmatrix}$$ it has the spin $$\label{eq58}
\nu =f (Q^{-1}\left. \frac{dQ^\tau}{d\tau}\right|_{\tau=0}Q) =\begin{Vmatrix} \aq_1 \\
\aq_2 \\
\aq_3\end{Vmatrix}.$$ Comparing with and we see that $G$ rotates the body about the fixed in space vector $\bi_3$, the direction of which is usually said to be vertical.
The map $p: Q\mapsto \nu$ defined by takes $SO(3)$ to the unit sphere in ${\mathbf{R}}^3$ $$\label{eq59}
\aq_1^2+\aq_2^2+\aq_3^2=1.$$ This sphere is called the Poisson sphere. The inverse image of each point is exactly an orbit of $G$ and therefore $$\label{eq60}
p: SO(3) \to S^2$$ is the quotient map. Since $p$ is smooth and $G$ is compact, the triple $\mB=(SO(3),p,S^2)$ is a principal $G$-bundle [@Arnold].
The symmetric inertia operator, diagonal in the basis $\mathbf{\bi}$, $$I=\begin{Vmatrix} I_1 & {} & {}\\
{}&I_2&{}\\
{}& {}& I_3 \end{Vmatrix}$$ defines the Riemannian metric on $SO(3)$ which in the structure of is $$\label{eq61}
m_Q(\oq^1,\oq^2)=I\oq^1\cdot \oq^2$$ (the dot stands for the standard scalar product in ${\mathbf{R}}^3$). The metric is left invariant (since the components of the spin are left invariant) and, in particular, is preserved by the transformations . The corresponding kinetic energy has the classical form $$\label{eq62}
K=\frac{1}{2} (I_1\oq_1^2+I_2\oq_2^2+I_3\oq_3^2).$$
Supposing that the force field has a symmetry axis, we can choose the basis $\mathbf{\bi}$ in such a way that the symmetry axis is the vertical $O\bi_1$. Then the transformations preserve the potential energy $V:SO(3)\to {\mathbf{R}}$ and, therefore, $$\label{eq63}
V={{\tilde V}}\circ p,$$ where $p$ is the map and ${{\tilde V}}={{\tilde V}}(\aq_1,\aq_2,\aq_3)$ is a function on the sphere .
Thus, the problem of the motion of a rigid body with a fixed point is described by the mechanical system $$\label{eq64}
(SO(3),m,V)$$ with symmetry $G$, where $m$ and $V$ are defined by and , $G$ acts according to .
By Theorem \[theo5\], system generates the mechanical system with gyroscopic forces having the Poisson sphere as the reduced configuration space. Such system obviously defines the motion of the direction vector of the vertical in the coordinate system fixed in the body. Let us calculate the elements of this system.
The momentum corresponding to the symmetry group $G$ is found from , , , and , $$\label{eq65}
J(Q,\dot Q)=I_1\aq_1\oq_1+I_2\aq_2\oq_2+I_3\aq_3\oq_3.$$
\[lem2\] In the product structure the map tangent to is $$\label{eq66}
Tp(Q,\oq)=p(Q)\times \oq.$$
Denote ${\displaystyle}{\Omega= \left.\frac{dQ^\tau}{d\tau}\right|_{\tau=0}\in \mathfrak{so}(3)}$. By definition $$\begin{array}{l}
{\displaystyle}Tp(Q,\oq)=\frac{d}{dt}f(Q^{-1}\Omega Q)=f(Q^{-1}\Omega\dot Q+\dot{Q}^{-1}\Omega Q)= f(Q^{-1}\Omega\dot Q- Q^{-1}\dot{Q}Q^{-1}\Omega Q)=\\
{\displaystyle}\qquad = f([Q^{-1}\Omega Q,Q^{-1}\dot Q])=f(Q^{-1}\Omega Q)\times f(Q^{-1}\dot Q) =p(Q)\times \oq.
\end{array}$$ Here we used the identity $\dot Q ^{-1}Q+Q^{-1}\dot Q \equiv 0$ and the property .
The tangent map $Tp$ establishes an isomorphism of the horizontal subspace $J_0(Q)$ in $T_Q(SO(3))$ and the tangent plane to the Poisson sphere at the point $p(Q)$, $$\label{eq67}
\aq_1\dot{\aq}_1+\aq_2\dot{\aq}_2+\aq_3\dot{\aq}_3=0.$$ Denote by $$\oq^0 =\begin{Vmatrix} \oq^0_1 \\
\oq^0_2 \\
\oq^0_3\end{Vmatrix}$$ the spin of the horizontal vector from $T(SO(3))$ covering the tangent vector $\dot \nu \in T(S^2)$, $$\dot \nu =\begin{Vmatrix} \dot \aq_1 \\
\dot \aq_2 \\
\dot \aq_3\end{Vmatrix}.$$ Then from – we get $\oq^0\cdot I\nu=0$, $\dot\nu =\nu \times \oq^0$, $\nu \cdot \dot\nu =0$. This immediately yields ${\displaystyle}{\oq^0=\frac{\dot \nu \times I\nu}{I\nu \cdot \nu}}$. In the coordinate form $$\label{eq68}
\begin{array}{c}
{\displaystyle}\oq^0_1=\frac{I_3\aq_3 \dot{\aq}_2-I_2\aq_2 \dot{\aq}_3}{I_1\aq_1^2+I_2\aq_2^2+I_3\aq_3^2},\; {\displaystyle}\oq^0_2=\frac{I_1\aq_1 \dot{\aq}_3-I_3\aq_3 \dot{\aq}_1}{I_1\aq_1^2+I_2\aq_2^2+I_3\aq_3^2},\;
{\displaystyle}\oq^0_3=\frac{I_2\aq_2 \dot{\aq}_1-I_1\aq_1 \dot{\aq}_2}{I_1\aq_1^2+I_2\aq_2^2+I_3\aq_3^2}
\end{array}$$ we obtain a partial case of the relations found by G.V.Kolosov [@Kolosov].
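A numerical illustration of these relations (ours, not from the original text) is given below: the spin $\oq^0=(\dot\nu\times I\nu)/(I\nu\cdot\nu)$ of the horizontal lift is computed for arbitrary illustrative values, and two defining properties are checked, namely horizontality (zero momentum) and $\nu\times\oq^0=\dot\nu$.

```python
import numpy as np

def horizontal_spin(I, nu, nu_dot):
    """Spin omega^0 of the horizontal vector covering the tangent vector nu_dot at nu."""
    I = np.asarray(I, dtype=float)
    return np.cross(nu_dot, I * nu) / np.dot(I * nu, nu)

I = np.array([1.0, 2.0, 3.0])                   # principal moments of inertia (illustrative)
nu = np.array([0.0, 0.6, 0.8])                  # a point on the Poisson sphere
nu_dot = np.array([1.0, 0.0, 0.0])              # tangent vector: nu . nu_dot = 0
w0 = horizontal_spin(I, nu, nu_dot)
print(np.dot(I * nu, w0))                       # horizontality: the momentum vanishes
print(np.cross(nu, w0) - nu_dot)                # nu x omega^0 = nu_dot (zero vector)
```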
The latter equations can be considered from another point of view. According to the definition of the spin, its components $\oq_1,\oq_2,\oq_3$ can be treated as 1-forms on $SO(3)$. Then $\oq^0_1,\oq^0_2,\oq^0_3$ are the horizontal parts of these forms. Since $Tp$ induces an isomorphism between horizontal $G$-invariant forms on $SO(3)$ and forms on the Poisson sphere, the formulas $$\label{eq69}
\begin{array}{c}
{\displaystyle}\oq^0_1=\frac{I_3\aq_3 \rmd {\aq}_2 -I_2 \aq_2 \rmd{\aq}_3}{I_1\aq_1^2+I_2\aq_2^2+I_3\aq_3^2},\; {\displaystyle}\oq^0_2=\frac{I_1\aq_1 \rmd{\aq}_3-I_3\aq_3 \rmd{\aq}_1}{I_1\aq_1^2+I_2\aq_2^2+I_3\aq_3^2},\;
{\displaystyle}\oq^0_3=\frac{I_2\aq_2 \rmd{\aq}_1-I_1\aq_1 \rmd{\aq}_2}{I_1\aq_1^2+I_2\aq_2^2+I_3\aq_3^2}
\end{array}$$ give the explicit expression of this isomorphism.
The reduced metric on $S^2$ is found from , , and , $$\label{eq70}
{{\tilde m}}(\dot \nu,\dot \nu)=I_1(\oq_1^0)^2+I_2(\oq_2^0)^2+I_3(\oq_3^0)^2=\frac{I_1 I_2 I_3 \left(\displaystyle{\frac{\dot{\aq}_1^2}{I_1}+\frac{\dot{\aq}_2^2}{I_2}+\frac{\dot{\aq}_3^2}{I_3}}\right)}{I_1\aq_1^2+I_2\aq_2^2+I_3\aq_3^2}.$$ Here we also used equality . It is easily seen that the metric ${{\tilde m}}$ is conformally equivalent to the ellipsoidal one [@Kolosov].
Using we find the form $\mh$ of the connexion $J_0$, $$\label{eq71}
\mh_Q(\dot Q)=\frac{{\langle \dot Q,v(Q) \rangle}}{{\langle v(Q),v(Q) \rangle}}=\frac{I_1\aq_1\oq_1+I_2\aq_2\oq_2+I_3\aq_3\oq_3}{I_1\aq_1^2+I_2\aq_2^2+I_3\aq_3^2}.$$ The exterior derivative of gives the curvature form $$\label{eq72}
\begin{array}{l}
{\displaystyle}\mg=\rmd\frac{1}{I_1\aq_1^2+I_2\aq_2^2+I_3\aq_3^2}\wedge (I_1\aq_1\oq_1+I_2\aq_2\oq_2+I_3\aq_3\oq_3)+ \\[3mm]
{\displaystyle}\quad \frac{I_1\aq_1 \rmd\oq_1+I_2\aq_2 \rmd\oq_2+I_3\aq_3 \rmd\oq_3+I_1\rmd\aq_1\wedge \oq_1+I_2\rmd\aq_2\wedge \oq_2+I_3\rmd\aq_3\wedge \oq_3}{I_1\aq_1^2+I_2\aq_2^2+I_3\aq_3^2}.
\end{array}$$
\[prop8\] The components of the spin $\oq_1,\oq_2,\oq_3$ considered as $1$-forms on $SO(3)$ satisfy the relations $$\label{eq73}
\rmd \oq_1=\oq_3\wedge \oq_2, \qquad \rmd \oq_2=\oq_1\wedge \oq_3, \qquad \rmd \oq_3=\oq_2\wedge \oq_1.$$
The forms $\oq_1,\oq_2,\oq_3$ give a basis in the space of left invariant 1-forms on $SO(3)$. Let us introduce the left invariant fields $w^1,w^2,w^3$ such that the spin of $w^i$ is $\bi_i \in {\mathbf{R}}^3$. The fields bracket $[w^1,w^2]$ is also left invariant and its spin due to is $\bi_1{\times}\bi_2=\bi_3$, therefore $$\label{eq74}
[w^1,w^2]=w^3.$$ Analogously, $$\label{eq75}
[w^2,w^3]=w^1, \qquad [w^3,w^1]=w^2.$$ Now equations follow from and since, obviously, the basis $\{\oq_1,\oq_2,\oq_3\}$ is dual to $\{w^1,w^2,w^3\}$.
Let us substitute in and restrict the form $\mg$ to the horizontal subspace $J_0$. The restriction is obtained just by replacing $\oq_i$ with $\oq_i^0$. We get $$\begin{array}{l}
{\displaystyle}\mg|{J_0}= \frac{1}{I_1\aq_1^2+I_2\aq_2^2+I_3\aq_3^2}\left[ I_1 \rmd \aq_1 \wedge \oq_1^0+ I_2 \rmd \aq_2 \wedge \oq_2^0+I_3 \rmd \aq_3 \wedge \oq_3^0 - \right.\\
\qquad \left. - (I_1 \aq_1 \oq_2^0\wedge \oq_3^0+ I_2 \aq_2 \oq_3^0\wedge \oq_1^0+I_3 \aq_3 \oq_1^0\wedge \oq_2^0)\right].
\end{array}$$ Here we used the above mentioned property $I_1\aq_1\oq^0_1+I_2\aq_2\oq^0_2+I_3\aq_3\oq^0_3=0$. To find the form of gyroscopic forces of the reduced system ${{\tilde X}}_k$, let us use the diffeomorphism . We get $$\label{eq76}
\begin{array}{l}
{\displaystyle}k\,{{\tilde \mg}}=k \frac{(I_2+I_3-I_1)I_1\aq_1^2+(I_3+I_1-I_2)I_2\aq_2^2+(I_1+I_2-I_3)I_3\aq_3^2}{(I_1\aq_1^2+I_2\aq_2^2+I_3\aq_3^2)^2} \times \\
\qquad \times (\aq_1 \rmd \aq_2 \wedge \rmd \aq_3+\aq_2 \rmd \aq_3 \wedge \rmd \aq_1+\aq_3 \rmd \aq_1 \wedge \rmd \aq_2).
\end{array}$$
The amended potential is found from , , , and , $$\label{eq77}
{\displaystyle}{{\tilde V}}_k(\aq_1,\aq_2,\aq_3)= {{\tilde V}}(\aq_1,\aq_2,\aq_3)+\frac{k^2}{2(I_1\aq_1^2+I_2\aq_2^2+I_3\aq_3^2)}.$$
Finally, the reduced system in the dynamics of a rigid body is a mechanical system with gyroscopic forces $$(S^2,{{\tilde m}},{{\tilde V}}_k, k\,{{\tilde \mg}})$$ the elements of which are defined by , , and .
Note that in the expression for the form of gyroscopic forces , the multiplier $$\aq_1 \rmd \aq_2 \wedge \rmd \aq_3+\aq_2 \rmd \aq_3 \wedge \rmd \aq_1+\aq_3 \rmd \aq_1 \wedge \rmd \aq_2$$ is the volume form of $S^2$ induced from ${\mathbf{R}}^3$ and the coefficient in front of it, in the case $k\ne 0$, has constant sign on the sphere in virtue of the triangle inequalities for the inertia moments. Using Proposition \[prop5\] we get the following interesting property of the trajectories of the vertical direction vector on the Poisson sphere: trajectories having the energy constant $h$ do not have inflection points in the metric ${{\tilde m}}_h$. Standing on the outer side of the sphere, we see that trajectories turn to the right of the corresponding geodesics when $k>0$ and to the left when $k<0$.
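As a numerical check of this observation (not part of the original text), the sketch below evaluates the coefficient of the volume form in the gyroscopic 2-form and the amended potential given above, and verifies on a random sample of points of the Poisson sphere that the coefficient keeps one sign when the inertia moments satisfy the triangle inequalities; all numerical values are illustrative.

```python
import numpy as np

def gyro_coefficient(I, nu, k=1.0):
    """Coefficient multiplying the volume form of S^2 in the gyroscopic 2-form."""
    I1, I2, I3 = I
    a1, a2, a3 = nu
    q = I1 * a1**2 + I2 * a2**2 + I3 * a3**2
    num = ((I2 + I3 - I1) * I1 * a1**2 +
           (I3 + I1 - I2) * I2 * a2**2 +
           (I1 + I2 - I3) * I3 * a3**2)
    return k * num / q**2

def amended_potential(V, I, nu, k):
    """Amended potential: V(nu) + k^2 / (2 (I1 a1^2 + I2 a2^2 + I3 a3^2))."""
    I1, I2, I3 = I
    a1, a2, a3 = nu
    return V(nu) + k**2 / (2.0 * (I1 * a1**2 + I2 * a2**2 + I3 * a3**2))

rng = np.random.default_rng(1)
pts = rng.normal(size=(1000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
vals = [gyro_coefficient((1.0, 2.0, 2.5), nu) for nu in pts]   # moments obey the triangle inequalities
print(min(vals) > 0.0)                                         # True: the coefficient keeps one sign
```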
[^1]: Donetsk Physico-Technical Institute.
[^2]: Submitted on July 29, 1977.
[^3]: Russian Journal “Mechanics of Rigid Body”.
---
author:
- 'R. Hagala[^1]'
- 'C. Llinares'
- 'D. F. Mota'
bibliography:
- 'references.bib'
title: |
Cosmological simulations with disformally\
coupled symmetron fields
---
We investigate statistical properties of the distribution of matter at redshift zero in disformal gravity by using $N$-body simulations. The disformal model studied here consists of a conformally coupled symmetron field with an additional exponential disformal term. We conduct cosmological simulations to quantify the impact of the new disformal terms on the matter power spectrum, halo mass function, and radial profile of the scalar field. We calculated the disformal geodesic equation and the equation of motion for the scalar field. We then implemented these equations into the $N$-body code [<span style="font-variant:small-caps;">Isis</span>]{}, which is a modified gravity version of the code [<span style="font-variant:small-caps;">Ramses</span>]{}. The presence of a conformal symmetron field increases both the power spectrum and mass function compared to standard gravity on small scales. Our main finding is that the newly added disformal terms tend to counteract these effects and can make the evolution slightly closer to standard gravity. We finally show that the disformal terms give rise to oscillations of the scalar field in the centre of the dark matter haloes.
Introduction
============
Since 1998, it has been known that the universe expands at an accelerating rate that is consistent with the existence of a cosmological constant $\Lambda$ [@1998AJ....116.1009R; @1999ApJ...517..565P]. The standard model for cosmology, $\Lambda$CDM, gives an excellent fit to most modern precision observations of large scale structures and of the cosmic microwave background, but it does not explain what the source of $\Lambda$ is. Attempts to calculate the vacuum energy from particle physics yield answers that are several orders of magnitude off from the measured cosmological value of $\Lambda$. A cancellation of that many terms is very improbable and would require extreme fine tuning. This is known as the cosmological constant problem and is a severe problem in modern physics (see @1989RvMP...61....1W for an early introduction to this issue).
A viable solution to the cosmological constant problem is to assume that the particle physics vacuum energy is totally concealed on gravitational scales, while other mechanisms are responsible for the measured expansion. One way to search for such mechanisms consists of introducing a slight modification to standard general relativity (GR) in such a way that the equations for gravity will give rise to accelerated expansion on large scales. There are innumerable models for modified gravity (see @2012PhR...513....1C for a review).
To be considered viable, an important requirement that any modification of gravity must fulfil is that it should reduce to GR on solar system scales. This is achieved through so-called screening mechanisms, described in detail by @Joyce:2014kja. Screening mechanisms are needed because conventional GR is tested and confirmed to very high precision on solar system scales, meaning any modifications to gravitational physics must give similar results within very tight constraints on these scales (see the review by @2009aosp.conf..203R for an overview of solar system tests and constraints).
In this paper we investigate a specific form of modified gravity, namely the disformal model, by using $N$-body simulations. Disformal models were first introduced by @1993PhRvD..48.3641B, and have now been widely studied by applying them to inflation, dark energy, and dark matter [@dis1; @dis2; @dis3; @dis4; @dis5; @dis6; @dis7; @dis8; @dis9; @dis11; @dis14; @dis12; @dis13; @dis10; @2015JCAP...04..036V]. However, this is the first time disformal models have been studied on non-linear scales.
We consider a model where the scalar field has both conformal and disformal couplings to matter. For the conformal part we use a symmetron potential [@2005PhRvD..72d3535P; @2008PhRvD..77d3524O; @2010PhRvL.104w1301H], while for the disformal part we use an exponential term.
Conformally invariant symmetron models have already been investigated in the non-linear regime, as have several other conformally invariant scalar-tensor theories [@Boehmer:2007ut; @Li:2008fa; @baldi_coupled_quintessence_code; @zhao_baojiu_forf_code; @Li2; @dgp_code_durham; @nbody_chameleon; @2014PhRvL.112v1102H]. The aim of this paper is to investigate, for the first time, the effects of adding a disformal term to the symmetron field. We will focus our analysis mainly on the statistical properties (i.e. the power spectrum and halo mass function) of the simulated matter distribution on non-linear scales. The simulations are performed with a modified version of the $N$-body code [<span style="font-variant:small-caps;">Isis</span>]{}, which is itself a modified gravity version of the code [<span style="font-variant:small-caps;">Ramses</span>]{}.
The paper is organised as follows: Section 2 describes the equation of motion of the scalar field and the associated geodesic equation; it also summarises how these are implemented into [<span style="font-variant:small-caps;">Isis</span>]{} and tested. Section 3 describes the cosmological simulations that are used for the analysis and Section 4 contains the results from the analysis. We discuss the results and draw conclusions in Section 5.
The equations and the code
==========================
The model
---------
The model is defined by the following scalar-tensor action $$S=\intop\left[\sqrt{-g}\left(\frac{R}{16\pi G}-\frac{1}{2}\phi^{,\mu}\phi_{,\mu}-V\left(\phi\right)\right)+\sqrt{-\bar{g}}\bar{\mathcal{L}_{\mathrm{m}}}\right]\mathrm{d}^{4}x,\label{eq:action}$$ where $g$ and $\bar{g}$ are the Einstein and Jordan frame metrics, $R$ is the Ricci scalar, and $\bar{\mathcal{L}}_{m}$ is the Lagrangian density of matter (computed using the Jordan frame metric $\bar{g}$ whenever applicable). The field potential $V\left(\phi\right)$ can have many different forms, but we choose the quartic symmetron potential with the three free parameters $\mu$, $\lambda$, and $V_{0}$, $$V\left(\phi\right)=-\frac{1}{2}\mu^{2}\phi^{2}+\frac{1}{4}\lambda\phi^{4}+V_{0}.$$ In disformal gravity the Jordan frame metric $\bar{g}$ is related to the Einstein frame metric according to $$\bar{g}_{\mu\nu}=A\left(\phi\right)g_{\mu\nu}+B\left(\phi\right)\phi_{,\mu}\phi_{,\nu}.$$ The specific forms of $A$ and $B$ that we will study in this paper are as follows: $$\begin{aligned}
A\left(\phi\right) & = 1+\left(\frac{\phi}{M}\right)^{2},\\
B\left(\phi\right) & = B_{0}\exp\left(\beta\frac{\phi}{\phi_{0}}\right), \end{aligned}$$ where $B_{0}$ and $\beta$ are free parameters for the disformal coupling. The normalization constant $\phi_{0}$ is chosen to be the vacuum expectation value of the symmetron field $\phi_{0}\equiv\frac{\mu}{\sqrt{\lambda}}$. The mass scale $M$ is a free parameter, which decides the interaction strength of the conformal coupling.
This specific choice of $A$, $B$, and $V$ gives a symmetron model with an additional disformal term described by $B\left(\phi\right)$. The model reduces to the symmetron model when fixing $B_{0}=0$. Both the matter-coupled symmetron part $A$ and the disformal part $B$ can lead to two separate screening effects [@2010PhRvL.104w1301H; @dis4]. We use natural units where $c=\hbar=1$.
The equation of motion for the scalar field
-------------------------------------------
The equation of motion for the scalar field that results from varying the action defined in Eq. is: $$\begin{gathered}
\left(1+\gamma^{2}\rho\right)\ddot{\phi}+3H\dot{\phi}-\frac{1}{a^{2}}\nabla^{2}\phi= \\
\gamma^{2}\rho\left(\frac{A_{,\phi}\left(\phi\right)}{A\left(\phi\right)}\dot{\phi}^{2}-\frac{B_{,\phi}\left(\phi\right)}{2B\left(\phi\right)}\dot{\phi}^{2}-\frac{A_{,\phi}\left(\phi\right)}{2B\left(\phi\right)}\right) - V_{,\phi}\left(\phi\right),
\label{eq:eom}\end{gathered}$$ where we define $$\gamma^{2}\equiv\frac{B}{A+B\phi^{,\mu}\phi_{,\mu}}.$$ See @dis4 for details on the derivation. Here, a dot represents a partial derivative with respect to cosmic time. The Einstein frame metric is assumed to be a flat Friedmann–Lemaître–Robertson–Walker metric with a scalar perturbation $\Psi$, specifically $$\mathrm{d}s^{2}=-\left(1+2\Psi\right)\mathrm{d}t^{2}+a^{2}\left(t\right)\left(1-2\Psi\right)\left(\mathrm{d}x^{2}+\mathrm{d}y^{2}+\mathrm{d}z^{2}\right).$$ The symbol $a$ is the expansion factor, $H=\frac{\dot{a}}{a}$ is the Hubble parameter, and $\rho$ is the total matter density. Note that we have not neglected the non-static terms in the scalar field since, in this particular model, there is a coupling between the density and the time derivatives, whose effects are still not fully understood in the non-linear regime of cosmological evolution.
For convenience, we normalize the field to the vacuum expectation value of the symmetron field, $$\phi_{0}\equiv\frac{\mu}{\sqrt{\lambda}}.$$ As such, the new dimensionless field $\chi=\phi/\phi_{0}$ should stay in the range $\chi\in[-1,1]$, at least for a symmetron-dominated case when $B_{0}$ is small. Also, for numerical convenience, we introduce the parameter $a_{\mathrm{SSB}}$, which defines the expansion factor at the time of spontaneous symmetry breaking, assuming a uniform matter distribution and no disformal term. We further define the density at which the symmetry is broken: $$\rho_{\mathrm{SSB}}\equiv M^{2}\mu^{2}=\frac{\rho_{0\left(z=0\right)}}{a_{\mathrm{SSB}}^{3}},$$ a dimensionless symmetron coupling constant $$\theta\equiv\frac{\phi_{0}M_{\mathrm{Pl}}}{M^{2}},$$ the range of the symmetron field in vacuum $$\lambda_{0}\equiv\frac{1}{\sqrt{2}\mu},$$ and a dimensionless disformal coupling constant $$b_{0}\equiv B_{0} H_{0}^{2} M_{\mathrm{Pl}}^{2}.$$
By taking into account these definitions, we can rewrite Eq. as $$\begin{gathered}
\left(1+\gamma^{2}\rho\right)\ddot{\chi}+3H\dot{\chi}-\frac{1}{a^{2}}\nabla^{2}\chi= \\
\gamma^{2}\rho\left(\frac{4\theta^{2}\zeta}{A\left(\phi\right)}\chi\dot{\chi}^{2}-\frac{\beta}{2}\dot{\chi}^{2}-\frac{1}{2\zeta B\left(\phi\right)M_{\mathrm{Pl}}^{2}}\chi+\frac{1}{a^{2}}\sum_{i=1,2,3}\Psi_{,i}\chi_{,i}\right) \\
+\left(\chi-\chi^{3}\right)\frac{1}{2\lambda_{0}^{2}}, \end{gathered}$$ where we have also fixed the three free functions as stated above and defined $$\zeta\equiv\frac{3\Omega_{0}H_{0}^{2}\lambda_{0}^{2}}{a_{SSB}^{3}}.$$
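For orientation, the definitions above can be turned into a small numerical conversion between the simulation parameters $(\lambda_0, a_{\mathrm{SSB}}, \theta, b_0)$ and the Lagrangian quantities $(\mu, M, \phi_0, \lambda, B_0)$. The following Python sketch is purely illustrative (it is not part of the [<span style="font-variant:small-caps;">Isis</span>]{} code) and assumes natural units with $H_0=M_{\mathrm{Pl}}=c=1$, with $M_{\mathrm{Pl}}$ taken to be the reduced Planck mass so that $\rho_{\mathrm{crit},0}=3H_0^2M_{\mathrm{Pl}}^2$, and with $\Omega_0$ identified with the matter density parameter.

```python
import numpy as np

# Illustrative conversion between the simulation parameters used in this paper
# (lambda_0, a_SSB, theta, b_0) and the Lagrangian quantities (mu, M, phi_0,
# lambda, B_0), plus the auxiliary constant zeta.  Units: H_0 = M_Pl = c = 1;
# M_Pl is assumed to be the reduced Planck mass (rho_crit = 3 H_0^2 M_Pl^2).

Omega_0   = 0.3175                # matter density parameter (Planck value used later)
Mpc_per_h = 1.0 / 2997.9          # 1 Mpc/h expressed in units of c/H_0

lambda_0 = 1.0 * Mpc_per_h        # vacuum range of the symmetron field
a_SSB    = 0.5                    # expansion factor at symmetry breaking
theta    = 1.0                    # dimensionless conformal coupling
b_0      = 1.0                    # dimensionless disformal coupling

rho_0   = 3.0 * Omega_0                        # present matter density
rho_SSB = rho_0 / a_SSB**3                     # density at symmetry breaking
mu      = 1.0 / (np.sqrt(2.0) * lambda_0)      # from lambda_0 = 1 / (sqrt(2) mu)
M       = np.sqrt(rho_SSB) / mu                # from rho_SSB = M^2 mu^2
phi_0   = theta * M**2                         # from theta = phi_0 M_Pl / M^2
lam     = (mu / phi_0)**2                      # from phi_0 = mu / sqrt(lambda)
B_0     = b_0                                  # from b_0 = B_0 H_0^2 M_Pl^2
zeta    = 3.0 * Omega_0 * lambda_0**2 / a_SSB**3

print(f"mu={mu:.3e}  M={M:.3e}  phi_0={phi_0:.3e}  lambda={lam:.3e}  zeta={zeta:.3e}")
```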
What remains to be done, before we can insert this equation into [<span style="font-variant:small-caps;">Isis</span>]{}, is to switch to supercomoving time – the time variable used by [<span style="font-variant:small-caps;">Isis</span>]{}/[<span style="font-variant:small-caps;">Ramses</span>]{}, defined by @1998MNRAS.297..467M – and to split this second order differential equation into a set of two coupled first-order differential equations. Supercomoving time $\tau$ is related to the cosmic time $t$ by $\mathrm{d}\tau=\frac{1}{a^{2}}\mathrm{d}t$. Finally, we introduce the variable $$q=a\chi',$$ which leads to the following set of first-order differential equations for $q$ and $\chi$: $$\begin{aligned}
\chi' = &\frac{q}{a},\label{eq:unitlessEOM} \\
\begin{split}
q' = &\frac{1}{1+\gamma^{2}\rho} \times
\left[\rule{0cm}{0.7cm}a^{3}\nabla^{2}\chi \right.\\
& \left. + \gamma^{2}\rho\left(3\tilde{H}q+ \left[\frac{4\theta^{2}\zeta}{A}\chi-\frac{\beta}{2}\right]\frac{q^{2}}{a}+a\sum_{i=1,2,3}\tilde{\Psi}_{,i}\chi_{,i}\right) \right. \\
& \left. + \left(1-\frac{1}{A+B\phi^{,\mu}\phi_{,\mu}}\frac{a_{SSB}^{3}}{a^{3}}\frac{\rho}{\rho_{0}}-\chi^{2}\right)\chi\frac{a^{5}}{2\lambda_{0}^{2}}\rule{0cm}{0.7cm}\right],
\end{split}\end{aligned}$$ where the primes denote derivatives with respect to supercomoving time. The supercomoving variables with tildes are defined as $\tilde{H}\equiv a^{2}H$ and $\tilde{\Psi}\equiv a^{2}\Psi$. Setting $\gamma^{2}\rho=B=0$ and $A\approx1$ here will result in symmetron equations equivalent to equations (22) and (23) of @2014PhRvD..89h4023L.
These equations are solved using the leapfrog algorithm. The time scales associated with the oscillations of the field are much smaller than those associated with the movement of matter. Because of this, we use shorter leapfrog time steps for the scalar field than for matter, as described in @2014PhRvD..89h4023L.
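A minimal, schematic version of this subcycled leapfrog update is sketched below. It is our own illustration on a toy one-dimensional periodic grid, with a simplified source term in which the density-dependent and disformal pieces are dropped; the actual [<span style="font-variant:small-caps;">Isis</span>]{} solver works on the full AMR grid.

```python
import numpy as np

# Schematic subcycled leapfrog for the scalar-field pair (chi, q): the field is
# advanced with n_sub short kick-drift-kick steps for every coarse matter step.
# The source term keeps only the a^3 Laplacian and the bare symmetron potential;
# the density-dependent and disformal pieces of the full equation are omitted.

def laplacian(chi, dx):
    """Second-order finite-difference Laplacian on a periodic 1D grid."""
    return (np.roll(chi, -1) - 2.0 * chi + np.roll(chi, 1)) / dx**2

def q_prime(chi, a, dx, lambda_0):
    """Toy right-hand side for q' (simplified source term)."""
    return a**3 * laplacian(chi, dx) + (1.0 - chi**2) * chi * a**5 / (2.0 * lambda_0**2)

def advance_field(chi, q, a, dtau, dx, lambda_0, n_sub=10):
    """Advance (chi, q) over one coarse step dtau using n_sub leapfrog sub-steps."""
    h = dtau / n_sub
    for _ in range(n_sub):
        q   += 0.5 * h * q_prime(chi, a, dx, lambda_0)   # half kick
        chi += h * q / a                                  # drift: chi' = q / a
        q   += 0.5 * h * q_prime(chi, a, dx, lambda_0)   # half kick
    return chi, q

# Example: screened random initial conditions, analogous to those used later.
rng = np.random.default_rng(0)
chi = rng.uniform(-1e-13, 1e-13, size=128)
q   = np.zeros(128)
chi, q = advance_field(chi, q, a=0.5, dtau=1e-3, dx=0.5, lambda_0=1.0)
```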
The geodesic equation
---------------------
In this theory of gravity, dark matter particles move in geodesics determined by the Jordan frame metric. Generally the geodesic equation reads $$\ddot{x}^{\mu}+\bar{\Gamma}_{\alpha\beta}^{\mu}\dot{x}^{\alpha}\dot{x}^{\beta}=0,$$ where $\bar{\Gamma}_{\alpha\beta}^{\mu}$ are the Christoffel symbols that correspond to the Jordan frame metric. Assuming nonrelativistic particles, we neglect quadratic terms in the velocity. The geodesic equation then takes the form $$\ddot{x}^{i}+\bar{\Gamma}_{00}^{i}+2\bar{\Gamma}_{j0}^{i}\dot{x}^{j}=0.$$ To find the necessary barred Christoffel symbols, we use the equation from @2013PhRvD..87h3010Z, $$\bar{\Gamma}_{\alpha\beta}^{\mu}=\Gamma_{\alpha\beta}^{\mu}+\frac{1}{2}\bar{g}^{\mu\nu}\left[\nabla_{\alpha}\bar{g}_{\beta\nu}+\nabla_{\beta}\bar{g}_{\alpha\nu}-\nabla_{\nu}\bar{g}_{\alpha\beta}\right].$$ The resulting equation of motion for the dark matter particles is given by $$\begin{aligned}
\begin{split}
\ddot{x}^{i}+\frac{\Psi_{,i}}{a^{2}}-\frac{2}{AM^{2}a^{2}}\gamma^{2}\phi\phi_{,i}\dot{\phi}^{2}+2\left(H+\frac{\phi\dot{\phi}}{AM^{2}}\right)\dot{x}^{i}\\
+\frac{1}{a^{2}}\gamma^{2}\left(\ddot{\phi}-\frac{1}{a^{2}}\sum_{k=1,2,3}\Psi_{,k}\phi_{,k}+\frac{1}{2}\frac{\beta}{\phi_{0}}\dot{\phi}^{2}\right)\phi_{,i}\\
+2\frac{1}{a^{2}}\gamma^{2}\left(\dot{\phi}_{,j}-H\phi_{,j}-\Psi_{,j}\dot{\phi}+\frac{\beta}{2\phi_{0}}\dot{\phi}\phi_{,j}\right)\phi_{,i}\dot{x}^{j}\\
+\frac{1}{M^{2}a^{2}}\frac{1}{A+B\phi^{,\mu}\phi_{,\mu}}\phi\phi_{,i}-4\gamma^{2}\frac{\phi}{a^{2}AM^{2}}\phi_{,i}\phi_{,j}\dot{\phi}\dot{x}^{j} & = 0.
\end{split}\end{aligned}$$ Here it is possible to recognise the acceleration terms associated with perturbed FLRW geodesics in standard gravity, namely $$\ddot{x}^{i}+\frac{\Psi_{,i}}{a^{2}}+2H\dot{x}^{i}=0,$$ where the second term is the standard gravity force and the third term is the Hubble friction. These terms are already included in [<span style="font-variant:small-caps;">Ramses</span>]{}, so we will focus on the other terms.
The effect of modified gravity on the trajectory of the particles is through the addition of a fifth force as well as new damping terms that are proportional to $\dot{\mathbf{x}}$. As a first attempt to simulate this model, we neglect the damping terms and focus our analysis on the impact of the extra terms associated with the fifth force on a slowly moving matter distribution. The only friction term that we keep is the one associated with the expansion of the universe, which is automatically taken into account when the equation is written in supercomoving form. The extra acceleration that results from the fifth force, written in supercomoving time and in terms of the dimensionless field $\chi$, which we inserted into the code, is given by $$\begin{aligned}
\begin{split}
\label{eq:fifthforce}
x_{\mathrm{fifth}}^{\prime\prime i} & =
-\frac{2\theta^{2}\zeta}{A+B\phi^{,\mu}\phi_{,\mu}} \, \chi_{,i} \times
\left\{ \rule{0cm}{0.8cm} a^{2}\chi \, + \, \frac{2\zeta}{a^{4}}\frac{b_{0}\exp\left(\beta\chi\right)}{H_{0}^{2}} \times \right.\\
& \left. \left( a\left[q'-3\tilde{H}q-a\sum_{k=1}^3\tilde{\Psi}_{,k}\chi_{,k}\right]+\left[\beta-\frac{4\theta^{2}\chi}{A}\right]\zeta q^{2}\right) \rule{0cm}{0.8cm} \right\} .
\end{split}\end{aligned}$$
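As an illustration of how such a term is evaluated numerically, the following sketch (ours, not the [<span style="font-variant:small-caps;">Isis</span>]{} implementation) computes only the leading conformal, static piece of this acceleration, $x''_i \simeq -2\theta^{2}\zeta\, a^{2}\chi\,\chi_{,i}$, taking $A\approx 1$ and dropping the disformal corrections involving $q$, $q'$, and $\tilde\Psi$; those corrections multiply the same gradient $\chi_{,i}$ and would be added analogously.

```python
import numpy as np

# Leading (conformal, static) piece of the fifth-force acceleration on a uniform
# periodic grid:  x''_i ~ -2 theta^2 zeta a^2 chi chi_{,i}, with A ~ 1.  The
# disformal corrections (terms with q, q' and Psi) are omitted in this sketch.

def gradient(field, dx):
    """Central-difference gradient on a periodic 3D grid; returns shape (3, N, N, N)."""
    return np.array([(np.roll(field, -1, axis=i) - np.roll(field, 1, axis=i)) / (2.0 * dx)
                     for i in range(3)])

def fifth_force_conformal(chi, a, theta, zeta, dx):
    grad_chi = gradient(chi, dx)
    return -2.0 * theta**2 * zeta * a**2 * chi * grad_chi

# Example usage on a toy field configuration.
chi = np.random.default_rng(2).normal(0.0, 1e-3, size=(32, 32, 32))
acc = fifth_force_conformal(chi, a=1.0, theta=1.0, zeta=1e-4, dx=0.5)
```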
Tests
-----
To test that our modifications to the code [<span style="font-variant:small-caps;">Isis</span>]{} were properly implemented, we compare results that were obtained with our code in the pure conformal limit (i.e. when we set $B_0=0$) to those that were presented by [@2014PhRvD..89h4023L] using a pure symmetron non-static code. We also test the dependence of the final results on the initial conditions that are used for the non-static solver.
### Cosmological comparison to symmetron results
We run two simulations, one with the code presented in this paper and one with the code described in @2014PhRvD..89h4023L, using the following symmetron parameters: $\theta=1$, $\lambda_{0}=1\,\mathrm{Mpc}$, and $a_{\mathrm{SSB}}=0.5$. For the disformal part of the model, we used $B_{0}=0$, such that the disformal code should be able to reproduce the already published symmetron non-static results. Both simulations are done with the same initial conditions, which are generated with the code [<span style="font-variant:small-caps;">Grafic</span>]{} [@2001ApJS..137....1B] using a box size of 64 $\mathrm{Mpc}/h$ and $128^3$ particles. The comparison of the simulations is focused on the power spectrum of over-density perturbations and the field profiles of the most massive halo.
Figure \[fig:symmcomp-128\] shows a comparison of the field profile for the most massive halo found in the simulation (with a mass of $2.5\times 10^{14}M_{\odot}/h$). Because the field oscillates, we compare mean values in time that are calculated by averaging over the time interval from $a=0.995$ to $a=1$. The differences in the field profiles from the two codes are always below 0.5% and are sufficiently small to be attributed to the initial random values of the field. A comparison between the power spectra at redshift $z=0$ from both simulations is presented in Fig. \[fig:symmcomp-powdiff\]. The relative difference between the power spectra is well below 0.5% in the whole domain of simulated frequencies.
### Dependence on initial conditions for the scalar field
The initial conditions for the scalar field are generated in the same way as in [@2014PhRvD..89h4023L]: the initial values for $\chi$ are drawn from a uniform distribution around zero with a small dispersion ($\chi_{0}\in\left[-\varepsilon,\,\varepsilon\right]$). We test the sensitivity of the redshift zero scalar field profiles against changes in the amplitude of the initial conditions. To this end, we run simulations with the only aim of tracking the evolution of the scalar field. As we are only interested in the field profiles, we use standard gravity in the geodesic equation to ensure identical matter distributions. The box size and number of particles are 64 $\mathrm{Mpc}/h$ and $64^3$ respectively. We do several runs changing the amplitude of the initial conditions with values ranging from $10^{-5}$ to $10^{-17}$. Figure \[fig:icsig\] shows the time averaged scalar field profile that corresponds to the most massive object found at redshift $z=0$ for all these simulations, which has a mass of $1.2\times 10^{14}M_{\odot}/h$. We find no significant deviations when varying $\varepsilon$. For the scientific simulations that we present in the following section, we chose to use $\varepsilon=10^{-13}$.
Cosmological simulations
========================
To quantify the effects of the disformal terms on the cosmological evolution, we run a set of cosmological simulations using Newtonian gravity, a pure symmetron model, and the symmetron model plus the disformal terms. The initial matter distribution is generated with the package [<span style="font-variant:small-caps;">Grafic</span>]{} [@2001ApJS..137....1B] with standard gravity. The approximation of not including modified gravity in the initial conditions is fully justified by the fact that, because of screening effects, modified gravity starts to act only after symmetry breaking, which we choose to occur at a much later time than the start of the simulation. All simulations use exactly the same initial matter distribution and assume a flat $\Lambda$CDM background cosmology provided by the Planck collaboration: $\Omega_{m}=0.3175$, $\Omega_{\Lambda}=0.6825$, and $H_{0}=67.11$ km/s/Mpc [@planck_param]. The number of particles is $256^{3}$, and the size of the box is 64 $\mathrm{Mpc}/h$.
The initial values of the dimensionless scalar field $\chi$ are set by assuming that the field is fully screened at the initial time of the simulation, and are thus drawn from a uniform random distribution with $\chi_{0}\in[-10^{-13},\,10^{-13}]$. The initial time derivative of the scalar field is assumed to be zero. As discussed in the previous section, we found that the evolution of the scalar field is not very sensitive to the assumed initial conditions.
Simulation $b_{0}$ $\beta$
----------------------- --------- ---------
GR (no modifications) … …
Symmetron-like 0 0
Disformal A 1 1
Disformal B 2 2
Disformal C 1 0
: \[tab:model\_params\]Model parameters for the different simulation runs.
Table \[tab:model\_params\] describes the model parameters employed in the simulations. The model Disformal A has what we consider to be standard parameters. Disformal B has an amplified disformal part owing to increased $b_{0}$ and $\beta$. Disformal C has $\beta=0$, which sets $B\left(\phi\right)$ to be constant. A constant disformal term might give some insight into the effect of the disformal term alone, since the equations show that the positive and negative parts of the disformal effective potential have different shapes when $\beta\neq 0$. The Symmetron-like simulation is run with the code presented in this paper, but with the disformal part of the equations turned off by setting $b_{0}=0$. In all of the modified gravity simulations, we use the symmetron model parameters $(\lambda_0, a_{SSB}, \theta)=(1 ~\mathrm{Mpc}/h, 0.5, 1)$.
After doing one simulation for each set of parameters, as described in Table \[tab:model\_params\], we found that Disformal B gave the most unexpected results – as will be presented in the next section. We therefore performed a total of four simulations of Disformal B, with different initial seeds for the scalar field, but with identical parameters and initial matter distribution. In two of these simulations, the scalar field ended up in the positive minimum of the effective potential ($\phi \approx +\phi_0$), while in two others, the scalar field ended up in the negative minimum ($\phi \approx - \phi_0$).
Results
=======
Power spectrum and halo mass function
-------------------------------------
We study the impact of the disformal terms on the power spectrum of density perturbations. The estimation of the power spectrum is made using a Fourier-based method. Discreteness effects are corrected using the method proposed by @2005ApJ...620..559J. Figure \[fig:powerspec\] shows the relative difference between the modified gravity and GR power spectra.
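For concreteness, a simplified version of such an estimator is sketched below. It is our own illustration: the window deconvolution shown is only the leading part of the correction of @2005ApJ...620..559J, and the pipeline actually used for the figures may differ in its details.

```python
import numpy as np

# Schematic Fourier-based estimate of P(k) from a 3D overdensity grid `delta`
# sampled on an n^3 mesh in a periodic box of side L (Mpc/h).  The division by
# the CIC window is a simplified stand-in for the full alias correction.

def power_spectrum(delta, L, n_bins=32, p_window=2):
    n = delta.shape[0]
    dx = L / n
    delta_k = np.fft.rfftn(delta) * dx**3                  # FFT with volume normalisation

    kx = np.fft.fftfreq(n, d=dx) * 2.0 * np.pi
    kz = np.fft.rfftfreq(n, d=dx) * 2.0 * np.pi
    KX, KY, KZ = np.meshgrid(kx, kx, kz, indexing="ij")
    kmag = np.sqrt(KX**2 + KY**2 + KZ**2)

    # Deconvolve the mass-assignment window (p_window = 2 for CIC interpolation).
    w = (np.sinc(KX * dx / (2.0 * np.pi)) *
         np.sinc(KY * dx / (2.0 * np.pi)) *
         np.sinc(KZ * dx / (2.0 * np.pi))) ** p_window
    delta_k = np.where(w > 0, delta_k / w, 0.0)

    pk_3d = np.abs(delta_k) ** 2 / L**3                    # |delta_k|^2 / V
    bins = np.linspace(2.0 * np.pi / L, kmag.max(), n_bins + 1)
    idx = np.digitize(kmag.ravel(), bins)
    pk = np.array([pk_3d.ravel()[idx == i].mean() if np.any(idx == i) else 0.0
                   for i in range(1, n_bins + 1)])
    return 0.5 * (bins[1:] + bins[:-1]), pk
```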
We find that the pure conformal model has the same effects that were found by @2012ApJ...748...61D and @2014PhRvD..89h4023L: the fifth force has no effect at large scales (which makes the model consistent with the observed normalization of the perturbations) and gives an excess of power at small scales. Regarding the disformal terms, we found that in the simulations Disformal A and C, the disformal part of the model has no significant impact on the modifications made to GR by the symmetron model. In the case of the first simulation of Disformal B, the stronger disformal terms counteract the effects of the symmetron, bringing the power spectrum closer to GR.
Figure \[fig:powerspecx\] shows the relative difference between all four simulations of Disformal B and the GR power spectrum. The two disformal simulations where the field fell to the positive potential minimum have the strongest suppression of power on all scales. For the two other disformal simulations, where the field stabilized around the negative potential minimum, there is some reduction on small scales ($k > 3 h \mathrm{Mpc}^{-1}$), but actually an increase on large scales ($k \approx 1 h \mathrm{Mpc}^{-1}$).
Additionally, we study the halo mass function. To extract this quantity from the simulation data, we identify the dark matter haloes with the halo finder [@2013ApJ...762..109B], which uses a 6D friends-of-friends algorithm. The cumulative mass functions for all the simulations are presented in Fig. \[fig:massfunc\]. The behaviour is the same as that found in the power spectrum: the symmetron model increases the number of small haloes, while the strong disformal term in Disformal B acts in the opposite direction, diminishing the symmetron effect.
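The cumulative mass function itself is straightforward to extract from the resulting halo catalogue; a minimal sketch is given below (it assumes simply an array of halo masses in $M_{\odot}/h$, with the reading of the halo-finder output left out).

```python
import numpy as np

# Schematic computation of the cumulative halo mass function n(>M) from a list
# of halo masses.  The input is a plain array of masses in M_sun/h; how the
# masses are read from the halo-finder catalogue is deliberately omitted.

def cumulative_mass_function(masses, box_size, n_bins=20):
    """Return mass thresholds M and the comoving number density of haloes above M."""
    masses = np.sort(np.asarray(masses))
    m_edges = np.logspace(np.log10(masses.min()), np.log10(masses.max()), n_bins)
    volume = box_size ** 3                      # (Mpc/h)^3 for a cubic box
    n_above = np.array([(masses >= m).sum() for m in m_edges]) / volume
    return m_edges, n_above

# Example with fake masses; in practice these come from the halo catalogue.
rng = np.random.default_rng(1)
fake_masses = 10 ** rng.uniform(11, 14.5, size=5000)
M, n = cumulative_mass_function(fake_masses, box_size=64.0)
```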
These findings on non-linear scales agree with earlier results from linear evolution in disformal theories. In the paper by @2015JCAP...04..036V, it is found, at the level of linear perturbations, that disformal terms counteract the conformal coupling to some degree.
Field profiles
--------------
We present the radial profile of the scalar field for the most massive halo found in the simulation at redshift $z=0$, which has a mass of $M=5.1\times 10^{14}M_{\odot}/h$. In the case of static simulations, it is enough to extract information from the last snapshot that is output by the simulation code. However, in the case of non-static simulations like those presented in this paper, the scalar field has oscillations, which make it difficult to compare different simulations (the scalar field will often be in a different phase at a given time). To overcome this problem, we calculate a mean value in time, taking into account several oscillation periods. The mean is calculated over the interval from $a=0.995$ to $a=1$. We assume here that the variations produced by the displacement of matter during that interval are much smaller than those related to the oscillations of the field.
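A schematic version of this averaging procedure is sketched below (ours; the halo centring and snapshot handling in the actual analysis are more involved). Here `snapshots` stands for a list of scalar-field grids output between $a=0.995$ and $a=1$.

```python
import numpy as np

# Schematic time-averaged radial profile of the scalar field around a halo centre.
# `snapshots` is a list of (chi_grid, expansion_factor) pairs with a in [0.995, 1];
# averaging over several closely spaced outputs washes out the field oscillations
# while the matter distribution stays essentially frozen.

def radial_profile(chi, centre, box_size, n_bins=30, r_max=5.0):
    n = chi.shape[0]
    dx = box_size / n
    grid = (np.indices(chi.shape) + 0.5) * dx
    d = grid - np.array(centre)[:, None, None, None]
    d -= box_size * np.round(d / box_size)            # periodic wrapping
    r = np.sqrt((d ** 2).sum(axis=0))
    bins = np.linspace(0.0, r_max, n_bins + 1)
    idx = np.digitize(r.ravel(), bins)
    prof = np.array([chi.ravel()[idx == i].mean() if np.any(idx == i) else np.nan
                     for i in range(1, n_bins + 1)])
    return 0.5 * (bins[1:] + bins[:-1]), prof

def mean_and_dispersion(snapshots, centre, box_size):
    """Return <chi>(r) and its dispersion over the selected snapshots."""
    profiles = np.array([radial_profile(chi, centre, box_size)[1]
                         for chi, a in snapshots])
    return profiles.mean(axis=0), profiles.std(axis=0)
```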
The top panel of Fig. \[fig:fieldprof\] shows the mean profile of the scalar field. In the Symmetron-like, Disformal A, and Disformal C simulations, the scalar field chose to oscillate around the negative minimum (the symmetron potential has two different minima with different signs). In these cases we show $-\langle\chi\rangle$ for a better comparison of the shape of the field profile. The fact that the potential has two minima should lead to the formation of domain walls [@2013PhRvL.110p1101L; @2014PhRvD..90l4041L; @2014PhRvD..90l5011P], which we do not find in our simulations owing to the small size of the box. Domain walls might have formed in our box at an earlier stage, however, and collapsed before redshift zero.
The scalar field profile tends to the vacuum value far from the halo and presents a gradient responsible for a fifth force when approaching the halo. The innermost region of the cluster is screened, and thus its corresponding field value tends towards zero in that region. The profile shows almost no variation among the different models. However, we find significant differences between standard and disformal symmetron models in the dispersion of the field profiles, which is a measurement of how large the amplitudes of the scalar field oscillations are. This information is shown in the bottom panel of Fig. \[fig:fieldprof\]. In the symmetron case, we find that the dispersion of the profile goes to zero in the centre of the cluster, which means that the scalar field is completely at rest there and only has oscillations in the outer regions and the voids surrounding the halo. In the disformal case, the situation is different. We find that the amplitude of the oscillations does not go to zero in the centre of the cluster and that, in fact, the amplitude can even increase in the central region with respect to the values found outside of the halo. We also see that the dispersion is slightly less than in the symmetron case in the low density region ($r > 1.5 r_{\mathrm{vir}}$). The reason for this may be related to the speed of waves of the scalar field being less in high density regions (see Eq. , which has the form of a wave equation where the factor of $(1 + \gamma^2 \rho)$ in front of $\ddot{\phi}$ can be regarded as $1/c_s^2$). Thus, inside massive haloes, the waves of the scalar field can cluster and give rise to extra oscillations. Further analysis must be made to confirm this hypothesis.
Conclusions
===========
For the first time, we run cosmological simulations with disformally coupled scalar fields. The model includes both a conformal and a disformal coupling to matter. For the conformal part we choose a symmetron potential and a conformal factor $A = 1+\left(\phi / M \right)^{2}$. The disformal part is given by an exponential factor $B = B_{0}\exp\left(\beta \phi / \phi_{0}\right)$. The aim of the paper is to test the effects of the disformal factor on the already known results for the symmetron model.
We present the formalism and modifications that we made to the code [<span style="font-variant:small-caps;">Isis</span>]{} to be able to simulate disformal gravity. The disformal code presented here is based on the non-static solver for scalar fields that is part of the [<span style="font-variant:small-caps;">Isis</span>]{} code [@2014PhRvD..89h4023L]. We present results obtained from a set of five cosmological simulations.
In the set of simulations, we find that the stronger disformal terms can counteract some of the clustering effects of the symmetron field: the power spectrum of density perturbations and the halo mass function are both smaller than in the symmetron case. However, at least in the region of parameter space studied, the conformal part of the model is dominant. The end result is therefore still a small-scale increase of both the power spectrum and the mass function with respect to GR.
In the first set of simulations, we found that Disformal B was the only model where the field fell to the positive minimum of the symmetron potential. This could be very important, since the disformal factor $B \left( \chi \right) \propto b_0 \exp \left( \beta \chi \right)$ will change by several orders of magnitude when $\chi \rightarrow -\chi$. The reason for the field falling to one minimum or the other in a particular simulation is complex and can be attributed to the chaotic nature of the equation of motion for the field. We did a total of four simulations of Disformal B with varying initial field distributions. The results show that the power spectrum was reduced significantly only for positive field values. This indicates that the exponential shape of $B(\phi)$ is more important than the value of $b_0$ in masking the symmetron clustering. Furthermore, we will probably see new physics and observational signatures of this model in future studies of domain walls [see @2014PhRvD..90b3521C for a description of domain walls in asymmetric potentials].
To understand the differences in the distribution of dark matter that appear because of the new terms, we study the field profiles that correspond to the most massive halo found in the simulations. We find almost no differences in the field profile, but we do find differences in the amplitude of the oscillations of the field, which are larger for the disformally coupled models. This implies that the differences in the power spectra and mass functions when comparing the disformal models and the symmetron are not the result of the field value, but of the field oscillations. While the gradient of the scalar field is independent of the presence of the disformal terms, it is important to keep in mind that the expression for the fifth force includes the time derivatives of the scalar field. Hence, the fifth force can differ from the purely conformal fifth force solely through an increase in these derivatives.
Possible observational consequences of the field oscillations in haloes are (time varying) changes in how photons are lensed by the oscillating disformally coupled field. Because light rays will follow geodesics that are dictated by the modified Jordan frame metric, anomalies in cluster lensing masses could indeed be a ‘disformal smoking gun’. This possibility will be studied in a future paper.
The simulations were performed on the NOTUR cluster HEXAGON, the computing facilities at the University of Bergen. CLL and DFM acknowledge support from the Research Council of Norway through grant 216756.
[^1]: E-mail: robert.hagala@astro.uio.no
---
author:
- |
Nabamita Banerjee$^{a}$, Suvankar Dutta$^b$, Sachin Jain$^c$ , R. Loganayagam$^d$ and Tarun Sharma$^c$\
$^a$ NIKHEF, Science Park 105, Amsterdam, The Netherlands\
$^b$ Department of Physics,\
Indian Institute of Science Education and Research(IISER) Bhopal, India\
$^c$Dept. of Theoretical Physics,\
Tata Institute of Fundamental Research,\
Homi Bhabha Rd, Mumbai 400005, India.\
$^d$Junior Fellow, Harvard Society of Fellows,\
Harvard University, Cambridge, MA 02138 .\
Email: [**nbaner@nikhef.nl, suvankar@iiserbhopal.ac.in, sachin@theory.tifr.res.in, nayagam@physics.harvard.edu, tarun@theory.tifr.res.in** ]{}
bibliography:
- 'anomaly.bib'
title: Constraints on Anomalous Fluid in Arbitrary Dimensions
---
Introduction {#Sec:Intro}
============
Anomalies are a fascinating set of phenomena exhibited by field theories and string theories. For the sake of clarity let us begin by distinguishing between three quite different phenomena bearing that name.
The first phenomenon is when a symmetry of a classical action fails to be a symmetry at the quantum level. One very common example of an anomaly of this kind is the breakdown of classical scale invariance of a system when we consider the full quantum theory. This breakdown results in *renormalization group flow*, i.e., a scale-dependence of physical quantities even in a classically scale-invariant theory. Often this classical symmetry cannot be restored without seriously modifying the content of the theory. Anomalies of this kind often serve as a cautionary tale to remind us that the symmetries of a classical action, like scale invariance, will often not survive quantisation.
The second set of phenomena are what are termed gauge anomalies. A system is said to exhibit a gauge anomaly if a particular classical gauge redundancy of the system is no longer a redundancy at the quantum level. Since such redundancies are often crucial in eliminating unphysical states in a theory, a gauge anomaly often signifies a serious mathematical inconsistency in the theory. Hence this second kind of anomaly serves as a consistency criterion whereby we discard any theory exhibiting a gauge anomaly as most probably inconsistent.
The third set of phenomena, the one we are mainly interested in in this work, occurs when a genuine symmetry of a quantum theory is no longer a symmetry when the theory is placed in a non-trivial background where we turn on sources for various operators in the theory. This lack of symmetry is reflected in the fact that the path integral with these sources turned on is no longer invariant under the original symmetry transformations. If the sources are non-trivial gauge/gravitational backgrounds (corresponding to the charge/energy-momentum operators in the theory), the path integral is no longer gauge-invariant. In fact, as is well known, the gauge transformation of the path-integral is highly constrained and the possible transformations are classified by the Wess-Zumino descent relations[^1].
Note that, unlike the previous two phenomena, here we make no reference to any specific classical description or to the process of quantisation, and hence this kind of anomaly is well-defined even in theories with multiple classical descriptions (or theories with no known classical description). Unlike the first kind of anomaly, the symmetry is simply recovered at the quantum level by turning off the sources. Unlike the gauge anomalies, the third kind of anomalies do not lead to any inconsistency. In what follows, when we speak of an anomaly we will always have in mind this last kind unless specified otherwise.
Anomalies have been studied in detail over the last few decades, and their mathematical structure and phenomenological consequences for zero temperature/chemical potential situations are reasonably well understood. However, anomaly-related phenomena in finite-temperature setups, let alone in non-equilibrium states, are still relatively poorly understood despite their obvious relevance to fields ranging from solid state physics to cosmology. It is becoming increasingly evident that there are universal transport processes which are linked to anomalies present in a system, and that the study of anomalies provides a non-perturbative way of classifying these transport processes, say in solid-state physics[@2012PhRvB..85d5104R].
While the presence of transport processes linked to anomalies had been noticed before in a diversity of systems ranging from free fermions[^2] to holographic fluids[^3], a major advance was made in [@Son:2009tf]. In that work it was shown using very general entropy arguments that the $U(1)^3$ anomaly coefficient in an arbitrary $3+1d$ relativistic field theory is linked to a specific transport process in the corresponding hydrodynamics. This argument has since been generalised to finite temperature corrections [@Neiman:2010zi; @Loganayagam:2011mu] and $U(1)^{n+1}$ anomalies in $d=2n$ space time dimensions [@Kharzeev:2011ds; @Loganayagam:2011mu]. In particular, the author of [@Loganayagam:2011mu] identified a rich structure to the anomaly-induced transport processes by writing down an underlying Gibbs-current which captured these processes in a succinct way. Later, in the microscopic context of ideal Weyl gases, the authors of [@Loganayagam:2012pz] identified this structure as emerging from an adiabatic flow of chiral states convected in a specific way in a given fluid flow.
While these entropy arguments are reasonably straightforward, they appear somewhat non-intuitive from a microscopic field theory viewpoint. It is especially important to have a more microscopic understanding of these transport processes if one wants to extend the study of anomalies far away from equilibrium, where one cannot resort to such thermodynamic arguments. So it is crucial to first rephrase these arguments in more field-theory-friendly terms, so that one may have better insight into how to move far away from equilibrium.
Precisely such a field-theory friendly reformulation in $3+1d$ and $1+1d$ was found recently in the references [@Banerjee:2012iz] and [@Jain:2012rh] respectively. Our main aim in this paper is to generalise their results to arbitrary even space time dimensions. So let us begin by repeating the basic physical idea behind this reformulation in the next few paragraphs.
Given a particular field theory exhibiting certain anomalies, one begins by placing that field theory in a time-independent gauge/gravitational background at finite temperature/chemical potential. We take the gauge/gravitational background to be spatially slowly varying compared to all other scales in the theory. Using this, one can imagine integrating out all the heavy modes[^4] in the theory to generate an Euler-Heisenberg-type effective action for the gauge/gravitational background fields at finite temperature/chemical potential.
In the next step one expands this effective action in a spatial derivative expansion and then imposes the constraint that its gauge transformation be that fixed by the anomaly. This constrains the terms that can appear in the derivative expansion of the Euler-Heisenberg type effective action. As is clear from the discussion above, this effective action and the corresponding partition function have a clear microscopic interpretation in terms of a field-theory path integral and hence is an appropriate object in terms of which one might try to reformulate the anomalous transport coefficients.
The third step is to link various terms that appear in the partition function to the transport coefficients in the hydrodynamic equations. The crucial idea in this link is the realisation that the path integral we described above is essentially dominated by a time-independent hydrodynamic state (or more precisely a hydrostatic state ). This means in particular that the expectation value of energy/momentum/charge/entropy calculated via the partition function should match with the distribution of these quantities in the corresponding hydrostatic state.
These distributions in turn depend on a subset of transport coefficients in the hydrodynamic constitutive relations which determine the hydrostatic state. In this way various terms that appear in the equilibrium partition function are linked to/constrain the transport coefficients crucial to hydrostatics. Focusing on just the terms in the path-integral which lead to the failure of gauge invariance, we can then identify the universal transport coefficients which are linked to the anomalies. This gives a re-derivation of various entropy-argument results in a path-integral language, thus opening the possibility that an argument in a similar spirit with the Schwinger-Keldysh path integral will give us insight into non-equilibrium anomaly-induced phenomena.
Our main aim in this paper is twofold. The first is to carry through this program of the equilibrium partition function in arbitrary dimensions, thus generalising the results of [@Banerjee:2012iz; @Jain:2012rh] and re-deriving in a path-integral friendly language the results of [@Kharzeev:2011ds; @Loganayagam:2011mu].
Our second aim is to clarify the relation between the Gibbs current studied in [@Loganayagam:2011mu; @Loganayagam:2012pz] and the partition function of [@Banerjee:2012iz; @Jain:2012rh]. Relating them requires some care in distinguishing the consistent from the covariant charge; the final result, however, is intuitive: the negative logarithm of the equilibrium partition function (times temperature) is simply obtained by integrating the equilibrium Gibbs free energy density (viz. the zeroth component of the Gibbs free current) over a spatial hypersurface. This provides a direct and intuitive link between the local description in terms of a Gibbs current and the global description in terms of the partition function.
The plan of the paper is as follows. We will begin by mainly reviewing known results in Section §\[sec:prelim\]. First, we review the formalism/results of [@Loganayagam:2011mu] in subsection §§\[subsec:LogaReview\], where entropy arguments were used to constrain the anomaly-induced transport processes and a Gibbs current was written down which captured those processes in a succinct way. This is followed by subsection §§\[subsec:PartitionReview\], where we briefly review the relevant details of the equilibrium partition function formalism for fluids as developed in [@Banerjee:2012iz]. A recap of the relevant results in (3+1) and (1+1) dimensions [@Banerjee:2012iz; @Jain:2012rh] and a comparison with the results in this paper are relegated to appendix \[app:oldresult\].
Section §\[sec:2ndimu1\] is devoted to the derivation of the transport coefficients for a $2n$-dimensional anomalous fluid using the partition function method. The next section §\[sec:entropy\] contains the construction of the entropy current for the fluid and the constraints on it coming from the partition function. This mirrors similar discussions in [@Banerjee:2012iz; @Jain:2012rh]. We then compare these results to the results of [@Loganayagam:2011mu] presented before in subsection §§\[subsec:LogaReview\] and find perfect agreement.
Prodded by this agreement, we proceed in the next section §\[sec:IntByParts\] to a deeper analysis of the relation between the two formalisms. We prove an intuitive relation whereby the partition function can be directly derived from the Gibbs current of [@Loganayagam:2011mu] by a simple integration (after one carefully shifts from the covariant to the consistent charge).
This is followed by section §\[sec:2ndimmul\], where we generalise all our results to multiple $U(1)$ charges. We perform a $CPT$ invariance analysis of the fluid in section §\[sec:CPT\], which imposes constraints on the fluid partition function. We end with conclusions and discussion in section §\[sec:conclusion\].
Various technical details have been pushed to the appendices for the convenience of the reader. After the appendix \[app:oldresult\] on comparison with previous partition function results in (3+1) and (1+1) dimensions, we have placed an appendix \[app:hydrostatics\] detailing various specifics about the hydrostatic configuration considered in [@Banerjee:2012iz]. We then have an appendix \[app:variationForms\] where we present the variational formulae to obtain currents from the partition function in the language of differential forms. This is followed by an appendix \[app:formConventions\] on notations and conventions (especially the conventions of wedge product etc.).
Preliminaries {#sec:prelim}
=============
In this section we begin by reviewing and generalising various results from [@Loganayagam:2011mu] where constraints on anomaly-induced transport in arbitrary dimensions were derived using adiabaticity (i.e., the statement that there is no entropy production associated with these transport processes). Many of the zero temperature results here were also independently derived by the authors of [@Kharzeev:2011ds].
We will then review the construction of the equilibrium partition function (free energy) for a fluid in the rest of the section. The technique has been well explained in [@Banerjee:2012iz], and readers familiar with it can skip this part.
Adiabaticity and Anomaly induced transport {#subsec:LogaReview}
------------------------------------------
Hydrodynamics is a low energy (or long wavelength) description of a quantum field theory around its thermodynamic equilibrium. Since the fluctuations are of low energy, we can express physical data in terms of derivative expansions of fluid variables (fluid velocity $u(x)$, temperature $T(x)$ and chemical potential $\mu(x)$) around their equilibrium value.
The dynamics of the fluid is governed by conservation equations, for example the conservation of the fluid stress tensor or of the fluid charge current. The stress tensor and charge current of the fluid can be expressed in terms of the fluid variables and their derivatives. At any derivative order, a generic form of the stress tensor and charge current can be written down, compatible with the symmetries and thermodynamics of the underlying field theory. These generic expressions are known as constitutive relations. As it turns out, the validity of the second law of thermodynamics further constrains the form of these constitutive relations.
The author of [@Loganayagam:2011mu] assumed the following form for the constitutive relations describing energy, charge and entropy transport in a fluid $$\begin{split}
T^{\mu\nu} &\equiv \varepsilon u^\mu u^\nu + p P^{\mu\nu} + q^\mu_{anom}u^\nu + u^\mu q^\nu_{anom} + T^{\mu\nu}_{diss}\\
J^{\mu} &\equiv q u^\mu + J^{\mu}_{anom}+J^{\mu}_{diss} \\
J^\mu_S &\equiv s u^\mu + J^\mu_{S,anom}+J^\mu_{S,diss}\\
\end{split}$$ where $u^\mu$ is the velocity of the fluid under consideration, which obeys $u^\mu u_\mu =-1$ when contracted using the space time metric $g_{\mu\nu}$. Further, $P^{\mu\nu}\equiv g^{\mu\nu}+u^\mu u^\nu$, $p$ is the pressure of the fluid, and $\{\varepsilon,q,s\}$ are the energy, charge, and entropy densities respectively. We have denoted by $\{q^\mu_{anom},J^{\mu}_{anom},
J^\mu_{S,anom}\}$ the anomalous heat/charge/entropy currents and by $\{T^{\mu\nu}_{diss},J^{\mu}_{diss},
J^\mu_{S,diss}\}$ the dissipative currents.
### Equation for adiabaticity
A convenient way to describe adiabatic transport processes is via a **covariant** anomalous Gibbs current ${\left ( \mathcal{G}^{Cov}_{anom} \right )}^\mu$.
The adjective **covariant** refers to the fact that the Gibbs free energy and the corresponding partition function are computed by turning on chemical potential for the **covariant** charge. This is to be contrasted with the **consistent** partition function and the corresponding **consistent** anomalous Gibbs current ${\left ( \mathcal{G}^{Consistent}_{anom} \right )}^\mu$.
Since this distinction is crucial, let us elaborate on it in the next few paragraphs. It is a fundamental result due to Noether that the continuous symmetries of a theory are closely linked to the conserved currents in that theory. Hence, when the path integral fails to have a symmetry in the presence of background sources, there are two main consequences: first, it directly leads to a modification of the corresponding charge conservation and a failure of the Noether theorem. The second consequence is that various correlators obtained by varying the path integral are not gauge-covariant, and more general modifications of the Ward identities occur.
A simple example is the expectation value of the current obtained by varying the path integral with respect to a gauge field (often termed the **consistent** current),
$$J^{\mu}_{Consistent}\equiv \frac{\partial S}{\partial {\cal A}_{\mu}} .$$ The consistent current is not covariant under gauge transformations.
As has been explained in great detail in [@Bardeen:1984pm], there thus exists another current in anomalous theories: the covariant current. The covariant current $J^{\mu}_{Cov}$ is shifted with respect to the consistent current by an amount $J_c^{\mu}$. The shift is such that its gauge transformation is anomalous and exactly cancels the gauge non-invariant part of the consistent current. Thus, the covariant current is covariant under gauge transformations, as suggested by its name.
The covariant Gibbs current describes the transport of Gibbs free energy when a chemical potential is turned on for the covariant charge. We will take the Hodge dual of this covariant Gibbs current to get a $(d-1)$-form in $d$ spacetime dimensions. Let us denote this Hodge dual by $\bar{\mathcal{G}}^{Cov}_{anom}$. The anomalous parts of the charge/entropy/energy currents can be derived from this Gibbs current via thermodynamics $$\begin{split}
\bar{J}^{Cov}_{anom} &= -\frac{\partial\bar{\mathcal{G}}_{anom}}{\partial\mu}\\
\bar{J}^{Cov}_{S,anom} &= -\frac{\partial\bar{\mathcal{G}}_{anom}}{\partial T}\\
\bar{q}^{Cov}_{anom} &= \bar{\mathcal{G}}_{anom} + T \bar{J}_{S,anom} + \mu \bar{J}_{anom}
\end{split}$$
Then according to [@Loganayagam:2011mu] the condition for adiabaticity is $$\label{adiabiticity}
d\bar{q}^{Cov}_{anom} + \mathfrak{a} \wedge \bar{q}^{Cov}_{anom} -\mathcal{E}\wedge \bar{J}^{Cov}_{anom}
= T d\bar{J}^{Cov}_{S,anom} + \mu d\bar{J}^{Cov}_{anom} -\mu \bar{\mathfrak{A}}^{Cov}$$ where $\mathfrak{a},\mathcal{E}$ are the acceleration 1-form and the rest-frame electric field 1-form respectively defined via $$\mathfrak{a} \equiv (u.\nabla)u_\mu\ dx^\mu\ ,\quad \mathcal{E}\equiv u^\nu\mathcal{F}_{\mu\nu} dx^\mu$$ Further the rest frame magnetic field/vorticity 2-forms are defined by subtracting out the electric part from the gauge field strength and the acceleration part from the exterior derivative of velocity, viz., $$\mathcal{B}\equiv \mathcal{F}-u\wedge\mathcal{E} \ ,\quad 2\omega \equiv du+u\wedge \mathfrak{a}$$
The symbol $\bar{\mathfrak{A}}^{Cov}$ is the $d$-form which is the Hodge dual of the rate at which the **covariant** charge is created due to the anomaly, i.e., $$d\bar{J}^{Cov} = \bar{\mathfrak{A}}^{Cov}$$ where $\bar{J}^{Cov}$ is the entire covariant charge current including both the anomalous and the non-anomalous pieces. For simplicity we have restricted our attention to a single U(1) global symmetry which becomes anomalous on a non-trivial background.
In terms of the Gibbs current, we can write the adiabaticity condition as $$\label{eq:adiabG}
d\bar{\mathcal{G}}^{Cov}_{anom} + \mathfrak{a} \wedge \bar{\mathcal{G}}^{Cov}_{anom}+\mu \bar{\mathfrak{A}}^{Cov}
= {\left ( dT+\mathfrak{a}T \right )}\wedge \frac{\partial\bar{\mathcal{G}}^{Cov}_{anom}}{\partial T}
+ {\left ( d\mu+\mathfrak{a}\mu-\mathcal{E} \right )}\wedge \frac{\partial\bar{\mathcal{G}}^{Cov}_{anom}}{\partial \mu}$$
### Construction of the polynomial $\mathfrak{F}^\omega_{anom}$
The main insight of [@Loganayagam:2011mu] is that in $d=2n$ spacetime dimensions the solutions of this equation are most conveniently phrased in terms of a single homogeneous polynomial of degree $n+1$ in the temperature $T$ and the chemical potential $\mu$.
Following the notation employed in [@Loganayagam:2012pz] we will denote this polynomial as $\mathfrak{F}^\omega_{anom}[T,\mu]$. As was realised in [@Loganayagam:2012pz], this polynomial is often closely related to the anomaly polynomial of the system[^5] . More precisely, for a variety of systems we have a remarkable relation between $\mathfrak{F}_{anom}^\omega[T,\mu]$ and the anomaly polynomial $\mathcal{P}_{anom} {\left [ \mathcal{F}, \mathfrak{R} \right ]}$ $$\label{eq:anomFP}
\begin{split}
\mathfrak{F}_{anom}^\omega[T,\mu] = \mathcal{P}_{anom} {\left [ \mathcal{F} \mapsto \mu, p_1(\mathfrak{R}) \mapsto - T^2 , p_{k>1}(\mathfrak{R}) \mapsto 0 \right ]}
\end{split}$$ Let us be more specific: on a $(2n-1)+1$-dimensional spacetime, consider a theory with $$\label{eq:FOmegaC}
\begin{split}
\mathfrak{F}^\omega_{anom}[T,\mu] &= \mathcal{C}_{anom}\mu^{n+1}+\sum_{m=0}^{n}C_m T^{m+1}\mu^{n-m}\\
\end{split}$$ Assuming that the theory obeys the replacement rule such a $\mathfrak{F}^\omega_{anom}[T,\mu]$ can be obtained from an anomaly polynomial[^6] $$\begin{split}
\mathcal{P}_{anom} &= \mathcal{C}_{anom}\mathcal{F}^{n+1}+\sum_{m=0}^{n}C_m {\left [ - p_1(\mathfrak{R}) \right ]}^{\frac{m+1}{2}}\mathcal{F}^{n-m}+\ldots\\
\end{split}$$ where we have presented the terms which do not involve the higher Pontryagin forms. Restricting our attention only to the $U(1)^{n+1}$ anomaly (and ignoring the mixed/pure gravitational anomalies ) we can write $$\label{eq:dJ}
\begin{split}
d\bar{J}_{Consistent} &=\mathcal{C}_{anom}\mathcal{F}^n\\
d\bar{J}_{Cov} &=(n+1)\mathcal{C}_{anom} \mathcal{F}^n \\
\end{split}$$ and their difference is given by $$\label{eq:shift}
\begin{split}
\bar{J}_{Cov} = \bar{J}_{Consistent}+n \mathcal{C}_{anom}\hat{\mathcal{A}}\wedge \mathcal{F}^{n-1}
\end{split}$$ Consistency of the last two equations is immediate: since $\mathcal{F}$ is closed, $d\left(\hat{\mathcal{A}}\wedge \mathcal{F}^{n-1}\right)=\mathcal{F}^{n}$, so the exterior derivative of the shift term accounts precisely for the difference $n\,\mathcal{C}_{anom}\mathcal{F}^{n}$ between the covariant and the consistent anomalies.
The solution of corresponding to the homogeneous polynomial is given by $$\label{eq:GCovBOmega}
\begin{split}
\bar{\mathcal{G}}^{Cov}_{anom}
&= C_0 T \hat{\mathcal{A}}\wedge\mathcal{F}^{n-1}+ \sum_{m=1}^{n}\left[\mathcal{C}_{anom}\binom{n+1}{m+1}\mu^{m+1}\right.\\
&\qquad \left. + \sum_{k=0}^{m}C_k \binom{n-k}{m-k} T^{k+1}\mu^{m-k}\right] (2\omega)^{m-1} \mathcal{B}^{n-m}\wedge u \\
\end{split}$$ Here $\hat{\mathcal{A}}$ is the $U(1)$ gauge-potential 1-form in some gauge with $\mathcal{F}\equiv d\hat{\mathcal{A}}$ being its field-strength 2-form. Further, $\mathcal{B},\omega$ are the rest frame magnetic field/vorticity 2-forms and $T ,\mu $ are the local temperature and chemical potential respectively. They obey $$\begin{split}
(d\mathcal{B})\wedge u = -(2\omega)\wedge\mathcal{E}\wedge u \ , \quad d(2\omega)\wedge u = (2\omega)\wedge\mathfrak{a}\wedge u
\end{split}$$ Using these equations it is a straightforward exercise to check that furnishes a solution to .
We will make a few remarks before we proceed to derive the charge/entropy/energy currents from this Gibbs current. Note that if one insists that the Gibbs current be gauge-invariant, then we are forced to put $C_0=0$; in the solution presented in [@Loganayagam:2011mu] this condition was implicitly assumed and the $C_0$ term was absent. The authors of [@Banerjee:2012iz] later relaxed this assumption, insisting on gauge invariance only for the covariant charge/energy currents. Since we are interested in a comparison with the results derived in [@Banerjee:2012iz], it is useful to retain the $C_0$ term.
Now we use thermodynamics to obtain the charge current as $$\begin{split}
&\bar{J}^{Cov}_{anom} \\
&=- \sum_{m=1}^{n} \left[ (m+1)\mathcal{C}_{anom}\binom{n+1}{m+1} \mu^m
\right.\\
&\qquad\left. +\sum_{k=0}^{m}(m-k)C_k \binom{n-k}{m-k} T^{k+1}\mu^{m-k-1}\right] (2\omega)^{m-1} \mathcal{B}^{n-m}\wedge u \\
\end{split}$$ and the entropy current is given by $$\begin{split}
\bar{J}^{Cov}_{S,anom}
&=- C_0\hat{\mathcal{A}}\wedge\mathcal{F}^{n-1}\\
&\qquad - \sum_{m=1}^{n}\sum_{k=0}^{m}(k+1) C_k \binom{n-k}{m-k} T^{k}\mu^{m-k} (2\omega)^{m-1} \mathcal{B}^{n-m}\wedge u\\
\end{split}$$ The energy current is given by $$\begin{split}
&\bar{q}^{Cov}_{anom}\\
&=- \sum_{m=1}^{n}m\left[\mathcal{C}_{anom}\binom{n+1}{m+1} \mu^{m+1}
\right.\\
&\qquad \left.+\sum_{k=1}^{m}C_k \binom{n-k}{m-k} T^{k+1}\mu^{m-k}\right] (2\omega)^{m-1} \mathcal{B}^{n-m}\wedge u \\
\end{split}$$
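As a quick consistency check of the expressions above, one can verify symbolically that, order by order in the $(2\omega)^{m-1}\mathcal{B}^{n-m}\wedge u$ tower, the charge and entropy currents indeed follow from the Gibbs current through $\bar{J}^{Cov}_{anom}=-\partial\bar{\mathcal{G}}^{Cov}_{anom}/\partial\mu$ and $\bar{J}^{Cov}_{S,anom}=-\partial\bar{\mathcal{G}}^{Cov}_{anom}/\partial T$ (the $\hat{\mathcal{A}}\wedge\mathcal{F}^{n-1}$ term matches trivially). The following sympy sketch, which is our own check and not part of the original derivation, does this for $n=2$, i.e. $3+1$ dimensions.

```python
import sympy as sp

# Consistency check of the anomalous currents: for each m, the coefficient of
# (2 omega)^(m-1) B^(n-m) ^ u in the Gibbs current G, the charge current J and
# the entropy current J_S must satisfy  J = -dG/dmu  and  J_S = -dG/dT.
# Verified here for n = 2 (i.e. 3+1 dimensions).

T, mu, C_anom = sp.symbols('T mu C_anom', positive=True)
n = 2
C = sp.symbols('C0:%d' % (n + 1))          # anomaly coefficients C_0 ... C_n

for m in range(1, n + 1):
    G = (C_anom * sp.binomial(n + 1, m + 1) * mu**(m + 1)
         + sum(C[k] * sp.binomial(n - k, m - k) * T**(k + 1) * mu**(m - k)
               for k in range(0, m + 1)))
    J = -(C_anom * (m + 1) * sp.binomial(n + 1, m + 1) * mu**m
          + sum((m - k) * C[k] * sp.binomial(n - k, m - k) * T**(k + 1) * mu**(m - k - 1)
                for k in range(0, m + 1)))
    J_S = -sum((k + 1) * C[k] * sp.binomial(n - k, m - k) * T**k * mu**(m - k)
               for k in range(0, m + 1))
    assert sp.expand(J + sp.diff(G, mu)) == 0     # J   = -dG/dmu
    assert sp.expand(J_S + sp.diff(G, T)) == 0    # J_S = -dG/dT
print("Anomalous charge and entropy currents consistent with the Gibbs current for n =", n)
```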
These currents satisfy an interesting reciprocity-type relationship noticed in [@Loganayagam:2011mu] $$\label{eq:reciprocity}
\frac{\delta \bar{q}^{Cov}_{anom}}{\delta \mathcal{B}} = \frac{\delta \bar{J}^{Cov}_{anom}}{\delta (2\omega)}$$
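The reciprocity relation can be checked explicitly by treating $\mathcal{B}$ and $2\omega$ as formal commuting variables, so that the wedge powers become ordinary powers and the functional derivatives reduce to ordinary partial derivatives. The short sympy sketch below is our own illustration (the variable names are ours and any small $n$ works); it builds the charge and energy currents from the Gibbs coefficients quoted above and verifies the identity.

```python
# Consistency sketch (ours): check the reciprocity relation by treating
# B = \mathcal{B} and w = 2*omega as formal commuting variables, so that the
# functional derivatives reduce to ordinary partial derivatives.
import sympy as sp

n = 3                                     # spacetime dimension d = 2n; any small n works
T, mu, B, w = sp.symbols('T mu B w')
Canom = sp.Symbol('Canom')
C = sp.symbols('C0:%d' % (n + 1))         # the constants C_0, ..., C_n

def gibbs_coeff(m):
    """Coefficient of (2*omega)^(m-1) B^(n-m) u in the anomalous Gibbs current."""
    return (Canom * sp.binomial(n + 1, m + 1) * mu**(m + 1)
            + sum(C[k] * sp.binomial(n - k, m - k) * T**(k + 1) * mu**(m - k)
                  for k in range(m + 1)))

# charge current: J = -d(Gibbs)/d(mu); energy current coefficient: -m * gibbs_coeff(m)
J_anom = -sum(sp.diff(gibbs_coeff(m), mu) * w**(m - 1) * B**(n - m) for m in range(1, n + 1))
q_anom = -sum(m * gibbs_coeff(m) * w**(m - 1) * B**(n - m) for m in range(1, n + 1))

# reciprocity: d q / d B  ==  d J / d (2*omega)
assert sp.simplify(sp.diff(q_anom, B) - sp.diff(J_anom, w)) == 0
print("reciprocity verified for n =", n)
```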
While this is a solution in a generic frame, one can specialise to the Landau frame (where the velocity is defined via the energy current) by a frame transformation $$\begin{split}
u^\mu &\mapsto u^\mu - \frac{q^\mu_{anom}}{\epsilon + p}, \\J^\mu_{anom} &\mapsto J^\mu_{anom} - q \frac{q^\mu_{anom}}{\epsilon + p} ,\\
J^\mu_{S,anom}& \mapsto J^\mu_{S,anom} - s \frac{q^\mu_{anom}}{\epsilon + p},\\q^\mu_{anom} &\mapsto 0\\
\end{split}$$ to get $$\begin{split}
\bar{J}^{Cov,Landau}_{anom} &= \sum_{m=1}^{n}\xi_m(2\omega)^{m-1} \mathcal{B}^{n-m}\wedge u\\
\bar{J}^{Cov,Landau}_{S,anom} &= \sum_{m=1}^{n}\xi^{(s)}_m(2\omega)^{m-1} \mathcal{B}^{n-m}\wedge u+\zeta\ \hat{\mathcal{A}}\wedge\mathcal{F}^{n-1}\\
\end{split}$$ where $$\label{eq:xiLoga}
\begin{split}
\xi_m &\equiv {\left [ m \frac{q\mu}{\epsilon + p}-(m+1) \right ]}\mathcal{C}_{anom}\binom{n+1}{m+1} \mu^m\\
&\qquad+\sum_{k=0}^{m}{\left [ m \frac{q\mu}{\epsilon + p}-(m-k) \right ]}C_k \binom{n-k}{m-k} T^{k+1}\mu^{m-k-1} \\
\xi^{(s)}_m &\equiv {\left [ m \frac{sT}{\epsilon + p} \right ]}\mathcal{C}_{anom}\binom{n+1}{m+1} T^{-1}\mu^{m+1}\\
&\qquad+\sum_{k=0}^{m}{\left [ m \frac{sT}{\epsilon + p}-(k+1) \right ]}C_k \binom{n-k}{m-k} T^k\mu^{m-k} \\
\zeta &= - C_0
\end{split}$$ Often in the literature the entropy current is quoted in the form $$\begin{split}
\bar{J}^{Cov,Landau}_{S,anom} &= -\frac{\mu}{T} \bar{J}^{Cov,Landau}_{anom} + \sum_{m=1}^{n}\chi_m(2\omega)^{m-1} \mathcal{B}^{n-m}\wedge u+\zeta\ \hat{\mathcal{A}}\wedge\mathcal{F}^{n-1}\\
\end{split}$$ where $$\label{eq:Chi_mPrediction}
\begin{split}
\zeta &= -C_0 \\
\chi_m &\equiv \xi^{(s)}_m +\frac{\mu}{T} \xi_m\\
&= - \mathcal{C}_{anom}\binom{n+1}{m+1} T^{-1}\mu^{m+1}-\sum_{k=0}^{m}C_k \binom{n-k}{m-k} T^k\mu^{m-k} \\
\end{split}$$ where we have used the thermodynamic relation $sT+q\mu=\epsilon + p$. By looking at we recognise these to be the coefficients occurring in the anomalous Gibbs current : $$\label{eq:GibbsChi}
\begin{split}
\bar{\mathcal{G}}^{Cov}_{anom} &= -T{\left [ \sum_{m=1}^{n}\chi_m(2\omega)^{m-1} \mathcal{B}^{n-m}\wedge u+\zeta\ \hat{\mathcal{A}}\wedge\mathcal{F}^{n-1} \right ]}\\
\end{split}$$ In fact this is to be expected from basic thermodynamic considerations : the above equation is a direct consequence of the relation $G=-T(S+\frac{\mu}{T}Q-\frac{U}{T})$ and the fact that energy current receives no anomalous contributions in the Landau frame.
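As a quick consistency check, not part of the original argument, the algebra behind the expression for $\chi_m$ can be verified with a few lines of sympy: substituting the expressions for $\xi_m$ and $\xi^{(s)}_m$ and using $sT+q\mu=\epsilon+p$, the combination $\xi^{(s)}_m+\frac{\mu}{T}\xi_m$ collapses to the quoted form.

```python
# Sketch (ours): verify chi_m = xi^(s)_m + (mu/T) xi_m using s*T + q*mu = eps + p.
import sympy as sp

n = 4                                      # any n >= 1
T, mu, q, s = sp.symbols('T mu q s')
Canom = sp.Symbol('Canom')
C = sp.symbols('C0:%d' % (n + 1))
ep = s * T + q * mu                        # epsilon + p, by the Euler relation

for m in range(1, n + 1):
    xi = ((m * q * mu / ep - (m + 1)) * Canom * sp.binomial(n + 1, m + 1) * mu**m
          + sum((m * q * mu / ep - (m - k)) * C[k] * sp.binomial(n - k, m - k)
                * T**(k + 1) * mu**(m - k - 1) for k in range(m + 1)))
    xi_s = ((m * s * T / ep) * Canom * sp.binomial(n + 1, m + 1) * mu**(m + 1) / T
            + sum((m * s * T / ep - (k + 1)) * C[k] * sp.binomial(n - k, m - k)
                  * T**k * mu**(m - k) for k in range(m + 1)))
    chi = (-Canom * sp.binomial(n + 1, m + 1) * mu**(m + 1) / T
           - sum(C[k] * sp.binomial(n - k, m - k) * T**k * mu**(m - k)
                 for k in range(m + 1)))
    assert sp.simplify(xi_s + mu / T * xi - chi) == 0
print("chi_m = xi^(s)_m + (mu/T)*xi_m verified for n =", n)
```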
This ends our review of the main results of [@Loganayagam:2011mu] adopted to our purposes. Our aim in the rest of the paper would be to derive all these results purely from a partition function analysis.
Equilibrium Partition Function {#subsec:PartitionReview}
------------------------------
In this subsection we review (and extend) an alternative approach to constrain the constitutive relations, namely by demanding the existence of an equilibrium partition function (or free energy) for the fluid as described in [@Banerjee:2012iz; @Jain:2012rh] [^7].
Let us keep the fluid in a special background such that the background metric has a timelike Killing vector and the background gauge field is time-independent. Any such metric can be put into the following Kaluza-Klein form $$\begin{split}
ds^2 &= -e^{2\sigma}(dt+a_idx^i)^2 + g_{ij}dx^idx^j, \\
\hat{\cal A} &= {\cal A}_0dt + {\cal A}_idx^i
\end{split}
\label{KKform}$$ Here $i,j \in \{1,2, \ldots, 2n-1\}$ are the spatial indices. We will often use the notation $\gamma\equiv e^{-\sigma}$ for brevity. This background has a timelike Killing vector $\partial_t$; let $u_k^\mu=(e^{-\sigma},0,0,\ldots)$ be the unit-normalized vector in the Killing direction, so that $$u_k^\mu\partial_\mu = \gamma \partial_t \quad\text{and}\quad u_k= -\gamma^{-1}(dt+a)$$
In the corresponding Euclidean field theory description of equilibrium, the imaginary time direction is compactified into a thermal circle whose size is the inverse temperature of the underlying field theory. In the $(2n-1)$-dimensional compactified geometry, the original $2n$-dimensional background fields decompose as follows
- metric($g_{\mu\nu}$) : scalar($\sigma$), KK gauge field($a_i$), lower dimensional metric($g_{ij}$).
- gauge field($\hat{\cal A}_\mu$) : scalar(${\cal A}_0$), gauge field(${\cal A}_i$)
Under this KK-type reduction the $2n$-dimensional diffeomorphisms break up into $(2n-1)$-dimensional diffeomorphisms and KK gauge transformations. The components of $2n$-dimensional tensors which are KK-gauge invariant in $2n-1$ dimensions are those with lower time (Killing direction) and upper space indices. Given a 1-form $J$ we will split it in terms of KK-invariant components as $$J=J_0 (dt+a_i dx^i)+ g_{ij} J^i dx^j$$ The other KK non-invariant components of $J$ are given by $$\begin{split}
J^0 &= -{\left [ \gamma^2 J_0+a_i J^i \right ]}\\
J_i &= g_{ij}J^j +a_i J_0
\end{split}$$
To take care of KK gauge invariance we will identify the lower dimensional U(1) gauge field (denoted by non script letters) as follows $$\begin{split}
A_0 &= {\cal A}_0+\mu_0 , ~~A^i = {\cal A}^i \\
\Rightarrow A_i &= {\cal A}_i - {\cal A}_0 a_i {\rm ~~~and}\\
F_{ij} &= \partial_i A_j - \partial_j A_i = \mathcal{F}_{ij}
- A_0 f_{ij} -(\partial_i A_0~a_j - \partial_j A_0~a_i).
\end{split}
\label{KKinv}$$ where $f_{ij}\equiv \partial_i a_j - \partial_j a_i$ and $\mu_0$ is a convenient constant shift in ${\cal A}_0$ which we will define shortly. We can hence write $$\hat{\mathcal{A}} = \mathcal{A}_0 dt + \mathcal{A} = A_0 (dt+a_i dx^i) + A_i dx^i-\mu_0 dt$$ We are now working in a general gauge - often it is useful to work in a specific class of gauges: the class of gauges we will work with is obtained from this generic gauge by performing a gauge transformation to remove the $\mu_0 dt$ piece. We will call this class of gauges the ‘zero $\mu_0$’ gauges. In these gauges the new gauge field is given in terms of the old gauge field via $$\hat{\mathcal{A}}_{\mu_0=0} \equiv \hat{\mathcal{A}}+\mu_0 dt$$ We will quote all our consistent currents in this gauge. The field strength 2-form can then be written as $$\mathcal{F}\equiv d\hat{\mathcal{A}} = dA+ A_0 da + dA_0\wedge(dt+a)$$
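The component identity for $F_{ij}$ quoted in the KK identification above follows directly from $\mathcal{A}_i = A_i + a_i A_0$; the small sympy sketch below (ours, with hypothetical function names) makes the check explicit for a few spatial coordinates.

```python
# Sketch (ours): component check of F_ij = calF_ij - A_0 f_ij - (d_i A_0 a_j - d_j A_0 a_i)
# using the KK identification calA_i = A_i + a_i A_0 (zero-mu_0 gauge).
import sympy as sp

x = sp.symbols('x1:4')                                   # three spatial coordinates suffice
A0 = sp.Function('A0')(*x)                               # the scalar A_0
A = [sp.Function('A%d' % i)(*x) for i in range(3)]       # lower-dimensional gauge field A_i
a = [sp.Function('a%d' % i)(*x) for i in range(3)]       # KK gauge field a_i
calA = [A[i] + a[i] * A0 for i in range(3)]              # spatial components of the 2n-dim field

for i in range(3):
    for j in range(3):
        F = sp.diff(A[j], x[i]) - sp.diff(A[i], x[j])
        calF = sp.diff(calA[j], x[i]) - sp.diff(calA[i], x[j])
        f = sp.diff(a[j], x[i]) - sp.diff(a[i], x[j])
        rhs = calF - A0 * f - (sp.diff(A0, x[i]) * a[j] - sp.diff(A0, x[j]) * a[i])
        assert sp.simplify(F - rhs) == 0
print("KK decomposition of F_ij verified")
```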
We will now focus our attention on the **consistent** equilibrium partition function which is the Euclidean path-integral computed on space adjoined with a thermal circle of length $1/T_0$. We will further turn on a chemical potential $\mu$ - since there are various different notions of charge in anomalous theories placed in gauge backgrounds we need to carefully define which of these notions we use to define the partition function[^8]. While in the previous subsection we used the chemical potential for a **covariant** charge and the corresponding **covariant** Gibbs free-energy following [@Loganayagam:2011mu] , in this subsection we will follow [@Banerjee:2012iz] in using a chemical potential for the consistent charge to define the partition function. This distinction has to be kept in mind while making a comparison between the two formalisms as we will elaborate later in section§\[sec:IntByParts\].
The consistent partition function $Z_{Consistent}$ that we write down will be the most general one consistent with 2n-1 dimensional diffeomorphisms, KK gauge invariance and the U(1) gauge invariance up to anomaly. It is a scalar $S$ constructed out of various background quantities and their derivatives. The most generic form of the partition function is $$\label{parform}
W=\ln Z_{Consistent}= \int d^{2n-1}x \sqrt{g_{2n-1}} S(\sigma, A_0,a_i,A_i,g_{ij}) .$$ Given this partition function, we compute various components of the stress tensor and charged current from it. The KK gauge invariant components of the stress tensor $T_{\mu\nu}$ and charge current $J_{\mu}$ can then be obtained from the partition function as follows [@Banerjee:2012iz], $$\label{parstcu}
\begin{split}
T_{00} &= -\frac{T_0 e^{2 \sigma}}{\sqrt{-g_{2n}} }\frac{\delta W}{\delta \sigma},~~
J_0^{Consistent} = -\frac{e^{2\sigma} T_0}{\sqrt{-g_{2n}}}\frac{\delta W}{\delta A_0}, \\
T_0^i &= \frac{T_0}{\sqrt{-g_{2n}} }\bigg(\frac{\delta W}{\delta a_i}
- A_0 \frac{\delta W}{\delta A_i}\bigg),~~
J^i_{Consistent} = \frac{T_0}{\sqrt{-g_{2n}}}\frac{\delta W}{\delta A_i}, \\~~
T^{ij} &= -\frac{2 T_0}{\sqrt {-g_{2n}}} g^{il}g^{jm}\frac{\delta W}
{\delta g^{lm}}. \\
\end{split}$$ Here $\{ \sigma, a_i, g_{ij}, A_0, A_i \}$ are chosen as independent sources, so the partial derivative w.r.t. any of them in the above equations means that the others are kept constant. We will sometimes use the above equations written in terms of differential forms - we refer the reader to appendix \[app:variationForms\] for the differential-form version of the above equations.
Next we parameterize the most generic equilibrium solution and constitutive relations for the fluid as, $$\begin{aligned}
\label{flustcu}
&& u(x)= u_0(x)+u_1(x) , \quad T(x)= T_0(x)+T_1(x) , \quad \mu(x)= \mu_0(x)+\mu_1(x) , \nonumber \\
&& T_{\mu\nu}=(\epsilon + p)u_{\mu}u_{\nu} + p g_{\mu\nu}+\pi_{\mu\nu} , \quad J^{\mu}=
q u^{\mu}+ j^{\mu}_{diss},\end{aligned}$$ where, ${u_1,T_1,\mu_1,\pi_{\mu\nu},j^{\mu}_{diss}}$ are various derivatives of the background quantities. Note that we will work in Landau frame throughout.
These corrections are found by comparing the fluid stress tensor $T_{\mu\nu}$ and current $J_{\mu}$ in Eqn. with $T_{\mu\nu}$ and $J_{\mu}$ in Eqn. as obtained from the partition function. This exercise then constrains various non-dissipative coefficients that appear in the constitutive relations in Eqn..
This then ends our short review of the formalism developed in [@Banerjee:2012iz]. In the next section we will apply this formalism to a theory with $U(1)^{n+1}$ anomaly in $d=2n$ space time dimensions.
Anomalous partition function in arbitrary dimensions {#sec:2ndimu1}
====================================================
Let us consider then a fluid in a $2n$ dimensional space time. The fluid is charged under a single $ U(1)$ abelian gauge field ${\cal A}_{\mu}$. We will generalise to multiple abelian gauge fields later in section §\[sec:2ndimmul\] and leave the non-abelian case for future study. We will continue to use the notation in the subsection §§\[subsec:LogaReview\].
The consistent/covariant anomalies are then given by Eqn. , which can be written in components as $$\begin{split}
\nabla_{\mu}J^{\mu}_{Consistent} &= \mathcal{C}_{anom}
\varepsilon^{\mu_1 \nu_1 \ldots\mu_n\nu_n}\partial_{\mu_1}\hat{\mathcal{A}}_{\nu_1}\ldots\partial_{\mu_n}\hat{\mathcal{A}}_{\nu_n}\\
&= \frac{\mathcal{C}_{anom}}{2^n} \varepsilon^{\mu_1 \nu_1 \ldots\mu_n\nu_n} \mathcal{F}_{\mu_1 \nu_1} \ldots \mathcal{F}_{\mu_n \nu_n}.\\
\nabla_{\mu}J^{\mu}_{Cov} &= (n+1) \mathcal{C}_{anom}
\varepsilon^{\mu_1 \nu_1 \ldots\mu_n\nu_n}\partial_{\mu_1}\hat{\mathcal{A}}_{\nu_1}\ldots\partial_{\mu_n}\hat{\mathcal{A}}_{\nu_n}\\
&= (n+1) \frac{\mathcal{C}_{anom}}{2^n} \varepsilon^{\mu_1 \nu_1 \ldots\mu_n\nu_n} \mathcal{F}_{\mu_1 \nu_1} \ldots \mathcal{F}_{\mu_n \nu_n}.
\end{split}$$ and Eqn. becomes $$\label{covcur}
J_{Cov}^{\mu} = J^{\mu}_{Consistent} + J^{\mu}_{(c)}.$$ where $$\label{curcor}
\begin{split}
J^{\lambda}_{(c)} &= n\mathcal{C}_{anom} \varepsilon^{\lambda \alpha\mu_1 \nu_1 \ldots\mu_{n-1}\nu_{n-1}}
\hat{\mathcal{A}}_{\alpha} \partial_{\mu_1}\hat{\mathcal{A}}_{\nu_1}\ldots\partial_{\mu_{n-1}}\hat{\mathcal{A}}_{\nu_{n-1}}\\
&=n\frac{\mathcal{C}_{anom}}{2^{n-1}} \varepsilon^{\lambda \alpha\mu_1 \nu_1 \ldots\mu_{n-1}\nu_{n-1}}
\hat{\mathcal{A}}_{\alpha} \mathcal{F}_{\mu_1 \nu_1} \ldots \mathcal{F}_{\mu_{n-1} \nu_{n-1}}.
\end{split}$$
The energy-momentum equation becomes $$\nabla_{\mu}T^{\mu}_{\nu}= F_{\nu \mu}J^{\mu}_{Cov} ,$$ where $J^{\mu}_{Cov}$ is the covariant current. This has been explicitly shown in [@Banerjee:2012iz] [^9].
Constraining the partition function
-----------------------------------
We want to write the equilibrium free energy functional for the fluid. For this purpose, let us keep the fluid in the following $2n$-dimensional time-independent background, $$\begin{aligned}
\label{backgr}
ds^2= - e^{2 \sigma}(dt+ a_i dx^i)^2+ g_{ij}dx^idx^j, \quad {\cal A}= (A_0, {\cal A}_i).\end{aligned}$$
Now, we write the $(2n-1)$ dimensional equilibrium free energy that reproduces the same anomaly as given in . The most generic form for the anomalous part of the partition function is , $$\label{action}
\begin{split}
W_{anom}&=\frac{1}{T_0}\int d^{2n-1}x \sqrt{g_{2n-1}}\bigg\{ \sum_{m=1}^{n}\alpha_{m-1}(A_0,T_0)\
{\left [ \epsilon A (da)^{m-1}(dA)^{n-m} \right ]} \bigg.\\
&\qquad \bigg.\qquad + \alpha_{n}(T_0)\ {\left [ \epsilon a (da)^{n-1} \right ]} \bigg\}.
\end{split}$$ where, $\epsilon^{ijk\ldots }$ is the $(2n-1)$ dimensional tensor density defined via $$\epsilon^{i_1i_2\ldots i_{d-1}} = e^{-\sigma}\varepsilon^{0i_1i_2\ldots i_{d-1}}$$ The indices $(i,j)$ run over $(2n-1)$ values. We have used the following notation for the sake of brevity $$\label{epsDef}
\begin{split}
&{\left [ \epsilon A (da)^{m-1}(dA)^{n-m} \right ]} \\
&\quad \equiv \epsilon^{i j_1k_1 \ldots j_{m-1} k_{m-1} p_1 q_1\ldots p_{n-m}q_{n-m}}
A_i \partial_{j_1} a_{k_1}\ldots \partial_{j_{m-1}} a_{k_{m-1}}\partial_{p_1} A_{q_1}\ldots \partial_{p_{n-m}} A_{q_{n-m}}\\
&{\left [ \epsilon (da)^{m-1}(dA)^{n-m} \right ]}^i \\
&\quad \equiv \epsilon^{i j_1k_1 \ldots j_{m-1} k_{m-1} p_1 q_1\ldots p_{n-m}q_{n-m}}
\partial_{j_1} a_{k_1}\ldots \partial_{j_{m-1}} a_{k_{m-1}}\partial_{p_1} A_{q_1}\ldots \partial_{p_{n-m}} A_{q_{n-m}}\\
\end{split}$$
Invariance under diffeomorphisms implies that $\alpha_{n}$ is a constant in space. For $m<n$, however, $\alpha_m$ can have $A_0$ dependence, as the gauge symmetry is anomalous, but the $\alpha_m$ are independent of $\sigma$, due to diffeomorphism invariance.
The consistent current computed from this partition function is, $$\begin{split}
{\left ( J_{anom} \right )}_0^{Consistent} &=- e^{\sigma}\sum_{m=1}^{n} \frac{\partial\alpha_{m-1}}{\partial A_0}{\left [ \epsilon A (da)^{m-1}
(dA)^{n-m} \right ]} \\
{\left ( J_{anom} \right )}^i_{Consistent} &=e^{-\sigma} \bigg\{\sum_{m=1}^{n} (n-m+1) \alpha_{m-1}
{\left [ \epsilon(da)^{m-1}(dA)^{n-m} \right ]}^i \bigg. \\
&\bigg. - \sum_{m=1}^{n-1} (n-m) \frac{\partial\alpha_{m-1}}{\partial A_0} {\left [ \epsilon A dA_0(da)^{m-1}(dA)^{n-m-1} \right ]}^i \bigg\}
\end{split}$$
Next, we compute the covariant currents, following . The correction piece for the 0-component of the current is, $$(J_{(c)})_0=-n\mathcal{C}_{anom} e^{\sigma}
\sum_{m=1}^{n} A_0^{m}\binom{n-1}{m-1} {\left [ \epsilon A (da)^{m-1}
(dA)^{n-m} \right ]}$$ where, we have used the following identification for $2n$ dimensional gauge field ${\cal A}_{\mu}$ and $(2n-1)$ dimensional gauge fields $A_i, a_i$ and scalar $A_0$, $$\begin{split}
\mathcal A_i &= A_i + a_i A_0 \\
\mathcal A_0 &= A_0.
\end{split}$$ where we are working in a ‘zero $\mu_0$’ gauge.
Thus, the 0-component of the covariant current is, $${\left ( J_{anom} \right )}_0^{Cov}= -e^{\sigma} \sum_{m=1}^{n} {\left [ \frac{\partial\alpha_{m-1}}{\partial A_0} +
n\binom{n-1}{m-1} A_0^{m-1} \mathcal{C}_{anom} \right ]}
{\left [ \epsilon A (da)^{m-1}(dA)^{n-m} \right ]}.$$ Every term in the above sum is gauge non-invariant. So the covariance of the covariant current demands that we choose the arbitrary functions $\alpha_m$ appearing in the partition function such that this component of the current vanishes. Thus, we get, $$\frac{\partial\alpha_{m-1}}{\partial A_0} +
n\binom{n-1}{m-1} A_0^{m-1} \mathcal{C}_{anom} =0 .$$ The solution for the above equation is, $$\begin{aligned}
\label{Csol}
&&\alpha_m = - \mathcal{C}_{anom}\binom{n}{m+1} A_0^{m+1} + \tilde{C}_m T_0^{m+1} ,
\quad m=0, \ldots, n-1 {\nonumber}\\
&&\alpha_n = \tilde{C}_n T_0^{n+1}\end{aligned}$$ Here, $\tilde C_m$ are constants that can appear in the partition function.
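Checking that this solution satisfies the first-order condition above amounts to the binomial identity $m\binom{n}{m}=n\binom{n-1}{m-1}$; the short sympy sketch below (our own check) confirms it for a range of $n$.

```python
# Sketch (ours): alpha_{m-1} = -Canom*binom(n,m)*A0^m + Ct_{m-1}*T0^m solves
# d alpha_{m-1}/d A_0 + n*binom(n-1,m-1)*A0^(m-1)*Canom = 0.
import sympy as sp

A0, T0, Canom, Ct = sp.symbols('A0 T0 Canom Ct')

for n in range(1, 7):
    for m in range(1, n + 1):
        alpha = -Canom * sp.binomial(n, m) * A0**m + Ct * T0**m     # alpha_{m-1}
        constraint = sp.diff(alpha, A0) + n * sp.binomial(n - 1, m - 1) * A0**(m - 1) * Canom
        assert sp.expand(constraint) == 0
print("alpha_m solution verified for n = 1..6")
```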
Thus, at this point, a total of $n+1$ coefficients can appear in the partition function. A further study of CPT invariance of the partition function will reduce this number. We will present that analysis later in detail and here we just state the result. CPT forces all $\tilde{C}_{2k} =0$. For even $n$, the number of constants is $\frac{n}{2}$, whereas for odd $n$ the number is $\frac{n+1}{2}$.
Currents from the partition function
------------------------------------
With these functions the $i-$component of the covariant current is, $$\label{cufp}
\begin{split}
{\left ( J_{anom} \right )}_{Cov}^i
&= e^{-\sigma} \sum_{m=1}^{n} \bigg[A_0 \frac{\partial\alpha_{m-1}}{\partial A_0} +(n-m+1)\alpha_{m-1} \bigg]{\left [ \epsilon (da)^{m-1} (dA)^{n-m} \right ]}^i \\
&= e^{-\sigma} \sum_{m=1}^{n} \bigg[-(n+1) \mathcal{C}_{anom}
\binom{n}{m} T_0 A_0^{m}\bigg.\\
&\qquad \bigg. \qquad +(n-m+1)T_{0}^{m}\tilde C_{m-1} \bigg]{\left [ \epsilon (da)^{m-1} (dA)^{n-m} \right ]}^i ,
\end{split}$$ As expected, this current is $U(1)$ gauge invariant. The different components of stress-tensor computed from the partition function are, $$\label{stfp}
\begin{split}
T^{anom}_{00}&=0, \qquad T_{anom}^{ij}=0 \\
{\left ( T^i_0 \right )}_{anom} &= e^{-\sigma} \sum_{m=1}^{n}{\left ( m \alpha_{m} - (n-m+1) A_0
\alpha_{m-1} \right )} {\left [ \epsilon (da)^{m-1} (dA)^{n-m} \right ]}^i\\
&= e^{-\sigma} \sum_{m=1}^{n}\left[m \tilde{C}_m T_0^{m+1}-(n+1-m) \tilde{C}_{m-1} T_0^m A_0\right.\\
&\qquad\left.\qquad+\binom{n+1}{m+1}\mathcal{C}_{anom}A_0^{m+1}\right]{\left [ \epsilon (da)^{m-1} (dA)^{n-m} \right ]}^i\\
\end{split}$$
Comparison with Hydrodynamics
-----------------------------
Next, we find the equilibrium solution for the fluid variables. As usual, we keep the fluid in the time-independent background . The equilibrium solution for a perfect charged fluid (without any dissipation) is, $$u^{\mu}\partial_\mu= e^{-\sigma}\partial_t , \quad T=T_0 e^{-\sigma}, \quad \mu= A_0 e^{-\sigma} .$$
The most generic constitutive relations for the fluid can be written as, $$\begin{aligned}
\label{conscor}
T_{\mu\nu}&=& (\epsilon+p) u_{\mu}u_{\nu} +p g_{\mu\nu} + \eta \sigma_{\mu\nu}+ \zeta \Theta
{\cal P}_{\mu\nu} \nonumber \\
J^{\mu}_{Cov}&=& q u^{\mu}+ J^{\mu}_{even} +J^{\mu}_{odd} ,\nonumber \\
J^{\mu}_{even}&=&\sigma (E^{\mu}- T {\cal P}^{\mu\alpha} \partial_{\alpha}\nu )+
\alpha_1 E^{\mu} + \alpha_2 T {\cal P}^{\mu\alpha} \partial_{\alpha}\nu +
\mbox{higher derivative terms}\nonumber \\
J^{\mu}_{odd}&=& \sum_{m=1}^{n} \xi_m \varepsilon^{\mu \nu\ \gamma_1\delta_1\ldots\gamma_{m-1}\delta_{m-1}\ \alpha_1\beta_1 \ldots\alpha_{n-m}\beta_{n-m}}
u_{\nu}(\partial_{\gamma}u_{\delta})^{m-1}(\partial_{\alpha}{\cal A}_{\beta})^{n-m}+\ldots.\end{aligned}$$ Here, $J^{\mu}_{even}$ is the parity-even part of the charge current and $J^{\mu}_{odd}$ is the parity-odd part. $\varepsilon^{\mu \nu \alpha
\beta \gamma \delta \ldots}$ is a $2n$ dimensional tensor density whose $(n-m)$ indices are contracted with $\partial_{\alpha}{\cal A}_{\beta}$ and $(m-1)$ indices are contracted with $\partial_{\gamma}u_{\delta}$.
We notice that the higher-derivative part of the current gets contributions from both parity-even and parity-odd vectors. Parity-even vectors can appear at any derivative order, but parity-odd vectors always appear at $(n-1)$-th derivative order. Thus, for a generic value of $n$ (other than $n=2$), the parity-even and parity-odd corrections to the current will always appear at different derivative orders. From now on, we will only concentrate on the parity-odd sector. It is also straightforward to check that $J_{0}^{odd}=0$.
Next, we look for the equilibrium solution for this fluid. Since there exists no gauge-invariant parity-odd scalar, the temperature and chemical potential do not get any correction. Also, in a $2n$-dimensional theory, the parity-odd vectors that we can write are always $(n-1)$-derivative terms. No other parity-odd vector exists at any lower derivative order. Since the fluid velocity is always normalized to unity, we have, $$\delta T=0, \quad \delta \mu=0, \quad \delta u_0 = - a_i \delta u^i.$$ where the most generic correction to the fluid velocity is, $$\delta u^i = \sum_{m=1}^{n} U_m(\sigma, A_0) {\left [ \epsilon(da)^{m-1} (dA)^{n-m} \right ]}^i .$$
Here, $U_m(\sigma, A_0)$ are arbitrary coefficients and factors of $e^{\sigma}$ are introduced for later convenience. Similarly, we can parameterize the $i$-component of the parity-odd current as, $$\label{jdiss}
J^{i}_{odd} = \sum_{m=1}^{n} J_m(\sigma, A_0) {\left [ \epsilon(da)^{m-1} (dA)^{n-m} \right ]}^i .$$ The coefficients $J_m(\sigma, A_0)$ are related to the transport coefficients $\xi_m$ via $$\label{eq:jxi}
J_m= \sum_{k=1}^m \binom{n-k}{m-k} \xi_k {\left ( -e^\sigma \right )}^{k-1} A_0^{m-k} .$$
With all these data, we can finally compute the corrections to the stress tensor and charged currents and they take the following form, $$\begin{aligned}
\label{stc}
\delta T_{00}&=&0, \quad \delta T^{ij}=0, \quad \delta \tilde J_{0}=0 \nonumber \\
\delta T_{0}^i&=& -e^{\sigma} (\epsilon+p) \epsilon^{ijk\ldots} \sum_{m=1}^{n}
U_m(\sigma, A_0) ( da)^{m-1} (dA)^{n-m} \nonumber \\
\delta J^{i}_{Cov} &=& \epsilon^{ijk\ldots} \sum_{m=1}^{n}(J_m(\sigma, A_0) +
q U_m(\sigma, A_0))( da)^{m-1} (dA)^{n-m}\end{aligned}$$ Comparing the expressions for various components of stress tensor and covariant current of the fluid obtained from equilibrium partition function , and fluid constitutive relations , we get, $$\begin{aligned}
\label{velocur}
U_{m} &=& - \frac{e^{-2\sigma}}{\epsilon+p}\left[m \alpha_m-(n-m+1)A_0\alpha_{m-1}\right]{\nonumber}\\
&=&-\frac{e^{-2\sigma}}{\epsilon+p}\left[m \tilde{C}_m T_0^{m+1}-(n+1-m) \tilde{C}_{m-1} A_0T_0^m\right.{\nonumber}\\
&&\qquad\left.\qquad+\binom{n+1}{m+1}\mathcal{C}_{anom}A_0^{m+1}\right]\end{aligned}$$ Similarly, we can evaluate $J_m(\sigma, A_0) $ as follows, $$\label{Jm}
\begin{split}
J_{m}&=e^{-\sigma}{\left [ -(m+1)\mathcal{C}_{anom} A_0^{m}\binom{n+1}{m+1}+(n-m+1){\tilde C}_{m-1}T_0^{m} \right ]}\\
&+ \frac{qe^{-2\sigma}}{\epsilon+p}\left[m \tilde{C}_m T_0^{m+1}-(n+1-m) \tilde{C}_{m-1} A_0 T_0^m\right.\\
&\qquad\left.\qquad+\binom{n+1}{m+1}\mathcal{C}_{anom}A_0^{m+1}\right]
\end{split}$$ We now want to use this to obtain the transport coefficients $\xi_m$ in the last relation of . For this we have to invert the relation to solve for $\xi_m$. We finally get $$\label{explicitform2n}
\begin{split}
\xi_m &= {\left [ m \frac{q\mu}{\epsilon + p}-(m+1) \right ]}\mathcal{C}_{anom}\binom{n+1}{m+1} \mu^m\\
&\qquad+\sum_{k=0}^{m}{\left [ m \frac{q\mu}{\epsilon + p}-(m-k) \right ]}(-1)^{k-1}{\tilde C}_{k}\binom{n-k}{m-k} T^{k+1}\mu^{m-k-1} \\
\end{split}$$ This then is the prediction for this transport coefficient via partition function methods. It exactly matches the expression from [@Loganayagam:2011mu] in , provided we make the following identification among the constants: $\tilde{C}_m= (-1)^{m-1} C_m$.
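The inversion can also be verified symbolically: inserting the $\xi_m$ above back into the relation between $J_m$ and $\xi_m$, with the equilibrium values $T=T_0e^{-\sigma}$ and $\mu=A_0e^{-\sigma}$, reproduces the $J_m$ obtained from the partition function. The sympy sketch below (ours, for $n=3$) performs this check.

```python
# Sketch (ours): check that the xi_m obtained above, inserted into
# J_m = sum_k binom(n-k, m-k) xi_k (-e^sigma)^(k-1) A_0^(m-k),
# reproduces the J_m read off from the partition function (here for n = 3).
import sympy as sp

n = 3
A0, T0, sig, q, ep, Canom = sp.symbols('A0 T0 sigma q ep Canom')   # ep stands for epsilon + p
Ct = sp.symbols('Ct0:%d' % (n + 1))                                 # the constants \tilde{C}_m
T, mu = T0 * sp.exp(-sig), A0 * sp.exp(-sig)                        # equilibrium T(x), mu(x)

def xi(m):
    return ((m * q * mu / ep - (m + 1)) * Canom * sp.binomial(n + 1, m + 1) * mu**m
            + sum((m * q * mu / ep - (m - k)) * sp.Integer(-1)**(k - 1) * Ct[k]
                  * sp.binomial(n - k, m - k) * T**(k + 1) * mu**(m - k - 1)
                  for k in range(m + 1)))

def J_pf(m):
    return (sp.exp(-sig) * (-(m + 1) * Canom * A0**m * sp.binomial(n + 1, m + 1)
                            + (n - m + 1) * Ct[m - 1] * T0**m)
            + q * sp.exp(-2 * sig) / ep * (m * Ct[m] * T0**(m + 1)
                                           - (n + 1 - m) * Ct[m - 1] * A0 * T0**m
                                           + sp.binomial(n + 1, m + 1) * Canom * A0**(m + 1)))

for m in range(1, n + 1):
    J_from_xi = sum(sp.binomial(n - k, m - k) * xi(k) * (-sp.exp(sig))**(k - 1) * A0**(m - k)
                    for k in range(1, m + 1))
    assert sp.simplify(J_from_xi - J_pf(m)) == 0
print("J_m <-> xi_m inversion verified for n =", n)
```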
Comments on Most Generic Entropy Current {#sec:entropy}
========================================
Another physical requirement which has long been used as a source of constraints on fluid-dynamical transport coefficients is the local form of the second law of thermodynamics. As we reviewed in subsection §§\[subsec:LogaReview\], this principle was used in [@Loganayagam:2011mu] to obtain the anomaly-induced transport coefficients in arbitrary even dimensions.\
In this section we will determine the entropy current in equilibrium by comparing the total entropy with that obtained from the equilibrium partition function. In the examples studied in [@Banerjee:2012iz; @Jain:2012rh] it was seen that in general the comparison with the equilibrium entropy (obtained from the partition function) did not fix all the non-dissipative coefficients in the fluid-dynamical entropy current. However, it did determine the anomalous contribution exactly. Here we will see that this holds true in general even dimensions.\
Let us begin by computing the entropy from the equilibrium partition function. We begin with the anomalous part of the partition function $$\begin{split}
W_{anom} = \frac{1}{T_0}\int d^{2n-1}x &\sqrt{g_{2n-1}}\bigg\{ \sum_{m=1}^{n}
\alpha_{m-1} {\left [ \epsilon A(da)^{m-1}(dA)^{n-m} \right ]}\bigg.\\
&\qquad\bigg.\qquad + \alpha_n {\left [ \epsilon a(da)^{n-1} \right ]} \bigg\}
\end{split}$$ where the functions $\alpha_m$ are given in .
The anomalous part of the total entropy is easily computed to be $$\begin{split}\label{entropy1}
S_{anom} &= \frac{\partial}{\partial T_0} {\left ( T_0 W_{anom} \right )} \\
&= \int d^{2n-1}x \sqrt{g_{2n-1}} \bigg\{ \sum_{m=1}^{n}
m~T_0^{m-1} ~\tilde{C}_{m-1}\ {\left [ \epsilon A(da)^{m-1}(dA)^{n-m} \right ]} \\
& \quad + ~(n+1) \tilde{C}_n ~T_0^n {\left [ \epsilon a(da)^{n-1} \right ]} \bigg\}\\
&= \int d^{2n-1}x \sqrt{g_{2n-1}} \bigg\{ \sum_{m=1}^{n}
(m+1) ~T_0^m ~\tilde{C}_m\ {\left [ \epsilon a(da)^{m-1}(dA)^{n-m} \right ]} \\
& \qquad + \tilde{C}_0 {\left [ \epsilon A(dA)^{n-1} \right ]} \bigg\}
\end{split}$$
Now we will determine the most general form of the entropy current in equilibrium by comparison with . In [@Banerjee:2012iz] it was argued that the entropy current by itself is not a physical object, but entropy production and the total entropy are. This gave a window for a gauge-non-invariant contribution to the entropy current, but that contribution was removed by CPT invariance. Here also we will allow for such gauge-non-invariant terms in the entropy current. The most general form of the entropy current, allowing for gauge-non-invariant pieces, is then $$\begin{split}
J^\mu_S &= s u^\mu -\frac{\mu}{T} J^{\mu}_{odd} + \sum_{m=1}^{n} \chi_m \varepsilon^{\mu\nu\ldots} u_\nu (\partial u)^{m-1}
(\partial\hat{\mathcal{A}})^{n-m}\\
&\qquad + \zeta \varepsilon^{\mu\nu\ldots} \hat{\mathcal{A}}_\nu (\partial\hat{\mathcal{A}})^{n-1}
\end{split}$$ where $\chi_m$ is a function of $T$ and $\mu$ whereas $\zeta$ is a constant. The correction to the local entropy density (i.e., the time component of the entropy current) can be written after an integration by parts as $$\label{0entcur}\begin{split}
\delta J_S^0 = \varepsilon^{0ij\ldots}{\left [ \zeta A(dA)^{n-1} + \sum_{k=1}^{n} {\tilde f}_k\ a\ (da)^{k-1}\ (dA)^{n-k} \right ]}_{ij\ldots} +\text{total derivatives}
\end{split}$$ where $$\begin{split}
{\tilde f}_m &\equiv - s U_m +\frac{\mu}{T} J_m+\zeta A_0^m \binom{n}{m}+\sum_{k=1}^{m} \binom{n-k}{m-k}\chi_k {\left ( -e^{\sigma} \right )}^k A_0^{m-k}\\
\end{split}$$
The correction to the entropy is then, $$\begin{split}\label{entropy2}
\delta S &= \int d^{2n-1}x \sqrt{g_{2n}} ~J^0_S \\
& = \int d^{2n-1}x \sqrt{g_{2n-1}}
{\left [ \zeta {\left [ \epsilon A(dA)^{n-1} \right ]} + \sum_{m=1}^{n} {\tilde f}_m\ {\left [ \epsilon a\ (da)^{m-1}\ (dA)^{n-m} \right ]} \right ]}
\end{split}$$
Comparing the two expressions for the total equilibrium entropy, and , we find the following expressions for the various coefficients in the entropy current, $$\begin{aligned}
\label{resultec}
\zeta &=& {\tilde C}_0 \quad\text{and}\quad \tilde{f}_k = (k+1) ~T_0^{k} ~{\tilde C}_{k} {\rm ~~for~~} 0 \leq k \leq n\end{aligned}$$
This in turn implies that $$\begin{split}
T_0\sum_{k=1}^{m} &\binom{n-k}{m-k}\chi_k {\left ( -e^{\sigma} \right )}^k A_0^{m-k}\\
&= \tilde{C}_mT_0^{m+1}+m\binom{n}{m}\mathcal{C}_{anom}A_0^{m+1}-\tilde{C}_0 T_0A_0^m \binom{n}{m}
\end{split}$$ which can be inverted to give $$\begin{split}
\chi_m &= - \mathcal{C}_{anom}\binom{n+1}{m+1} T^{-1}\mu^{m+1}-\sum_{k=0}^{m}\tilde{C}_k (-1)^{k-1}\binom{n-k}{m-k} T^k\mu^{m-k} \\
\zeta &= {\tilde C}_0 \\
\end{split}$$ which matches the prediction from [@Loganayagam:2011mu] in equation , again with the identification $C_m(-1)^{m-1} = \tilde{C}_m$. We see that in the entropy current we have a total of $n+1$ constants, as in the equilibrium partition function.
This completes our partition function analysis and our re-derivation of the results of [@Loganayagam:2011mu] via partition function techniques. We see that the transport coefficients match exactly the results obtained via the entropy current (provided the analysis of [@Loganayagam:2011mu] is extended by allowing gauge-non-invariant pieces in the entropy current). This detailed match of transport coefficients raises the question of whether the form of the equilibrium partition function itself can be directly derived from the expressions of [@Loganayagam:2011mu] quoted in \[subsec:LogaReview\]. We turn to this question in the next section.
Gibbs current and Partition function {#sec:IntByParts}
====================================
We begin by repeating the expression for the Gibbs current in which was central to the results of [@Loganayagam:2011mu]. $$\begin{split}
\bar{\mathcal{G}}^{Cov}_{anom}
&= C_0 T \hat{\mathcal{A}}\wedge\mathcal{F}^{n-1}+ \sum_{m=1}^{n}\left[\mathcal{C}_{anom}\binom{n+1}{m+1}\mu^{m+1}\right.\\
&\qquad \left. + \sum_{k=0}^{m}C_k \binom{n-k}{m-k} T^{k+1}\mu^{m-k}\right] (2\omega)^{m-1} \mathcal{B}^{n-m}\wedge u \\
\end{split}$$ The subscript ‘anom’ denotes that we are considering only a part of the entropy current relevant to anomalies. The superscript ‘Cov’ refers to the fact that this is the Gibbs free energy computed by turning on a chemical potential for the **covariant** charge.
Let us ask how this expression would be modified if the Gibbs free energy was computed by turning on a chemical potential for the **consistent** charge instead. The change from covariant charge to consistent charge/current is simply given by a shift as given by the equation. This shift does not depend on the state of the theory but is purely a functional of the background gauge fields. Thinking of the Gibbs free energy as minus the temperature times the logarithm of the Euclidean path integral, a conversion from covariant charge to a consistent charge induces a shift $$\bar{\mathcal{G}}^{Cov}_{anom} = \bar{\mathcal{G}}^{Consistent}_{anom} - \mu\ n\ \mathcal{C}_{anom}\hat{\mathcal{A}}\wedge \mathcal{F}^{n-1}$$ which gives $$\label{eq:GConsBOmega}
\begin{split}
&\bar{\mathcal{G}}^{Consistent}_{anom} \\
&= \sum_{m=1}^{n}\left[\mathcal{C}_{anom}\binom{n+1}{m+1}\mu^{m+1} +\sum_{k=0}^{m}C_k \binom{n-k}{m-k} T^{k+1}\mu^{m-k}\right]
(2\omega)^{m-1} \mathcal{B}^{n-m}\wedge u \\
&\qquad + {\left [ C_0 T + n\mathcal{C}_{anom}\mu \right ]} \hat{\mathcal{A}}\wedge\mathcal{F}^{n-1}\\
\end{split}$$ This is now a Gibbs current whose $\mu$ derivative gives the consistent current rather than the covariant current. It is easy to check that this solves an adiabaticity equation very similar to the one quoted in equation $$\label{eq:adiabGCons}
\begin{split}
d\bar{\mathcal{G}}^{Consistent}_{anom} &+ \mathfrak{a} \wedge \bar{\mathcal{G}}^{Consistent}_{anom}+n\mathcal{C}_{anom}{\left ( \hat{\mathcal{A}}+\mu u \right )}\wedge\mathcal{E}\wedge \mathcal{B}^{n-1}\\
&= {\left ( dT+\mathfrak{a}T \right )}\wedge \frac{\partial\bar{\mathcal{G}}^{Consistent}_{anom}}{\partial T}
+ {\left ( d\mu+\mathfrak{a}\mu-\mathcal{E} \right )}\wedge \frac{\partial\bar{\mathcal{G}}^{Consistent}_{anom}}{\partial \mu}
\end{split}$$ The question we wanted to address is how this Gibbs current is related to the partition function in equation .
The answer turns out to be quite intuitive - we would like to argue in this section that $$\label{eq:ZGibbs}
W_{anom} = \ln\ Z^{anom}_{Consistent} = - \int_{space}\frac{1}{T} \bar{\mathcal{G}}^{Consistent}_{anom}$$ This equation instructs us to pull back the $(2n-1)$-form in equation (divided by the local temperature) and integrate it on an arbitrary spatial hyperslice to obtain the anomalous contribution to the negative logarithm of the equilibrium path integral. Note that pulling back the Hodge dual of the Gibbs current on a spatial hyperslice is essentially equivalent to integrating its zero component (i.e., the Gibbs density) on the slice. Seen this way, the above relation is the familiar statement relating the Gibbs free energy to the grand-canonical partition function.
Reproducing the Gauge variation
-------------------------------
Before giving an explicit proof of the relation we will check in this subsection that the relation essentially gives the correct gauge variation to the path-integral at equilibrium. This will provide us with a clearer insight on how the program of [@Banerjee:2012iz] to write a local expression in the partition function to reproduce the anomaly works.
The gauge variation of under $\delta\hat{\mathcal{A}}=d\delta\lambda$ is $$\begin{split}
\delta W_{anom} &= \delta \ln\ Z^{anom}_{Consistent} = - \int_{space}\frac{1}{T} \delta\bar{\mathcal{G}}^{Consistent}_{anom}\\
&= -\int_{space}{\left [ C_0 + n\mathcal{C}_{anom}\frac{\mu}{T} \right ]} \delta\hat{\mathcal{A}}\wedge\mathcal{F}^{n-1}\\
&= -\int_{space}{\left [ C_0 + n\mathcal{C}_{anom}\frac{\mu}{T} \right ]} d\delta\lambda\wedge\mathcal{F}^{n-1}\\
&= -\int_{surface}\delta\lambda{\left [ C_0 + n\mathcal{C}_{anom}\frac{\mu}{T} \right ]} \wedge\mathcal{F}^{n-1}
+ n\mathcal{C}_{anom}\int_{space}\delta\lambda d{\left ( \frac{\mu}{T} \right )} \wedge\mathcal{F}^{n-1}
\end{split}$$
We will now ignore the surface contribution and use the fact that chemical equilibrium demands that $$Td{\left ( \frac{\mu}{T} \right )} = \mathcal{E}$$ where $\mathcal{E}\equiv u^\nu\mathcal{F}_{\mu\nu}dx^\mu$ is the rest-frame electric field. This is essentially the statement (familiar, say, from semiconductor physics) that in equilibrium the diffusion current due to concentration gradients should cancel the ohmic drift current due to the electric field. Putting this in, along with the electric-magnetic decomposition $\mathcal{F}=\mathcal{B}+u\wedge \mathcal{E}$, we get $$\begin{split}
\delta W_{anom} &= \delta \ln\ Z^{anom}_{Consistent} = \mathcal{C}_{anom}\int_{space}\frac{\delta\lambda}{T} n\mathcal{E}\wedge\mathcal{B}^{n-1}
\end{split}$$ which is the correct anomalous variation required of the equilibrium path-integral! In $d=2n=4$ dimensions, for example, we get the correct $E\cdot B$ variation along with the $1/T$ factor coming from the integration over the Euclidean time circle. The factor of $n$ comes from converting to electric and magnetic fields $$\mathcal{F}^n = n\ u\wedge \mathcal{E}\wedge\mathcal{B}^{n-1}$$ Thus the shift piece along with the chemical equilibrium conspires to reproduce the correct gauge variation. The reader might wonder why this trick cannot be made to work by just keeping the shift term alone in the Gibbs current - the answer is of course that the other terms are required if one insists on adiabaticity in the sense that we want to solve .
Integration by parts
--------------------
In this subsection we will prove explicitly. We will begin by evaluating the consistent Gibbs current in the equilibrium configuration. We will as before work in the ‘zero $\mu_0$’ gauge.
Using the relations in the appendix \[app:hydrostatics\] we get the consistent Gibbs current as $$\begin{split}
-\frac{1}{T}&\bar{\mathcal{G}}^{Consistent}_{anom}\\
&=\frac{1}{T_0}\sum_{m=1}^{n}\left[ C_m(-1)^{m-1} T_0^{m+1}-C_0(-1)^{0-1}\binom{n}{m}T_0A_0^m\right.\\
&\qquad\left. -\binom{n}{m+1}\mathcal{C}_{anom}A_0^{m+1}\right] (da)^{m-1}(dA)^{n-m}\wedge (dt+a)\\
&\quad -\frac{1}{T_0} {\left [ n \mathcal{C}_{anom}A_0 +C_0T_0 \right ]} A\wedge (dA+A_0 da)^{n-1}\\
&\quad -\frac{(n-1)}{T_0} {\left [ n \mathcal{C}_{anom}A_0 +C_0T_0 \right ]} A\wedge dA_0\wedge(dt+a)\wedge (dA+A_0 da)^{n-2}\\
\end{split}$$
After a somewhat long set of manipulations one arrives at the following form for the consistent Gibbs current $$\begin{split}
-\frac{1}{T}&\bar{\mathcal{G}}^{Consistent}_{anom}\\
&= d\left\{\frac{A}{T_0}
\sum_{m=1}^{n-1}\left[ C_m(-1)^{m-1} T_0^{m+1}-C_0(-1)^{0-1}\binom{n-1}{m}T_0A_0^m\right.\right.\\
&\qquad\left.\left. +m\binom{n}{m+1}\mathcal{C}_{anom}A_0^{m+1}\right] (da)^{m-1}(dA)^{n-1-m}\wedge (dt+a)\right\}\\
&\quad +\frac{A}{T_0}\sum_{m=1}^{n}{\left [ C_{m-1}(-1)^{m-2} T_0^m-\binom{n}{m} \mathcal{C}_{anom} A_0^m \right ]}(da)^{m-1}(dA)^{n-m}\\
&\quad+ C_n(-1)^{n-1} T_0^{n}(da)^{n-1}\wedge (dt+a)\\
\end{split}$$ Here we have separated out a surface contribution, which we will suppress from now on since it does not contribute to the partition function. This final form is easily checked term by term and we will leave that as an exercise for the reader.
Suppressing the surface contribution we can write $$\begin{split}
-\frac{1}{T}&\bar{\mathcal{G}}^{Consistent}_{anom}\\
&= d{\left [ \ldots \right ]} +\frac{A}{T_0}\sum_{m=1}^{n}{\left [ C_{m-1}(-1)^{m-2} T_0^m-\binom{n}{m} \mathcal{C}_{anom} A_0^m \right ]}(da)^{m-1}(dA)^{n-m}\\
&\quad+ C_n(-1)^{n-1} T_0^{n}(da)^{n-1}\wedge (dt+a)\\
&=d{\left [ \ldots \right ]} +\frac{A}{T_0}\wedge
\sum_{m=1}^{n}\alpha_{m-1}(da)^{m-1}(dA)^{n-m} + \frac{dt+a}{T_0} \wedge \alpha_n(da)^{n-1} \\
\end{split}$$ where we have defined $$\label{eq:alphaC}
\begin{split}
\alpha_m &= C_{m}(-1)^{m-1} T_0^{m+1}-\binom{n}{m+1} \mathcal{C}_{anom} A_0^{m+1}\quad \text{for}\ m<n\\
\alpha_n &= C_{n}(-1)^{n-1} T_0^{n+1}\\
\end{split}$$
To get the contribution to the equilibrium partition function, we integrate the above equation over the spatial slice (putting $dt=0$). We will neglect surface contributions to get $$\begin{split}
&{\left ( \ln \mathcal{Z} \right )}^{Consistent}_{anom} \\
&=\int_{\text{space}}\frac{A}{T_0}\wedge
\sum_{m=1}^{n}{\left [ C_{m-1}(-1)^{m-2} T_0^m-\binom{n}{m} \mathcal{C}_{anom} A_0^m \right ]}(da)^{m-1}(dA)^{n-m} \\
&\qquad + \int_{\text{space}}C_n(-1)^{n-1} T_0^n a \wedge(da)^{n-1} \\
&=\int_{\text{space}}\frac{A}{T_0}\wedge
\sum_{m=1}^{n}\alpha_{m-1}(da)^{m-1}(dA)^{n-m} + \int_{\text{space}} \frac{a}{T_0} \wedge \alpha_n(da)^{n-1} \\
\end{split}$$ with the $\alpha_{m}$s given by . We are essentially done - we have obtained the form in and, comparing the equations and , we find perfect agreement with the usual relation $C_m(-1)^{m-1}=\tilde{C}_m$. Now, by varying this partition function, we can obtain the currents as before (the variation can be done directly in form language using the equations we provide in appendix \[app:variationForms\]). With this we have come full circle, showing that the two formalisms for anomalous transport developed in [@Loganayagam:2011mu] and [@Banerjee:2012iz] are completely equivalent.
Before we conclude, let us rewrite the partition function in terms of the polynomial $\mathfrak{F}^\omega_{anom}[T,\mu]$ as $$\begin{split}
&{\left ( \ln \mathcal{Z} \right )}^{Consistent}_{anom} \\
&=\int_{\text{space}}\frac{A}{T_0 da}\wedge
{\left [ \frac{ \mathfrak{F}^\omega_{anom}[-T_0 da, dA]-\mathfrak{F}^\omega_{anom}[-T_0 da, 0]}{dA}
-\frac{ \mathfrak{F}^\omega_{anom}[0, dA+A_0 da]}{dA+A_0 da} \right ]}\\
&\qquad + \int_{\text{space}}\frac{\mathfrak{F}^\omega_{anom}[-T_0 da, 0]}{(T_0da)^2}\wedge T_0 a \\
\end{split}$$ We will consider an example. Using adiabaticity arguments, the authors of [@Loganayagam:2012pz] derived the following expression for a theory of free Weyl fermions in $d=2n$ spacetime dimensions $$\begin{split}
{\left ( {\mathfrak{F}}^\omega_{anom} \right )}^{free\ Weyl}_{d=2n}&=- 2\pi\sum_{species} \chi_{_{d=2n}} {\left [ \frac{\frac{\tau}{2}T}{\sin \frac{\tau}{2}T}e^{\frac{\tau}{2\pi}q\mu} \right ]}_{\tau^{n+1}} \\
\end{split}$$ where $\chi_{_{d=2n}}$ is the chirality and the subscript $\tau^{n+1}$ denotes that one needs to Taylor-expand in $\tau$ and retain the coefficient of $\tau^{n+1}$. Substituting this into the above expression gives the anomalous part of the partition function of free Weyl fermions.
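For concreteness, the $\tau$-expansion can be carried out mechanically; the sympy sketch below (our own illustration) extracts the coefficient of $\tau^{n+1}$ for a single Weyl fermion of charge $q$ and chirality $+1$ in $d=4$, i.e. $n=2$. Only the $\mu^3$ and $T^2\mu$ structures survive, consistent with CPT setting the even-index constants to zero.

```python
# Sketch (ours): extract the coefficient of tau^(n+1) in the free-Weyl-fermion formula,
# for a single species with chirality +1 and charge q in d = 4 (n = 2).
import sympy as sp

tau, T, mu, q = sp.symbols('tau T mu q')
n = 2

integrand = (tau * T / 2) / sp.sin(tau * T / 2) * sp.exp(tau * q * mu / (2 * sp.pi))
series = sp.expand(integrand.series(tau, 0, n + 2).removeO())
F_anom = sp.simplify(-2 * sp.pi * series.coeff(tau, n + 1))
print(F_anom)      # contains only mu^3 and T^2*mu terms (no T*mu^2 or T^3 terms)
```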
Fluids charged under multiple $U(1)$ fields {#sec:2ndimmul}
===========================================
In this section, we will generalize our results to cases where we have multiple abelian $U(1)$ gauge fields in arbitrary $2n$ dimensions.
We can take $$\label{eq:FOmegaCmulti}
\begin{split}
\mathfrak{F}^\omega_{anom}[T,\mu] &= \mathcal{C}_{anom}^{A_1 \ldots A_{n+1}}\mu_{A_1} \ldots \mu_{A_{n+1}}+\sum_{m=0}^{n}C_m^{A_1\ldots A_{n-m}} T^{m+1}\mu_{A_1}\ldots \mu_{A_{n-m}}.\\
\end{split}$$ In this case, the anomaly equation takes the following form,
$$\label{anomeq}
\nabla_{\mu} J^{\mu,A_{n+1}}_{Cov} =\frac{n+1}{2^n} {\cal C}_{anom}^{A_1 A_2\ldots A_{n+1}} \varepsilon^{\mu_1\nu_1\mu_2\nu_2 \ldots \mu_n\nu_n}
{\left ( \mathcal{F}_{\mu_1 \nu_1} \right )}_{A_1} \ldots {\left ( \mathcal{F}_{\mu_n \nu_n} \right )}_{A_n} .$$
Here, in $2n$ dimensions ${{\cal C}_{anom} }$ has $n+1$ indices, denoted by $(A_1,A_2, \ldots, A_{n+1})$, and it is symmetric in all its indices. It is straightforward to carry out the above computation for the case of multiple $U(1)$ charges, and most of the computation remains the same. Now, for the multiple $U(1)$ case, in the partition function \[action\] the functions $\alpha_m$ and the constants $\tilde{C}_m$ (and the constants $C_m$ appearing in $\mathfrak{F}^\omega_{anom}$) have $n-m$ indices, which are contracted with $n-1-m$ factors of $dA$ and one $A$. The constant $\zeta$ appearing in the entropy current has $n$ indices.
The constant $\tilde{C}_{n}$ (and $\alpha_n$) has no index. All these constants are symmetric in their indices. Taking the above index structure into account, we see that the functions $U_m$ appearing in the velocity correction and $\chi_m$ appearing in the entropy correction have $n-m$ indices, while the function $J_{m}$ appearing in the charge current has $n-m+1$ indices. Now, we can write the generic form of these functions as follows: $$\begin{split}
U_m^{A_1A_2\ldots A_{n-m}}&=-\frac{e^{-2\sigma}}{\epsilon+p} \left[ m\tilde C_{m}^{A_1A_2\ldots A_{n-m}}T_0^{m+1}\right.\\
&\qquad -(n+1-m)\tilde C_{m-1}^{A_1A_2\ldots A_{n-m}B_1}(A_0)_{B_1}T_0^m \\
&\qquad \left.
+\binom{n+1}{m+1}\mathcal{C}_{anom}^{A_{1}\ldots A_{n-m}B_{1}\ldots B_{m+1}}(A_{0})_{B_{1}}\ldots (A_0)_{B_{m+1}}\right]\\
\end{split}$$ where ${\left ( A_0 \right )}_{B_1}$ comes from the $B_1$th gauge field.
Similarly, we can write the coefficients appearing in $A$’th charge current ($J^{A}$) as, $$\label{transport}
\begin{split}
{\left ( J^A \right )}_m^{A_1 A_2\ldots A_{n-m}}&=e^{-\sigma}\left[- (m+1) \mathcal{C}_{anom}^{A A_1 \ldots A_{n-m}B_{1}\ldots B_m} (A_0)_{B_{1}}\ldots (A_0)_{B_{m}}\binom{n+1}{m+1}\right.\\
&\qquad \left.+ (n-m+1){\tilde C}_{m-1}^{A A_1\ldots A_{n-m}}T_0^m \right]\\
&\qquad +\frac{q^A e^{-2\sigma}}{\epsilon+p} \left[ m\tilde C_{m}^{A_1A_2\ldots A_{n-m}}T_0^{m+1}\right.\\
&\qquad -(n+1-m)\tilde C_{m-1}^{A_1A_2\ldots A_{n-m}B_1}(A_0)_{B_1}T_0^m \\
&\qquad \left.
+\binom{n+1}{m+1}\mathcal{C}_{anom}^{A_{1}\ldots A_{n-m}B_{1}\ldots B_{m+1}}(A_{0})_{B_{1}}\ldots (A_0)_{B_{m+1}}\right]\\
\end{split}$$
We can also express the transport coefficients for fluids charged under multiple $U(1)$ charges, generalising equation as, $$\begin{split}
&{\left ( \xi^A \right )}_m^{A_1 A_2\ldots A_{n-m}} \\
&\ = {\left [ m \frac{q^A\mu_B}{\epsilon + p}-(m+1)\delta^A_B \right ]}\mathcal{C}_{anom}^{BA_1\ldots A_{n-m}B_1\ldots B_m}
\binom{n+1}{m+1} \mu_{B_1}\ldots\mu_{B_m}\\
&\quad +\sum_{k=0}^{m-1}{\left [ m \frac{q^A\mu_B}{\epsilon + p}-(m-k)\delta^A_B \right ]}\\
&\qquad \times (-1)^{k-1}{\tilde C}_{k}^{BA_1\ldots A_{n-m}B_1\ldots B_{m-k-1}}\binom{n-k}{m-k} T^{k+1}\mu_{B_1} \ldots \mu_{B_{m-k-1}} \\
&\quad +{\left [ m \frac{q^A}{\epsilon + p} \right ]} (-1)^{m-1}{\tilde C}_{m}^{A_1\ldots A_{n-m}} T^{m+1} \\
\end{split}$$
Similarly, the coefficients $\chi_m$ appearing in the entropy current become $$\begin{split}
\chi_m^{A_{1}\ldots A_{n-m}} &= - \mathcal{C}_{anom}^{A_{1}\ldots A_{n-m} B_1\ldots B_{m+1}}\binom{n+1}{m+1} T^{-1}\mu_{B_{1}}\ldots \mu_{B_{m+1}}\\
&-\sum_{k=0}^{m}(-1)^{k-1}\binom{n-k}{m-k} T^k \tilde{C}_k^{A_{1}\ldots A_{n-m}B_1\ldots B_{m-k}} \mu_{B_{1}}\ldots \mu_{B_{m-k}}\\
\end{split}$$
This finishes the analysis of anomalous fluid charged under multiple abelian $U(1)$ gauge fields.
CPT Analysis {#sec:CPT}
============
In this section we analyze the constraints of 2n dimensional CPT invariance on the analysis of our previous sections.
Name Symbol CPT
  ----------------------- ---------------------------------- -----
Temperature $T$ +
Chemical Potential $\mu$ -
Velocity 1-form $u$ +
Gauge field 1-form $\hat{\mathcal{A}}$ -
Exterior derivative $d$ -
Field strength 2-form $\mathcal{F}=d\hat{\mathcal{A}}$ +
Magnetic field 2-form $\mathcal{B}$ +
Vorticity 2-form $\omega$ -
  : \[tab:CPTform\] Action of CPT on various forms
Let us first examine the CPT transformation of the Gibbs current proposed in [@Loganayagam:2011mu]. Using the Table§\[tab:CPTform\] we see that the Gibbs current in Eqn. is CPT-even provided the coefficients $\{\mathcal{C}_{anom},C_{2k+1}\}$ are CPT-even and the coefficients $C_{2k}$ are CPT-odd. Since in a CPT-invariant theory all CPT-odd coefficients should vanish, we conclude that $C_m=0$ for even $m$. This conclusion can be phrased as $$CPT\quad : \quad C_m(-1)^{m-1}= C_m$$ Note that this is the same conclusion as reached by assuming the relation to the anomaly polynomial.
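The sign bookkeeping behind this statement is mechanical; the small script below (ours) tabulates the CPT eigenvalue of the structure multiplying $\mathcal{C}_{anom}$ and each $C_k$ in the Gibbs current, using the assignments of Table \[tab:CPTform\]. The $C_k$ terms carry CPT sign $(-1)^{k+1}$, so the coefficients with even $k$ must be CPT-odd.

```python
# Bookkeeping sketch (ours): CPT eigenvalues of the structures appearing in the
# anomalous Gibbs current, using the assignments of the table above.
cpt = {'T': +1, 'mu': -1, 'u': +1, 'A': -1, 'B': +1, 'w': -1}   # w stands for 2*omega

n = 4
for m in range(1, n + 1):
    base = cpt['w']**(m - 1) * cpt['B']**(n - m) * cpt['u']
    print('m=%d  Canom term: CPT = %+d' % (m, cpt['mu']**(m + 1) * base))
    for k in range(m + 1):
        sign = cpt['T']**(k + 1) * cpt['mu']**(m - k) * base
        print('      C_%d term: CPT = %+d' % (k, sign))
# the gauge-non-invariant piece C_0 * T * A ^ F^(n-1); F is CPT-even
print('C_0 (A wedge F^(n-1)) term: CPT = %+d' % (cpt['T'] * cpt['A']))
```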
Next we analyze the constraints of $2n$-dimensional CPT invariance on the partition function . Our starting point is a partition function of the fluid and we expect it to be invariant under the $2n$-dimensional CPT transformation of the fields. Table §\[cpttab\] lists the effect of the $2n$-dimensional C, P and T transformations on the various fields appearing in the partition function . Since $a_i$ is even while $A_i$ and $\partial_j$ are odd under CPT, the term with coefficient $\alpha_m$ picks up a factor of $(-1)^{(m+1)}$. Thus CPT invariance tells us that $\alpha_m$ must be
- even function of $A_0$ for odd $m$.
- odd function of $A_0$ for even $m$.
Now the coefficients $\alpha_m$ are fixed up to the constants $\tilde{C}_m$ by the requirement that the partition function reproduces the correct anomaly. Note that the $A_0$ (odd under CPT) dependence of the coefficients $\alpha_m$ thus determined is consistent with the requirement of CPT invariance. Further, CPT invariance forces $\tilde{C}_m = 0$ for even $m$. The last term in the partition function is odd under parity, and thus its coefficient is set to zero by CPT for even $n$, whereas for odd $n$ it is left unconstrained.
Thus finally we see that CPT invariance allows for a total of
- $\frac{n}{2}$ constants ($\tilde{C}_m$ with $m$ odd) for even $n$.
- $\frac{n+1}{2}$ constants ($\tilde {C}_m$ with $m$ odd, including $\tilde {C}_n$) for odd $n$.
In particular the coefficient $\tilde C_0$ always vanishes and thus, for a CPT-invariant theory, we never get the gauge-non-invariant contribution to the local entropy current.
fields C P T CPT
---------- --- --- --- -----
$\sigma$ + + + +
$a_i$ + - - +
$g_{ij}$ + + + +
$A_0$ - + + -
$A_i$ - - - -
  : \[cpttab\] Action of CPT on various fields
Conclusion {#sec:conclusion}
==========
In this paper we have shown that the results of [@Kharzeev:2011ds; @Loganayagam:2011mu], which were based on entropy arguments, can be re-derived within a more field-theory-friendly partition function technique [@Banerjee:2012iz; @Jain:2012rh; @Jensen:2012jh; @Jensen:2012jy]. This has led us to a deeper understanding linking the local description of anomalous transport in terms of a Gibbs current [@Loganayagam:2011mu; @Loganayagam:2012pz] to the global description in terms of partition functions.
An especially satisfying result is that the polynomial structure of anomalous transport coefficients discovered in [@Loganayagam:2011mu] is reproduced at the level of partition functions. There it was shown that the whole set of anomalous transport coefficients are essentially governed by a single homogeneous polynomial $\mathfrak{F}^\omega_{anom}[T,\mu]$ of temperature and chemical potentials. The authors of [@Loganayagam:2012pz] noticed that in a free theory of chiral fermions this polynomial structure is directly linked to the corresponding anomaly polynomial of chiral fermions via a replacement rule $$\begin{split}
\mathfrak{F}_{anom}^\omega[T,\mu] = \mathcal{P}_{anom} {\left [ \mathcal{F} \mapsto \mu, p_1(\mathfrak{R}) \mapsto - T^2 , p_{k>1}(\mathfrak{R}) \mapsto 0 \right ]}
\end{split}$$ This result could be generalised for an arbitrary free theory with chiral fermions and chiral p-form fields using sphere partition function techniques which link this polynomial to a specific thermal observable[@futureLoga].
Various other known results (for example in AdS/CFT) support the conjecture that this rule is probably true in all theories with some mild assumptions. While we have succeeded in reproducing the polynomial structure we have not tried in this paper to check the above conjecture - this necessarily involves a similar analysis keeping track of the effect of gravitational anomalies which we have ignored in our work. It would be interesting to extend our analysis to theories with gravitational anomalies[^10].
We have derived in this paper a particular contribution to the equilibrium partition function that is linked to the underlying anomalies of the theory. A direct test of this result would be to do a direct holographic computation of the same quantity in AdS/CFT to obtain these contributions. Since the CFT anomalies are linked to the Chern-Simons terms in the bulk the holographic test would be a computation of a generalised Wald entropy for a black hole solution of a gravity theory with Chern-Simons terms. The usual Wald entropy gets modified in the presence of such Chern-Simons terms[@Tachikawa:2006sz; @Bonora:2011gz] which are usually a part of higher derivative corrections to gravity. We hope that reproducing the results of this paper would give us a test of generalised Wald formalism for such higher derivative corrections.
We have directly linked the description in terms of a Gibbs current [@Loganayagam:2011mu; @Loganayagam:2012pz] satisfying a kind of adiabaticity equation to the global description in terms of partition functions. Further, we have noticed in that, at least in the case of anomalous transport, this Gibbs current is closely linked to what has been called ‘the non-canonical part of the entropy current’ in various entropy arguments [@Bhattacharyya:2012ex]. It would be interesting to see whether this construction can be generalised beyond the anomalous transport coefficients to other partition function computations which appear in [@Banerjee:2012iz; @Jensen:2012jh]. This would give us a more local interpretation of the various terms appearing in the partition function, linking them to a specific Gibbs free energy transport process. Hence with such a result one could directly identify the coefficients appearing in the partition function as the transport coefficients of the Gibbs current.
Another interesting observation of [@Loganayagam:2011mu] apart from the polynomial structure is that the anomalous transport satisfies an interesting reciprocity type relation - the susceptibility describing the change in the anomalous charge current with a small change in vorticity is equal to the susceptibility describing the change in the anomalous energy current with a small change in magnetic field. While we see that the results of our paper are consistent with this observation made in [@Loganayagam:2011mu], we have not succeeded in deriving this relation directly from the partition function. It would be interesting to derive such a relation from the partition function hence clarifying how such a relation arises in a microscopic description .
Finally, as we have emphasised in the introduction, one would hope that the results of our paper serve as a starting point for generalising the analysis of anomalies to non-equilibrium phenomena. Can one write down a Schwinger-Keldysh functional which transforms appropriately, and does this provide new constraints on the dissipative transport coefficients? We leave such questions to future work.
Acknowledgements {#acknowledgements .unnumbered}
----------------
We would like to thank Sayantani Bhattacharyya for collaboration in the initial stages of this project. It is a pleasure to thank Jyotirmoy Bhattacharya, Dileep Jatkar, Shiraz Minwalla, Mukund Rangamani, Piotr Surowka, Amos Yarom and Cumrun Vafa for various useful discussions on ideas presented in this paper. Research of NB is supported by NWO Veni grant, The Netherlands. RL would like to thank **ICTS discussion meeting on current topics of research in string theory** at the International Centre for Theoretical Sciences(TIFR) , IISc Bangalore for their hospitality while this work was being completed. RL is supported by the Harvard Society of Fellows through a junior fellowship. Finally, RL would like to thank various colleagues at the Harvard society for interesting discussions. Finally, we would like to thank people of India for their generous support to research in science.
[**APPENDICES**]{}
Results of $(3+1)-$ dimensional and $(1+1)-$ dimensional fluid {#app:oldresult}
==============================================================
In this appendix we want to specialise our results to $(1+1)$- and $(3+1)$-dimensional anomalous fluids. By considering the local entropy production of the system, the results for a $(3+1)$-dimensional anomalous fluid were obtained in [@Son:2009tf], [@Neiman:2010zi; @Bhattacharya:2011tra] and for a $(1+1)$-dimensional fluid in [@Dubovsky:2011sk]. The same results have also been obtained in [@Banerjee:2012iz] and [@Jain:2012rh] for $(3+1)$-dimensional and $(1+1)$-dimensional anomalous fluids respectively, by writing the equilibrium partition function, the technique that we have followed in this paper. Our goal in this section is to check that the arbitrary-dimension results reduce correctly to these special cases.
$(3+1)-$ dimensional anomalous fluids
-------------------------------------
Let us consider a fluid living in $(3+1)$ dimensions, charged under a $U(1)$ current. Take $$\begin{split}
\mathfrak{F}^\omega_{anom}[T,\mu] &= \mathcal{C}^{d=4}_{anom} \mu^3+C^{d=4}_0 T\mu^2+C^{d=4}_1 T^2\mu+C^{d=4}_2 T^3\\
\end{split}$$ The constants $\{C^{d=4}_0,C^{d=4}_2\}$, if non-zero, violate CPT since their subscript indices are even.
By the replacement rule of [@Loganayagam:2012pz] this corresponds to a theory with the anomaly polynomial $$\begin{split}
\mathcal{P}_{anom} &= \mathcal{C}^{d=4}_{anom}\mathcal{F}^3-C^{d=4}_1\ p_{_1}{\left ( \mathfrak{R} \right )}\wedge \mathcal{F}\\
\end{split}$$ where $p_{_1}{\left ( \mathfrak{R} \right )}$ is the first Pontryagin 4-form of the curvature.
We have $$d\bar{J}_{Consistent} = \mathcal{C}^{d=4}_{anom}\mathcal{F}^2$$ $$d\bar{J}_{Cov} = 3\mathcal{C}^{d=4}_{anom}\mathcal{F}^2$$ and their difference is given by $$\bar{J}_{Cov} = \bar{J}_{Consistent}+2 \mathcal{C}^{d=4}_{anom}\hat{\mathcal{A}}\wedge \mathcal{F}$$ In components we have $$\label{anomeq4d}
\begin{split}
\nabla_{\mu} J^{\mu}_{Consistent} &= {\cal C}^{d=4}_{anom} \frac{1}{4}\varepsilon^{\mu \nu \rho \sigma}
\mathcal{F}_{\mu \nu} \mathcal{F}_{\rho \sigma} ,\\
\nabla_{\mu} J^{\mu}_{Cov} &= 3{\cal C}^{d=4}_{anom} \frac{1}{4}\varepsilon^{\mu \nu \rho \sigma}
\mathcal{F}_{\mu \nu} \mathcal{F}_{\rho \sigma} ,\\
J^{\mu}_{Cov} &= J^{\mu}_{Consistent} + 2 \mathcal{C}^{d=4}_{anom} \frac{1}{2}\varepsilon^{\mu \nu \rho \sigma}
\hat{\mathcal{A}}_{\nu} \mathcal{F}_{\rho \sigma}
\end{split}$$ The anomaly-induced transport coefficients (in Landau frame) in this case are given by $$\label{eq:xi4d}
\begin{split}
J^{\mu,anom}_{Cov} &= \xi_1^{d=4} \varepsilon^{\mu \nu \rho \sigma}u_\nu\partial_\rho \hat{\mathcal{A}}_{\sigma}
+\xi_2^{d=4} \varepsilon^{\mu \nu \rho \sigma}u_\nu\partial_\rho u_{\sigma}\\
\xi_1^{d=4} &= 3\mathcal{C}^{d=4}_{anom} \mu{\left [ \frac{q\mu}{\epsilon + p}-2 \right ]}
+2C^{d=4}_0 T{\left [ \frac{q\mu}{\epsilon + p}-1 \right ]} +C^{d=4}_1 T^2\mu^{-1}{\left [ \frac{q\mu}{\epsilon + p} \right ]} \\
\xi_2^{d=4} &= \mathcal{C}^{d=4}_{anom} \mu^2{\left [ 2\frac{q\mu}{\epsilon + p}-3 \right ]}+C^{d=4}_0 T\mu{\left [ 2\frac{q\mu}{\epsilon + p}-2 \right ]}\\
&\quad +C^{d=4}_1 T^2{\left [ 2\frac{q\mu}{\epsilon + p}-1 \right ]} +C^{d=4}_2 T^3\mu^{-1}{\left [ 2\frac{q\mu}{\epsilon + p} \right ]} \\
\end{split}$$ and $$\label{eq:chi4d}
\begin{split}
J^{\mu,anom}_{S} &= -\frac{\mu}{T}J^{\mu,anom}_{Cov}+ \chi_1^{d=4} \varepsilon^{\mu \nu \rho \sigma}u_\nu\partial_\rho \hat{\mathcal{A}}_{\sigma}
+\chi_2^{d=4} \varepsilon^{\mu \nu \rho \sigma}u_\nu\partial_\rho u_{\sigma}
+\zeta^{d=4} \varepsilon^{\mu \nu \rho \sigma}\hat{\mathcal{A}}_\nu\partial_\rho \hat{\mathcal{A}}_{\sigma}\\
\mathcal{G}^{\mu,anom}_{Cov} &= -T \chi_1^{d=4} \varepsilon^{\mu \nu \rho \sigma}u_\nu\partial_\rho \hat{\mathcal{A}}_{\sigma}
-T \chi_2^{d=4} \varepsilon^{\mu \nu \rho \sigma}u_\nu\partial_\rho u_{\sigma}
-T\zeta^{d=4} \varepsilon^{\mu \nu \rho \sigma}\hat{\mathcal{A}}_\nu\partial_\rho \hat{\mathcal{A}}_{\sigma}\\
-\zeta^{d=4} &= C^{d=4}_0 \\
-\chi_1^{d=4} &= 3\mathcal{C}^{d=4}_{anom} T^{-1}\mu^2+2C^{d=4}_0 \mu+C^{d=4}_1 T \\
-\chi_2^{d=4} &= \mathcal{C}^{d=4}_{anom} T^{-1}\mu^3+C^{d=4}_0 \mu^2 + C^{d=4}_1 T\mu +C^{d=4}_2 T^2 \\
\end{split}$$ The anomalous part of the consistent partition function is given by $$\label{eq:Z4d}
\begin{split}
&{\left ( \ln \mathcal{Z} \right )}^{Consistent}_{anom} \\
&=\int_{\text{space}}\frac{A}{T_0}\wedge\left\{ {\left [ C^{d=4}_0(-1)T_0- 2\mathcal{C}^{d=4}_{anom} A_0 \right ]}(dA)
+{\left [ C^{d=4}_1 T_0^2- \mathcal{C}^{d=4}_{anom} A_0^2 \right ]}(da) \right\}\\
&\qquad + \int_{\text{space}}C^{d=4}_2(-1) T_0^2 a \wedge(da) \\
&=-\frac{\mathcal{C}^{d=4}_{anom}}{T_0}\int d^3x\sqrt{g_3}\epsilon^{ijk}{\left [ 2A_0 A_i\partial_jA_k + A_0^2A_i\partial_ja_k \right ]}\\
&\qquad - C^{d=4}_0 \int d^3x\sqrt{g_3} \epsilon^{ijk}A_i\partial_jA_k+C^{d=4}_1 T_0\int d^3x\sqrt{g_3}\epsilon^{ijk}A_i\partial_ja_k\\
&\qquad- C^{d=4}_2 T_0^2\int d^3x\sqrt{g_3} \epsilon^{ijk}a_i\partial_ja_k \\
\end{split}$$
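As a quick cross-check of the expressions quoted above (not part of the derivation), note that they satisfy $-\chi_1^{d=4}= T^{-1}\partial_\mu \mathfrak{F}^\omega_{anom}$ and $-\chi_2^{d=4}= T^{-1}\mathfrak{F}^\omega_{anom}$. A minimal symbolic sketch of this check, with our own ad hoc symbol names:

```python
# Symbolic cross-check (ours) that the entropy-current coefficients quoted above
# satisfy  -chi_1 = (1/T) dF/dmu  and  -chi_2 = F/T  for the anomalous Gibbs
# free energy F = C_anom mu^3 + C0 T mu^2 + C1 T^2 mu + C2 T^3.
import sympy as sp

T, mu = sp.symbols('T mu', positive=True)
C_anom, C0, C1, C2 = sp.symbols('C_anom C0 C1 C2')

F_anom = C_anom*mu**3 + C0*T*mu**2 + C1*T**2*mu + C2*T**3

chi1 = -(3*C_anom*mu**2/T + 2*C0*mu + C1*T)                # as quoted in the text
chi2 = -(C_anom*mu**3/T + C0*mu**2 + C1*T*mu + C2*T**2)    # as quoted in the text

assert sp.simplify(sp.diff(F_anom, mu)/T + chi1) == 0
assert sp.simplify(F_anom/T + chi2) == 0
print("chi_1, chi_2 match -dF/dmu/T and -F/T")
```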
The results for the equilibrium partition function and the transport coefficients of the fluid have been obtained in [@Banerjee:2012iz] in great detail. We will now compare the results above against the results there. We begin by first fixing the relation between the notation here and the notation employed in [@Banerjee:2012iz]. Comparing our partition function in against Eqn(1.11) of [@Banerjee:2012iz] we get a perfect match with the following relabeling of constants[^11] $$\label{eq:constantConv4d}
\mathcal{C}^{d=4}_{anom} = \frac{C}{6}\ ,\quad
C^{d=4}_0 = -C_0\ ,\quad
C^{d=4}_1 = C_2\ ,\quad
C^{d=4}_2 = -C_1$$ The first of these relations also follows independently from comparing our eqn against the corresponding equations in [@Banerjee:2012iz] for covariant/consistent anomaly and the Bardeen current. We then proceed to compare the transport coefficients in Eqn(3.12) and Eqn.(3.21) of [@Banerjee:2012iz] against our results in and .
We get a match provided one uses (in addition to ) the following relations arising from comparing definitions here against [@Banerjee:2012iz] $$\xi_B=\xi_1^{d=4}\ ,\quad
\xi_\omega=2\xi_2^{d=4}\ ,\quad
D_B=\chi_1^{d=4}\ ,\quad
D_\omega=2\chi_2^{d=4}\ ,\quad
h= \zeta^{d=4}$$
$(1+1)$-dimensional anomalous fluids
-------------------------------------
Let us consider a fluid living in $(1+1)$ dimensions which is charged under a $U(1)$ current. Take $$\begin{split}
\mathfrak{F}^\omega_{anom}[T,\mu] &= \mathcal{C}^{d=2}_{anom} \mu^2+C^{d=2}_0 T\mu+C^{d=2}_1 T^2\\
\end{split}$$ The constant $C^{d=2}_0$, if non-zero, violates CPT since its subscript index is even.
By the replacement rule of [@Loganayagam:2012pz] this corresponds to a theory with the anomaly polynomial $$\begin{split}
\mathcal{P}_{anom} &= \mathcal{C}^{d=2}_{anom}\mathcal{F}^2-C^{d=2}_1\ p_{_1}{\left ( \mathfrak{R} \right )}\\
\end{split}$$ where $p_{_1}{\left ( \mathfrak{R} \right )}$ is the first Pontryagin 4-form of the curvature.
We have $$d\bar{J}_{Consistent} = \mathcal{C}^{d=2}_{anom}\mathcal{F}$$ $$d\bar{J}_{Cov} = 2\mathcal{C}^{d=2}_{anom}\mathcal{F}$$ and their difference is given by $$\bar{J}_{Cov} = \bar{J}_{Consistent}+ \mathcal{C}^{d=2}_{anom}\hat{\mathcal{A}}$$ In components we have $$\label{anomeq2d}
\begin{split}
\nabla_{\mu} J^{\mu}_{Consistent} &= {\cal C}^{d=2}_{anom} \frac{1}{2}\varepsilon^{\mu \nu }
\mathcal{F}_{\mu \nu} ,\\
\nabla_{\mu} J^{\mu}_{Cov} &= 2{\cal C}^{d=2}_{anom} \frac{1}{2}\varepsilon^{\mu \nu }
\mathcal{F}_{\mu \nu} ,\\
J^{\mu}_{Cov} &= J^{\mu}_{Consistent} + \mathcal{C}^{d=2}_{anom} \varepsilon^{\mu \nu }
\hat{\mathcal{A}}_{\nu}
\end{split}$$ The anomaly-induced transport coefficients (in Landau frame) in this case are given by $$\begin{split}
J^{\mu,anom}_{Cov} &= \xi_1^{d=2} \varepsilon^{\mu \nu }u_\nu\\
\xi_1^{d=2} &= \mathcal{C}^{d=2}_{anom} \mu{\left [ \frac{q\mu}{\epsilon + p}-2 \right ]}+C^{d=2}_0 T{\left [ \frac{q\mu}{\epsilon + p}-1 \right ]}
+C^{d=2}_1 T^2\mu^{-1}{\left [ \frac{q\mu}{\epsilon + p} \right ]} \\
\end{split}$$ and $$\begin{split}
J^{\mu,anom}_{S} &= -\frac{\mu}{T}J^{\mu,anom}_{Cov}+ \chi_1^{d=2} \varepsilon^{\mu \nu }u_\nu
+\zeta^{d=2} \varepsilon^{\mu \nu }\hat{\mathcal{A}}_\nu\\
\mathcal{G}^{\mu,anom}_{Cov} &= -T \chi_1^{d=2} \varepsilon^{\mu \nu }u_\nu
-T\zeta^{d=2} \varepsilon^{\mu \nu }\hat{\mathcal{A}}_\nu\\
-\zeta^{d=2} &= C^{d=2}_0 \\
-\chi_1^{d=2} &= \mathcal{C}^{d=2}_{anom} T^{-1}\mu^2+C^{d=2}_0 \mu+C^{d=2}_1 T \\
\end{split}$$ The anomalous part of the consistent partition function is given by $$\label{eq:Z2d}
\begin{split}
&{\left ( \ln \mathcal{Z} \right )}^{Consistent}_{anom} \\
&=\int_{\text{space}}\frac{A}{T_0}\wedge {\left [ C^{d=2}_0(-1)T_0- \mathcal{C}^{d=2}_{anom} A_0 \right ]}
+ \int_{\text{space}}C^{d=2}_1 T_0 a \\
&=-\frac{\mathcal{C}^{d=2}_{anom}}{T_0}\int dx\sqrt{g_1}\epsilon^{i}A_0 A_i - C^{d=2}_0 \int dx\sqrt{g_1}\epsilon^{i}A_i
+C^{d=2}_1 T_0\int dx\sqrt{g_1}\epsilon^{i}a_i\\
\end{split}$$
Now we are all set to compare our results with the results of [@Jain:2012rh]. The comparison proceeds here the same way as the comparison in $3+1$d before. By comparing Eqn(2.4) of [@Jain:2012rh] against our we get[^12]
$$\label{eq:constantConv2d}
\mathcal{C}^{d=2}_{anom} = C\ ,\quad
C^{d=2}_0 = -C_1 \ ,\quad
C^{d=2}_1 = -C_2$$
and we get a match of transport coefficients using the definitions $$\xi_j=\xi_1^{d=2}\ ,\quad
\xi_s+\frac{\mu}{T}\xi_j=\chi_1^{d=2}\ ,\quad
D_\omega=2\chi_2^{d=4}\ ,\quad
h= \zeta^{d=2}$$
Hydrostatics and Anomalous transport {#app:hydrostatics}
====================================
In this section we will follow [@Banerjee:2012iz; @Jain:2012rh] in describing a hydrostatic configuration, i.e., a time-independent hydrodynamic configuration in a gauge/gravitational background. We will then proceed to evaluate the anomalous currents derived in the previous section in this background. This is followed by a computation of the consistent partition function by integrating the consistent Gibbs current over a spatial slice. For convenience we will phrase our entire discussion in the language of forms (as in the previous section) and refer the reader to Appendix \[app:formConventions\] for our form conventions.
Let us consider the special case of a stationary (time-independent) spacetime with a metric given by $$g_{spacetime}= -\gamma^{-2}(dt+a)^2 + g_{space}$$ where in the notation of [@Banerjee:2012iz] we can write $\gamma \equiv e^{-\sigma}$. Following the discussion there, consider a time-independent fluid configuration with local temperature and chemical potential $T,\mu$ and
placed in a time-independent gauge-field background $$\hat{\mathcal{A}} = \mathcal{A}_0 dt + \mathcal{A}$$ We first compute $$\begin{split}
\mathcal{E}& \equiv \mathcal{F}_{\mu\nu}dx^\mu u^\nu = \gamma \mathcal{F}_{i0}dx^i =\gamma d\mathcal{A}_0 \\
\mathfrak{a} &\equiv u^\mu \nabla_\mu u_\nu dx^\nu =-\gamma^{-1} d\gamma= \gamma d\gamma^{-1}\\
dT +\mathfrak{a}T &= \gamma d{\left ( \gamma^{-1}T \right )} \\
d\mu +\mathfrak{a}\mu -\mathcal{E} &= \gamma d{\left ( \gamma^{-1}\mu -\mathcal{A}_0 \right )} \\
\end{split}$$ If we insist that $$\begin{split}
dT +\mathfrak{a}T &=0 \\
d\mu +\mathfrak{a}\mu -\mathcal{E} &=0 \\
\end{split}$$ then it follows that the quantities $$T_0\equiv \gamma^{-1}T \quad\text{and}\quad \mu_0\equiv \gamma^{-1}\mu -\mathcal{A}_0$$ are constant across space. We can invert this to write $$T = \gamma T_0 \quad\text{and}\quad \mu = \gamma {\left ( \mathcal{A}_0+\mu_0 \right )}\equiv \gamma A_0$$ where we have defined $A_0 \equiv \mathcal{A}_0 +\mu_0$.Following [@Banerjee:2012iz]we will split the gauge field as $$\hat{\mathcal{A}} = \mathcal{A}_0 dt + \mathcal{A} = A_0 (dt+a) + A-\mu_0 dt$$ where $A\equiv \mathcal{A}-A_0\ a $. We are now working in a general gauge - often it is useful to work in a specific gauge : one gauge we will work on is obtained from this generic gauge by performing a gauge transformation to remove the $\mu_0 dt$ piece. We will call this gauge as the ‘zero $\mu_0$’ gauge. In this gauge the new gauge field is given in terms of the old gauge field via $$\hat{\mathcal{A}}_{\mu_0=0} \equiv \hat{\mathcal{A}}+\mu_0 dt$$ We will quote all our consistent currents in this gauge.
We are now ready to calculate various hydrostatic quantities $$\label{eq:HStatics}
\begin{split}
\mathcal{E}& =\gamma d\mathcal{A}_0 =\gamma dA_0 \\
\mathfrak{a} & =-\gamma^{-1} d\gamma= \gamma d\gamma^{-1}\\
\mathcal{B}&\equiv \mathcal{F}-u\wedge \mathcal{E} = d{\left [ A_0 (dt+a) + A-\mu_0 dt \right ]}+(dt+a)\wedge dA_0\\
&= dA+A_0 da \\
2\omega &= du+u\wedge \mathfrak{a} =- \gamma^{-1} da \\
2\omega T &=-T_0 da \\
2\omega\mu &= -A_0 da \\
\hat{A}+\mu u &= A-\mu_0 dt\\
\mathcal{B}+2\omega \mu &= dA \\
\end{split}$$
Now let us compute the various anomalous currents in terms of the hydrostatic fields. Using we get the Gibbs current as $$\begin{split}
-&\bar{\mathcal{G}}^{Cov}_{anom} \\
&= \gamma \sum_{m=1}^{n}\left[ C_m(-1)^{m-1} T_0^{m+1}-C_0(-1)^{0-1}\binom{n}{m}T_0A_0^m\right.\\
&\qquad\left. +m\binom{n+1}{m+1}\mathcal{C}_{anom}A_0^{m+1}\right] (da)^{m-1}(dA)^{n-m}\wedge (dt+a)\\
&\qquad -\gamma C_0 T_0 \hat{\mathcal{A}}_{\mu_0=0}\wedge \mathcal{F}^{n-1}
\end{split}$$ In the following we will always write the minus signs in the form $C_m(-1)^{m-1}$ so that once we impose CPT all the minus signs can be dropped.
We can now calculate the charge/entropy/energy currents $$\begin{split}
\bar{J}^{Cov}_{anom}
&= \sum_{m=1}^{n}\left[-(n+1-m)C_{m-1}(-1)^{m-2} T_0^m\right.\\
&\qquad\left. +(n+1)\binom{n}{m}\mathcal{C}_{anom} A_0^{m}\right] (da)^{m-1}\wedge(dA)^{n-m}\wedge (dt+a) \\
\end{split}$$ $$\begin{split}
\bar{J}^{Cov}_{S,anom}
&=\sum_{m=1}^{n}\left[(m+1)C_m (-1)^{m-1} T_0^m\right.\\
&\qquad \left.-C_0(-1)^{0-1}\binom{n}{m} A_0^m\right] (da)^{m-1}(dA)^{n-m}\wedge (dt+a) \\
&\qquad -C_0 \hat{\mathcal{A}}_{\mu_0=0}\wedge \mathcal{F}^{n-1}
\end{split}$$ and $$\begin{split}
&\bar{q}^{Cov}_{anom}\\
&= \gamma\sum_{m=1}^{n}\left[m C_m (-1)^{m-1} T_0^{m+1}-(n+1-m) C_{m-1}(-1)^{m-2} T_0^m A_0\right.\\
&\qquad\left. +\binom{n+1}{m+1}\mathcal{C}_{anom}A_0^{m+1}\right](da)^{m-1}(dA)^{n-m}\wedge (dt+a) \\
\end{split}$$
We can go to the Landau frame as before $$\begin{split}
u^\mu &\mapsto u^\mu - \frac{q^\mu_{anom}}{\epsilon + p} \\
J^\mu_{anom} &\mapsto J^\mu_{anom} - q \frac{q^\mu_{anom}}{\epsilon + p} \\
J^\mu_{S,anom} &\mapsto J^\mu_{S,anom} - s \frac{q^\mu_{anom}}{\epsilon + p}\\
q^\mu_{anom} &\mapsto 0\\
\end{split}$$ In the Landau frame we can write the corrections to various quantities as $$\begin{split}
\delta \bar{u} &\equiv -\gamma^{-1}\sum_{m=1}^n U_m (da)^{m-1}\wedge(dA)^{n-m}\wedge(dt+a)\\
\delta \bar{J}^{Cov}_{anom} &\equiv -\gamma^{-1}\sum_{m=1}^n {\left ( J_m+ q\ U_m \right )}(da)^{m-1}\wedge(dA)^{n-m}\wedge(dt+a)\\
\delta \bar{J}^{Cov}_{S,anom} &\equiv -\gamma^{-1}\sum_{m=1}^n {\left ( S_m+ s\ U_m \right )}(da)^{m-1}\wedge(dA)^{n-m}\wedge(dt+a)\\
\end{split}$$ where $$\begin{split}
U_m &=-\frac{\gamma^2}{\epsilon + p}\left[ m C_m (-1)^{m-1}T_0^{m+1}-(n+1-m) C_{m-1}(-1)^{m-2} T_0^m A_0\right.\\
&\qquad\left.+\binom{n+1}{m+1}\mathcal{C}_{anom}A_0^{m+1}\right] \\
J_m+ q\ U_m &= \gamma{\left [ (n+1-m)C_{m-1}(-1)^{m-2} T_0^m-(n+1)\binom{n}{m}\mathcal{C}_{anom} A_0^{m} \right ]}\\
S_m+ s\ U_m &=\gamma{\left [ -(m+1)C_m (-1)^{m-1}T_0^m+C_0(-1)^{0-1}\binom{n}{m} A_0^m \right ]}\\
\end{split}$$ which matches with expressions from the partition function.
The corresponding consistent currents can be obtained via the relations $$\begin{split}
\bar{\mathcal{G}}^{Cov}_{anom} &= \bar{\mathcal{G}}^{Consistent}_{anom} - \mu\ n\ \mathcal{C}_{anom}\hat{\mathcal{A}}\wedge \mathcal{F}^{n-1}\\
\bar{J}^{Cov}_{anom} &= \bar{J}^{Consistent}_{anom} + n\ \mathcal{C}_{anom}\hat{\mathcal{A}}\wedge \mathcal{F}^{n-1} \\
\bar{J}^{Cov}_{S,anom} &= \bar{J}^{Consistent}_{S,anom} \\
\bar{q}^{Cov}_{anom} &= \bar{q}^{Consistent}_{anom}\\
\end{split}$$ In particular we have $$\begin{split}
-\frac{1}{T}&\bar{\mathcal{G}}^{Consistent}_{anom}\\
&=\frac{1}{T_0}\sum_{m=1}^{n}\left[ C_m(-1)^{m-1} T_0^{m+1}-C_0(-1)^{0-1}\binom{n}{m}T_0A_0^m\right.\\
&\qquad\left. -\binom{n}{m+1}\mathcal{C}_{anom}A_0^{m+1}\right] (da)^{m-1}(dA)^{n-m}\wedge (dt+a)\\
&\quad -\frac{1}{T_0} {\left [ n \mathcal{C}_{anom}A_0 +C_0T_0 \right ]} A\wedge (dA+A_0 da)^{n-1}\\
&\quad -\frac{(n-1)}{T_0} {\left [ n \mathcal{C}_{anom}A_0 +C_0T_0 \right ]} A\wedge dA_0\wedge(dt+a)\wedge (dA+A_0 da)^{n-2}\\
\end{split}$$
Variational formulae in forms {#app:variationForms}
=============================
The energy current is defined via the relation $$\begin{split}
q_\mu dx^\mu &\equiv -T_{\mu\nu}u^\mu dx^\nu \\
&= -\gamma T_{00} (dt+a) - \gamma g_{ij} T^i_0 dx^j \\
\end{split}$$ Hence its Hodge dual is (see Appendix \[app:formConventions\] for the definition of the Hodge dual) $$\begin{split}
\bar{q} &= \gamma^3 T_{00} d\forall_{d-1} + \gamma T^i_0 (dt+a)\wedge {\left ( d\Sigma_{d-2} \right )}_i
\end{split}$$ We take the following relations[^13] from Eqn(2.16) of [@Banerjee:2012iz] $$\begin{split}
\gamma T_{00} d\forall_{d-1} &= \frac{\delta}{\delta \gamma} {\left ( T_0 \ln\ \mathcal{Z} \right )} \\
T^i_0 d\forall_{d-1} &= dx^i \wedge T^j_0 {\left ( d\Sigma_{d-2} \right )}_j
= {\left [ \frac{\delta}{\delta a_i}-A_0 \frac{\delta}{\delta A_i} \right ]} {\left ( T_0 \ln\ \mathcal{Z} \right )} \\
\end{split}$$ where the independent variables are $\{\gamma, a,g^{ij},A_0,A,T_0,\mu_0\}$. Converting into forms $$\begin{split}
\bar{q} &= {\left [ \gamma^2 \frac{\delta}{\delta \gamma} + \gamma(dt+a)\wedge\frac{\delta}{\delta a}-\gamma A_0(dt+a)\wedge \frac{\delta}{\delta A} \right ]}
{\left ( T_0\ln \mathcal{Z} \right )}\\
&={\left [ \gamma^2 \frac{\delta}{\delta \gamma} +\gamma(dt+a)\wedge\frac{\delta}{\delta a}-\mu(dt+a)\wedge \frac{\delta}{\delta A} \right ]}{\left ( T_0\ln \mathcal{Z} \right )} \\
\end{split}$$
Similarly for the charge current $$\begin{split}
-\gamma^2 J_0 d\forall_{d-1} &= \frac{\delta}{\delta A_0} {\left ( T_0 \ln\ \mathcal{Z} \right )} \\
J^i d\forall_{d-1} &= dx^i \wedge J^j {\left ( d\Sigma_{d-2} \right )}_j
= \frac{\delta}{\delta A_i} {\left ( T_0 \ln\ \mathcal{Z} \right )} \\
\end{split}$$ which implies $$\begin{split}
\bar{J} &\equiv-\gamma^2 J_0 d\forall_{d-1}- J^i(dt+a)\wedge {\left ( d\Sigma_{d-2} \right )}_i\\
&= {\left [ \frac{\delta}{\delta A_0} -(dt+a)\wedge\frac{\delta}{\delta A} \right ]}{\left ( T_0\ln \mathcal{Z} \right )}
\end{split}$$
Putting $T_0 \ln \mathcal{Z}= -\gamma^{-1}\bar{\mathcal{G}}$ we can write $$\begin{split}
\bar{J} &\equiv -\frac{\partial \bar{\mathcal{G}}}{\partial\mu}=-\gamma^{-1} {\left [ \frac{\delta}{\delta A_0} -(dt+a)\wedge\frac{\delta}{\delta A} \right ]}\bar{\mathcal{G}}\\
\bar{J}_S
&\equiv -\frac{\partial \bar{\mathcal{G}}}{\partial T}=-\gamma^{-1}\frac{1}{T_0}{\left [ \gamma\frac{\delta }{\delta \gamma}+(dt+a)\wedge\frac{\delta}{\delta a}-A_0\frac{\delta}{\delta A_0} \right ]}\bar{\mathcal{G}}
\\
\bar{q}
&= \bar{\mathcal{G}}+T\bar{J}_S +\mu \bar{J} \\
\end{split}$$
Convention for Forms {#app:formConventions}
====================
The inner product between two 1-forms $J\equiv J_0 (dt+a)+ g_{ij} J^i dx^j$ and $J'\equiv J'_0 (dt+a)+ g_{ij} (J')^i dx^j$ is given in terms of the KK-invariant components as $$\begin{split}
\langle J ,J' \rangle &\equiv -\gamma^2 J_0 J'_0 + g_{ij}J^i (J')^j
\end{split}$$
In general, the exterior derivative of a p-form $$A_p \equiv \frac{1}{p!}A_{\mu_1\ldots\mu_p} dx^{\mu_1}\wedge\ldots\wedge dx^{\mu_p}$$ is given by $$\begin{split}
(dA)_{p+1} &\equiv \frac{1}{p!}\partial_\lambda A_{\mu_1\ldots\mu_p} dx^\lambda\wedge dx^{\mu_1}\wedge\ldots\wedge dx^{\mu_p}\\
&=\frac{1}{(p+1)!}{\left [ \partial_{\mu_1} A_{\mu_2\ldots\mu_{p+1}}+\text{cyclic} \right ]} dx^{\mu_1}\wedge\ldots\wedge dx^{\mu_{p+1}}
\end{split}$$
The Levi-Civita tensor $\varepsilon^{\mu_1\ldots\mu_d}$ is defined as the completely antisymmetric tensor with $$\varepsilon^{012\ldots (d-1)} = \frac{1}{\sqrt{-\det\ g_d}} = \frac{1}{\gamma^{-1}\sqrt{\det\ g_{d-1}}}$$ We will also often define the spatial Levi-Civita tensor $\epsilon^{i_1i_2\ldots i_{d-1}}$ such that $$\epsilon^{12\ldots (d-1)} = \frac{1}{\sqrt{\det\ g_{d-1}}}$$ which is related to its spacetime counterpart via $$\epsilon^{i_1i_2\ldots i_{d-1}} = \gamma^{-1} \varepsilon^{0i_1i_2\ldots i_{d-1}}$$
Let us define the spatial volume $(d-1)$-form as $$\begin{split}
d\forall_{d-1} &\equiv \gamma^{-1}\epsilon_{i_1\ldots i_{d-1}} dx^{i_1}\otimes\ldots\otimes dx^{i_{d-1}} \\
&= \frac{1}{(d-1)!}\gamma^{-1}\epsilon_{i_1\ldots i_{d-1}} dx^{i_1}\wedge\ldots\wedge dx^{i_{d-1}} \\
&= d^{d-1}x\ \gamma^{-1}\sqrt{\det\ g_{d-1}} \\
&= d^{d-1}x\ \sqrt{-\det\ g_d}
\end{split}$$ where $\epsilon_{i_1\ldots i_{d-1}}$ is the spatial Levi-Civita symbol. The form $d\forall_{d-1}$ transforms like a vector with a lower time-index and hence is KK-invariant.
Define the spatial area $(d-2)$-form as $$\begin{split}
{\left ( d\Sigma_{d-2} \right )}_j &\equiv \gamma^{-1}\epsilon_{j i_1\ldots i_{d-2}} dx^{i_1}\otimes\ldots\otimes dx^{i_{d-2}} \\
&= \frac{1}{(d-2)!}\gamma^{-1}\epsilon_{ji_1\ldots i_{d-2}} dx^{i_1}\wedge\ldots\wedge dx^{i_{d-2}} \\
\end{split}$$ This transforms like a vector with a lower time-index and a lower spatial index but is antisymmetric in these two indices and is hence KK-invariant. The area $(d-2)$-form satisfies $$dx^i\wedge {\left ( d\Sigma_{d-2} \right )}_j = d\forall_{d-1}\ \delta^i_j$$
The Hodge-dual of a 1-form $J\equiv J_0 (dt+a)+ g_{ij} J^i dx^j$ is defined as $$\label{eq:HodgeDef}
\begin{split}
\bar{J}
&= -\gamma^2 J_0 d\forall_{d-1}- J^i(dt+a)\wedge {\left ( d\Sigma_{d-2} \right )}_i\\
\end{split}$$ This is defined such that $$\begin{split}
J'\wedge \bar{J} = \langle J',J \rangle (dt+a)\wedge d\forall_{d-1} = \langle J' ,J \rangle dt\wedge d\forall_{d-1}
\end{split}$$ In particular $$\begin{split}
d\bar{J} = {\left ( \nabla_\mu J^\mu \right )} dt\wedge d\forall_{d-1}
\end{split}$$ One often-useful formula is the following: $$\label{eq:HodgeDual}
\begin{split}
\bar{J} &= \hat{\mathcal{A}}\wedge(d\hat{\mathcal{A}})^{n-1} \\
&\qquad\text{is equivalent to}\\
J^\mu &= {\left [ \varepsilon \hat{\mathcal{A}}\ (\partial\hat{\mathcal{A}})^{n-1} \right ]}^\mu\\
\end{split}$$
Let us take another example, which will recur throughout our paper: say we are given that the Hodge dual of a 1-form $J\equiv J_0 (dt+a)+ g_{ij} J^i dx^j$ is $$-\bar{J} = A\wedge(da)^{m-1}(dA)^{n-m}+ A_0(dt+a)\wedge (da)^{m-1}(dA)^{n-m}$$ where $a=a_i dx^i$ and $A=A_i dx^i$ are two arbitrary 1-forms with only spatial components.
Then we can invert the Hodge-dual using the following statement $$\label{eq:InvertHodge}
\begin{split}
\bar{J} &= -A\wedge(da)^{m-1}(dA)^{n-m} - A_0(dt+a)\wedge (da)^{m-1}(dA)^{n-m}\\
&\qquad\text{is equivalent to}\\
J_0 &= \gamma^{-1} {\left [ \epsilon A(da)^{m-1}(dA)^{n-m} \right ]}\\
J^i &= \gamma A_0 {\left [ \epsilon(da)^{m-1}(dA)^{n-m} \right ]}^i\\
\end{split}$$
[^1]: The Wess-Zumino descent relations are dealt with in detail in various textbooks[@Weinberg:1996kr; @Bertlmann:1996xk; @Bastianelli:2006rx] and lecture notes [@Harvey:2005it; @Bilal:2008qx].
[^2]: It would be an impossible task to list all the references in the last few decades which have discovered (and rediscovered) such effects in free/weakly coupled theories in various disguises using a diversity of methods. See for example [@Vilenkin:1978hb] for what is probably the earliest study in $3+1d$. See [@Loganayagam:2012pz] for a recent generalisation to arbitrary dimensions.
[^3]: See for example [@Erdmenger:2008rm; @Banerjee:2008th; @Torabian:2009qk] for some of the initial holographic results.
[^4]: Time-independence at finite temperature and chemical potential essentially means we are doing a Euclidean field theory. Unlike the Lorentzian field theory (which often has light-hydrodynamic modes) the Euclidean field theory has very few light modes except probably the Goldstone modes arising out of spontaneous symmetry breaking. We thank Shiraz Minwalla for emphasising this point.
[^5]: We remind the reader that the anomalies of a theory living in $d=2n$ spacetime dimensions are succinctly captured by a $2n+2$ form living in *two dimensions higher*. This $2n+2$ form, called the anomaly polynomial (since it is a polynomial in external/background field strengths $\mathcal{F}$ and $\mathfrak{R}$), is related to the variation of the effective action $\delta W$ via *the descent relations* $$\mathcal{P}_{anom}=d\Gamma_{CS}\ ,\qquad \delta \Gamma_{CS}= d \delta W$$ We refer the reader to various textbooks[@Weinberg:1996kr; @Bertlmann:1996xk; @Bastianelli:2006rx] and lecture notes [@Harvey:2005it; @Bilal:2008qx] for a more detailed exposition.
[^6]: Since all relativistic theories only have integer powers of Pontryagin forms the constants $C_{m}$ should vanish whenever $m$ is even. As we shall see later, another way to arrive at the same conclusion is to impose CPT invariance.
[^7]: For similar discussions, see for example [@Jensen:2012jh; @Jensen:2012jy].
[^8]: See, for example, Section 3 of [@Landsteiner:2011tf] for a discussion of some of the subtleties.
[^9]: One required identity is, $$\hat{\mathcal{A}}_{\alpha}
\varepsilon^{\mu_1 \nu_1 \ldots\mu_n\nu_n} \mathcal{F}_{\mu_1 \nu_1} \ldots \mathcal{F}_{\mu_n \nu_n}
= 2n\ \hat{\mathcal{A}}_{\mu_1}
\varepsilon^{\mu_1 \nu_1 \mu_2\nu_2 \ldots\mu_n\nu_n} \mathcal{F}_{\alpha \nu_1} \mathcal{F}_{\mu_2 \nu_2}\ldots \mathcal{F}_{\mu_n \nu_n}$$ in arbitrary $2n$ dimensions.
[^10]: As we were finalising this manuscript, a paper[@Valle:2012em] dealing with $1+1$d gravitational anomalies appeared in arXiv. We thank Amos Yarom for various discussions regarding this topic.
[^11]: We warn the reader that the wedge notation in [@Banerjee:2012iz] differs from the one we use by numerical factors. So the comparisons are to be made *after* converting to explicit components to avoid confusion.
[^12]: Note that authors of [@Jain:2012rh] set the CPT-violating coefficient $C^{d=2}_0= -C_1=0 $ in most of their analysis. This fact has to be accounted for during the comparison.
[^13]: We remind the reader that $\gamma\equiv e^{-\sigma}$ and $d\forall_{d-1} = d^{d-1}x \sqrt{-\det\ g_d}$.
|
---
abstract: |
In this paper we establish function field versions of two classical conjectures on prime numbers. The first says that the number of primes in intervals $(x,x+x^\epsilon]$ is about $x^\epsilon/\log x$ and the second says that the number of primes $p<x$, $p \equiv a \pmod d$, for $d^{1+\delta}<x$, is about $\frac{\pi(x)}{\phi(d)}$.
More precisely, we prove: Let $1\leq m<k$ be integers, let $q$ be a prime power, and let $f$ be a monic polynomial of degree $k$ with coefficients in ${\mathbb{F}}_q$. Then there is a constant $c(k)$ such that the number $N$ of prime polynomials $g=f+h$ with $\deg h\leq m$ satisfies $|N-q^{m+1}/k|\leq c(k)q^{m+\frac{1}{2}}$. Here we assume $m\geq 2$ if $\gcd(q,k(k-1))>1$ and $m\geq 3$ if $q$ is even and $\deg f'\leq 1$. We show that this estimate fails in the omitted cases.
Let $\pi_q(k)$ be the number of monic prime polynomials of degree $k$ with coefficients in ${\mathbb{F}}_q$. For relatively prime $f,D\in {\mathbb{F}}_q[t]$ we prove that the number $N'$ of monic prime polynomials $g\equiv f\pmod D$ of degree $k$ satisfies $|N'-\frac{\pi_q(k)}{\phi(D)}|\leq c(k)\frac{\pi_q(k)q^{-1/2}}{\phi(D)}$, as long as $1\leq\deg D\leq k-3$ (or $\leq k-4$ if $p=2$ and $(f/D)'$ is constant).
We also generalize these results to other factorization types.
author:
- 'Efrat Bank [^1]'
- 'Lior Bary-Soroker [^2]'
- 'Lior Rosenzweig [^3]'
title: Prime polynomials in short intervals and in arithmetic progressions
---
Introduction
============
We study two function field analogues of two classical problems in number theory concerning the number of primes in short intervals and in arithmetic progressions. We first introduce the classical problems and then formulate the results in function fields.
Primes in short intervals
-------------------------
Let $\pi(x)=\#\{ 0<p\leq x\mid p\mbox{ is a prime}\}$ be the prime counting function. By the Prime Number Theorem (PNT) $$\pi(x) \sim \frac{x}{\log x}, \quad x\to \infty.$$ Therefore one may expect that an interval $I=(x,x+\Phi(x)]$ of size $\Phi(x)$ starting at a large $x$ contains about $\Phi(x)/\log x$ primes, i.e.$$\label{eq:primes_short}
\pi(I) := \pi(x+\Phi(x)) - \pi(x) \sim \frac{\Phi(x)}{\log x}.$$ From PNT holds for $\Phi(x) \sim c x$, for any fixed $0<c<1$. By Riemann Hypothesis holds for $\Phi(x) \sim \sqrt {x} \log x$ or even $\Phi(x) \sim \epsilon \sqrt {x \log x}$ assuming a strong form of Montgomery’s pair correlation conjecture [@HeathBrownGoldston1984]. Concerning smaller powers of $x$ Granville conjectures [@Granville1995 p. 7] that
\[conj:si\] If $\Phi(x)>x^\epsilon$ then holds.
But even for $\Phi(x)=\sqrt{x}$ Granville says [@Granville2010 p. 73]:
> *we know of no approach to prove that there are primes in all intervals $[x, x + \sqrt x]$.*
Heath-Brown [@HeathBrown1988Crelle], improving Huxley [@Huxley1972inv], proves Conjecture \[conj:si\], unconditionally, for $x^{\frac{7}{12}- \epsilon(x)}\leq \Phi(x) \leq \frac{x}{\log^4x}$, where $\epsilon (x) \to 0$.
We note that for extremely short intervals (e.g. for $\Phi(x) = \log x \frac{\log\log x \log\log\log\log x}{\log\log\log x}$) fails [@Rankin1938] uniformly, but may hold for almost all $x$, see [@Selberg1943] and the survey [@Granville2010 Section 4].
Primes in arithmetic progressions
---------------------------------
Let $\pi(x;d,a)$ denote the number of primes $p\leq x$ such that $p\equiv a\pmod d$. Then the Prime Number Theorem for arithmetic progressions says that if $a$ and $d$ are relatively prime and fixed, then $$\label{eq:primesinAP}
\pi(x;d,a) \sim \frac{\pi(x)}{\phi(d)}, \quad x\to \infty,$$ where $\pi(x)$ is the prime counting function and $\phi(d)$ is the Euler totient function, giving the number of positive integers $i$ up to $d$ with $\gcd(i,d)=1$.
In many applications it is crucial to allow the modulus $d$ to grow with $x$. The interesting range is $d<x$ since if $d\geq x$, there can be at most one prime in the arithmetic progression $p\equiv i\pmod d$. A classical conjecture is the following (in a slightly different form see [@MontgomeryVaughan Conjecture 13.9]).
\[conj:AP\] For every $\delta >0$, holds in the range $d^{1+\delta}<x$.
Concerning results on this conjecture Granville says [@Granville2010 p. 69]:
> *$\ldots$ the best proven results have $x$ bigger than the exponential of a power of $q$ *(Granville’s $q$ is our $d$)* far larger than what we expect. If we are prepared to assume the unproven Generalized Riemann Hypothesis we do much better, being able to prove that the primes up to $q^{2+\delta}$ are equally distributed amongst the arithmetic progressions $\mod q$, for $q$ sufficiently large, though notice that this is still somewhat larger than what we expect to be true.*
In this work we establish function field analogues of Conjectures \[conj:si\] and \[conj:AP\] for certain intervals of parameters $\epsilon,\delta$ which may be arbitrarily small, in particular breaking the barriers $\epsilon = 1/2$ in the former and $\delta =1$ in the latter. This indicates that Conjectures \[conj:si\] and \[conj:AP\] should hold. A crucial ingredient is a type of Hilbert’s irreducibility theorem over finite fields [@BarySoroker2012].
Results in function fields
==========================
Let $\mathcal{P}_{\leq k}$ be the space of polynomials of degree at most $k$ over ${\mathbb{F}}_q$ and $\mathcal{M}(k,q)\subseteq \mathcal{P}_{\leq k}$ the subset of monic polynomials of degree $k$. If $\deg f=k$, we let $\|f\| = q^{k}$.
Short intervals
---------------
Let $\pi_q(k) = \# \{g\in \mathcal{M}(k,q) \mid g \mbox{ is a prime polynomial}\}$ be the prime polynomial counting function. The Prime Polynomial Theorem (PPT) asserts that $$\pi_q(k) = \frac{q^k}{k} + O\bigg(\frac{q^{k/2}}{k}\bigg).$$
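For small parameters the PPT is easy to test against Gauss's exact count $\pi_q(k)=\frac{1}{k}\sum_{d\mid k}\mu(d)q^{k/d}$; the following is a minimal numerical sketch (ours, with arbitrarily chosen $q$ and $k$):

```python
# Numerical look (ours) at the PPT error term, using Gauss's exact formula
# pi_q(k) = (1/k) * sum_{d | k} mu(d) q^{k/d} for monic irreducibles of degree k.
from sympy import divisors, factorint

def mobius(n):
    f = factorint(n)
    return 0 if any(e > 1 for e in f.values()) else (-1)**len(f)

def pi_q(k, q):
    return sum(mobius(d)*q**(k//d) for d in divisors(k))//k

q, k = 9, 4   # arbitrary small example; q can be any prime power here
print(pi_q(k, q), q**k/k)   # 1620 vs 1640.25; the gap is well below q^{k/2} = 81
```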
An interval $I$ around $f\in \mathcal{M}(k,q)$ is defined as $$I=I(f,m) =\{g\in{\mathbb{F}}_q[t]\mid \|f-g\| \leq q^m\} = f + \mathcal{P}_{\leq \lfloor m\rfloor}.$$ If $m\geq k$, then $I(f,m)=\mathcal{P}_{\leq m}$, and so the PPT gives the number of primes there. The interesting intervals are the *short intervals*, i.e. when $m<k$. In particular $\mathcal{M}(k,q)=I(t^k,k-1)$. We note that all the polynomials in a short interval around a monic polynomial are monic.
For a short interval $I$ let $\pi_q(I) = \# \{g\in I \mid g \mbox{ is a prime polynomial}\}$. The expected analogy to is $$\label{eq:prime_polynomial_short}
\pi_q(I(f,m)) \sim \frac{|I(f,m)|}{k} =\frac{q^{\lfloor m \rfloor+1}}{k},$$ for $f\in \mathcal{M}(k,q)$ and $0<m<k$.
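For very small parameters this expectation can be examined by brute force; a sketch (ours), with an arbitrarily chosen $f$ and $q=5$, $k=4$, $m=1$:

```python
# Brute-force count (ours) of prime polynomials in a short interval I(f, m):
# enumerate g = f + h with deg h <= m and test irreducibility.
from itertools import product
from sympy import Poly, Symbol, GF

t = Symbol('t')
q, k, m = 5, 4, 1
f = Poly(t**4 + 2*t + 1, t, domain=GF(q))     # an arbitrary monic f of degree k

count = 0
for coeffs in product(range(q), repeat=m + 1):
    h = Poly(sum(c*t**i for i, c in enumerate(coeffs)), t, domain=GF(q))
    if (f + h).is_irreducible:
        count += 1

print(count, q**(m + 1)/k)                    # compare with q^{m+1}/k = 6.25
```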
Keating and Rudnick [@KeatingRudnick2012] study the variance of primes in short intervals in the limit $q\to \infty$. From their result it follows in a standard way that holds almost everywhere for $m\leq k-3$, see Appendix A for details.
We show that holds everywhere:
\[thm:main\] Let $k$ be a positive integer. Then there exists a constant $c(k)>0$ depending only on $k$ such that for any
- prime power $q=p^\nu$,
- integer $1\leq m<k$, and
- a short interval $I=I(f,m)$ around $f\in \mathcal{M}(k,q)$
we have $$\left|\pi_q(I) - \frac{q^{m+1}}{k}\right| \leq c(k) q^{m+\frac{1}{2}},$$ provided $2\leq m$ if $p\mid k(k-1)$ and provided $3\leq m$ if $p=2$ and $\deg f'\leq 1$.
To compare with Conjecture \[conj:si\] we note that $x$ corresponds to $q^k$, hence an interval of length $x^\epsilon$ corresponds to $I(f,\epsilon k)$, $f\in \mathcal{M}(k,q)$. Thus for any fixed $k$, for every $\frac{3}{k}\leq \epsilon \leq 1$, and for every sequence of intervals $I_q=I_q(f_q,\epsilon k)$, $$\pi_q(I_q) \sim \frac{|I_q|}{k}, \quad q\to \infty.$$ (In fact it is possible to consider $\epsilon \geq \frac{1}{k}$ for those intervals $I_q$, $q=p^\nu$, for which $p\nmid k(k-1)$ and $\epsilon \geq \frac{2}{k}$ if $p\neq 2$ or $p=2$ and $\deg f_q'\geq 2$.) The conclusion is that a precise analogue of Conjecture \[conj:si\] for $\frac{3}{k}\leq \epsilon\leq 1$ holds. In particular we go below the barrier $\epsilon =\frac{1}{2}$, and by enlarging $k$, $\epsilon $ can be made arbitrarily small.
In Section \[sec:ce\] we discuss the cases which are not included in Theorem \[thm:main\] by studying the intervals $I(t^k,m)$. In particular we show that fails if $m=0$, or if $m=1$ and $p\mid k(k-1)$. We do not know whether holds true in the remaining case $p=m=2$ and $\deg f'\leq 1$.
Primes in arithmetic progressions
---------------------------------
For relatively prime $f,D\in {\mathbb{F}}_q[t]$ let $$\pi_q(k;D ,f)=\#\{h=f+Dg\in \mathcal{M}(k,q)\mid h\mbox{ is a prime polynomial}\}.$$ The Prime Polynomial Theorem for arithmetic progressions says that $$\label{eq:PPTAP}
\pi_q(k;D ,f) = \frac{\pi_q(k)}{\phi(D)} + O\bigg(\frac{q^{k/2}}{k}\deg D\bigg).$$ Here $\phi(g)$ is the function field Euler totient function, giving the number of units in ${\mathbb{F}}_q[t]/g{\mathbb{F}}_q[t]$.
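Such counts can likewise be examined directly for very small parameters; a brute-force sketch (ours, with arbitrarily chosen $q$, $k$, $D$ and $f$):

```python
# Brute-force count (ours) of monic prime polynomials of degree k in the
# arithmetic progression h = f (mod D), for an irreducible modulus D.
from itertools import product
from sympy import Poly, Symbol, GF

t = Symbol('t')
q, k = 5, 5
D = Poly(t**2 + 2, t, domain=GF(q))           # irreducible modulus, deg D = 2
f = Poly(t + 1, t, domain=GF(q))              # residue, coprime to D
phi_D = q**2 - 1                              # D irreducible => phi(D) = q^2 - 1

count = 0
for coeffs in product(range(q), repeat=k - D.degree()):
    g = Poly(sum(c*t**i for i, c in enumerate(coeffs)) + t**(k - D.degree()),
             t, domain=GF(q))                 # monic quotient of degree k - deg D
    if (f + D*g).is_irreducible:              # h = f + D*g is monic of degree k
        count += 1

print(count, (q**k/k)/phi_D)                  # compare with pi_q(k)/phi(D)
```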
In analogy to the classical case we want to allow $\deg D$ to grow with $k$. The interesting range of parameters is $\deg D<k$, because if $\deg D\geq k$, there is at most one monic prime of degree $k$ in the arithmetic progression $h\equiv f\mod D$.
We note that $$\phi(D) \sim q^{\deg D}, \quad q\to \infty.$$ Therefore, if $2\deg D < k-\delta$, then gives that $$\pi_q(k;D,f) \sim \frac{\pi_q(k)}{\phi(D)}, \quad q\to \infty.$$ On the other hand gives nothing when $2\deg D\geq k$.
In analogy with one may expect that $$\label{eq:PPTAPlarge}
\pi_q(k;D,f) \sim \frac{\pi_q(k)}{\phi(D)}$$ as long as $(1+\delta)\deg D \leq k$.
Keating and Rudnick [@KeatingRudnick2012] calculate the variance of the number of primes in arithmetic progressions in function fields. From their work holds true almost everywhere, in a standard way.
We show everywhere:
\[thm:main2\] Let $k$ be a positive integer. Then there exists a constant $c(k)>0$ depending only on $k$ such that for any
- prime power $q=p^\nu$,
- $2\leq m<k$
- monic modulus $D\in {\mathbb{F}}_q[t]$ with $\deg D = k-m-1$,
- and $f\in {\mathbb{F}}_q[t]$,
we have $$\left|\pi_q(k;D,f) - \frac{\pi_q(k)}{\phi(D)}\right| \leq c(k) q^{m+\frac{1}{2}},$$ provided $(f/D)'$ is not constant if $p=m=2$. (Note that $\frac{\pi_q(k)}{\phi(D)}\sim\frac{q^{k-\deg D}}{k} =\frac{q^{m+1}}{k} $ as $q\to \infty$. )
To compare with Conjecture \[conj:AP\] we note that $x$ corresponds to $q^k$, $d$ corresponds to $q^{\deg D}$ and the condition $d^{(1+\delta)}<x$ translates to $(1+\delta) \deg D< k$. Thus for any fixed $k$ and $\frac{4}{k-4}\leq \delta$ and for any sequence of $(D_q,f_q)_q$ with $D_q, f_q \in {\mathbb{F}}_q[t]$ such that $D_q$ is monic and $(1+\delta) \deg D_q < k$ we have $$\pi_q(k;D_q,f_q) \sim \frac{{\pi_q(k)}}{\phi(D_q)}, \quad q\to \infty.$$ (In fact we may take $\frac{3}{k-3}\leq \delta$ if $q$ is odd or if $(f_q/D_q)'$ is not constant.) The conclusion is that a perfect analogue of Conjecture \[conj:AP\] for $\frac{4}{k-4}\leq \delta$ holds. In particular we go below the barrier $\delta = 1$ and by enlarging $k$, $\delta$ can be made arbitrarily small. This indicates that Conjecture \[conj:AP\] should hold for any $\delta >0$.
Other factorization types
-------------------------
Our method allows us to count polynomials with any given factorization type. Let us start by setting up the notation.
The degrees of the primes in the factorization of a polynomial $f\in {\mathbb{F}}_q[t]$ into a product of prime polynomials give a partition of $\deg f$, denoted by $\lambda_f$. Similarly the lengths of the cycles in the factorization of a permutation $\sigma\in S_k$ into a product of disjoint cycles induce a partition, $\lambda_\sigma$, of $k$. For a partition $\lambda$ of $k$ we denote the probability for $\sigma\in S_k$ to have $\lambda_\sigma=\lambda$ by $$\label{eq:cycle_type}
P(\lambda)=\frac{\#\{ \sigma\in S_k\mid \lambda_\sigma = \lambda\}}{k!}.$$ We note that if $\lambda$ is the partition into one part, then $\lambda_f=\lambda$ if and only if $f$ is prime and $P(\lambda)=\frac{(k-1)!}{k!}=\frac{1}{k}$.
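For explicit computations $P(\lambda)$ is conveniently evaluated from the standard cycle-counting formula $P(\lambda)=\prod_{j}\frac{1}{m_j!\, j^{m_j}}$, where $m_j$ is the number of parts of $\lambda$ equal to $j$; a minimal sketch (ours):

```python
# Evaluate P(lambda) = #{sigma in S_k : lambda_sigma = lambda} / k! from the
# standard formula prod_j 1/(m_j! * j**m_j), m_j = multiplicity of the part j.
from collections import Counter
from math import factorial, prod

def P(partition):
    mult = Counter(partition)
    return 1 / prod(factorial(m) * j**m for j, m in mult.items())

print(P([4]))        # 1/4: the k-cycles, i.e. irreducible quartics
print(P([2, 1, 1]))  # 1/4: one quadratic and two linear factors
```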
Let $k$ be a positive integer and $\lambda$ a partition of $k$. For a short interval $I=I(f,m)$ with $f\in \mathcal{M}(k,q)$ we define the counting function $$\pi_q(I;\lambda) = \# \{g\in I \mid \lambda_g=\lambda \}.$$
We generalize Theorem \[thm:main\]:
\[thm:mainpart\] Let $k$ be a positive integer. Then there exists a constant $c(k)>0$ depending only on $k$ such that for any
- partition $\lambda$ of $k$,
- prime power $q=p^\nu$,
- integer $1\leq m<k$, and
- a short interval $I=I(f,m)$ around $f\in \mathcal{M}(k,q)$
we have $$\left|\pi_q(I;\lambda) - P(\lambda)q^{m+1}\right| \leq c(k) q^{m+\frac{1}{2}},$$ provided $2\leq m$ if $p\mid k(k-1)$ and provided $3\leq m$ if $p=2$ and $\deg f'\leq 1$.
For relatively prime $f,D\in {\mathbb{F}}_q[t]$ with $D$ monic we define the counting function $$\pi_q(k;D,f;\lambda) = \#\{g \equiv f\pmod D\mid \deg g=k \mbox{ and } \lambda_g=\lambda\}.$$
We generalize Theorem \[thm:main2\]:
\[thm:main2part\] Let $k$ be a positive integer. Then there exists a constant $c(k)>0$ depending only on $k$ such that for any
- partition $\lambda$ of $k$,
- prime power $q=p^\nu$,
- $2\leq m<k$
- monic modulus $D\in {\mathbb{F}}_q[t]$ with $\deg D = k-m-1$, and
- $f\in {\mathbb{F}}_q[t]$,
we have $$\left|\pi_q(k;D,f;\lambda) - \frac{\pi_q(k;\lambda)}{\phi(D)}\right| \leq c(k) q^{m+\frac{1}{2}},$$ provided $(f/D)'$ is not constant if $p=m=2$. (Note that $\frac{\pi_q(k;\lambda)}{\phi(D)}\sim P(\lambda)q^{k-\deg D} =P(\lambda)q^{m+1}$ as $q\to \infty$. )
Auxiliary results
=================
Specializations
---------------
We briefly recall some definitions and basic facts on specializations, see [@BarySoroker2012 Section 2.1] for more details and proofs. Let
- $K$ be a field with algebraic closure $\tilde{K}$,
- ${\mathop{\rm Gal}}(K)={\rm Aut}(\tilde{K}/K)$ the absolute Galois group of $K$,
- $W={\mathop{\rm Spec}}S$ and $V={\mathop{\rm Spec}}R$ absolutely irreducible smooth affine $K$-varieties,
- $\rho\colon W\to V$ a finite separable morphism which is generically Galois,
- $F/E$ the function field Galois extension that corresponds to $\rho$,
- a $K$-rational point $\mathfrak{p}\in V(K)$ that is étale in $W$, and
- $\mathfrak{P}\in \rho^{-1}(\mathfrak{p})$.
Then $\mathfrak{p}$ induces a homomorphism $\phi_{\mathfrak{p}}\colon R\to K$ that extends to a homomorphism $\phi_{\mathfrak{P}}\colon S\to \tilde{K}$ (via the inclusion $R\to S$ induced by $\rho$). Since $\mathfrak{p}$ is étale in $W$, we have a homomorphism $\mathfrak{P}^*\colon {\mathop{\rm Gal}}(K)\to {\mathop{\rm Gal}}(F/E)$ such that $$\label{eq:geosol}
\phi_{\mathfrak{P}}(\mathfrak{P}^*(\sigma) (x)) = \sigma(\phi_{\mathfrak{P}}(x)), \quad \forall x\in S,\ \forall \sigma\in {\mathop{\rm Gal}}(K).$$ For every other $\mathfrak{Q}\in \rho^{-1}(\mathfrak{p})$ there is $\tau\in {\mathop{\rm Gal}}(F/E)$ such that $\phi_{\mathfrak{Q}} = \phi_{\mathfrak{P}}\circ \tau$. Thus, by , $\mathfrak{Q}^* = \tau^{-1}\mathfrak{P}^*\tau$ and vice-versa every $\tau^{-1}\mathfrak{P}^*\tau$ comes from a point $\mathfrak{Q}\in \rho^{-1}(\mathfrak{p})$ . Hence $\mathfrak{p}^* = \{\mathfrak{Q}^* \mid \mathfrak{Q}\in \rho^{-1}(\mathfrak{p})\}$ is the orbit of $\mathfrak{P}^*$ under the conjugation action of ${\mathop{\rm Gal}}(F/E)$.
The key ingredients in the proof of the following proposition are the Lang-Weil estimates [@LangWeil1954 Theorem 1] and the field crossing argument (as utilized in [@BarySoroker2012 Proposition 2.2]).
\[prop:irrsub\] Let $k$, $m$, and $B$ be positive integers, let $\lambda$ be a partition of $k$, let ${\mathbb{F}}$ be an algebraic closure of ${\mathbb{F}}_q$, and let $\mathcal{F}\in {\mathbb{F}}_q[A_0, \ldots, A_{m}, t]$ be a polynomial with $\deg \mathcal{F}\leq B$ and $\deg_t \mathcal{F}=k$. Assume that $\mathcal{F}$ is separable in $t$ and $${\mathop{\rm Gal}}(\mathcal{F}, {\mathbb{F}}(A_0, \ldots, A_{m})) = S_k.$$ Then there is a constant $c(m,B)$ that depends only on $m$ and $B$ such that if we denote by $N = N(\mathcal{F},q)$ the number of $(a_0, \ldots, a_{m}) \in \mathbb{F}_q^{m+1}$ such that $f=\mathcal{F}(a_0, \ldots, a_{m},t)$ has factorization type $\lambda_f=\lambda$, then $$\left|N-P(\lambda)q^{m+1}\right| \leq c(m,B) q^{m+1/2},$$ where $P(\lambda)$ is defined in .
Let $\mathbf{A}=(A_0,\ldots, A_m)$ and $F$ the splitting field of $\mathcal{F}$ over ${\mathbb{F}}_q(\mathbf{A})$. Since $$S_k={\mathop{\rm Gal}}(\mathcal{F}, {\mathbb{F}}(\mathbf{A}))={\mathop{\rm Gal}}(F\cdot{\mathbb{F}}/{\mathbb{F}}(\mathbf{A})) \leq {\mathop{\rm Gal}}(F/ {\mathbb{F}}_q(\mathbf{A}))\leq S_k,$$ all inequalities are in fact equalities and ${\mathbb{F}}_q=F\cap {\mathbb{F}}$. In particular, $\alpha\colon {\mathop{\rm Gal}}(F/{\mathbb{F}}_q(\mathbf{A})) \to {\mathop{\rm Gal}}(F\cap {\mathbb{F}}/{\mathbb{F}}_q)=1$, so $$\label{eq:ker}
\ker \alpha = S_k.$$
Since ${\mathop{\rm Gal}}({\mathbb{F}}_q) =\langle \varphi \rangle \cong \hat{{\mathbb{Z}}}$ with $\varphi$ being the Frobenius map $x\mapsto x^q$, the homomorphisms $\theta\colon {\mathop{\rm Gal}}({\mathbb{F}}_q)\to S_k$ can be parametrized by permutations $\sigma \in S_k$. Explicitly, each $\sigma\in S_k$ gives rise to $\theta_\sigma\colon {\mathop{\rm Gal}}({\mathbb{F}}_q)\to S_k$ defined by $\theta_\sigma(\varphi)=\sigma$. Let $\mathcal{C}$ be the conjugacy class of all permutations $\sigma$ with $\lambda_\sigma=\lambda$ and let $\Theta = \{ \theta_\sigma\mid \sigma\in \mathcal{C}\}$. Fix $\theta\in \Theta$. Clearly $\#\Theta=\#\mathcal{C}$, so by we have $$\label{eq:P}
\frac{\#\ker\alpha}{\#\Theta } = \frac{\#S_k}{\#\mathcal{C}}=\frac{1}{P(\lambda)}.$$
Let $Z$ be the closed subset of $\mathbb{A}^{m+1}={\mathop{\rm Spec}}{\mathbb{F}}_q[\mathbf{A}]$ defined by $D={\mathop{\rm disc}}_t(\mathcal{F})= 0$ and $V=\mathbb{A}^{m+1}\smallsetminus Z ={\mathop{\rm Spec}}{\mathbb{F}}_q[\mathbf{A},D^{-1}]$. By assumption $\mathcal{F}$ is separable in $t$, so $D$ is a nonzero polynomial of degree depending only on $B$. By [@LangWeil1954 Lemma 1], there exists a constant $c_1=c_1(m,B)$ such that $$\label{eq:bddddd}
\#Z({\mathbb{F}}_q)\leq c_1 q^m.$$ Let $u_1, \ldots, u_k$ be the roots of $\mathcal{F}$ in some algebraic closure of ${\mathbb{F}}(A_0, \ldots, A_{m})$ and let $W = {\mathop{\rm Spec}}{\mathbb{F}}_q[u_1, \ldots, u_k, D^{-1}] \subseteq \mathbb{A}^{k+1}$. Then $W$ is an irreducible smooth affine ${\mathbb{F}}_q$-variety of degree bounded in terms of $B=\deg \mathcal{F}$ and the embedding ${\mathbb{F}}_q[\mathbf{A},D^{-1}]\to {\mathbb{F}}_q[u_1, \ldots, u_k, D^{-1}]$ induces a finite separable étale morphism $\rho\colon W\to V$.
We apply [@BarySoroker2012 Proposition 2.2] to get an absolutely irreducible smooth ${\mathbb{F}}_q$-variety $\widehat{W}$ together with a finite separable étale morphism $\pi\colon \widehat{W}\to V$ with the following properties:
1. \[cond:11\] Let $U\subseteq V({\mathbb{F}}_q)$ be the set of $\mathfrak{p}\in V({\mathbb{F}}_q)$ that are étale in $W$ and such that $\mathfrak{p}^*=\Theta$. Then $\pi(\widehat{W}({\mathbb{F}}_q)) = U$.
2. \[cond:22\] For every $\mathfrak{p}\in U$, $$\#(\pi^{-1}(\mathfrak{p})\cap \widehat{W}({\mathbb{F}}_q)) = \frac{\#\ker\alpha}{\#\Theta } =\frac{1}{P(\lambda)}.$$ (See for the last equality.)
By the construction of $\widehat{W}$ in *loc. cit.* it holds that $\widehat{W}_L=W_L$, for some finite extension $L/{\mathbb{F}}_q$ (where subscript $L$ indicates the extension of scalars to $L$). Hence $\widehat{W}$ and $W$ have the same degree, which is bounded in terms of $B$. Thus, by [@LangWeil1954 Theorem 1], there is a constant $c_2=c_2(m,B)$ such that $$\label{eq:LWbd}
|\#\widehat{W}({\mathbb{F}}_q)-q^{m+1}| \leq c_2 q^{m+1/2}.$$ Applying gives $\#\pi(\widehat{W}({\mathbb{F}}_q))=P(\lambda)\cdot \#\widehat{W}({\mathbb{F}}_q)$. So multiplying by $P(\lambda)$ implies $$\label{eq:bdd}
|\#\pi(\widehat{W}({\mathbb{F}}_q))-P(\lambda)q^{m+1}| \leq P(\lambda)c_2 q^{m+1/2}\leq c_2q^{m+1/2}.$$
For $\mathfrak{p}=(a_0,\ldots, a_m)\in V({\mathbb{F}}_q)\subseteq {\mathbb{F}}_q^{m+1}$ we have $\mathfrak{p}^* = \Theta$ if and only if the orbit type of $\mathfrak{p}^*$ is $\lambda$ (in the sense of [@BarySoroker2012 p. 859]). Thus $\lambda_{\mathcal{F}(a_0,\ldots, a_m,t)}=\lambda$ if and only if $\mathfrak{p}^* = \Theta$ ([@BarySoroker2012 Lemma 2.1]). Let $$X = \{\mathfrak{p}=(a_0,\ldots, a_m)\in {\mathbb{F}}_q^{m+1} \mid \lambda_{\mathcal{F}(a_0,\ldots, a_m,t)}=\lambda \}.$$ Then $N=\#X$. Equation gives $X\cap V({\mathbb{F}}_q) = \pi(\widehat{W}({\mathbb{F}}_q))$. Since $V=\mathbb{A}^{m+1} \smallsetminus Z$, it follows from and that $$\begin{aligned}
\left|N-P(\lambda)q^{m+1}\right| &=& \left|\#X-P(\lambda)q^{m+1}\right|\\
&=&\left|\#(X\cap V({\mathbb{F}}_q)) +\#(X\cap Z({\mathbb{F}}_q)) -P(\lambda)q^{m+1}\right|\\
& \leq& \left|\#(X\cap V({\mathbb{F}}_q))-P(\lambda)q^{m+1}\right| + \#(X\cap Z({\mathbb{F}}_q)) \\
&\leq &\left|\#\pi(\widehat{W}({\mathbb{F}}_q))-P(\lambda)q^{m+1}\right| + \#Z({\mathbb{F}}_q) \\
&\leq & c_2q^{m+1/2} + c_1 q^{m} \leq c(m,B) q^{m+1/2},\end{aligned}$$ where $c=c_1+c_2$.
Calculating a Galois Group
--------------------------
\[lem:separableirreducible\] Let $F$ be an algebraically closed field, $\mathbf{A}=(A_0,\ldots, A_m)$ an $(m+1)$-tuple of variables with $m\geq 1$, and $f,g\in F[t]$ relatively prime polynomials. Then $\mathcal{F}(\mathbf{A},t)=f+g(\sum_{i=0}^m A_it^i)$ is separable in $t$ and irreducible in the ring $F(\mathbf{A})[t]$.
Since $\mathcal{F}$ is linear in $A_0$ and since $f,g$ are relatively prime, it follows that $\mathcal{F}$ is irreducible in $F[\mathbf{A},t]$, hence by Gauss’ lemma also in $F(\mathbf{A})[t]$. Take $\alpha\in F$ with $g(\alpha)\neq 0$. Then $$\mathcal{F}'(\alpha)=f'(\alpha) +g'(\alpha)\sum_{i=0}^m A_i\alpha^i + g(\alpha)\sum_{i=1}^m iA_i\alpha^{i-1},$$ in which the coefficient of $A_0$ is $g'(\alpha)$ and the coefficient of $A_1$ is $g'(\alpha)\alpha+g(\alpha)$; since $g(\alpha)\neq 0$ these cannot both vanish, hence $\mathcal{F}'\neq 0$, so $\mathcal{F}$ is separable.
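A concrete instance of the lemma can be verified symbolically; in the following sketch (ours) the choices $f=t^3$, $g=t+1$, $m=1$ are arbitrary and the computation is done over $\mathbb{Q}$ for illustration:

```python
# Check (ours) one instance of the lemma: F = f + g*(A0 + A1*t) with f = t^3,
# g = t + 1 is irreducible as a multivariate polynomial (hence, by Gauss' lemma,
# over Q(A0, A1)[t]) and separable in t (nonzero discriminant).
import sympy as sp

t, A0, A1 = sp.symbols('t A0 A1')
f, g = t**3, t + 1
F = f + g*(A0 + A1*t)

print(sp.factor_list(F))            # a single irreducible factor
print(sp.discriminant(F, t) != 0)   # True: separable in t
```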
\[lem:dt\] Let $F$ be an algebraically closed field, $\mathbf{A}=(A_0,\ldots, A_m)$ an $(m+1)$-tuple of variables with $m\geq 2$, and $f,g\in F[t]$ relatively prime polynomials with $\deg f>\deg g$. The Galois group $G$ of $\mathcal{F}(\mathbf{A},t)=f+g(\sum_{i=0}^m A_it^i)$ over $F(\mathbf{A})$ is doubly transitive (with respect to the action on the roots of $\mathcal{F}$).
By replacing $t$ by $t+\alpha$, where $\alpha\in F$ is a root of $f$, we may assume that $f(0)=0$, hence $f_0(t)=f(t)/t$ is a polynomial. By Lemma \[lem:separableirreducible\] the group $G$ is transitive. The image of $\mathcal{F}$ under the substitution $A_0=0$ is $$\bar{\mathcal{F}}=f+g \big(\sum_{i=1}^m A_it^i\big)=t\big( f_0 +g \big(\sum_{i=1}^{m} A_it^{i-1}\big)\big).$$ Lemma \[lem:separableirreducible\] then gives that $ f_0 +g \big(\sum_{i=1}^{m}A_it^{i-1}\big) $ is separable and irreducible. Hence the stabilizer of the root $t=0$ in the Galois group of $\bar{\mathcal{F}}$ acts transitively on the other roots. But since $\bar{\mathcal{F}}$ is separable, its Galois group embeds into $G$, so the stabilizer of a root of $\mathcal{F}$ in $G$ acts transitively on the remaining roots. Thus $G$ is doubly transitive.
For a rational function $\psi(t)\in F(t)$ the first and second Hasse-Schmidt derivatives of $\psi$ are denoted by $\psi'$ and $\psi^{[2]}$, respectively, and defined by $$\psi(t+u) \equiv\psi(t) +\psi'(t)u+\psi^{[2]}(t)u^2 \mod u^3.$$ A trivial observation is that $\psi'$ is the usual derivative of $\psi$ and, if the characteristic of $F$ is not $2$, then $\psi^{[2]}=\frac{1}{2}\psi''$.
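Concretely, $\psi^{[2]}$ can be read off from the Taylor expansion even in characteristic $2$, where $\frac{1}{2}\psi''$ is meaningless; a minimal sketch (ours, for the arbitrary choice $\psi=t^4+t+1$ in characteristic $0$):

```python
# Compute (ours) the second Hasse-Schmidt derivative as the coefficient of u^2
# in psi(t + u), and compare with psi''/2 (valid away from characteristic 2).
import sympy as sp

t, u = sp.symbols('t u')
psi = t**4 + t + 1

expansion = sp.expand(psi.subs(t, t + u))
hs2 = expansion.coeff(u, 2)          # psi^{[2]}(t)
print(hs2)                           # 6*t**2
print(sp.diff(psi, t, 2)/2)          # 6*t**2 as well
```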
\[lem:morse\] Let $\psi(t)\in F(t)$ be a rational function with $\psi^{[2]}$ nonzero and $A_1$ a variable. Then $\psi'(t)+A_1$ and $\psi^{[2]}(t)$ have no common zeros.
This is obvious since the roots of $\psi'+A_1$ are transcendental over $F$, while those of $\psi^{[2]}$ are algebraic.
\[lem:excellent\] Let $F$ be an algebraically closed field of characteristic $p\geq 0$, $m\geq 2$, $\mathbf{A}=(A_1, \ldots, A_m)$, $f,g\in F[t]$ relatively prime polynomials and put $\psi=f/g$ and $\Psi = \psi +\sum_{i=1}^m A_it^i$. Assume $\deg f>\deg g+m$. Further assume that $\psi'$ is not a constant if $p=m=2$. Then the system of equations $$\label{eq:excellentmorse}
\begin{array}{rcl}
\Psi'(\rho_1)&=&0\\
\Psi'(\rho_2)&=&0\\
\Psi(\rho_1)&=&\Psi(\rho_2)
\end{array}$$ has no solution with distinct $\rho_1, \rho_2$ in an algebraic closure $\Omega$ of $F(\mathbf{A})$.
For short we write $\rho=(\rho_1,\rho_2)$. Let $$-\varphi(t)=\bigg(\psi+\sum_{i=3}^m A_it^i\bigg)' =\psi'+\sum_{i=3}^m iA_it^{i-1}=\frac{f' g-fg'}{g^2} + \sum_{i=3}^miA_it^{i-1}.$$ Then $\Psi'(t)=2A_2t+A_1-\varphi(t)$. If $p=m=2$, then $\varphi=-\psi'$ which is not constant by assumption.
Let $$\begin{aligned}
c(\rho) &= \psi(\rho_1)-\psi(\rho_2) + \sum_{i=3}^m(\rho_1^{i}-\rho_2^{i})A_i\\
&= \Psi(\rho_1)-\Psi(\rho_2) - ((\rho_1^2-\rho_2^2)A_2+(\rho_1-\rho_2)A_1).\end{aligned}$$
The system of equations defines an algebraic set $T\subseteq \mathbb{A}^2\times \mathbb{A}^{m}$ in the variables $\rho_1, \rho_2, A_1, \ldots, A_m$. Let $\alpha\colon T\to \mathbb{A}^2$ and $\beta\colon T\to \mathbb{A}^m$ be the projection maps. The system of equations takes the matrix form $$\label{eq:excellent}
M(\rho) \cdot \big(\begin{smallmatrix} A_2\\A_1\end{smallmatrix}\big) = B(\rho)=\Big(\begin{smallmatrix} \varphi(\rho_1)\\ \varphi(\rho_2)\\c(\rho)\end{smallmatrix}\Big),$$ where $M(\rho) = \left(\begin{smallmatrix}
2 \rho_1&1\\
2\rho_2&1\\
\rho_2^2-\rho_1^2&\rho_2-\rho_1
\end{smallmatrix}\right)$. For every $\rho\in U=\{\rho\mid \rho_1\neq \rho_2,\ \varphi(\rho_i)\neq \infty, i=1,2\}$, the rank of $M(\rho)$ is $2$. Thus the dimension of the fiber $\alpha^{-1}(\rho)$, for any $\rho\in U$, is at most $m-2$. Moreover, for a given $\rho\in U$, is solvable if and only if $\mathop{\rm rank}(M|B)=2$ if and only if $d(\rho)=\det(M|B)=0$; that is, the solution set (restricted to $\rho\in U$) lies in $\{d(\rho)=0\}$.
It suffices to prove that $d(\rho)$ is a nonzero rational function in the variables $\rho=(\rho_1,\rho_2)$. Indeed, this implies that $\dim (\alpha(T))\leq \dim \{d(\rho)=0\} = 1$, so $\dim T \leq 1+m-2<m$. Thus $\beta(T)$ does not contain the generic point of $\mathbb{A}^m$, which is $\mathbf{A}=(A_1,\ldots, A_m)$, and hence the system has no solution with $\rho\in \Omega^2$.
A straightforward calculation gives $$d(\rho) = (\rho_1-\rho_2) (2c(\rho) +(\rho_1-\rho_2)(\varphi(\rho_1)+\varphi(\rho_2))).$$ If $m\geq 3$, then the coefficient of $A_3$ in $2c(\rho) +(\rho_1-\rho_2)(\varphi(\rho_1)+\varphi(\rho_2))$ is $$2(\rho_1^3-\rho_2^3)+3(\rho_1^2-\rho_2^2),$$ which is nonzero in any characteristic and we are done.
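The displayed expression for $d(\rho)$ can be confirmed symbolically, treating $\varphi(\rho_1)$, $\varphi(\rho_2)$ and $c(\rho)$ as independent symbols; a minimal sketch (ours):

```python
# Symbolic check (ours) of d(rho) = det(M|B) for the 3x3 matrix quoted above,
# with phi(rho_1), phi(rho_2), c(rho) treated as independent symbols.
import sympy as sp

r1, r2, p1, p2, c = sp.symbols('rho1 rho2 phi1 phi2 c')

MB = sp.Matrix([[2*r1,            1,       p1],
                [2*r2,            1,       p2],
                [r2**2 - r1**2,   r2 - r1, c ]])

d = (r1 - r2)*(2*c + (r1 - r2)*(p1 + p2))
print(sp.simplify(MB.det() - d) == 0)   # True
```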
Assume from now on that $m=2$. If $p=2$, then $2c(\rho)=0$. Since $\varphi$ is not constant in this case, we have $\varphi(\rho_1)+ \varphi(\rho_2)\neq 0$ and we are done.
Finally assume $m=2$ and $p\neq 2$. Then $c(\rho)=\psi(\rho_1)-\psi(\rho_2)$ and $\varphi=-\psi'$. We may assume without loss of generality that $f(0)=0$ (and hence $\psi(0)=0$). Since $f(t)/t+g(t)(A_2t + A_1)$ is separable (Lemma \[lem:separableirreducible\]), we can replace $A_1$ and $A_2$ by $A_1+\alpha_1$ and $A_2+\alpha_2$, respectively, and $f$ by $f(t)+g(t)(\alpha_2t^2+\alpha_1t)$, for suitably chosen $\alpha_1,\alpha_2\in F$, to assume that $f(t)/t$ is separable. Since $\deg f(t)>\deg g+m\geq 2$, this implies that $f(t)$ has at least one simple root, say $\alpha$. Then $\alpha$ is a simple root of $\psi=f/g$. So $\psi'(\alpha)\neq 0$. Let $\beta\neq \alpha$ be another root of $f$, hence of $\psi$.
If $\psi'(\beta)=0$, then we have $c(\alpha,\beta)=\psi(\alpha)-\psi(\beta)=0$, so $$d(\alpha,\beta) =-(\alpha-\beta)^2 \psi'(\alpha)\neq 0$$ and we are done. If $\psi'(\beta)\neq0$, then $\beta$ is a simple root of $\psi$, hence of $f$. But $\deg f> 2$, so there must be another root $\gamma$ of $\psi$. If $d=0$, then we must have $$\begin{aligned}
\frac{d(\alpha,\beta)}{-(\alpha-\beta)^2}=0&= \psi'(\alpha) + \psi'(\beta)\\
\frac{d(\alpha,\gamma)}{-(\alpha-\gamma)^2}=0&= \psi'(\alpha) + \psi'(\gamma)\\
\frac{d(\gamma,\beta)}{-(\gamma-\beta)^2}=0&= \psi'(\gamma) + \psi'(\beta).\end{aligned}$$ So $2\psi'(\alpha)=0$ and, since $p\neq 2$, $\psi'(\alpha)=0$. This contradiction implies that $d\neq 0$, as needed.
\[prop:Galoisgroup\] Let $F$ be a field of characteristic $p\geq 0$, let $1\leq m<k$, let $\mathbf{A}=(A_0, \ldots, A_{m})$ an $(m+1)$-tuple of variables, and let $f,g\in F[t]$ be relatively prime polynomials with $\deg g+m<k=\deg f$. Assume
1. $2\leq m$ if $\deg g>0$,
2. $2\leq m$ if $p\mid k(k-1)$, and
3. \[eq:nonconstant\] $(f/g)'$ is not constant if $p=m=2$.
Then the Galois group of $\mathcal{F}(\mathbf{A},t) = f(t)+g(t)(\sum_{i=0}^m A_i t^i)$ over $F(\mathbf{A})$ is $${\mathop{\rm Gal}}\left(\mathcal{F}, F(\mathbf{A})\right)=S_k.$$
Let $\tilde{F}$ be an algebraic closure of $F$. Since $ {\mathop{\rm Gal}}(\mathcal{F},\tilde{F}(\mathbf{A}))\leq {\mathop{\rm Gal}}(\mathcal{F},F(\mathbf{A}))\leq S_k$, we may replace, without loss of generality, $F$ by $\tilde{F}$ to assume that $F$ is algebraically closed.
If $p\nmid k(k-1)$ and $\deg g=0$, the result follows from [@Cohen1980 Theorem 1] (note that $F(A_0, \ldots, A_{m}) = F(A_2, \ldots, A_{m})(A_0,A_1)$, hence the result for $m=1$ in *loc. cit.* extends to $m> 1$).
Assume that $2\leq m$. Then $G = {\mathop{\rm Gal}}(\mathcal{F},F(\mathbf{A}))\leq S_k$ is doubly transitive by Lemma \[lem:dt\].
Let $\Omega$ be an algebraic closure of $F(A_1, \ldots, A_m)$ and consider the map $\Psi\colon \mathbb{P}^1_\Omega\to \mathbb{P}^1_\Omega$ defined locally by $t\mapsto -A_0 := \frac{f(t)}{g(t)}+\sum_{i=1}^m A_it^i$. The numerator of $\Psi'=\frac{f'g-g'f}{g^2} + \sum_{i=1}^m iA_it^{i-1}$ is $$f'g-g'f + g^2(\cdots + 2A_2t + A_1).$$ If $m\geq 3$ or if $p\neq 2$, this numerator has positive degree. If $p=m=2$, then this numerator is $f'g-g'f+g^2A_1$, so it is not constant by . In any case, the numerator of $\Psi'$, hence $\Psi'$, has a root, say $\alpha\in \Omega$. Then $\Psi$ is ramified at $t=\alpha$. Lemma \[lem:morse\] says that the orders of ramifications are $\leq 2$, so the equation $\Psi(t)=\Psi(\alpha)$ has only roots of multiplicity at most two in $\Omega$. Lemma \[lem:excellent\] says that the critical values are distinct, so $\Psi(t)=\Psi(\alpha)$ has at least $k-1$ solutions. But since $\alpha$ is a ramification point, the fiber over $\Psi(\alpha)$ contains exactly one double point. Hence the inertia group over $\Psi(\alpha)$ permutes two roots of $$\mathcal{F}(\mathbf{A},t) = g(t)(\Psi(t)+A_0),$$ and fixes the other roots (cf. [@BarySoroker2009Dirichlet Proposition 2.6]). In other words $G$ contains a transposition. Therefore $G=S_k$ [@Serre2007Topics Lemma 4.4.3].
Proof of Theorems \[thm:main\] and \[thm:mainpart\]
===================================================
Since Theorem \[thm:main\] is a special case of Theorem \[thm:mainpart\] it suffices to prove the latter.
Let $k$ be a positive integer, $\lambda$ a partition of $k$, $q=p^\nu$ a prime power, $1\leq m<k$, and $I=I(f,m)$ a short interval around $f\in \mathcal{M}(k,q)$. Assume $2\leq m$ if $p\mid k(k-1)$ and assume $3\leq m$ if $p=2$ and $\deg f'\leq 1$. Let ${\mathbb{F}}$ be an algebraic closure of ${\mathbb{F}}_q$.
Let $\mathcal{F}=f+\sum_{i=0}^{m} A_it^i$. Then $\mathcal{F}$ satisfies the assumptions of Proposition \[prop:Galoisgroup\], so ${\mathop{\rm Gal}}(\mathcal{F}, {\mathbb{F}}(A_0, \ldots, A_m))=S_k$.
Since $\deg \mathcal{F} =\deg_t \mathcal{F} =\deg f = k$ and $m<k$, by Proposition \[prop:irrsub\], the number $N$ of $(a_0, \ldots, a_{m})\in {\mathbb{F}}_q^{m+1}$ such that $f+ \sum_{i=0}^{m} a_it^i$ has factorization type $\lambda$ satisfies $$\left|N - P(\lambda)q^{m+1}\right| \leq c(k)q^{m+1/2},$$ where $c(k)>0$ is a constant depending only on $k$ (and not on $f$, $q$). This finishes the proof since by definition $N=\pi_{q}(I(f,m);\lambda)$.
Proof of Theorems \[thm:main2\] and \[thm:main2part\]
=====================================================
Since Theorem \[thm:main2\] is a special case of Theorem \[thm:main2part\] it suffices to prove the latter.
Let $k$ be a positive integer, $\lambda$ a partition of $k$, $q=p^\nu$ a prime power, $2\leq m <k$, $D\in {\mathbb{F}}_q[t]$ monic with $\deg D = k-m-1$ and $f\in {\mathbb{F}}_q[t]$. We are interested in the number of primes in the arithmetic progression $g\equiv f\mod D$, so we may replace $f$ by $f-QD$, for some polynomial $Q$ to assume that $\deg f<\deg D$. Let $\mathbb{F}$ be an algebraic closure of ${\mathbb{F}}_q$.
Let $$\mathcal{F} = f+D\bigg(t^{m+1} + \sum_{i=0}^m A_it^i\bigg) = \tilde{f} + D\bigg( \sum_{i=0}^m A_it^i\bigg), \quad \tilde{f} = f+D\cdot t^{m+1},$$ where $\mathbf{A}=(A_0,\ldots, A_m)$ is an $(m+1)$-tuple of variables. Since $\deg \tilde{f}=m+1+\deg D=k>\deg D+m$, Proposition \[prop:Galoisgroup\] gives that $${\mathop{\rm Gal}}(\mathcal{F}, {\mathbb{F}}(\mathbf{A}))=S_k.$$
Since $\deg \mathcal{F}=\deg_t\mathcal{F}=k$, Proposition \[prop:irrsub\] implies that the number $N$ of $(a_0, \ldots, a_{m})\in {\mathbb{F}}_q^{m+1}$ such that $f+ D(t^{m+1}+\sum_{i=0}^{m} a_it^i)$ has factorization type $\lambda$ satisfies $$\left|N - P(\lambda)q^{m+1}\right| \leq c_1(k)q^{m+1/2},$$ where $c_1(k)>0$ is a constant depending only on $k$ (and not on $f$, $q$).
Finally $\phi(D) = |D|\prod_{P\mid D}(1-1/|P|)$, where the product runs over the distinct prime polynomials $P$ dividing $D$ and since $|P|\geq q$, we have $$\phi(D) = q^{\deg D}(1+O(\frac{1}{q}))=q^{k-m-1} + O_k(q^{k-m-2}).$$ By Theorem \[thm:mainpart\] applied to the interval $I(t^k,k-1)$, $$\pi_q(k;\lambda) = P(\lambda)q^{k} +O_k(q^{k-1/2}).$$ Thus $$\left|\frac{\pi_q(k;\lambda)}{\phi(D)} -P(\lambda)q^{m+1}\right| \leq c_2(k) q^{m+1/2}$$ and $$\bigg|N-\frac{\pi_{q}(k;\lambda)}{\phi(D)}\bigg| \leq
\bigg| N - P(\lambda)q^{m+1}\bigg| + \bigg|\frac{\pi_{q}(k;\lambda)}{\phi(D)}-P(\lambda)q^{m+1} \bigg|\leq
c(k)q^{m+1/2},$$ where $c=c_1+c_2$. This finishes the proof since by definition $N=\pi_{q}(k;D,f;\lambda)$.
Small $m$ {#sec:ce}
=========
In this section we show fails in the cases excluded by Theorem \[thm:main\], except possibly in the case $p=m=2$ and $\deg f'\leq 1$ (where we do not know whether it holds or not).
$m=0$
-----
We denote Euler’s totient function by $\phi(k) = |(\mathbb{Z}/k\mathbb{Z})^*|$.
For $k>1$ we have $$\pi_q(I(t^k,0)) =
\begin{cases}
0,& q\not\equiv 1 \mod k\\
\frac{\phi(k)}{k}(q-1), &q\equiv 1\mod k.
\end{cases}$$ In particular, if $k>2$, $|\pi_q(I(t^k,0))-q/k|\gg q$.
We separate the proof into cases.
[$\gcd(q,k)>1$]{} In this case $t^k-a$ is inseparable for any $a\in {\mathbb{F}}_q$. Since ${\mathbb{F}}_q$ is perfect, this implies that $t^k-a$ is reducible. So $\pi_q(I(t^k,0))=0$.
[$\gcd(q(q-1),k)=1$]{}\[case:invertible\] In this case $k\neq 2$ and $1-q$ is invertible modulo $k$. Assume, by contradiction, that there exists $a\in {\mathbb{F}}_q$ such that $f=t^k-a$ is irreducible in ${\mathbb{F}}_q[t]$. Then the Frobenius map, $\varphi\colon x\mapsto x^q$, acts transitively on the roots of $f$. Thus, for a root $\alpha$ of $f$, we have $\alpha^q = \zeta \alpha$, where $\zeta$ is a primitive $k$-th root of unity. We get that the orbit of $\alpha$ under $\varphi$ is $$\alpha \mapsto \alpha^q = \zeta\alpha \mapsto (\zeta\alpha)^q =\zeta^{1+q} \alpha \mapsto \cdots \mapsto \zeta^{1+q+\cdots+q^{k-1}} \alpha = \alpha.$$ On the other hand, this orbit equals the set of roots of $f$ which is $\{\zeta^i \alpha \mid i=0,\ldots, k-1\}$. So for every $i \mod k$ there is a unique $1\leq r\leq k$ such that $$i\equiv 1+ q + \cdots + q^{r-1} \equiv (1-q)^{-1} (1-q^r) \pmod k.$$ This is a contradiction since there are at most $\phi(k)<k$ powers of $q$ mod $k$, hence $\#\{(1-q)^{-1}(1-q^{r})\mod k\}<k = \#\{i\mod k\}$.
[$\gcd(q,k)=1$ and $q\not\equiv 1\mod k$]{} Let $g=\gcd(q-1,k)$; then $l=k/g>1$ and $\gcd(q(q-1),l)=1$. Let $a\in {\mathbb{F}}_q$, and let $\alpha$ be a root of $f=t^k-a$. Then the polynomial $f_1=t^{l}-\alpha^l \in {\mathbb{F}}_q[\alpha^l][t]$ is reducible by Case \[case:invertible\]. Since $\alpha$ is a root of $f_1$ and since $\alpha^l$ is a root of $f_2=t^g-a$, we get that $$[{\mathbb{F}}_q[\alpha]:{\mathbb{F}}_q] = [{\mathbb{F}}_q[\alpha]:{\mathbb{F}}_q[\alpha^l]]\cdot [{\mathbb{F}}_q[\alpha^l]:{\mathbb{F}}_q]< l \cdot g =k.$$ In particular $f$ is reducible.
[$q\equiv 1\mod k$]{} In this case ${\mathbb{F}}_q$ contains a primitive $k$-th root of unity. By Kummer theory $t^k - a$ is irreducible in ${\mathbb{F}}_q$ if and only if the order of $a({\mathbb{F}}_q^*)^k$ in $C={\mathbb{F}}_q^*/({\mathbb{F}}_q^*)^k$ is $k$. Since ${\mathbb{F}}_q^*$ is cyclic of order $q-1$, also $C$ is cyclic of order $k$, hence there are exactly $\phi(k)$ cosets of order $k$ in $C$. Each coset contains $\frac{q-1}{k}$ elements. So there are exactly $\frac{\phi(k)}{k}(q-1)$ irreducible $t^k-a$.
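For small prime $q$ the dichotomy can be checked by brute force; the following Python sketch (restricted to prime $q$ so that standard factorization routines apply, with a helper name of our own) compares the count of irreducible binomials $t^k-a$ with $\frac{\phi(k)}{k}(q-1)$ when $q\equiv 1\mod k$ and with $0$ otherwise:

```python
# Brute-force check of the m = 0 count over F_q (q prime here):
# number of irreducible t^k - a versus phi(k)/k * (q-1) or 0.
from sympy import symbols, Poly, GF, totient

t = symbols('t')

def count_prime_binomials(q, k):
    return sum(1 for a in range(q)
               if Poly(t**k - a, t, domain=GF(q)).is_irreducible)

for q, k in [(7, 3), (11, 3), (5, 4), (13, 4)]:
    predicted = totient(k) * (q - 1) // k if (q - 1) % k == 0 else 0
    print(q, k, count_prime_binomials(q, k), predicted)
```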
$m=1$ and $p\mid k$ {#sec:pmidk}
-------------------
In this case we study the interval $I(t^{p^2},1)=\{t^{p^2}-at+b\mid a,b\in {\mathbb{F}}_q\}$ for $q=p^{2n}$.
For $q=p^{2n}$ we have $$\pi_q(I(t^{p^2},1))=0.$$ In particular, $|\pi_q(I(t^{p^2},1))-q^2/p^2|\gg q$.
Let $F={\mathbb{F}}_{p^2}$, and let $E$ be the splitting field of $\mathcal{F} = t^{p^2}-At+B$ over $K={\mathbb{F}}_q(A,B)$. Then, by [@Uchida1970 Theorem 2], $$G={\mathop{\rm Gal}}(\mathcal{F},K)\cong{\mathop{\rm Gal}}(E/K) \cong {\mathop{\rm Gal}}(E\cdot {\mathbb{F}}, {\mathbb{F}}(A,B)) \cong {\rm Aff}(F),$$ as permutation groups. Here ${\mathbb{F}}$ is an algebraic closure of ${\mathbb{F}}_q$ and ${\rm Aff}(F)$ is the group of transformations of the affine line $\mathbb{A}^1(F) = F$: $$M_{c,d}\colon x\mapsto cx+d, \quad 0\neq c,d\in F.$$ Since $|G|=p^2(p^2-1)$ and since the group of translations $T=\{x\mapsto x+d\} \cong\mathbb{F}_{p^2}$ is of order $p^2$, we get that $T$ is a $p$-Sylow subgroup of $G$. But $T$ is of exponent $p$, hence there are no $p^2$-cycles in $G$.
For every $a,b\in {\mathbb{F}}_q$, the Galois group $G_{a,b}$ of $f=t^{p^2}-at+b$ is a cyclic sub-quotient of $G$. Since $G$ contains no element of order $p^2$, the group $G_{a,b}$ cannot act transitively on the $p^2$ roots of $f$, hence $f$ is reducible.
$m=1$ and $p\mid k-1$
---------------------
The details of this case are nearly identical to Section \[sec:pmidk\] with the distinction that the group ${\rm Aff}(F)$ is replaced by the group of transformations on the projective line, cf. [@Uchida1970 Theorem 2]. Hence we state the result but omit the details.
For $q=p^{2n}$ we have $$\pi_q(I(t^{p^2+1},1))=0.$$
Primes in almost all intervals {#sec:ae}
==============================
Generalities
------------
\[defconverg\] Let $Q$ be an infinite set of positive integers, and assume that for all $q\in Q$ we have a sequence $\mathcal{S}(q) = \{a_1(q), \ldots, a_{n(q)}(q)\}$ of non-negative real numbers. We say that $\mathcal{S}(q)$
1. **converges on average to $0$** if $\frac{1}{n(q)}\sum_{i=1}^{n(q)} a_i(q) \to 0$ as $q\to \infty$.
2. **converges pointwise to $0$** if for any choice of a sequence of indices $i(q) \in [1,n(q)]$ we have $\displaystyle\lim_{q\to \infty}a_{i(q)}(q) = 0$.
3. **converges almost everywhere to $0$** if for every $q\in Q$ there is a subset $J(q)\subseteq \{1,\ldots,n(q)\}$ such that $\displaystyle\lim_{q\to \infty} \#J(q)/n(q)=1$ and for any choice of indices $i(q)\in J(q)$ we have $\displaystyle\lim_{q\to \infty}a_{i(q)}(q) = 0$.
It is standard that convergence on average implies convergence almost everywhere:
\[lem:ava.e.\] In the notation of Definition \[defconverg\], if $\mathcal{S}(q)$ converges on average to $0$, then $\mathcal{S}(q)$ converges almost everywhere to $0$.
Let $\epsilon>0$. Since $\displaystyle\lim_{q\to \infty} \frac{1}{n(q)}\sum_{i=1}^{n(q)} a_i(q) = 0$ there exists $N_0(\epsilon)>0$ such that for any $q>N_0(\epsilon)$ we have $$\label{small sums}
\frac{1}{n(q)}\sum_{i=1}^{n(q)}a_i(q)<\epsilon^2.$$ Set $$J(q)=\{1\leq i\leq n(q)\mid a_{i}(q)<\epsilon\}.$$ Then, by \[small sums\], we have $$\epsilon^2>\frac{1}{n(q)}\sum_{i=1}^{n(q)}a_i(q)\geq\frac{1}{n(q)}\sum_{i \in[1,n(q)]\smallsetminus J(q)} a_i(q)\geq\frac{n(q)-\#J(q)}{n(q)}\cdot \epsilon.$$ Thus $|1-\#J(q)/n(q)|<\epsilon$, so $\displaystyle\lim_{q\to \infty} \#J(q)/n(q) = 1$.
Let $i(q)\in J(q)$. If $q>N_0(\epsilon)$, then $0\leq a_{i(q)}(q)< \epsilon$ by the definition of $J(q)$. Since $\epsilon>0$ was arbitrary, $\lim_{q\to\infty}a_{i(q)}(q)=0$.
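The mechanism behind the lemma is simply Markov's inequality; the toy sketch below (with artificial data unrelated to the quantities in the paper) illustrates how a decaying average forces the fraction of large terms to decay:

```python
# Toy illustration: if the average of a_i(q) tends to 0, then for any eps the
# fraction of indices with a_i(q) >= eps tends to 0 (Markov's inequality).
import random

eps = 0.1
for q in (10**2, 10**3, 10**4, 10**5):
    a = [random.random() * 2 / q**0.5 for _ in range(q)]   # average ~ 1/sqrt(q)
    avg = sum(a) / q
    frac_large = sum(x >= eps for x in a) / q
    print(q, round(avg, 4), frac_large, "<=", round(avg / eps, 4))
```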
Number of primes in short intervals
-----------------------------------
In the terminology of Definition \[defconverg\], Theorem \[thm:main\] says that $$\mathcal{E}(q) =\bigg\{\bigg|\frac{\pi_q(I(f,m))}{q^{m+1}/k} - 1\bigg|\ \bigg| \ f\in M(k,q)\bigg\}$$ converges pointwise to $0$ (under the restrictions there on $m$). In what follows we show how to derive an almost everywhere convergence, including small $m$, from a result of Keating and Rudnick [@KeatingRudnick2012].
Let $f\in {\mathbb{F}}_q[t]$. The von-Mangoldt function, $\Lambda(f)$, is defined by $$\Lambda(f)=
\begin{cases}
\deg(P)& \mbox{if $f=cP^r$ for some $r\geq 1$, where $P$ is a prime polynomial and $c\in {\mathbb{F}}_q^*$,}\\
0& \mbox{otherwise.}
\end{cases}$$ If $f\in M(k,q)$ and $1\leq m<k$, we let $$\nu(f;m) = \sum_{\substack{g\in I(f,m)\\g(0)\neq0}} \Lambda(g).$$ We denote the mean value and variance of $\nu(\bullet;m)$ by $$\begin{aligned}
\langle\nu(\bullet;m)\rangle&=&\frac{1}{q^k}\sum_{f\in\mathcal{M}(k,q)}\nu(f;m),\\
{\mathop{\rm Var}}\nu(\bullet;m)&=&\frac{1}{q^{k}}\sum_{f\in\mathcal{M}(k,q)}\left|\nu(f;m)-\langle\nu(\bullet;m)\rangle\right|^2,\end{aligned}$$ respectively.
\[thm:KR\] Let $1\leq m<k$ be integers. Then $$\label{eq:meanvalue}
\langle\nu(\bullet;m)\rangle=q^{m+1}\left(1-\frac{1}{q^k}\right).$$ If in addition $m<k-3$, then $$\label{eq:variance}
\lim_{q\to \infty}\frac{1}{q^{m+1}}{\mathop{\rm Var}}\nu(\bullet;m)=k-m-2.$$
See [@KeatingRudnick2012 Lemma 4.3] for \[eq:meanvalue\] and Theorem 2.1 in *loc.cit.* for \[eq:variance\].
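The mean value \[eq:meanvalue\] is an exact identity and can be verified by brute force over a tiny field, while the variance only approaches $k-m-2$ as $q\to\infty$, so its value below is merely indicative. A Python sketch of such a check (the parameters $q=3$, $k=5$, $m=1$ and the helper names are our own choices) is:

```python
# Brute-force check of the mean value (eq:meanvalue) over F_3 for k=5, m=1.
from itertools import product
from sympy import symbols, Poly, GF

t = symbols('t')
q, k, m = 3, 5, 1

def Lam(coeffs):
    """Lambda of the monic polynomial t^k + coeffs[0] t^(k-1) + ... + coeffs[k-1]."""
    expr = t**k + sum(c * t**(k - 1 - i) for i, c in enumerate(coeffs))
    _, factors = Poly(expr, t, domain=GF(q)).factor_list()
    return factors[0][0].degree() if len(factors) == 1 else 0

Lambda_table = {c: Lam(c) for c in product(range(q), repeat=k)}

def nu(prefix):
    """nu(f;m) for any f whose top k-m-1 non-leading coefficients equal `prefix`."""
    return sum(Lambda_table[prefix + tail]
               for tail in product(range(q), repeat=m + 1)
               if tail[-1] != 0)                       # omit g with g(0) = 0

values = [nu(p) for p in product(range(q), repeat=k - m - 1)]
mean = sum(values) / len(values)
var = sum((v - mean)**2 for v in values) / len(values)
print("mean nu:", mean, " predicted:", q**(m + 1) * (1 - q**(-k)))
print("Var/q^(m+1):", var / q**(m + 1), " (limit k-m-2 =", k - m - 2, ")")
```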
\[cor:Vara.e\] Let $1\leq m<k-3$ and for each prime power $q$ let $$\mathcal{V}(q) =\bigg\{ a_{f}(q)=\bigg|\frac{\nu(f,m)}{q^{m+1}}-\left(1-\frac{1}{q^k}\right)\bigg|^2\ \bigg|\ f\in\mathcal{M}(k,q)\bigg\}.$$ Then $\mathcal{V}(q)$ converges almost everywhere to $0$.
By Theorem \[thm:KR\] we have $$\frac{1}{q^k} \sum_{f\in M(k,q)} a_f(q) = \frac{1}{q^k}\sum_{f\in M(k,q)} \bigg|\frac{\nu(f,m)}{q^{m+1}}-\left(1-\frac{1}{q^k}\right)\bigg|^2 = \frac{1}{q^{m+1}} \bigg(\frac{1}{q^{m+1}}{\mathop{\rm Var}}\nu(\bullet;m)\bigg) \to 0,$$ as $q\to \infty$. So $\mathcal{V}(q)$ converges to $0$ on average. By Lemma \[lem:ava.e.\], $\mathcal{V}(q)$ converges almost everywhere to $0$.
The last corollary says that $\nu(\bullet;m) \sim q^{m+1}$ (as long as $1\leq m<k-3$) almost always. It remains to explain how to deduce from this a similar result for the prime counting function.
For a short interval $I=I(f,m)$ with $f\in M(k,q)$ and for $d\mid k$ we let $$I^{1/d} = \{g\in M(k/d,q) \mid g^d \in I\}.$$
\[lem:radicals\] Let $f\in M(k,q)$, $1\leq m< k$, $I=I(f,m)$ and $d\mid k$, $d>1$. Then $$\#(I^{1/d}) \leq q^m.$$
Let $J = I^{1/d}$. If $J=\emptyset$, we are done. Otherwise there is monic $g\in M(k/d,q)$ such that $g^d\in I$. Then $I=I(g^d,m)$, so without loss of generality we may assume that $g^d=f$.
If $\tilde{g}\in J$, then $\deg(\tilde{g}^d-f)\leq m$. Moreover $\tilde{g}$ is monic, so $\tilde{g} = g+h$ for some $h$ with $\deg h<k/d=\deg g$. It suffices to show that $\deg h< m$, since there are only $q^m$ such polynomials.
If $d=p^a$, where $p={\rm char}({\mathbb{F}}_q)$, then $I\ni \tilde{g}^d= g^d+h^d=f+h^d$. So $\deg h \leq m/d<m$ and we are done.
Assume $d=p^a D$ with $D>1$ and $\gcd(p,D)=1$. Write $g_1= g^{p^a}$ and $h_1=h^{p^a}$. Then $\deg h_1< \deg g_1$, $g_1^D=f$, and $$\begin{array}{ll}
\tilde{g}^d-f &= (g+h)^d-f = (g_1+h_1)^{D}-f \\&= g_1^D +\sum_{i=1}^{D} \binom{D}{i}g_1^{D-i} h_1^{i} - f
=D g_1^{D-1} h_1 + \frac{D(D-1)}{2} g_1^{D-2}h_1^2 + \cdots.
\end{array}$$ Since $p\nmid D$ and $\deg h_1<\deg g_1$, we get that $$m\geq \deg (\tilde{g}^d-f) = \deg(g_1^{D-1} h_1) = \frac{k(D-1)}{D}+\deg h_1 >\deg h_1,$$ as needed.
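The bound of Lemma \[lem:radicals\] can also be checked directly for small parameters; the sketch below (a brute-force enumeration over a small prime field, with parameters of our choosing) does so for $q=3$, $k=4$, $m=1$, $d=2$:

```python
# Sketch: verify #(I(f,m)^{1/d}) <= q^m for a small example.
from itertools import product
from sympy import symbols, Poly, GF

t = symbols('t')

def radical_count(q, k, m, d, f):
    """Number of monic g of degree k/d with deg(g^d - f) <= m, i.e. #(I(f,m)^{1/d})."""
    deg_g = k // d
    count = 0
    for coeffs in product(range(q), repeat=deg_g):        # non-leading coefficients of g
        expr = t**deg_g + sum(c * t**(deg_g - 1 - i) for i, c in enumerate(coeffs))
        g = Poly(expr, t, domain=GF(q))
        diff = g**d - f
        if diff.is_zero or diff.degree() <= m:
            count += 1
    return count

q, k, m, d = 3, 4, 1, 2
g0 = Poly(t**2 + 2*t + 1, t, domain=GF(3))
f = g0**2                                                 # guarantees I(f,m)^{1/d} is nonempty
print(radical_count(q, k, m, d, f), "<=", q**m)
```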
Finally we prove the almost everywhere result.
Let $1\leq m<k-3$ be integers and for each prime power $q$ let $$\mathcal{E}(q) = \bigg\{\bigg|\frac{\pi_q(I(f,m))}{q^{m+1}/k} - 1\bigg|\ \bigg| \ f\in M(k,q)\bigg\}.$$ Then $\mathcal{E}(q)$ converges almost everywhere to $0$.
For $f\in M(k,q)$ and for $d\mid k$ we let $\Pi_d(f)\subseteq I(f,m)^{1/d}$ be the subset of monic prime polynomials (these have degree $k/d$) and let $\epsilon = 1$ if $t^k\in I(f,m)$ and $\epsilon =0$ otherwise. Then $$\begin{aligned}
\nu(f;m) &=& \sum_{\substack{g\in I(f,m)\\g(0)\neq0}}\Lambda(g) = \sum_{g\in I(f,m)} \Lambda(g) - \epsilon = \sum_{d\mid k}\sum_{P\in \Pi_d(f)} \frac{k}{d} -\epsilon\\
&=&
k\pi_q(I(f,m)) +
\sum_{\substack{d\mid k\\1<d\leq k}} \frac{k}{d}\pi_q(I(f,m)^{1/d}) - \epsilon.\end{aligned}$$ By Lemma \[lem:radicals\] we have $\pi_q(I(f,m)^{1/d})\leq \# (I(f,m)^{1/d}) \leq q^m$ for $d>1$. So $$\nu(f;m)= k\pi_q(I(f,m))+ O(c(k)q^m),$$ where $c(k)=\sigma(k)-k =\displaystyle \sum_{\substack{d\mid k\\1<d\leq k}} \frac{k}{d}$. Thus $$\bigg|\frac{\pi_q(I(f,m))}{q^{m+1}/k} - 1\bigg|=\bigg|\frac{\nu(f;m)}{q^{m+1}} - 1\bigg|+O_{k}(q^{-1}) =
\bigg|\frac{\nu(f;m)}{q^{m+1}} - \bigg(1-\frac{1}{q^k}\bigg)\bigg|+O_{k}(q^{-1}) .$$ Hence Corollary \[cor:Vara.e\] gives the almost everywhere convergence of $\mathcal{E}(q)$ to $0$.
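The decomposition of $\nu(f;m)$ used in the proof can be seen numerically on a single small interval; in the following sketch (parameters and names are ours) the contribution of the higher prime powers is visibly of size $O(q^m)$:

```python
# Sketch: compare nu(f;m) with k * pi_q(I(f,m)) for one small interval.
from itertools import product
from sympy import symbols, Poly, GF

t = symbols('t')
q, k, m = 3, 6, 2
f = Poly(t**6 + t + 1, t, domain=GF(q))

def Lam(g):
    """Polynomial von Mangoldt function of a monic g."""
    _, factors = g.factor_list()
    return factors[0][0].degree() if len(factors) == 1 else 0

nu, primes = 0, 0
for c in product(range(q), repeat=m + 1):                  # g = f + c0*t^2 + c1*t + c2
    g = f + Poly(c[0]*t**2 + c[1]*t + c[2], t, domain=GF(q))
    if g.eval(0) != 0:
        nu += Lam(g)
    if g.is_irreducible:
        primes += 1

print("nu =", nu, "  k*pi_q =", k * primes, "  q^m =", q**m)
```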
Acknowledgments {#acknowledgments .unnumbered}
---------------
We thank Zeev Rudnick for helpful remarks on earlier drafts of this paper and for the suggestions to consider arithmetic progressions and different factorization types.
The first two authors were supported by a Grant from the GIF, the German-Israeli Foundation for Scientific Research and Development. The last author was supported by the Göran Gustafsson Foundation (KVA).
[10]{}
L. Bary-Soroker. . , **137**(1):73–83, 2009.
L. Bary-Soroker. Irreducible values of polynomials. , **229**(2): 854–874, 2012.
S. D. Cohen. . , **90**(1):63–76, 1980.
A. Granville.
A. Granville. . , **78**(1):65–84, 2010.
D. R. Heath-Brown. . , **389**:22–63, 1988.
D. R. Heath-Brown and D. A. Goldston. . , **266**(3):317–320.
M. N. Huxley. . , **15**:164–170, 1972.
J. P. Keating and Z. Rudnick. . , page 30 pp., April 2012.
S. Lang and A. Weil. . , **76**:819–827, 1954.
H. L. Montgomery and R. C. Vaughan. . .
R. A. Rankin. . , **13**:242–247, 1938.
A. Selberg. . , **47**(6):87–105, 1943.
J.-P. Serre. . A. K. Peters, Ltd., 2 edition, 2008.
K. Uchida. . , **22**(4):670–678, 1970.
[^1]: School of Mathematical Sciences, Tel Aviv University, Ramat Aviv, Tel Aviv 69978, Israel, <efratban@post.tau.ac.il>
[^2]: School of Mathematical Sciences, Tel Aviv University, Ramat Aviv, Tel Aviv 69978, Israel, <barylior@post.tau.ac.il>
[^3]: Department of Mathematics, KTH, SE-10044, Stockholm, Sweden, <lior.rosenzweig@gmail.com>
---
abstract: 'Triangular lattice of rare-earth ions with interacting effective spin-$1/2$ local moments is an ideal platform to explore the physics of quantum spin liquids (QSLs) in the presence of strong spin-orbit coupling, crystal electric fields, and geometrical frustration. The Yb delafossites, NaYbCh$_2$ (Ch=O, S, Se) with Yb ions forming a perfect triangular lattice, have been suggested to be candidates for QSLs. Previous thermodynamics, nuclear magnetic resonance, and muon spin rotation measurements on NaYbCh$_2$ have supported the suggestion of the QSL ground states. The key signature of a QSL, the spin excitation continuum, arising from the spin quantum number fractionalization, has not been observed. Here we perform both elastic and inelastic neutron scattering measurements as well as detailed thermodynamic measurements on high-quality single crystalline NaYbSe$_2$ samples to confirm the absence of long-range magnetic order down to 40 mK, and further reveal a clear signature of magnetic excitation continuum extending from 0.1 to 2.5 meV. By comparing the structure of our magnetic excitation spectra with the theoretical expectation from the spinon continuum, we conclude that the ground state of NaYbSe$_2$ is a QSL with a spinon Fermi surface.'
author:
- 'Peng-Ling Dai$^\#$'
- 'Gaoning Zhang$^\#$'
- Yaofeng Xie
- Chunruo Duan
- Yonghao Gao
- Zihao Zhu
- Erxi Feng
- 'Chien-Lung Huang'
- Huibo Cao
- Andrey Podlesnyak
- 'Garrett E. Granroth'
- David Voneshen
- Shun Wang
- Guotai Tan
- Emilia Morosan
- Xia Wang
- Lei Shu
- Gang Chen
- Yanfeng Guo
- Xingye Lu
- Pengcheng Dai
title: 'Spinon Fermi surface spin liquid in a triangular lattice antiferromagnet NaYbSe$_2$'
---
${\bf Introduction.}$ The quantum spin liquid (QSL) is a correlated quantum state in a solid where the spins of the unpaired electrons are highly entangled over long distances, yet they do not exhibit any long-range magnetic order in the zero temperature limit. Originally proposed by P. W. Anderson as the ground state for a system of $S=1/2$ spins on a two-dimensional (2D) triangular lattice that interact antiferromagnetically with their nearest neighbors [@anderson1973], a QSL is a novel quantum state of matter beyond the traditional Landau’s symmetry breaking paradigm [@balents2010; @zhou2017; @balents2017; @broholm2020], and might be relevant for our understanding of high-temperature superconductivity [@anderson1987; @rmp2006; @Lee2007] and quantum computation in certain cases [@kitaev2003; @kitaev2006]. Beyond the simple characterization of absence of a magnetic order, one key signature of the excitations in a QSL is the presence of deconfined spinons that are fractionalized quasiparticles carrying spin-1/2, observed by inelastic neutron scattering as a spin excitation continuum fundamentally different from the integer spin wave excitations in an ordered magnet [@Han2012; @jun2016; @mourigal2017; @Balz2016; @Gao2019; @Gaudet2019].
Although spin excitation continuum has been observed in the geometrically frustrated $S=1/2$ single crystal systems with 2D Kagomé [@Han2012], 2D triangular [@jun2016; @mourigal2017], three-dimensional (3D) distorted Kagomé bilayers [@Balz2016], and 3D pyrochlore [@Gao2019; @Gaudet2019] lattices, there is no consensus on the microscopic origin of the observed spin excitation continuum. In the 2D $S=1/2$ Kagomé lattice ZnCu$_3$(OD)$_6$Cl$_2$ [@Han2012] and an effective $S=1/2$ triangular lattice magnet YbMgGaO$_4$ [@jun2016; @mourigal2017], different interpretation of the observed spin excitation continuum includes a spin glass state of magnetic [@Freedman2010] and nonmagnetic Mg-Ga site disorder due to intrinsic sample issues [@Ma2018; @Zhu2017], respectively, rather than the fractionalized quasiparticles of a QSL [@broholm2020]. To conclusively identify the presence of deconfined spinon excitations in a QSL, one needs to search for the expected spin excitation continuum among candidate QSL materials with high quality single crystals and establish their physical properties with clear experimental signatures and structures.
Recently, geometrically frustrated 2D triangular-lattice rare-earth-based materials with effective ${S=1/2}$ local moments have attracted considerable attention [@Rau2018; @Maksimov2019]. Compared with the previous example YbMgGaO$_4$ [@ysli2015_1], the family of Yb dichalcogenide delafossites NaYbCh$_2$ (Ch=O, S, Se) does not have the issue of Mg-Ga charge disorder in the non-magnetic layers and thus provides a genuine example of an interacting spin-1/2 triangular lattice antiferromagnet [@qmzhang2018; @baenitz2018; @ranjith_o]. The combination of the strong spin-orbit coupling (SOC) and the crystal electric field (CEF) leads to a Kramers doublet ground state for the Yb$^{3+}$ ion in NaYbCh$_2$ that gives rise to an effective spin-1/2 local moment at each ion site. Since the energy gaps between the ground and first excited Kramers doublet CEF levels for NaYbSe$_2$ \[Fig. 1(b)\] [@qmzhang2020], NaYbS$_2$ [@baenitz2018], and NaYbO$_2$ [@ranjith_o] are well above $\sim$12 meV, the magnetic properties below 100 K can be safely interpreted in terms of the interactions between the effective $S=1/2$ local moments. Although previous experiments on powder samples of NaYbO$_2$ provided some positive evidence for QSL ground states [@wilson2019; @baenitz2018; @ding2019], there are no detailed neutron scattering experiments on single crystalline samples to establish the presence of the magnetic excitation continuum and further reveal its wave vector, energy, and temperature dependence; the results in this paper fulfill this goal. Here we report magnetic, heat capacity, and neutron scattering results on single crystals of NaYbSe$_2$. In addition to confirming the absence of long-range magnetic order down to 40 mK, we show the presence of a spin excitation continuum extending from 0.1 to 2.5 meV. Since our careful X-ray diffraction experiments reveal only $\sim4.8\%\pm1\%$ of Yb on the Na site and no evidence for a spin glass state at 40 mK, we conclude that the ground state of NaYbSe$_2$ has signatures of a QSL, consistent with the expectation of a spinon Fermi surface quantum spin liquid state [@SI; @LiY2017].
${\bf Results.}$ High quality single crystals of NaYbSe$_2$ were grown by using flux method with Te as the flux (see Methods for further synthesis and experimental details). Figure 1(a) displays schematics of crystal structure and reciprocal space of NaYbSe$_2$, where Yb ions form a perfect triangular lattice layer. Inelastic neutron scattering spectra of CEF excitations obtained by subtracting the scattering of NaYbSe$_2$ from a non-magnetic reference NaYSe$_2$ is shown in Fig. 1(b) [@SI]. Consistent with previous work [@qmzhang2020], the CEF levels of Yb$^{3+}$ have a Kramers doublet ground state and three excited Kramers doublets at $E=15.7$, 24.5, and 30.2 meV at $T=13$ K, thus ensuring that all measurements below about 100 K can be safely considered as an effective $S=1/2$ ground state [@qmzhang2020]. To characterize the behavior of the local moments of Yb and their exchange interactions, we measured the magnetic susceptibility of single-crystalline NaYbSe$_2$. The temperature dependence of magnetization and the in-plane magnetic susceptibility $\chi_{\perp}(T)$ is depicted in Fig. 1(c), and a simple fit to the Curie-Weiss law yields $\Theta_{\rm CW,\perp}\simeq -13$ K in the low-temperature region ($<20$ K), whose absolute value is larger than $|\Theta_{\rm CW,\perp}|\simeq 3.5$ K when the Van Vleck contribution is subtracted [@ranjith_se], indicating the predominantly antiferromagnetic spin interactions in NaYbSe$_2$. Heat capacity measurements were also performed to characterize the thermodynamics of NaYbSe$_2$, and the pure magnetic contribution $C_{\rm mag}(T)$ to the specific heat of NaYbSe$_2$ and its dependence on applied magnetic fields from $0$ T to $8$ T are presented in Fig. 1(d). The data shows a broad peak that shifts upward in temperature as a function of increasing magnetic field for H $\parallel c$, no sharp anomaly indicative of the onset of long-range order, consistent with the susceptibility result and earlier work [@ranjith_se]. Figure 1(e) also shows the estimated temperature dependence of $C_{\rm mag}(T)/T$ (left axis) and the corresponding magnetic entropy $S_{\rm mag}$ (right axis). It is noted that $C_{\rm mag}(T)/T$ in the low-temperature regime ($<0.5$ K) is almost a constant, well compatible with the fact that the spinon Fermi surface alone has a constant density of states and would give a heat capacity depending linearly on temperature. Moreover, the temperature dependence of the magnetic entropy saturates to a value close to $S_{\rm mag}\approx R\ln2$ (where $R$ is the ideal gas constant) around 15 K, consistent with an effective spin-$1/2$ description of the Yb$^{3+}$ local moment [@ranjith_se].
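For reference, the low-temperature Curie-Weiss analysis quoted above amounts to a three-parameter fit of the form $\chi(T)=\chi_0+C/(T-\Theta_{\rm CW})$ restricted to a low-temperature window; a minimal sketch of such a fit (the file name, column layout, and window are placeholders rather than the actual data set) is:

```python
# Minimal sketch of a low-temperature Curie-Weiss fit chi(T) = chi0 + C/(T - Theta_CW);
# the data file and fit window below are placeholders, not the measured data set.
import numpy as np
from scipy.optimize import curve_fit

def curie_weiss(T, chi0, C, theta):
    return chi0 + C / (T - theta)

T, chi = np.loadtxt("chi_vs_T.dat", unpack=True)     # hypothetical (T, chi) columns
window = T < 20.0                                    # low-temperature fitting window
popt, _ = curve_fit(curie_weiss, T[window], chi[window], p0=(0.0, 1.0, -10.0))
print("Theta_CW = %.1f K, Curie constant C = %.3g" % (popt[2], popt[1]))
```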
Although stoichiometric NaYbSe$_2$ has no intrinsic structural disorder in the Na${^+}$ intercalating layer [@qmzhang2018; @baenitz2018; @ranjith_o], real crystal could still have structural defects in Na$^+$ and Se$^{2-}$ sites, and these vacant sites could be replaced by Yb$^{3+}$ and Te$^{2-}$, respectively (see Methods). To accurately determine the stoichiometry of our NaYbSe$_2$, we carried out single crystal X-ray structural refinement by recording 1334 Bragg reflections, corresponding to 238 non-equivalent reflections. The Rietveld refinement results of the single-crystal X-ray diffraction data collected at $T=250$ K are shown in Fig. 1(f) and the fitting outcome reveals full occupancy of the Yb$^{3+}$ ($3a$) and Se$^{2-}$ ($6c$) sites in the YbSe$_2$ layers and $\sim4.8\%\pm1\%$ of the Na ($3b$) sites occupied by the Yb ions. These results are consistent with inductively-coupled plasma measurements of chemical composition of the sample (see method for details).
In the previous inelastic neutron scattering measurements on single crystals of CsYbSe$_2$ ($\Theta_{\rm CW}\simeq -13$ K), spin excitations were found to be centered around the $K$ point in reciprocal space \[Fig. 1(a)\], with no intensity modulation along the $c$-axis, and extending up to 1 meV [@xing2019]. To determine what happens in NaYbSe$_2$, we must first determine if the system has long/short-range magnetic order. For this purpose, we align the crystals in the $[H,H,0]\times [0,0,L]$ and $[H,0,0]\times [0,K,0]$ zones \[Fig. 1(a)\]. Figures 2(a) and 2(b) display maps of elastic scattering in the $[H,H,L]$ and $[H,K,0]$ planes, respectively, at $T=40$ mK (top panels) and $40$ mK$-10$ K (bottom panels). In both cases, no evidence of long/short magnetic order was observed at 40 mK, consistent with previous magnetic, heat capacity, and nuclear magnetic resonance measurements [@ranjith_se]. The wave vector dependence of the spin excitations of $E=0.3\pm 0.1$ meV in the $[H,H,L]$ zone at 40 mK (left panel) and 10 K (right panel) is presented in Fig. 2(c). At 40 mK, one can see a featureless rod of scattering along the $[1/3,1/3,L]$ direction, indicating that spin excitations in NaYbSe$_2$ are 2D in nature and have no $c$-axis modulations. The scattering essentially disappears at 10 K, thus confirming the magnetic nature of the scattering at 40 mK. Moreover, Fig. 2(d) shows the temperature dependence of the $E=0.3\pm 0.1$ meV spin excitations in the $[H,K,0]$ zone. The magnetic scattering is centered around the $K$ point, consistent with the previous work [@xing2019], and decreases significantly with increasing temperature.
To further reveal the intrinsic quantum dynamics of the local moments of the Yb ions, we perform inelastic neutron scattering measurements to study the spin excitations in single crystals of NaYbSe$_2$ at both 40 mK and 10 K. Constant-energy images of spin excitations at a variety of energies in the in-plane 2D Brillouin zones at 40 mK and 10 K are summarized in Figs. 3(a-d) and 3(e-h), respectively. At $E=0.15\pm 0.05$ meV and 40 mK, the magnetic scattering spectral weight spreads broadly in the Brillouin zone but with higher intensity at the $K$ point and no scattering near the zone center (the $\Gamma$ point) \[Fig. 3(a)\]. This is clearly different from the wave vector dependence of the low-energy magnetic scattering for YbMgGaO$_4$, in which the spectral weight is enhanced around the $M$ point [@jun2016]. The high intensity at the $K$ point in NaYbSe$_2$ might arise from the strong $XY$-type exchange interaction, since the strong SOC in this material indeed brings certain anisotropic interactions. With increasing energy to $E=0.6\pm 0.1$ \[Fig. 3(b)\], $1.1\pm 0.1$ \[Fig. 3(c)\], and $2.1\pm 0.1$ meV \[Fig. 3(d)\], the magnetic scattering spectral weight becomes more evenly distributed in the Brillouin zone and gradually decreases with increasing energy. While the spin excitation continuum at $E=0.15\pm 0.05$ meV nearly vanishes on warming from 40 mK to 10 K \[Fig. 3(e)\], the spectral weight at the other energies becomes weaker but remains located around the Brillouin zone boundaries, especially at the $K$ points \[Figs. 3(f-g)\].
Figures 4(c) and 4(d) display the wave vector-energy dependence of the spin excitation spectral intensity (in log scale) along the magenta arrow direction in Fig. 4(a) at 40 mK and 10 K, respectively. In both cases, the spectral intensity is broadly distributed in the energy-momentum plane, and the excitation intensity gradually decreases with increasing energy and finally vanishes above $\sim$2.2 meV. The broad neutron-scattering spectral intensity at 40 mK persists to the lowest energy that we measured, implying a high density of spinon scattering states at low energies. Moreover, the spectral weight around the $\Gamma$ point is suppressed to form a V-shaped upper bound. Taken together, these two facts strongly suggest a spinon Fermi surface QSL, since this scenario not only provides a high density of spinon states near the Fermi surface, but also naturally explains the V-shaped upper bound on the excitation energy near the $\Gamma$ point [@LiY2017]. It is also noted that the low-energy spin excitations clearly peak around the $K$ point at 40 mK \[Fig. 4(c)\], and they decrease dramatically on warming but still keep the V-shaped upper bound around the $\Gamma$ point at 10 K \[Fig. 4(d)\]. In addition, Figs. 4(e) and 4(f) present the wave vector-energy dependence of the spin excitation spectral intensity along the magenta arrow directions in Fig. 4(b) at 40 mK and 10 K, respectively. The main results are similar to those in Figs. 4(c) and 4(d), and also support a spinon Fermi surface QSL.
The data points in Figs. 5(a) and 5(b) show the energy dependence of the spin excitations at the $K_1$ and $M_2$ points, respectively, at temperatures $T=40$ mK, 2 K, and 10 K. The solid lines in the figures display similar data at the $\Gamma_1$ point. Consistent with Fig. 4, the magnetic scattering clearly decreases with increasing temperature at the $K_1$ and $M_2$ points, and essentially vanishes at the $\Gamma_1$ point. The temperature differences (40 mK$-$10 K) of the imaginary part of the dynamic susceptibility, $\chi^{\prime\prime}(E)$, at the $K_1$ and $M_2$ points peak around 0.15 and 0.3 meV, respectively, as shown in the inset in Fig. 5(b). In addition, Fig. 5(c) compares the energy dependence of the magnetic scattering at the $M_1$, $M_2$, and $K_1$ points with the background at the $\Gamma_2$ point. To show the wave vector dependence of the spin excitations, Figs. 5(d-g) plot the spectral intensity along the $[H,H,0]$ direction for energies of $E=0.25\pm 0.1$, $0.5\pm 0.1$, $1.3\pm 0.1$, and $2.3\pm 0.1$ meV, respectively, at $T=40$ mK, 2 K, and 10 K. Similarly, Figs. 5(h) and 5(i) plot constant-energy cuts along the $[0.5-K,0.5+K,0]$ direction for energies of $E=0.3\pm 0.1$, $0.9\pm 0.1$, $1.5\pm 0.1$, and $2.3\pm 0.1$ meV at 40 mK and 10 K, respectively.
[**Discussion and Conclusion.**]{} Overall, the magnetic and heat capacity measurements, combined with the neutron scattering results on single crystals of NaYbSe$_2$, demonstrate the absence of long-range magnetic order even down to 40 mK, implying a quantum disordered QSL state. In particular, beyond the mere absence of magnetic order and the spin excitation continuum itself, the almost linear temperature dependence of the magnetic heat capacity $C_{\rm mag}(T)$ in the low-temperature regime, the abundance of low-energy gapless excitations, and the V-shaped upper bound around the $\Gamma$ point in the inelastic neutron scattering spectrum all strongly indicate the existence of a spinon Fermi surface. Theoretically, although the pure compact $U(1)$ gauge theory in two spatial dimensions is always confined due to non-perturbative instanton events [@Polyakov1977], it has been shown and understood that in the presence of a spinon Fermi surface and gapless excitations, the QSL phase could be stable against gauge fluctuations, and a noncompact $U(1)$ gauge theory remains a good low-energy description [@SSLee2008; @Lee2007]. Therefore, our experimental results and the conclusion of a spinon Fermi surface QSL are compatible with theory. The spinon Fermi surface QSL scenario could be further tested by low-temperature thermal transport measurements, which are well suited to unveil the nature of the low-energy itinerant excitations.
Very recently, a pressure-induced insulator-to-metal transition followed by the emergence of superconductivity in NaYbSe$_2$ was observed in experiments [@Jia2020]. This is quite remarkable since the QSL has long been thought to be a parent state of the high-temperature superconductors [@anderson1987; @rmp2006; @Lee2007]. It was suggested that doping a QSL could naturally result in superconductivity [@anderson1987; @rmp2006; @Lee2007] due to the intimate relationship between high-temperature superconductors and QSLs, but definitive experimental evidence showing that doping QSLs gives rise to superconductivity is still lacking. Instead of doping, Ref. [@Jia2020] obtained superconductivity by applying pressure, which opens up a promising way to study superconductivity in QSL candidates and sheds light on the mechanism of high-temperature superconductivity.
![image](Fig_1){width="15cm"}
![image](Fig_2){width="10cm"}
![image](Fig_3){width="16cm"}
![image](Fig_4){width="16cm"}
![image](Fig_5){width="16cm"}
Anderson, P. W. Resonating valence bonds: A new kind of insulator? Mater. Res. Bull. [**8**]{}, 153 (1973).
Balents, L. Spin liquids in frustrated magnets. Nature [**464**]{}, 199-208 (2010).
Zhou, Y., Kanoda, K. and Ng, T.-K. Quantum spin liquid states. Rev. Mod. Phys. [**89**]{}, 025003 (2017).
Savary, L. and Balents, L. Quantum spin liquids: a review. Rep. Prog. Phys. [**80**]{}, 016502 (2017).
Broholm, C. [*et al.*]{} Quantum spin liquids. Science [**367**]{}, eaay0668 (2020).
Anderson, P. W. The resonating valence bond state in La$_2$CuO$_4$ and superconductivity. Science [**235**]{}, 1196-1198 (1987).
Lee, P. A., Nagaosa, N. and Wen, X. G. Doping a Mott insulator: physics of high-temperature superconductivity. Rev. Mod. Phys. [**78**]{}, 17-85 (2006).
Lee, P. A. From high temperature superconductivity to quantum spin liquid: progress in strong correlation physics. Rep. Prog. Phys. [**71**]{}, 012501 (2007).
Kitaev, A. Y. Fault-tolerant quantum computation by anyons. Ann. Phys. [**303**]{}, 2-30 (2003).
Kitaev, A. Anyons in an exactly solved model and beyond. Ann. Phys. [**321**]{}, 2-111 (2006).
Han, T. H. [*et al.*]{} Fractionalized excitations in the spin-liquid state of a kagome-lattice antiferromagnet. Nature [**492**]{}, 406-410 (2012).
Shen, Y. [*et al.*]{} Evidence for a spinon Fermi surface in a triangular-lattice quantum-spin-liquid candidate. Nature (London) [**540**]{}, 559 (2016).
Paddison, J. A. M. [*et al.*]{} Continuous excitations of the triangular-lattice quantum spin liquid YbMgGaO$_4$. Nat. Phys. [**13**]{}, 117 (2017).
Balz, C. [*et al.*]{} Physical realization of a quantum spin liquid based on a complex frustration mechanism. Nat. Phys. [**12**]{}, 942-949 (2016).
Gao, B. [*et al.*]{} Experimental signatures of a three-dimensional quantum spin liquid in effective spin-$1/2$ Ce$_2$Zr$_2$O$_7$ pyrochlore. Nat. Phys. [**15**]{}, 1052-1057 (2019).
Gaudet, J. [*et al.*]{} Quantum spin ice dynamics in the dipole-octupole pyrochlore magnet Ce$_2$Zr$_2$O$_7$. Phys. Rev. Lett. [**122**]{}, 187201 (2019).
Freedman, D. E. [*et al.*]{} Site specific X-ray anomalous dispersion of the geometrically frustrated kagom$\rm \acute{e}$ magnet, herbertsmithite, ZnCu$_3$(OH)$_6$Cl$_2$. J. Am. Chem. Soc. [**132**]{}, 16185-16190 (2010). Ma, Z. [*et al.*]{} Spin-glass ground state in a triangular-lattice compound YbZnGaO$_4$, Phys. Rev. Lett. [**120**]{}, 087201 (2018). Zhu, Z., Maksimov, P. A., White, S. R., and Chernyshev, A. L. Disorder-induced mimicry of a spin liquid in YbMgGaO$_4$, Phys. Rev. Lett. [**119**]{}, 157201 (2017). Rau, J. G. and Gingras, M. J. P. Frustration and anisotropic exchange in ytterbium magnets with edge-shared octahedra, Phys. Rev. B [**98**]{}, 054408 (2018). Maksimov, P. A., Zhu, Z., White, S. R., and Chernyshev, A. L., Anisotropic-exchange magnets on a triangular lattice: spin waves, accidental degeneracies, and dual Spin Liquids, Phys. Rev. X [**9**]{}, 021017 (2019). Li, Y. [*et al.*]{} Gapless quantum spin liquid ground state in the two-dimensional spin-1/2 triangular antiferromagnet YbMgGaO$_4$, Sci. Rep. [**5**]{}, 16419 (2015). Liu, W. [*et al*]{}. Rare-earth chalcogenides: A large family of triangular lattice spin liquid candidates, Chin. Phys. Lett. [**35**]{}, 117501 (2018). Baenitz, M. [*et al*]{}. NaYbS$_2$: A planar spin-1/2 triangular-lattice magnet and putative spin liquid, Phys. Rev. B [**98**]{}, 220409(R) (2018). Ranjith, K. M. [*et al*]{}. Field-induced instability of the quantum spin liquid ground state in the $J_{eff}=1/2$ triangular-lattice compound NaYbO$_2$, Phys. Rev. B [**99**]{}, 180401(R) (2019). Zhang, Z. [*et al*]{}. Crystalline Electric-Field Excitations in Quantum Spin Liquids Candidate NaYbSe$_2$, arXiv:2002.04772. Bordelon, M. M. [*et al*]{}. Field-tunable quantum disordered ground state in the triangular-lattice antiferromagnet NaYbO$_2$, Nat. Phys. [**15**]{}, 1058 (2019). Ding, L. [*et al*]{}. Gapless spin-liquid state in the structurally disorder-free triangular antiferromagnet NaYbO$_2$, Phys. Rev. B [**100**]{}, 144432 (2019).
See supplementary information for details.
Li, Y. D., Lu, Y. M., and Chen, G., Spinon Fermi surface $U(1)$ spin liquid in the spin-orbit-coupled triangular-lattice Mott insulator YbMgGaO$_4$, Phys. Rev. B [**96**]{}, 054445 (2017). Ranjith, K. M. [*et al*]{}. Anisotropic field-induced ordering in the triangular-lattice quantum spin liquid NaYbSe$_2$, Phys. Rev. B [**100**]{}, 224417(R) (2019). Xing, J. [*et al*]{}. Field-induced magnetic transition and spin fuctuations in the quantum spin-liquid candidate CsYbSe$_2$, Phys. Rev. B [**100**]{}, 220407(R) (2019).
Polyakov, A. M., Quark confinement and topology of gauge theories, Nucl. Phys. B [**120**]{}, 429 (1977).
Lee, S. S., Stability of the $U(1)$ spin liquid with a spinon Fermi surface in $2+1$ dimensions, Phys. Rev. B [**78**]{}, 085129 (2008).
Jia, Y. [*et al*]{}. Mott Transition and Superconductivity in Quantum Spin Liquid Candidate NaYbSe$_2$, arXiv:2003.09859, (2020).
Rodríguez-Carvajal, J. Recent advances in magnetic structure determination by neutron powder diffraction. Phys. B 192, 55-69 (1993).
Ehlers, G. [*et al*]{}. The new cold neutron chopper spectrometer at the Spallation Neutron Source: Design and performance, Rev. Sci. Instrum. [**82**]{}, 085108 (2011).
Abernathy, D. L. [*et al*]{}., Design and operation of the wide angular-range chopper spectrometer ARCS at the Spallation Neutron Source, Rev. Sci. Instrum. [**83**]{}, 015114 (2012);
Bewley, R. I. [*et al*]{}. LET, a cold neutron multi-disk chopper spectrometer at ISIS, Nuclear Instruments and Methods in Physics Research A [**637**]{}, 128 (2011).
Lu, X. [*et al*]{}.; (2020): Fractionalized magnetic excitations in a quantum spin liquid candidate NaYbSe$_2$, STFC ISIS Neutron and Muon Source, https://doi.org/10.5286/ISIS.E.RB1920512.
Arnold, O. [*et al*]{}. Mantid—Data analysis and visualization package for neutron scattering and ${\mu}$SR experiments, Nuclear Instruments and Methods in Physics Research A 764, 156 (2014).
Ewings, R. A. [*et al*]{}. HORACE: Software for the analysis of data from single crystal spectroscopy experiments at time-of-flight neutron instruments, Nuclear Instruments and Methods in Physics Research A [**834**]{}, 132 (2016).
[**Acknowledgments**]{} We thank M. Stone for suggestions of appropriate neutron scattering instrumentation, and Feng Ye (ORNL) for the assistance with the single-crystal x-ray diffraction measurements. The research at Beijing Normal University is supported by the National Natural Science Foundation of China (Grant No. 11734002 and 11922402, X.L.). The work at ShanghaiTech university is supported by the National Natural Science Foundation of China (No. 11874264, Y.G.). Y.G. and X.W. thank the support from Analytical Instrumentation Center (\# SPST-AIC10112914), SPST, ShanghaiTech University. The neutron scattering work at Rice is supported by US DOE BES DE-SC0012311 (P.D.). This work is further supported by funds from the Ministry of Science and Technology of China ( grant No.2016YFA0301001, No.2018YFGH000095, No.2016YFA0300500 for G.C., and No.2016YFA0300501, No.2016YFA0300503 for L.S. and G.C.) and from the Research Grants Council of Hong Kong with General Research Fund Grant No.17303819 (G.C.). E.F. and H.C. acknowledges support of US DOE BES Early Career Award KC0402010 under Contract DE-AC05-00OR22725. E.M. and C.-L.H. acknowledge support from US DOE BES DE-SC0019503. This research used resources at Spallation Neutron Source, a DOE Office of Science User Facility operated by ORNL. We gratefully acknowledge the Science and Technology Facilities Council (STFC) for access to neutron beamtime at ISIS.
[**Methods**]{}
[**Crystal Growth**]{} NaYbSe$_2$ single crystals were grown by using Te as the flux. The starting materials were in a molar ratio of Na : Yb : Se : Te = 1 : 1 : 2 : 20. To avoid the violent reaction between Na and Se, the Na (99.7%) blocks and Te (99.999%) granules were mixed, slowly heated up to 200$^\circ$C within 20 hours, and pre-reacted at that temperature for 10 hours. The precursor was then thoroughly mixed with Yb (99.9%) blocks and Se (99.999%) granules in the above molar ratio and placed into an alumina crucible. The crucible was sealed into a quartz tube under a vacuum of $10^{-4}$ Pa and then slowly heated up to 950$^\circ$C within 15 hours. After the reaction at this temperature for 20 hours, the assembly was slowly cooled down to 800$^\circ$C at a cooling rate of 1$^\circ$C/h. At 800$^\circ$C, the quartz tube was immediately taken out of the furnace and placed into a high-speed centrifuge to separate the excess Te flux. For comparison, NaYbSe$_2$ crystals were also grown using NaCl as the flux by a similar procedure to that described above. The crystallographic phase and quality of the grown crystals were examined on a Bruker D8 VENTURE single crystal X-ray diffractometer using Mo K$_{\alpha1}$ radiation ($\lambda$ = 0.71073[Å]{}) at room temperature. The crystals grown with the different fluxes have the same high quality. Growth of the polycrystalline NaYbSe$_2$ and NaYSe$_2$ samples has been described elsewhere [@ranjith_se].
[**Stoichiometric Analysis**]{} The single crystal X-ray diffraction measurements of NaYbSe$_2$ were performed at 250 K on a Rigaku XtaLAB PRO diffractometer at the Spallation Neutron Source, ORNL. Structure refinement based on the X-ray diffraction data was carried out with the FullProf suite [@fullprof], yielding (Na$_{0.952(10)}$Yb$_{0.048(10)}$)YbSe$_2$ without Te occupying Se sites. Elemental analysis of a group of NaYbSe$_2$ single crystals grown with Te flux, with a total mass of 35 mg, was performed by the inductively-coupled plasma (ICP) method on a Thermo Fisher ICP 7400 system. The result, Na$_{0.965}$Yb$_{1.03}$Se$_{1.98}$Te$_{0.025}$, can be interpreted as $\sim 3\%$ of the Na$^+$ sites being occupied by Yb ions and agrees well with the structure refinement results of single-crystal X-ray diffraction, especially considering that Te could be present as residual flux in the sample.
[**Heat Capacity**]{} The specific heat capacity of NaYbSe$_2$ was measured down to 50 mK using a thermal-relaxation method in DynaCool-PPMS (Physical Property Measurement System, Quantum Design) with the magnetic field applied along the $c$-axis at Fudan University and Rice University. The total specific heat is described as a sum of magnetic and lattice contributions: $C_p = C_{\rm mag} + C_{\rm phonon}$. We fit the phonon contribution with $C_{\rm phonon}=\beta T^3 + \alpha T^5$.
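A minimal sketch of this phonon subtraction and of the subsequent entropy integration could look as follows (the file name and the fitting window are placeholders, not the measured data set):

```python
# Minimal sketch: fit C_phonon = beta*T^3 + alpha*T^5 in a phonon-dominated window,
# subtract it to obtain C_mag, and integrate C_mag/T to get the magnetic entropy.
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import cumulative_trapezoid

def c_phonon(T, beta, alpha):
    return beta * T**3 + alpha * T**5

T, C = np.loadtxt("heat_capacity.dat", unpack=True)       # hypothetical C_p(T) data, zero field
fit_window = T > 15.0                                     # region assumed phonon dominated (placeholder)
(beta, alpha), _ = curve_fit(c_phonon, T[fit_window], C[fit_window])
C_mag = C - c_phonon(T, beta, alpha)                      # magnetic contribution
S_mag = cumulative_trapezoid(C_mag / T, T, initial=0.0)   # compare with R*ln(2) at high T
```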
[**Neutron Scattering**]{} The neutron scattering measurements of the magnetic excitations in the \[H, H, L\] scattering plane and of the CEF excitations were performed on the Cold Neutron Chopper Spectrometer (CNCS) [@cncs] and on ARCS [@arcs] at the Spallation Neutron Source (SNS), Oak Ridge National Laboratory (ORNL), respectively. The measurements in the \[H, K, 0\] scattering plane were carried out on the LET cold neutron chopper spectrometer [@let] at the ISIS spallation neutron source, Rutherford Appleton Laboratory (RAL), UK. We co-aligned $\sim 3.7$ grams of NaYbSe$_2$ single crystals for the measurements of the magnetic excitations and prepared $\sim 10$ grams of NaYbSe$_2$ and NaYSe$_2$ polycrystalline samples for the CEF excitation measurements. The neutron scattering data were reduced with Mantid [@mantid] and analyzed with Mantidplot, Horace [@horace], and Mslice.
[**Author Contributions**]{} P.-L.D. and G.Z. contribute equally to this work. X.L., Y.G., and P.D. conceived this project. P.-L.D., G.T. and X.L. applied the beamtimes. G.Z. and Y.G. prepared the samples and did basic structure and magnetic characterizations with the help from S.W. and X.W.. E.F. and H.C. did X-ray structure refinement. Z.Z., C.-L.H., E.M. and L.S. performed the specific heat measurements. Z.Z. and L.S. analyzed the specific heat data. P.-L.D., X.L., Y.F.X. and C.D. performed neutron scattering experiments on CNCS, LET, and ARCS spectrometers with the help from A.P., G.E.G., and D.V.. P.-L.D. and X.L. analyzed the neutron scattering data and prepared the figures. Y.H.G. and G.C. provided the physical interpretation of the results. X.L., Y.H.G., G.C. and P.D. wrote the manuscript with input from Y.G.. All authors made comments.
---
abstract: 'We have investigated shot noise in multiterminal, diffusive multiwalled carbon nanotubes (MWNTs) at 4.2 K over the frequency $f=600 - 850$ MHz. Quantitative comparison of our data to semiclassical theory, based on non-equilibrium distribution functions, indicates that a major part of the noise is caused by a non-equilibrium state imposed by the contacts. Our data exhibits non-local shot noise across weakly transmitting contacts while a low-impedance contact eliminates such noise almost fully. We obtain $F_{\rm tube}< 0.03$ for the intrinsic Fano factor of our MWNTs.'
author:
- 'T. Tsuneta$^{1}$, P. Virtanen, F. Wu$^1$, T. Wang$^{2}$, T. T. Heikkilä$^1$, and P. J. Hakonen$^1$'
title: 'Local and Non-local Shot Noise in Multiwalled Carbon Nanotubes'
---
Multiwalled carbon nanotubes (MWNTs) are minuscule systems, their diameter being only a few nanometers. Yet, in surprisingly many cases their transport properties can be described with incoherent theories, interference effects showing up only through weak localization [@Schonenberger; @Strunk]. This is in contrast to single-walled tubes, where interference effects dominate and give rise, for example, to Fabry-Perot resonances with distinctive features in conductance and current noise [@wu07].
When interference effects are washed out, semiclassical analysis based on non-equilibrium distribution functions is adequate, and the circuit theory of noise becomes a powerful tool in considering nanoscale objects [@semiclassical; @Nazarov02]. This theory makes it straightforward to calculate current noise of incoherent dots and wires, and to relate the current noise to the transmission properties of the corresponding section of the mesoscopic object. Semiclassical analysis provides a way to make a distinction between sample and contact effects, and thereby it allows one to investigate contact phenomena, of which only a little is known in carbon nanotube systems.
We have investigated the influence of contacts on the shot noise in multiterminal, diffusive carbon nanotubes. We have made four-lead measurements on MWNTs in which two middle probes have been employed for noise measurements. We show that quantitative information can be obtained from such measurements using semiclassical circuit theory in the analysis. We find that probes with contact resistance $R_C <
1$ k$\Omega$ act as strongly inelastic probes, resulting in incoherent, classical addition of noise of two adjacent sections, while “bad” contacts ($R_C \sim 10 $ k$\Omega$) act as weakly perturbing probes which need to be analysed on the same footing as the other parts of the sample. We also find that good contacts eliminate noise that couples to the probe from a non-neighboring voltage biased section. In addition, we find from our analysis that the tubes themselves are quite noise-free, with a Fano-factor $F_{\rm tube}< 0.03$. As far as we know, our results are the first shot noise measurements addressing the contact issues in carbon nanotubes.
To clarify the results of our multi-probe noise measurements, let us consider the three-terminal structure depicted schematically in Fig. \[fig:3probe\]. Assume that the average current ${\langle{I}\rangle}$ flows between 1 and 2, and the average potential of the terminal 3 adjusts to the potential of the node. In our work, terminal 3 is disconnected from the ground at low frequencies, but at the high frequencies of the noise measurement, the impedance to the ground is much lower than that of the contacts. As a result, the effect of voltage fluctuations in the third terminal on the overall noise can be neglected. We describe two kinds of noise measurements: “local”, where the noise is measured from one of the terminals 1 or 2, and “nonlocal”, where the noise is measured from terminal 3. The shot noise can thus be characterized by the local and non-local Fano factors, defined as $F_{li}=S_i/e{\langle{I}\rangle}$, $i=1,2$, and $F_{nl}=S_3/e{\langle{I}\rangle}$. Here, $S_i=\int dt{\langle{\delta
I_i(t)\delta I_i(0)}\rangle}$ is the low-frequency current noise measured in terminal $i$.
For strong inelastic scattering inside the node, the resulting expressions for $F_{l1}$ and $F_{nl}$ would be obtained from the classical circuit theory, yielding $$\begin{aligned}
\label{eq:classical3probe}
F_{l1} = \frac{(G_2+G_3)^2}{G_t^2}F_1
+ \frac{G_1^2}{G_t^2}F_2\,,
\medspace
F_{nl} = \frac{G_3^2}{G_t^2}(F_1 + F_2),\end{aligned}$$ where $G_t=G_1+G_2+G_3$. If the nonlocal terminal 3 is well connected to the node, $G_3\gg G_1, G_2$, the local noise measurements measure only the local Fano factor, $F_{l1}=F_1$ and $F_{l2}=F_2$, whereas the nonlocal noise is the sum of them, $F_{nl}=F_1+F_2$. This is because in this limit the terminal 3 suppresses the voltage fluctuations from the node, and the resulting noise is only due to the contacts. In the opposite limit $G_3\rightarrow0$, the nonlocal noise vanishes, and the local noise is given by the classical addition of voltage fluctuations, $F_{l1}=F_{l2}=(G_2^2F_1 + G_1^2F_2)/G_t^2$.
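The two limits quoted above follow directly from Eq. (\[eq:classical3probe\]); a short numerical sketch (our own helper, evaluated for arbitrary example conductances) makes them explicit:

```python
# Classical three-probe Fano factors, Eq. (eq:classical3probe), and their limits.
def fano_classical(G1, G2, G3, F1, F2):
    Gt = G1 + G2 + G3
    F_l1 = ((G2 + G3)**2 * F1 + G1**2 * F2) / Gt**2
    F_nl = G3**2 * (F1 + F2) / Gt**2
    return F_l1, F_nl

G1, G2, F1, F2 = 1.0, 2.0, 1.0, 1.0            # example conductances (arb. units), tunnel-like contacts
print(fano_classical(G1, G2, 1e3, F1, F2))      # G3 >> G1, G2:  F_l1 -> F1,  F_nl -> F1 + F2
print(fano_classical(G1, G2, 1e-3, F1, F2))     # G3 -> 0:  F_l1 -> (G2^2 F1 + G1^2 F2)/(G1+G2)^2,  F_nl -> 0
```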
At low temperatures the inelastic scattering inside the nanotubes is suppressed. In this case, assuming that the momentum of the electrons inside the node is isotropized [@BB; @Nazarov02], the noise can be calculated with the semiclassical Langevin approach [@semiclassical; @BB]. It considers the electron energy distribution function inside each node as a fluctuating quantity $f(E)={\langle{f(E)}\rangle}+\delta{}f(E)$, where $\delta{}f(E)$ are induced by intrinsic fluctuations of currents between the nodes $I_{ij}(E)=G_{ij}[f_i(E) - f_j(E)] + \delta I_{ij}(E)$. Properties of $\delta I_{ij}$ are known from scattering theory, which allows calculating all noise correlators in the circuit [@BB].
The fluctuation-averaged energy distribution $f_n(E)$ at the node is given by a weighted average of (Fermi) distributions at the terminals, $f_n(E)=\sum_jG_jf_j(E)/G_t$. In the general case, the resulting expressions for Fano factors $F_{l1}$, $F_{l2}$, $F_{nl}$ are lengthy, but in the case of strongly coupled terminal 3, $G_3\gg G_1,G_2$, one obtains the same expressions as in the classical case. This is because in this limit the distribution function of the node is given by the Fermi function of terminal 3. In the opposite limit $G_3 \ll G_1, G_2$ the nonlocal noise vanishes and the local noise is given by the semiclassical sum rule, $$\begin{aligned}
F_{l1}=F_{l2}
=\frac{G_2^3 F_1 + G_1^3 F_2 + G_1G_2(G_1+G_2)}{(G_1+G_2)^3} .
\label{eq:semiclassicalsumrule}\end{aligned}$$ This sum rule applies for any pair of neighboring nodes of an arbitrary one-dimensional chain of junctions, assuming that inelastic scattering can be neglected. In the limit of a long chain with many nodes, applying this rule repeatedly makes the Fano factor approach the universal value $F=1/3$ characteristic for a diffusive wire [@oberholzer].
![\[fig:3probe\] (a) 3-probe structure. The node is denoted by a circle. (b) Extended contact. ](3probe)
When comparing the semiclassical model to the experiments, we ignore the resistance of the wires under the contacts and use a model of localized contacts. In practice, the large width and low interfacial resistance of the contacts make the current flow through them non-uniform. Including this fact in the theoretical model by describing the tube using a large number of nodes on top of the contacts (see Fig. \[fig:3probe\]b) did not improve the fit between the model and the experimental noise data, whereas some improvement was obtained in the fit to the resistance data. This means that a situation between the localized and distributed contacts is realized; however, including this fact in the model would increase the number of fitting parameters.
Our individual nanotube samples S1 and S2 were made out of plasma-enhanced CVD MWNTs [@Koshio] with lengths of $L=2.6$ and 5 $\mu$m and diameters of $\phi=8.9$ nm and 4.0 nm, respectively. The main parameters of the samples are given in Table \[tab:resistances\]; the noise data is summarized in Tables \[tab:s1fano\] and \[tab:s2fano\]. The contacts on the PECVD tubes were made using standard e-beam overlay lithography. In these contacts, 2 nm of Ti was employed as an adhesive layer before depositing 30 nm of gold. The width of the contacts was $L_{1C} = 400$ nm and $L_{2C} = 550$ nm for samples 1 and 2, respectively. The strongly doped Si substrate was employed as the back gate ($C_g \sim 5$ aF), separated from the sample by 150 nm of SiO$_2$.
-------- ---------- ---------- ---------- ------------- ------------- ------------- ------------- ------------- ------------- ------------- -------------
$\phi$ $L_{12}$ $L_{23}$ $L_{34}$ $R_{12}$ $R_{23}$ $R_{34}$ $R_{67}$ $R_{C1} $ $R_{C2} $ $R_{C3}$ $R_{C4} $
(nm) (nm) (nm) (nm) (k$\Omega$) (k$\Omega$) (k$\Omega$) (k$\Omega$) (k$\Omega$) (k$\Omega$) (k$\Omega$) (k$\Omega$)
8.9 430 300 540 35 30 34 17.5 - 0.5 12 -
27 27 41 17 2.4 0.1 9.7 $10^{-5}$
4.0 940 440 1110 21 25 28 16.5 - 5 1.7 -
27 16 31 12 $10^{-5}$ 2.0 1.8 $10^{-5}$
-------- ---------- ---------- ---------- ------------- ------------- ------------- ------------- ------------- ------------- ------------- -------------
: Main parameters of our samples S1 (upper part) and S2 (lower part). The diameter is given by $\phi$ and the length of the tube sections are denoted $L_{ij}$ (excluding the range of contacts), where the indices $ij$ correspond to pairs of terminals (see Fig. \[exp-setup\]) 12, 23, and 34. Resistances of the individual sections $R_{ij}$ at $V_g=0$ were measured using bias $V=0.1 - 0.2$ V in order to avoid zero bias anomalies; $R_{67}$ indicates the 4-wire resistance $R_{4p}$. Contact resistances $R_{Ck}$, weakly dependent on $V_g$ and bias voltage polarity, are given at $V_g=0$ and $V>0$; index $k$ identifies the contact. All the resistance data refer to $T= 4.2$ K. The lower row values correspond to resistance values obtained in fitting of the semiclassical model. For details, see text. []{data-label="tab:resistances"}
Our measurement setup is illustrated in Fig. \[exp-setup\]. Bias-tees are used to separate the dc bias and the bias-dependent noise signal at microwave frequencies. We use an LHe-cooled, low-noise amplifier [@Cryogenics04] with an operating frequency range of $f=600 - 950$ MHz. A microwave switch and a high-impedance tunnel junction were used to calibrate the gain. Our setup measures voltage fluctuations with respect to ground at the node next to the contact terminal; the voltage fluctuations are converted to current fluctuations by the contact resistance of the measuring terminal. The sample was biased using one voltage source, with one lead connected to the (virtual) ground of the input of a DL1211 current preamplifier and the remaining two terminals left floating.
From four-point measurements at 0.1 - 0.2 V, we get $R_{4p}=17.5$ k$\Omega$ and $R_{4p}=16.5$ k$\Omega$ (section 6-7 in Fig. \[exp-setup\]) for samples S1 and S2, respectively. Within diffusive transport, this yields for the resistance per unit length $r_l = 37 - 58$ k$\Omega/\mu$m, which amounts to $\sim 20$ k$\Omega$ over the length of a contact. The contact resistances for contacts 2 and 3 of S1 and S2 were determined as averages from a set of two-lead measurements: $R_{C2}=(R_{12}+R_{23}-R_{13})/2$ and $R_{C3}=(R_{23}+R_{34}-R_{24})/2$; this scheme was adopted as there is a non-local contribution in voltage [@Nonlocal]. $R_{C2}$ and $R_{C3}$ were found to be nearly constant except for a small region near zero bias. Variation of $V_g=-4 ... +4$ V changes $R_{C_2}$ from $\sim 4$ to 6 k$\Omega$ and $R_{C_3}$ from $\sim 1$ to 3 k$\Omega$ on S2 on average. In both cases, $R_{C}$ increases when going from $V<0$ to $V>0$ (cf. Table \[tab:s2fano\]). We cannot determine the contact resistance of contacts 1 and 4, only the sum of the contact and the nanotube section. For sample 1, we find $R_{C2} = 0.1 - 1$ k$\Omega$, indicating that we have an excellent contact, while $R_{C3} = 12 $ k$\Omega$ points to a weak, tunneling contact.
![Schematics of our high frequency setup. Indices 5-8 refer to nodes with different distribution functions on the nanotube. Contacts are drawn as tunnel junctions with resistances $R_{ij}$; numbers 1-4 represent the measurement terminals. A sum of lead and bonding pad capacitance is given by $C_p \sim 1$ pF while the inductors represent bond wires of $L_s \sim 10$ nH. TJ denotes a tunnel junction for noise calibration.[]{data-label="exp-setup"}](noise4probe2chSwitchB.ps){width="6.5cm"}
The measured noise as a function of current is displayed in Fig. \[noise-exp3\] for sample S2. We measured current noise $S_{i^2}$ both using DC current, $S_{i^2}= S(I)-S(0)$ and using AC modulation on top of DC bias: $S_{i^2}=\int_{0}^{I}
\left(\frac{dS}{dI}\right) dI$, where $S$ represents the noise power integrated over the 250 MHz bandwidth (divided by 50 $\Omega$) and $\left(\frac{dS}{dI}\right)$ denotes the differentially measured noise. As seen in Fig. \[noise-exp3\], the measured noise for each section of the tube is quite well linear with current at $I < 1$ $\mu$A, while at larger currents the Fano factor decreases gradually with $I$. We determine the Fano factor using linear fits to $S_{i^2}$ in the range $0.1-2 \mu$A: the results vary over 0.1 - 0.5 as seen in Tables \[tab:s1fano\] and \[tab:s2fano\].
![(color online) a) Current noise power (arbitrary units) measured from lead 2 in sample 2 as a function of bias current, applied according to the label (see text). The solid curve indicates noise with $F=1$ (tunnel junction). b) The noise measured from terminal 3 - notations as above. []{data-label="noise-exp3"}](noiseVg0lead2b.ps "fig:"){width="6.6cm"} ![(color online) a) Current noise power (arbitrary units) measured from lead 2 in sample 2 as a function of bias current, applied according to the label (see text). The solid curve indicates noise with $F=1$ (tunnel junction). b) The noise measured from terminal 3 - notations as above. []{data-label="noise-exp3"}](noiseVg0lead3b.ps "fig:"){width="6.3cm"}
The basic finding of our measurement is that the noise of the sections adjacent to probes 2 and 3 may behave quite differently, depending on how strong the contact is between the gold lead and the nanotube. For terminal 2 of S1, the noise adds in a classical fashion as expected for a good contact, i.e., $S_{2,13} \simeq
S_{2,12}+S_{2,23}$ ($F_{2,13} \simeq F_{2,12}+F_{2,23}$ in Table \[tab:s1fano\]). Here, the first index gives the contact from which the noise is measured, the middle index the biased terminal, and the last index the grounded contact. For terminal 3 of S1, on the other hand, we find that $S_{3,24}\simeq S_{3,23} \simeq S_{3,34}$. In sample S2, the results are intermediate to the above extreme cases (for example, $S_{2,12},
S_{2,23} < S_{2,13}$ and $S_{2,12} + S_{2,23} > S_{2,13}$). All the above basic relations are in accordance with semiclassical circuit analysis, while the results related over a good contact can be accounted for by purely classical circuit theory (cf. Eq. (\[eq:classical3probe\])).
We fit the semiclassical theory to our data using basically three types of fitting parameters: the tube resistance per unit length $r_{l}$, the interfacial resistances $R_{Ck}$ in the contact regions, and the Fano factor $F_{\rm tube}$ for the parts of the tube away from the contacts. For the contacts we use $F=1$, as their Fano factor makes only a small contribution to the noise when the contacts are of good quality. The model thus contains 7 adjustable quantities, and can be used to predict the four measured resistances and the 12 noise correlators. These are fit through a least-squares minimization procedure.
Tables \[tab:s1fano\] and \[tab:s2fano\] display the calculated results for the noise correlators, while the corresponding resistances are shown in Table I. In both samples the optimum is found with $F_{\rm tube} \sim 0$. The classical model, even though consistent with the data for good contacts, does not provide a good overall account of either of our samples. The calculated noise agrees with the measurement to within $10 - 30$%, excluding the two smallest non-local noise values. Within the error bars for the measured data, especially due to the difficulties in determining the exact linear-response resistance values (see below), we obtain an upper limit $F_{\rm tube} \lesssim 0.03$, i.e. most of the noise comes from the non-equilibrium state generated by the current (as in the last term in Eq. ), not from the transport in the tube itself [@NOTE].
------------ --------- --------- ------ ------- ---------- ---------- ------- -------
$I_{-}$ $I_{+}$ Fit Dev % $I_{-}$ $I_{+}$ Fit Dev %
$S_{k,12}$ 0.11 0.080 0.11 15 $<0.005$ $<0.005$ 0.006 20
$S_{k,13}$ 0.48 0.46 0.39 16 0.41 0.39 0.38 5
$S_{k,14}$ 0.51 0.50 0.46 9 0.50 0.46 0.45 7
$S_{k,23}$ 0.41 0.40 0.29 28 0.42 0.38 0.38 6
$S_{k,24}$ 0.43 0.42 0.35 17 0.54 0.47 0.45 12
$S_{k,34}$ 0.24 0.24 0.30 26 0.46 0.50 0.40 18
------------ --------- --------- ------ ------- ---------- ---------- ------- -------
: Summary of Fano factors measured at $I<2$ $\mu$A for sample S1 at terminals $k=2$ and 3. The values in column “Fit” have been calculated using semiclassical circuit theory (see text), and the values in column “Dev” show the deviation between the theory and averaged experimental Fano factor. []{data-label="tab:s1fano"}
---------------------- --------- --------- --------- --------- ------ ------- --------- ---------
$I_{-}$ $I_{+}$ $I_{-}$ $I_{+}$ Fit Dev % $I_{-}$ $I_{+}$
$R_{C2}$ (k$\Omega$) 5.8 6.4 5.3 4.9 1.8 65 3.9 5.1
$S_{2,12}$ 0.26 0.24 0.21 0.20 0.19 10 0.24 0.23
$S_{2,13}$ 0.37 0.34 0.33 0.32 0.35 6 0.38 0.35
$S_{2,14}$ 0.42 0.38 0.37 0.36 0.41 11 0.36 0.36
$S_{2,23}$ 0.25 0.23 0.24 0.25 0.23 5 0.25 0.24
$S_{2,24}$ 0.36 0.32 0.27 0.28 0.29 7 0.28 0.21
$S_{2,34}$ 0.075 0.11 0.061 0.082 0.12 64 0.055 0.053
---------------------- --------- --------- --------- --------- ------ ------- --------- ---------
: Summary of Fano factors measured at $I<2$ $\mu$A for S2 at three different gate voltage values as well as the measured contact resistances for sample 2 at terminals 2 (top) and 3 (bottom). Columns “Fit” and “Dev” refer to theoretical fit using semiclassical analysis and its deviation from the experimental data as in Table \[tab:s1fano\].[]{data-label="tab:s2fano"}
---------------------- --------- --------- --------- --------- ------ ------- --------- ---------
$I_{-}$ $I_{+}$ $I_{-}$ $I_{+}$ Fit Dev % $I_{-}$ $I_{+}$
$R_{C3}$ (k$\Omega$) 2.4 3.3 1.4 2.1 1.7 $<$5 0.9 2.9
$S_{3,12}$ 0.081 0.079 0.13 0.13 0.13 2 0.043 0.052
$S_{3,13}$ 0.14 0.15 0.29 0.27 0.29 4 0.14 0.14
$S_{3,14}$ 0.34 0.24 0.31 0.30 0.40 30 0.20 0.21
$S_{3,23}$ - - 0.24 0.25 0.23 8 - -
$S_{3,24}$ 0.26 0.19 0.40 0.35 0.33 12 0.17 0.18
$S_{3,34}$ 0.12 0.087 0.24 0.19 0.17 23 0.085 0.077
---------------------- --------- --------- --------- --------- ------ ------- --------- ---------
: Summary of Fano factors measured at $I<2$ $\mu$A for S2 at three different gate voltage values as well as the measured contact resistances for sample 2 at terminals 2 (top) and 3 (bottom). Columns “Fit” and “Dev” refer to theoretical fit using semiclassical analysis and its deviation from the experimental data as in Table \[tab:s1fano\].[]{data-label="tab:s2fano"}
From Table \[tab:s2fano\] we obtain average Fano factors $F=0.26$ and $F= 0.18$ at $V_g=-4, 0, +4$ V for terminals 2 and 3, respectively. These are ensemble-averaged values which describe the combined properties of the tube and its contacts. On the other hand, if we take only the noise from the single sections 12 and 23 (or 23 and 34), the average is $F=0.24$ (0.16), close to the above values. Altogether, the local, single-section Fano factors vary over 0.10-0.48, quite distinct from the expectation for a simple diffusive wire, and the semiclassical analysis is able to account for all of these under the premise that the tube is noise-free! This conclusion is in line with results on SWNT bundles, which have shown small noise as well [@Roche02]. In semiconducting MWNTs, rather large noise at 1 GHz has been observed, which has been assigned to the presence of Schottky barriers [@WuSemi].
The fitted resistances deviate slightly from the measured values. This is partly because in nanotubes it is difficult to avoid uncertainties in four-probe measurements (part of the current flows through the voltage probes), and partly because of the presence of non-local voltages. In addition, the quality of our MWNTs may be so good that the conduction becomes semiballistic and our analysis is no longer valid. For example, in section 1-2 of S1, the IV curve displays a power law which clearly differs from the rest of the sample. The fitted contact resistances range over $R_C=0.1-10$ k$\Omega$, which is in agreement with typical measured values [@Schonenberger; @deHeer].
In summary, we have investigated experimentally the shot noise of multiterminal MWNTs under several biasing conditions. The noise was found to depend strongly on the contact resistance. At small interfacial resistance, our 0.4-0.5 micron contacts acted as inelastic probes and a classical noise analysis was found sufficient. Weaker contacts could be accounted for by using semiclassical theory. The latter allows us to account for the observed broad spectrum of Fano factors, but it leads to the conclusion that the intrinsic noise of MWNTs is nearly zero, at most $F_{\rm tube}< 0.03$. Most of the observed noise is generated by the metal-nanotube contacts, which govern the non-equilibrium distributions of charge carriers on the tube.
We thank L. Lechner, B. Placais, and E. Sonin for fruitful discussions and S. Iijima, A. Koshio, and M. Yudasaka for the carbon nanotube material employed in our work. This work was supported by the Academy of Finland and by the EU contract FP6-IST-021285-2.
[99]{}
C. Schönenberger, A. Bachtold, C. Strunk, J.-P. Salvetat and L. Forro, Appl. Phys. A **69**, 283 (1999).
See, *e.g.*, B. Stojetz, C. Miko, L. Forró, and C. Strunk, Phys. Rev. Lett. **94**, 186802 (2005).
See F. Wu *et al.*, cond-mat/0702332, and references therein.
K.E. Nagaev, Phys. Lett. A **169**, 103 (1992); P. Virtanen and T. T. Heikkilä, New J. Phys. [**8**]{}, 50 (2006).
Y. Nazarov and D. Bagrets, Phys. Rev. Lett. **88** (2002).
Ya.M. Blanter, M. Büttiker, Phys. Rep. **336**, 1 (2000).
S. Oberholzer, E. V. Sukhorukov, C. Strunk, and C. Schönenberger, Phys. Rev. B **66**, 233304 (2002).
A. Koshio, M. Yudasaka, and S. Iijima, Chem. Phys. Lett. **356**, 595 (2002).
L. Roschier and P. Hakonen, Cryogenics **44**, 783 (2004).
We recorded the non-local voltage $V_{ij}$ produced by current $I_{kl}$ where ij and kl were 12 (34) and 34 (12), respectively. We obtained for $V_{12}/I_{34}= 18$ k$\Omega$ and $V_{34}/I_{12}= 2$ k$\Omega$ on S1. This is a bit stronger coupling than found in B. Bourlon, *et al.* Phys. Rev. Lett. **93**, 176806 (2004).
This upper limit was obtained with the localized contact model and assuming $F=1$ for the contacts.
P. E. Roche, [*et al.*]{}, Eur. Phys. J. B **28**, 217 (2002).
F. Wu [*et al.*]{}, Phys. Rev. B **75**, 125419 (2007).
P. Poncharal, C. Berger, Yan Yi, Z. L. Wang, and W. A. de Heer, J. Phys. Chem. B **106**, 12104 (2002).
|
---
abstract: 'Recent years have witnessed tremendous progress in single image super-resolution (SISR) owing to the deployment of deep convolutional neural networks (CNNs). For most existing methods, the computational cost of each SISR model is independent of local image content, hardware platform and application scenario. Nonetheless, a content- and resource-adaptive model is preferable, and it is encouraging to apply simpler and more efficient networks to the easier regions with fewer details and to the scenarios with restricted efficiency constraints. In this paper, we take a step forward to address this issue by leveraging adaptive inference networks for deep SISR (AdaDSR). In particular, our AdaDSR involves an SISR model as backbone and a lightweight adapter module which takes image features and a resource constraint as input and predicts a map of local network depth. Adaptive inference can then be performed with the support of efficient sparse convolution, where only a fraction of the layers in the backbone is performed at a given position according to its predicted depth. The network learning can be formulated as the joint optimization of reconstruction and network depth losses. In the inference stage, the average depth can be flexibly tuned to meet a range of efficiency constraints. Experiments demonstrate the effectiveness and adaptability of our AdaDSR in contrast to its counterparts (e.g., EDSR and RCAN).'
author:
- 'Ming Liu[^1^]{}'
- 'Zhilu Zhang[^1^]{}'
- 'Liya Hou[^1^]{}'
- 'Wangmeng Zuo[^1\ ()^]{}'
- |
Lei Zhang[^2^]{}\
[Harbin Institute of Technology, Harbin, China]{}\
[The Hong Kong Polytechnic University, Hong Kong, China]{}\
[`csmliu@outlook.com,`]{} [`cszlzhang@outlook.com,`]{} `h_liya@outlook.com,` [`wmzuo@hit.edu.cn,`]{} [`cslzhang@comp.polyu.edu.hk`]{}
bibliography:
- 'AdaDSR.bib'
title: |
Deep Adaptive Inference Networks for\
Single Image Super-Resolution
---
Introduction {#sec:intro}
============
Image super-resolution, which aims at recovering a high-resolution (HR) image from its low-resolution (LR) counterpart, is a representative low-level vision task with many real-world applications such as medical imaging [@shi2013cardiac], surveillance [@zou2011very] and entertainment [@old_film]. Recently, driven by the development of deep convolutional neural networks (CNNs), tremendous progress has been made in single image super-resolution (SISR). On the one hand, the quantitative performance of SISR has been continuously improved by many outstanding representative models such as SRCNN [@SRCNN], VDSR [@VDSR], SRResNet [@SRResNet], EDSR [@EDSR], RCAN [@RCAN], and SAN [@SAN]. On the other hand, considerable attention has also been given to several other issues in SISR, including visual quality [@SRResNet], degradation models [@DPSR], and blind SISR [@zhang2019multiple].
Despite the unprecedented success of SISR, for most existing networks the computational cost of each model is still independent of image content and application scenario. Given such an SISR model, once the training is finished, the inference process is deterministic and depends only on the model architecture and the input image size. Actually, instead of deterministic inference, it is appealing to make the inference adaptive to local image content. To illustrate this point, Fig. \[fig:intro\](c) shows the SISR results of three image patches using EDSR [@EDSR] with different numbers of residual blocks. It can be seen that EDSR with 8 residual blocks is sufficient to super-resolve a smooth patch with few textures. In contrast, at least 24 residual blocks are required for the patch with rich details. Consequently, treating the whole image equally and processing all regions with an identical number of residual blocks will certainly lead to a waste of computation resources. Thus, it is encouraging to develop a spatially adaptive inference method for a better tradeoff between accuracy and efficiency.
Moreover, the SISR model may be deployed to diverse hardware platforms. Even for a given hardware device, the model can be run under different battery conditions or workloads, and has to meet various efficiency constraints. One natural solution is to design and train numerous deep SISR models in advance, and dynamically select the appropriate one according to the hardware platform and efficiency constraints. Nonetheless, both the training and the storage of multiple deep SISR models are expensive, greatly limiting their practical application to scenarios with highly dynamic efficiency constraints. Instead, we suggest addressing this issue by further making the inference method adaptive to efficiency constraints.
To make the learned model adapt to local image content and efficiency constraints, this paper presents a kind of adaptive inference networks for deep SISR, i.e., AdaDSR. Considering that stacked residual blocks have been widely adopted in representative SISR models [@SRResNet; @EDSR; @RCAN], AdaDSR introduces a lightweight adapter module which takes image features as the input and produces a map of local network depth. Therefore, given a position with local network depth $d$, only the first $d$ blocks are required to be computed in the testing stage. Thus, our AdaDSR can apply shallower networks to the smooth regions (i.e., lower depth), and exploit deeper ones for the regions with detailed textures (i.e., higher depth), thereby benefiting the tradeoff between accuracy and efficiency. Taking all the positions into account, sparse convolution can be adopted to facilitate efficient and adaptive inference.
We further improve AdaDSR to be adaptive to efficiency constraints. Note that the average of the depth map can be used as an indicator of inference efficiency. For simplicity, the efficiency constraint of a hardware platform and application scenario can be represented as a specific desired depth. Thus, we also take the desired depth as an input of the adapter module, and require the average of the predicted depth map to approximate the desired depth. The learning of AdaDSR can then be formulated as the joint optimization of reconstruction and network depth losses. After training, we can dynamically set the desired depth value to accommodate various application scenarios, and then adopt our AdaDSR to meet the efficiency constraints.
Experiments are conducted to assess our AdaDSR. Without loss of generality, we adopt the EDSR [@EDSR] model as the backbone of our AdaDSR (denoted by AdaEDSR). It can be observed from Fig. \[fig:intro\](b) that the predicted depth map has smaller depth values for the smooth regions and higher ones for the regions with rich small-scale details. As shown in Fig. \[fig:intro\](d), our AdaDSR can be flexibly tuned to meet various efficiency constraints (e.g., [FLOPs]{}) by specifying proper desired depth values. In contrast, most existing SISR methods can only be performed with deterministic inference and fixed computational cost. Quantitative and qualitative results further show the effectiveness and adaptability of our AdaDSR in comparison to state-of-the-art deep SISR methods. Furthermore, we also take another representative SISR model, RCAN [@RCAN], as the backbone (denoted by AdaRCAN), which illustrates the generality of our AdaDSR. Considering the training efficiency, ablation analyses are performed on AdaDSR with the EDSR backbone (i.e., AdaEDSR).
To sum up, the contributions of this work include:
- We present adaptive inference networks for deep SISR, i.e., AdaDSR, which augment the backbone with a lightweight adapter module to produce a local depth map for spatially adaptive inference.
- Both image features and the desired depth are taken as the input of the adapter, and the reconstruction loss is combined with a depth loss for network learning, thereby making AdaDSR, equipped with sparse convolution, adaptive to various efficiency constraints.
- Experiments show that our AdaDSR achieves a better tradeoff between accuracy and efficiency than its counterparts (e.g., EDSR and RCAN), and can adapt to different efficiency constraints without training from scratch.
Related Work
============
In this section, we briefly review several topics relevant to our AdaDSR, including deep SISR models and adaptive inference methods.
Deep Single Image Super-Resolution
----------------------------------
Dong et al. introduce a three-layer convolutional network in their pioneering work SRCNN [@SRCNN]; since then, the quantitative performance of SISR has been continuously promoted with the rapid development of CNNs. Kim et al. [@VDSR] further propose a deeper model named VDSR with residual learning and adjustable gradient clipping. Liu et al. [@MWCNN] propose MWCNN, which accelerates the running speed and enlarges the receptive field by deploying a U-Net [@U-Net]-like architecture, where multi-scale wavelet transformations are applied rather than traditional down-sampling or up-sampling modules to avoid information loss.
These methods take interpolated LR images as input, resulting in a heavy computational burden, so many recent SISR methods choose to increase the spatial resolution via PixelShuffle [@PixelShuffle] at the tail of the model. SRResNet [@SRResNet], EDSR [@EDSR] and WDSR [@WDSR] follow this setting and build a deep main body by stacking several identical residual blocks [@ResNet] before the tail component, and they obtain better performance and efficiency by modifying the architecture of the residual blocks. Zhang et al. [@RCAN] build a very deep (more than 400 layers) yet narrow (64 channels vs. 256 channels in EDSR) RCAN model and learn a content-related weight for each feature channel inside the residual blocks. Dai et al. [@SAN] propose SAN to obtain better feature representations via a second-order attention model, and a non-locally enhanced residual group is incorporated to capture long-distance features. Apart from the fidelity track, considerable attention has also been given to several other issues in SISR. For example, SRGAN [@SRResNet] incorporates an adversarial loss to improve perceptual quality, DPSR [@DPSR] proposes a new degradation model and performs super-resolution and deblurring simultaneously, and Zhang et al. [@zhang2019multiple] solve the real-image SISR problem in an unsupervised manner by taking advantage of generative adversarial networks. In addition, lightweight networks such as IDN [@IDN] and CARN [@CARN] have been proposed, but most lightweight models are accelerated at the cost of quantitative performance. In this paper, we propose an AdaDSR model, which achieves a better tradeoff between accuracy and efficiency.
Adaptive Inference
------------------
Traditional deterministic CNNs tend to be less flexible in meeting the various requirements of applications. As a remedy, many adaptive inference methods have been explored in recent years. Inspired by [@bengio2013better], Upchurch et al. [@upchurch2017deep] propose to learn an interpolation of deep features extracted by a pre-trained model, and manipulate the attributes of facial images. Shoshan et al. [@DynamicNet] further propose a dynamic model named DynamicNet by deploying tuning blocks alongside the backbone model, and linearly manipulate the features to learn an interpolation of two objectives, which can be tuned to explore the whole objective space during the inference phase. Similarly, CFSNet [@CFSNet] implements continuous transitions between different objectives, and automatically learns the trade-off between perception and distortion for SISR.
Some methods also leverage adaptive inference to obtain computationally efficient models. Li et al. [@li2019improved] deploy multiple classifiers between the main blocks, and the last one performs as a teacher net to guide the previous ones. During the inference phase, the confidence score of a classifier indicates whether to perform the next block and the corresponding classifier. Figurnov et al. [@patch_adaptive] predict a stop score for image patches, which determines whether to skip the subsequent layers, indicating that different regions have unequal importance for detection tasks. Therefore, skipping layers at less important regions can save inference time. Yu et al. [@Path-restore] propose to build a denoising model with several multi-path blocks, and in each block, a path finder is deployed to select a proper path for each image patch. These methods are similar to our AdaDSR; however, they perform adaptive inference at the patch level, and the adaptation depends only on the features. In this paper, our AdaDSR implements pixel-wise adaptive inference via sparse convolution and is manually controllable to meet various resource constraints.
Proposed Method
===============
This section presents our AdaDSR model for single image super-resolution. To begin with, we equip the backbone with a network depth map to facilitate spatially variant inference. Then, sparse convolution is introduced to speed up the inference by omitting the unnecessary computation. Furthermore, a lightweight adapter module is deployed to predict the network depth map. Finally, the overall network structure (see Fig. \[fig:network\]) and learning objective are provided.
![image](figures/architecture.pdf){width="\linewidth"}
AdaDSR with Spatially Variant Network Depth
-------------------------------------------
Single image super-resolution aims at learning a mapping to reconstruct the high-resolution image $\hat{\mathbf{y}}$ from its low-resolution (LR) observation $\mathbf{x}$, and can be written as, $$\hat{\mathbf{y}} = \mathcal{F}(\mathbf{x}; \mathrm{\Theta}),
\label{eqn:deepSR}$$ where $\mathcal{F}$ denotes the SISR network with the network parameters $\mathrm{\Theta}$. In this work, we consider a representative category of deep SISR networks that consist of three major modules, , feature extraction $\mathcal{F}_e$, residual blocks, and HR reconstruction $\mathcal{F}_r$. Several representative SISR models, , SRResNet [@SRResNet], EDSR [@EDSR], and RCAN [@RCAN], belong to this category. Using EDSR as an example, we let $\mathbf{z}_0 = \mathcal{F}_e(\mathbf{x})$. The output of the residual blocks can then be formulated as, $$\mathbf{z}^{o} = \mathbf{z}_0 + \sum_{l=1}^{D} \mathcal{F}_l(\mathbf{z}_{l-1}; \mathrm{\Theta}_l),
\label{eqn:ResBlocks}$$ where $\mathrm{\Theta}_l$ denotes the network parameters associated with the $l$-th residual block. Given the output of the $(l-1)$-th residual block, the $l$-th residual block can be written as $\mathbf{z}_{l} = \mathbf{z}_{l-1} + \mathcal{F}_l(\mathbf{z}_{l-1}; \mathrm{\Theta}_l)$. Finally, the reconstructed HR image can be obtained by $\hat{\mathbf{y}} = \mathcal{F}_r(\mathbf{z}^{o}; \mathrm{\Theta}_r)$.
As shown in Fig. \[fig:intro\], the difficulty of super-resolution is spatially variant. For example, it is not necessary to go through all the $D$ residual blocks in Eqn. (\[eqn:ResBlocks\]) to reconstruct the smooth regions. As for the regions with rich and detailed textures, more residual blocks are generally required to fulfill high-quality reconstruction. Therefore, we introduce a 2D network depth map $\mathbf{d}$ ($0 \leq d_{ij} \leq D $) which has the same spatial size as $\mathbf{z}_0$. Intuitively, the network depth $d_{ij}$ is smaller for smooth regions and larger for regions with rich details. To facilitate spatially adaptive inference, we modify Eqn. (\[eqn:ResBlocks\]) as, $$\mathbf{z}^{o} = \mathbf{z}_0 + \sum_{l=1}^{D} \mathcal{G}_l(\mathbf{d}) \circ \mathcal{F}_l(\mathbf{z}_{l-1}; \mathrm{\Theta}_l),
\label{eqn:AdaResBlocks}$$ where $\circ$ denotes the entry-wise product. Here, $\mathcal{G}_l({d}_{ij})$ is defined as, $$\mathcal{G}_l(d_{ij})=\left\{
\begin{array}{cl}
0,\ \ & d_{ij} < l-1 \\
1,\ \ & d_{ij} > l \\
d_{ij}-(l-1),\ \ & otherwise
\end{array}\right..\label{eqn:MaskFunc}$$
Let $\lceil \cdot \rceil$ be the ceiling function; thus, the last $D - \lceil d_{ij} \rceil$ residual blocks need not be computed for a position with network depth $d_{ij}$. Given the 2D network depth map $\mathbf{d}$, we can then exploit Eqn. (\[eqn:AdaResBlocks\]) to conduct spatially adaptive inference.
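As a minimal, dense reference sketch of Eqns. (\[eqn:AdaResBlocks\]) and (\[eqn:MaskFunc\]) in PyTorch (the actual speed-up comes from the sparse convolution discussed next, and `blocks` is assumed to be a list of residual-block modules):

```python
import torch

def depth_mask(d, l):
    """Per-position weight G_l(d): 0 below l-1, 1 above l, linear in between."""
    return (d - (l - 1)).clamp(min=0.0, max=1.0)       # d is the (H, W) depth map

def adaptive_residual_forward(z0, d, blocks):
    """Dense evaluation of z^o = z_0 + sum_l G_l(d) * F_l(z_{l-1})."""
    z, out = z0, z0.clone()
    for l, block in enumerate(blocks, start=1):
        r = block(z)                                   # residual branch F_l(z_{l-1})
        out = out + depth_mask(d, l)[None, None] * r   # accumulate the masked residual
        z = z + r                                      # z_l = z_{l-1} + F_l(z_{l-1})
    return out
```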
![An example illustrating the im2col [@im2col] based sparse convolution. $\star$, $\circ$ and $\ast$ represent convolution, entry-wise product and matrix multiplication, respectively. $\mathbf{f}$, $\mathbf{w}$ and $\mathbf{o}$ are the input feature, convolution kernel and output feature of the standard convolution operation, which can be implemented by any convolution algorithm, while $\mathbf{F}'$ and $\mathbf{w}'$ are reorganized from $\mathbf{f}$ and $\mathbf{w}$ during the im2col procedure. Given the mask $\mathbf{m}$, the reorganized $\mathbf{m}'$ indicates that the shaded rows can be safely ignored in the im2col based sparse convolution, thereby reducing the computation amount compared to standard-convolution-based sparse convolution (as shown in the upper half).](figures/im2col_3.pdf){width=".9\linewidth"}
\[fig:im2col\]
Sparse Convolution for Efficient Inference
------------------------------------------
Let $\mathbf{m}$ (i.e., $\mathbf{m}_l = \mathcal{G}_l(\mathbf{d})$ for the $l$-th residual block) be a mask indicating the positions where the convolution activations should be kept. As shown in Fig. \[fig:im2col\], for some convolution implementations such as fast Fourier transform (FFT) [@fft-1; @fft-2] and Winograd [@winograd] based algorithms, one should first perform the standard convolution to obtain the whole output feature map by $\mathbf{o}=\mathbf{w}\star\mathbf{f}$. Here, $\mathbf{f}$, $\mathbf{w}$ and $\star$ denote the input feature map, the convolution kernel and the convolution operation, respectively. Then the sparse result can be represented by $\mathbf{o}^{*} = \mathbf{m} \circ \mathbf{o}$. Nonetheless, such implementations meet the requirement of spatially adaptive inference while maintaining the same computational complexity as the standard convolution.
Instead, we adopt the im2col [@im2col] based sparse convolution for efficient adaptive inference. As shown in Fig. \[fig:im2col\], the patch from $\mathbf{f}$ related to a point in $\mathbf{o}$ is organized as a row of the matrix $\mathbf{F}'$, and the convolution kernel $\mathbf{w}$ is also converted into a vector $\mathbf{w}'$. The convolution operation is thereby transformed into a matrix multiplication problem, which is highly optimized in many Basic Linear Algebra Subprograms (BLAS) libraries. Then, the result $\mathbf{o}'$ can be organized back into the output feature map. Given the mask $\mathbf{m}$, we can simply skip the corresponding row when constructing the reorganized input feature $\mathbf{F}'$ if it has zero mask value (see the shaded rows of $\mathbf{F}'$ in Fig. \[fig:im2col\]), and the computation is skipped as well. Thus, the spatially adaptive inference in Eqn. (\[eqn:AdaResBlocks\]) can be efficiently implemented via the im2col and col2im procedures. Moreover, the efficiency can be further improved when more rows are masked out, i.e., when the average depth of $\mathbf{d}$ is smaller.
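A minimal sketch of this idea for a single $3\times3$ convolution (stride 1), using `torch.nn.functional.unfold` as the im2col step, is given below; it only illustrates how rows with zero mask value are dropped from the matrix multiplication, not the optimized implementation released with the paper.

```python
import torch
import torch.nn.functional as F

def im2col_sparse_conv(x, weight, mask, padding=1):
    """im2col-based sparse 3x3 convolution.

    x:      (1, C_in, H, W) input feature map
    weight: (C_out, C_in, 3, 3) convolution kernel
    mask:   (H, W) mask; positions with mask == 0 are skipped entirely
    """
    _, c_in, h, w = x.shape
    c_out = weight.shape[0]
    cols = F.unfold(x, kernel_size=3, padding=padding)[0].t()   # (H*W, C_in*9), i.e. F'
    w_mat = weight.reshape(c_out, -1).t()                       # (C_in*9, C_out), i.e. w'
    keep = mask.reshape(-1) > 0                                 # rows to keep (m')
    out = x.new_zeros(h * w, c_out)
    out[keep] = cols[keep] @ w_mat                              # multiply only the kept rows
    return out.t().reshape(1, c_out, h, w)                      # back to feature-map layout
```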
It is worth noting that sparse convolution has been suggested in many works and evaluated on image classification [@SparseWinogradConv], object detection [@SparseCNN; @SBNet], model pruning [@FasterCNN] and 3D semantic segmentation [@SparseConvNet] tasks. [@SparseCNN] and [@SBNet] are based on the im2col and Winograd algorithms, respectively; however, these methods implement patch-level sparse convolution. [@SparseConvNet] designs a new data structure for sparse convolution and constructs a whole CNN framework to suit the designed data structure, making it incompatible with standard methods. [@SparseWinogradConv] incorporates sparsity into the Winograd algorithm, which is mathematically equivalent neither to the vanilla CNN nor to the conventional Winograd CNN. The most relevant work [@FasterCNN] skips unnecessary points when traversing all spatial positions and achieves pixel-level sparse convolution, which is implemented on serial devices (e.g., CPUs) via for-loops. In this work, we use im2col based sparse convolution, which combines this intuitive idea with the im2col algorithm, and deploy the proposed model on parallel platforms (e.g., GPUs). To the best of our knowledge, this is the first attempt to deploy pixel-wise sparse convolution for the SISR task and achieve image content and resource adaptive inference.
Lightweight Adapter Module
--------------------------
In this subsection, we introduce a lightweight adapter module $\mathcal{P}$ to predict a 2D network depth map $\mathbf{d}$. In order to adapt to local image content, the adapter module $\mathcal{P}$ is required to produce a lower network depth for smooth regions and a higher depth for detailed regions. Let $\bar{d}$ be the average value of $\mathbf{d}$, and $d$ be the desired network depth. To make the model adaptive to efficiency constraints, we also take the desired network depth $d$ into account, and require that decreasing $d$ results in a smaller $\bar{d}$, i.e., better inference efficiency.
As shown in Fig. \[fig:network\], the adapter module $\mathcal{P}$ takes the feature map $\mathbf{z}_0$ as the input and is comprised of four convolution layers with PReLU nonlinearity followed by another convolution layer with ReLU nonlinearity. Let $\mathbf{d} = \mathcal{P}(\mathbf{z}_0; \mathrm{\Theta}_a)$. We then use Eqn. (\[eqn:MaskFunc\]) to generate the mask $\mathbf{m}_l$ for each residual block. Note that $\mathbf{m}_l$ may not be a binary mask but contains many zeros. Thus, we can construct a sparse residual block which omits the computation for the regions with zero mask values to facilitate efficient adaptive inference. To meet the efficiency constraint, we also take the desired network depth $d$ as an input to the adapter, and predict the network depth map by $$\mathbf{d} = \mathcal{P}(\mathbf{z}_0, d; \mathrm{\Theta}_a),$$ where $\mathrm{\Theta}_a$ denotes the network parameters of the adapter module. Specifically, denoting the weight of the first convolution layer in the adapter as $\mathrm{\Theta}^{(1)}_a$, we make the convolution adjustable by replacing the weight $\mathrm{\Theta}^{(1)}_a$ with $d\cdot\mathrm{\Theta}^{(1)}_a$ when the desired depth is $d$, so that the adapter is able to meet the aforementioned $d$-oriented constraints.
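A possible sketch of the adapter is shown below; the five-layer structure, PReLU/ReLU nonlinearities and the $d\cdot\mathrm{\Theta}^{(1)}_a$ weight scaling follow the description above, while the hidden width (64) and kernel sizes are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Adapter(nn.Module):
    """Lightweight adapter predicting a one-channel network depth map."""
    def __init__(self, in_channels=256, hidden=64):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, hidden, 3, padding=1)  # weight Theta^(1)_a
        self.body = nn.Sequential(
            nn.PReLU(), nn.Conv2d(hidden, hidden, 3, padding=1),
            nn.PReLU(), nn.Conv2d(hidden, hidden, 3, padding=1),
            nn.PReLU(), nn.Conv2d(hidden, hidden, 3, padding=1),
            nn.PReLU(), nn.Conv2d(hidden, 1, 3, padding=1),
            nn.ReLU(),
        )

    def forward(self, z0, desired_depth):
        # Replace Theta^(1)_a by d * Theta^(1)_a so that the predicted
        # depth map responds to the desired depth d.
        scaled_w = self.conv1.weight * desired_depth
        feat = F.conv2d(z0, scaled_w, self.conv1.bias, padding=1)
        return self.body(feat)                                      # (N, 1, H, W) depth map
```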
Network Architecture and Learning Objective
-------------------------------------------
**Network Architecture**. As shown in Fig. \[fig:network\], our proposed AdaDSR is comprised of a backbone SISR network and a lightweight adapter module to facilitate image content and efficiency adaptive inference. Without loss of generality, in this section we take EDSR [@EDSR] as the backbone to illustrate the network architecture, and it is feasible to apply our AdaDSR to other representative SISR models [@SRResNet; @WDSR; @RCAN] with a number of residual blocks [@ResNet]. Following [@EDSR], the backbone involves 32 residual blocks, each of which has two $3\times3$ convolution layers with stride 1, padding 1 and 256 channels, with a ReLU nonlinearity. Another $3\times3$ convolution layer is deployed right behind the residual blocks. The feature extraction module $\mathcal{F}_e$ is a convolution layer, and the reconstruction module $\mathcal{F}_r$ is comprised of an upsampling unit to enlarge the features followed by a convolution layer which reconstructs the output image. The upsampling unit is composed of a series of Convolution-PixelShuffle [@PixelShuffle] operations according to the super-resolution scale. Besides, the lightweight adapter module takes both the feature map $\mathbf{z}_0$ and the desired network depth $d$ as input, and consists of five convolution layers to produce a one-channel network depth map.
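For reference, the EDSR-style residual block described here can be sketched as follows (a minimal PyTorch module reflecting the two $3\times3$ convolutions with a ReLU in between):

```python
import torch.nn as nn

class ResBlock(nn.Module):
    """EDSR-style residual block: two 3x3 convs (stride 1, padding 1) with ReLU in between."""
    def __init__(self, channels=256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, stride=1, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)   # z_l = z_{l-1} + F_l(z_{l-1})
```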
It is worth noting that we implement two versions of AdaDSR. The first takes EDSR [@EDSR] as backbone, which is denoted by AdaEDSR. To further show the generality of the proposed AdaDSR and compare against state-of-the-art methods, we also take RCAN [@RCAN] as backbone and implement an AdaRCAN model. The main difference is that RCAN replaces the 32 residual blocks with 10 residual groups, and each residual group is composed of 20 residual blocks equipped with channel attention. Therefore, we modify the adapter to generate 10 depth maps simultaneously, each of which is deployed to a residual group.
**Learning Objective**. The learning objective of our AdaDSR includes a reconstruction loss term and a network depth loss term to achieve a proper tradeoff between SISR performance and efficiency. In terms of the SISR performance, we adopt the $\ell_1$ reconstruction loss defined on the super-resolved output and the ground-truth high-resolution image, $$\mathcal{L}_\mathit{rec}\
=\|\mathbf{y} - \hat{\mathbf{y}}\|_1,$$ where $\mathbf{y}$ and $\hat{\mathbf{y}}$ respectively represent the high-resolution ground-truth and the super-resolved image by our AdaDSR. Considering the efficiency constraint, we require the average $\bar{d}$ of the predicted network depth map to approximate the desired depth $d$, and then introduce the following network depth loss, $$\mathcal{L}_\mathit{depth} = \max(0, \bar{d} - d).$$ To sum up, the overall learning objective of our AdaDSR is formulated as, $$\mathcal{L} = \mathcal{L}_\mathit{rec} + \lambda \mathcal{L}_\mathit{depth},$$ where $\lambda$ is a tradeoff hyper-parameter and is set to $0.01$ in all our experiments.
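A direct sketch of this objective (with $\lambda=0.01$ as stated above, and `depth_map` being the adapter output) could read:

```python
import torch

def adadsr_loss(sr, hr, depth_map, desired_depth, lam=0.01):
    """L = ||y - y_hat||_1 + lambda * max(0, mean(d) - d_desired)."""
    rec_loss = torch.mean(torch.abs(hr - sr))                         # L1 reconstruction loss
    depth_loss = torch.clamp(depth_map.mean() - desired_depth, min=0.0)
    return rec_loss + lam * depth_loss
```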
Experiments {#sec:experiments}
===========
Implementation Details
----------------------
**Model Training.** For training our AdaDSR model, we use the 800 training images and the first five validation images from the DIV2K dataset [@div2k] as the training and validation sets, respectively. The input and output images are in RGB color space, and the input images are obtained with the bicubic degradation model. Following previous works [@SRResNet; @EDSR; @RCAN], during training we subtract the mean value of the DIV2K dataset on the RGB channels and apply data augmentation to the training images, including random horizontal flip, random vertical flip and $90^{\circ}$ rotation. The AdaDSR model is optimized by the Adam [@adam] algorithm with $\beta_1=0.9$ and $\beta_2=0.999$ for 800 epochs. Each iteration uses 16 LR patches of size $48\times48$, and the learning rate is initialized as $5\times10^{-5}$ and decays by half after every 200 epochs. During training, the desired depth $d$ is randomly sampled from $[0, D]$, where $D$ is 32 and 20 for AdaEDSR and AdaRCAN, respectively. Note that, since the data structure of the sparse convolution is identical to that of standard convolution, we can use the pretrained backbone model to initialize the AdaDSR model to improve training stability and save training time.
**Model Evaluation.** Following previous works [@SRResNet; @EDSR; @RCAN], we use PSNR and SSIM as evaluation metrics, and five standard benchmark datasets (i.e., Set5 [@set5], Set14 [@set14], B100 [@b100], Urban100 [@urban100] and Manga109 [@manga109]) are employed as test sets. The PSNR and SSIM indices are calculated on the luminance channel (a.k.a. Y channel) of the YCbCr color space with *scale* pixels on the boundary ignored. Furthermore, the computation efficiency is evaluated by FLOPs and inference time. For a fair comparison with the competing methods, when measuring the running time, we implement all competing methods in our framework and replace the convolution layers of the main body with im2col [@im2col] based convolutions. All evaluations are conducted in the PyTorch [@pytorch] environment running on a single Nvidia TITAN RTX GPU. The source code and pre-trained models are publicly available at <https://github.com/csmliu/AdaDSR>.
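As a reference for the evaluation protocol, PSNR on the Y channel with boundary cropping can be sketched as below; the BT.601 luma coefficients are a common convention and assumed here rather than taken from the paper.

```python
import numpy as np

def psnr_y(sr, hr, scale):
    """PSNR on the Y channel of YCbCr, ignoring `scale` pixels at each border.

    sr, hr: RGB images in [0, 255] with identical shape (H, W, 3).
    """
    def to_y(img):
        img = img.astype(np.float64)
        # BT.601 luma transform (assumed convention)
        return 16.0 + (65.481 * img[..., 0] + 128.553 * img[..., 1]
                       + 24.966 * img[..., 2]) / 255.0

    y_sr = to_y(sr)[scale:-scale, scale:-scale]
    y_hr = to_y(hr)[scale:-scale, scale:-scale]
    mse = np.mean((y_sr - y_hr) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```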
Comparison with State-of-the-arts
---------------------------------
To evaluate the effectiveness of our AdaDSR model, we first compare AdaDSR[^1] with the backbone EDSR [@EDSR] and RCAN [@RCAN] models as well as four other state-of-the-art methods, i.e., SRCNN [@SRCNN], VDSR [@VDSR], RDN [@RDN] and SAN [@SAN]. Note that all visual results of the other methods given in this section are generated by the officially released models, while the FLOPs and inference time are evaluated in our framework.
As shown in Table \[tab:performance\], both AdaEDSR and AdaRCAN perform favorably against their counterparts EDSR and RCAN in terms of the quantitative PSNR and SSIM metrics. Besides, it can be seen from Table \[tab:efficiency\] that, although the adapter module introduces extra computation cost, it is very lightweight and efficient in comparison to the backbone super-resolution model, and its deployment greatly reduces the computation amount of the whole model, resulting in lower FLOPs and faster inference, especially on large images (e.g., Urban100 and Manga109). Note that SAN [@SAN] has similar performance to RCAN and AdaRCAN, yet its computation cost is too heavy on large images.
Apart from the quantitative comparison, visual results are also given in Fig. \[fig:compare\]. One can see that AdaEDSR and AdaRCAN are able to generate super-resolved images of similar or better visual quality compared to their counterparts. Kindly refer to the supplementary materials for more qualitative results. We also show the pixel-wise depth map $\mathbf{d}$ of AdaRCAN (due to space limits, we show the average of the depth maps of the 10 groups of AdaRCAN) to discuss the relationship between the processed image and the depth map. As we can see from Fig. \[fig:compare\], a greater depth is predicted for the regions with detailed textures, while most of the computation in smooth areas can be omitted for efficiency purposes, which is intuitive and verifies our discussion in Sec. \[sec:intro\].
Considering both quantitative and qualitative results, our AdaDSR can achieve comparable performance against state-of-the-art methods while greatly reducing the computation amount. For further analysis of the adaptive adjustment of $d$, please refer to Sec. \[sec:adaptive\_inference\].
Adaptive Inference with Varying Depth {#sec:adaptive_inference}
-------------------------------------
Taking both the feature map $\mathbf{z}_0$ and the desired depth $d$ as input, the adapter module is able to predict an image-content-adaptive network depth map while satisfying the computation efficiency constraints. Consequently, our AdaDSR can be flexibly tuned to meet various efficiency constraints on the fly. In comparison, the competing methods are based on deterministic inference and can only be performed with fixed complexity. As shown in Fig. \[fig:adaptive\], we evaluate our AdaDSR model with different desired depths $d$ (i.e., 8, 16, 24, 32 for AdaEDSR and 5, 10, 15, 20 for AdaRCAN), and record the corresponding FLOPs and PSNR values on Set5. For more results, please refer to the supplementary materials.
From the figures, we can draw several conclusions. First, our AdaDSR can be tuned with the hyper-parameter $d$, resulting in a curve in the figures rather than a single point as for the competing methods. With an increasing desired depth $d$, AdaDSR requires more computation resources and generates better super-resolved images. It is worth noting that AdaDSR taps the potential of the backbone models, and can obtain comparable performance against the well-trained backbone model when a higher $d$ is set. Furthermore, AdaDSR reaches the saturation point at relatively low FLOPs, which indicates that a shallower model is sufficient for most regions. Experiments on both versions (i.e., AdaEDSR and AdaRCAN) verify the effectiveness and generality of our adapter module.
Ablation Analysis {#sec:ablation}
=================
Considering the training efficiency in a multi-GPU environment, we perform the ablation analysis with the EDSR backbone. Without loss of generality, we select the AdaEDSR model and scale $\times2$.
**EDSR variants.** To begin with, we train EDSR variants in our framework, i.e., EDSR (8), EDSR (16), EDSR (24) and EDSR (32), by setting the number of residual blocks to 8, 16, 24 and 32, respectively. Note that EDSR (32) performs slightly better than the released EDSR model, so we use this one for a fair comparison. The quantitative results on Set5 are given in Table \[tab:ablation\]. Comparing all EDSR variants, one can generally observe performance gains as the model depth grows.
Besides, as previously illustrated in Fig. \[fig:intro\](c), a shallow model is sufficient for smooth areas, while regions with rich texture usually require a deep model for better reconstruction of the details. Taking advantage of this phenomenon, with the lightweight adapter, AdaDSR is able to predict a suitable depth for each area according to difficulty and resource constraints, and achieves a better efficiency-performance tradeoff, resulting in curves at the top left of their corresponding counterparts, as shown in Figs. \[fig:intro\](d) and \[fig:adaptive\]. Detailed data can be found in Table \[tab:ablation\].
**AdaEDSR variants.** We further implement several AdaEDSR variants, i.e., FAdaEDSR (8), FAdaEDSR (16), FAdaEDSR (24) and FAdaEDSR (32), which are trained with a fixed depth $d=$ 8, 16, 24 and 32, respectively, and the adapter module takes only the image features as input. These models are trained under the same settings (except for the fixed $d$ in the learning objective) as AdaEDSR. As shown in Table \[tab:ablation\], with the per-pixel depth map, these models obtain much better quantitative results than the EDSR variants with similar computation cost.
It is worth noting that FAdaEDSR (32) achieves comparable performance with RDN [@RDN], which clearly shows the effectiveness of the predicted network depth map. Furthermore, we also show the performance of our AdaEDSR $\times2$ model in Table \[tab:ablation\]. Specifically, AdaEDSR ($n$) means that the desired depth is $d=n$ at test time. One can see that, although the quantitative performance is slightly worse than FAdaEDSR, AdaEDSR is more computationally efficient and can be flexibly tuned in the testing phase, indicating that AdaDSR achieves adaptive inference with only a minor sacrifice of performance.
Conclusion {#sec:conclusion}
==========
In this paper, we revisit the relationship between model depth and quantitative performance on the single image super-resolution task, and present an AdaDSR model by incorporating a lightweight adapter module and sparse convolution into deep SISR networks. The adapter module predicts an image-content-oriented network depth map, whose values are higher in regions with detailed textures and lower in smooth areas. According to the predicted depth, only a fraction of the residual blocks is performed at each point by using im2col based sparse convolution. Furthermore, the parameters of the adapter module are adjustable on the fly according to the desired depth, so that the AdaDSR model can be tuned to meet various efficiency constraints in the inference phase. Experimental results show the effectiveness and adaptability of our AdaDSR model, and indicate that AdaDSR can obtain state-of-the-art performance while adapting to a range of efficiency requirements.
[^1]: Note that the desired depth $d$ is set to 32 and 20 for AdaEDSR and AdaRCAN in Tables \[tab:performance\] and \[tab:efficiency\], respectively, i.e., the number of residual blocks in EDSR and that of each group in RCAN.
|
---
abstract: 'The collapse of young massive stars or the coalescence of a black hole-neutron star binary is expected to give rise to a black hole-torus system. When the torus is strongly magnetized, the black hole produces electron-positron outflow along open magnetic field-lines. Through curvature radiation in gaps, this outflow rapidly develops into a $e^\pm\gamma$-wind, which is ultra-relativistic and of low comoving density, proposed here as a possible input to GRB fireball models.'
address: 'MIT, Cambridge, MA 02139'
author:
- 'Maurice H.P.M. van Putten'
title: 'Electron-positron outflow from black holes and the formation of winds'
---
Here, I discuss some aspects of black holes when exposed to external magnetic fields. For example, black hole-torus systems are a probable outcome of the collapse of young massive stars[@woo93; @pac98] and the coalescence of black hole-neutron star binaries [@pac91], both of which are possible progenitors of cosmological gamma-ray bursts (GRBs). If all black holes are produced by stellar collapse, they should be nearly maximally rotating [@bar70; @bet98]. A surrounding torus or accretion disk is expected to be magnetized by conservation of magnetic flux and linear amplification (cf. [@pac98; @klu98]).
A black hole-torus system will have open magnetic field-lines from the horizon to infinity and closed magnetic field-lines between the black hole and the torus [@mvp99]. The closed magnetic field-lines mediate Maxwell stresses[@mvp99b]. This may be seen by way of similarity to pulsar magnetospheres [@gol69]. In a poloidal cross-section, the torus can be identified with a pulsar which rotates at an angular velocity $\Omega_P\sim \Omega_H-\Omega_T$, wherein the black hole horizon corresponds to infinity. Then, the inner light-surface [@zna77] corresponds to the pulsar light-cylinder, and a ‘bag’ attached to the torus to the last closed field-line. Here, $\Omega_H$ and $\Omega_T$ denote the angular velocities of the black hole and the torus, respectively. The work performed by the Maxwell stresses is commonly attributed to an outgoing Poynting flux emanating from the horizon [@bz77; @tho86]. These Maxwell stresses are likely to be important to the evolution of the torus, and tend to delay accretion onto the black hole. The open magnetic field-lines, on the other hand, enable the black hole to produce an outflow to infinity. Such outflows generate emissions by deceleration against the interstellar medium and through internal shocks.\
\
[To appear in: Explosive Phenom. Astrophys. Compact Objects, edited by C.-H. Lee, M. Rho, I. Yi & H.K. Lee, AIP Conf. Proc.]{}
Here, the outflow along open magnetic field-lines is studied, and found to produce a pair-dominated $e^\pm\gamma$–wind in combination with curvature radiation.
Open field-lines from the horizon to infinity have radiative ingoing boundary conditions at the horizon as seen by zero-angular momentum observers (ZAMOs), and outgoing boundary conditions at infinity. It is well-known that for an outflow to exist, there must be regions in which pairs are created (gaps), somewhere on these open field-lines [@bz77; @phi83; @pun90; @bes00]. The gaps are powered by an electric current $I$ along the field–lines, which is limited by a horizon surface resistivity of $4\pi$, in the presence of a certain potential drop across them. The net particle flow is limited by the black hole luminosity into the gap. The magnetosphere within the gaps is differentially rotating, beyond which the magnetosphere may be force-free and in rigid rotation. Note that, in contrast, the currents along closed magnetic field-lines are fixed by the angular velocity $\Omega_T$ of the surrounding matter, where the gaps are most likely residing between the horizon and the inner light surface. Of interest here is the location of the gaps on the open magnetic field-lines and the power dissipated within, as sites of linear acceleration of charged particles and their curvature radiation.
A rotating black hole tends to produce electrons and positrons by spontaneous emission along open magnetic field-lines in an effort to evolve to a lower energy state by shedding off its angular momentum. Indeed, in the adiabatic limit, the radiated particles possess a specific angular momentum of at least $2M$, whereas the specific angular momentum of the black hole, $a$, is at most $M$. In the approximation of an asymptotically uniform magnetic field, e.g., in a Wald-field [@wal74], the emissions at infinity satisfy a Fermi-Dirac distribution of radiative Landau states, neglecting curvature radiation and magnetic mirror effects. This results from a modification to the Hawking radiation process [@mvp00]. This is a highly idealized picture derived in the perturbative limit of small particle densities, which will be modified significantly by curvature radiation and the formation of force-free regions. The spontaneous emission process concerns particles with energy-at-infinity $\omega$ below the Fermi-level $V_F$. Here, $V_F$ is the energy-at-infinity associated with the particles as seen on a null-generator of the horizon, such as the ZAMO-derivative $\xi^a\partial_a=\partial_t-\beta\partial_\phi$, where $\beta$ denotes the angular velocity of the sky as seen by ZAMOs. That is, $$\begin{aligned}
V_F\psi=[\xi^aD_a]^H_\infty\psi=(\nu\Omega_H-eV)\psi,\end{aligned}$$ where $\Omega_H$ is the angular velocity of the horizon, using the sign-convention $\psi\propto e^{-i\omega t}e^{i\nu\phi}$. The energy-at-infinity $\omega$ and the azimuthal quantum number $\nu$ are associated with the asymptotically time-like Killing vector $\partial_t$ and azimuthal Killing vector $\partial_\phi$, whereas $D_a=i^{-1}\partial_a+eA_a$ denotes the gauge-covariant derivative in the presence of an electromagnetic vector potential $A_a$. In calculating $V_F$, it is relevant to identify the ground state of the black hole-magnetic field configuration. It has been shown that the lowest energy state, in the process of an angular momentum exchange between the black hole and the surrounding electromagnetic field by variations of the horizon charge $q$, is attained when $q=2BJ$ [@wal74; @dok87], where $J$ is the angular momentum of the black hole. Rotation of the equilibrium charge $q=2BJ$ on the horizon recovers $4\pi BM^2$ as the maximal horizon flux of the magnetic field from the uncharged flux $4\pi BM^2\cos\lambda$, where $\sin\lambda=J/M^2$ [@dok87; @mvp00]. With the sign convention that $B$ is parallel to $\Omega_H$, we then have $V_F=\nu\Omega_H$ with $\nu=eA_\phi$ (for $e^-$) and $A_a=B(\partial_\phi)_a/2$ in the Wald electrostatic equilibrium state. Note that the spontaneous emission process is anti-symmetric under pair-conjugation.
The rate of spontaneous emission is given by a certain barrier transmission coefficient in the level-crossing picture of electrons and positrons [@dam77]. This follows from frame-dragging by $\beta$, and the resulting shift between the energy-at-infinity $\omega$ and the energy $\omega_Z$ as seen by ZAMOs: $$\begin{aligned}
\omega_Z=\pm\sqrt{m_e^2+|eB|(2n+1\pm\alpha)}
=\omega+\nu\beta=\left\{
\begin{array}{cc}
\omega-\nu\Omega_H & \mbox{on the horizon}\\
\omega & \mbox{at infinity}.
\end{array}\right.\end{aligned}$$ Here, it is the quantum number $\nu$ which gives rise to different energies between ZAMOs and Boyer-Linquist observers. Figure 1 (a) shows an equivalent classical picture, where the frame-dragging $\beta$ induces a potential energy $V_{BL}=e\beta A_\phi$ on a flux surface $A_\phi$=const. with respect to the axis of rotation, itself at zero potential in the $q=2BJ$ state.
\
\
[**Figure 1**]{}. [(a) A classical picture of the potential energy $V_{BL}$ as seen in Boyer-Lindquist coordinates along surfaces of constant magnetic flux $A_\phi$. Note that the axis of rotation has zero potential $V=0$ in electrostatic equilibrium $q=2BJ$. Hence, $V_{BL}=e\beta A_\phi$ in the presence of frame-dragging $\beta$. Since $\beta$ describes differential rotation in the surrounding space-time, a potential drop emerges along $A_\phi=$const.: $\Delta V_{BL}=\left(\beta_2-\beta_1\right)A_{\phi}$. When the potential drop is steep, a Schwinger-type process generates electron–positron pairs. (b) Cartoon of the formation of a black hole-wind in a black hole-torus system. There is a minimum opening angle $\theta_{min}\sim\sqrt{B_c/3B}$, beyond which spontaneous emission along open field-lines by the black hole is effective. Flux surfaces with $\theta\sim\theta_{min}$ have a gap length of order $M$, which decreases for $\theta>\theta_{min}$. These gaps, indicated in grey, create pairs, which are subject to linear acceleration and produce curvature radiation. The net outflow $L_p$ in particles is a combination of an inner, current-free outflow with vanishingly small Poynting flux $L_S$ inside $\theta<\theta_{min}$ and an outer, current-carrying outflow with $\theta>\theta_{min}$. Both derive most of their particles from pair-cascade through curvature radiation, and flow along open field-lines to infinity. ]{}
Since $\beta$ describes a differentially rotating space-time, it varies with distance to the black hole and $V_{BL}$ introduces a potential energy drop along the magnetic field-lines. When sufficiently strong, a Schwinger-type process is set in place, which locally produces pairs at a certain rate per unit volume. Formally, the rate of pair-production follows from a scattering calculation in the WKB approximation [@haw75; @gib76; @dam77] (cf. also [@gol78]). The pair-production rate is found to be given by a barrier transmission coefficient $\Gamma\sim e^{-\pi B_c/B\theta^2}$, where $B_c=m_e^2/e=4.4\times 10^{13}$ G is the QED value of the magnetic field-strength and $\theta$ is the poloidal angle in Boyer-Lindquist coordinates. More precisely, the gradient $\eta=-\nabla V_{BL}$ parallel to $B$ drives a pair-production rate per unit volume by a Schwinger-process $$\frac{d^2N}{dt\,dV}\sim\frac{e^2\eta B}{4\pi^2}\,e^{-\pi B_c/\eta\theta^2} \qquad (B\gg B_c,~a\sim M).$$ This pair-production process will be in place whenever the charge-density is low, so that the magnetosphere remains in differential rotation.
The magnetosphere on open field–lines away from a gap assumes a force-free, rigidly rotating state with a Goldreich-Julian charge density [@bz77; @tho86; @tre00]. This is analogous to the case of pulsar magnetospheres [@gol71; @gol71b]. In view of the horizon boundary conditions below, I shall assume that the gap is attached to the horizon.
To a first approximation, the local structure of a gap follows from the ingoing radiative horizon–boundary conditions. The flow in a gap is described by a charge-density $\rho_e$, a pair-density $n_w$ and a Lorentz factor $\Gamma$. This flow is powered by an electric current $I$ along the open field–lines to infinity through a polar cap of area $A_p$, at the cost of a certain potential drop across it. The ingoing radiative boundary condition at the horizon applies to electrons and positrons alike: in the limit as we approach the horizon, $I$ and the electric charge density $\rho_e$ are no longer independent, but become proportional to one another (cf. [@pun90]): $$\begin{aligned}
I\longrightarrow -\rho_e A_p,
\label{EQN_BC}\end{aligned}$$ since all particles fall into the black hole with the velocity of light. The sign of $\rho_e$ in (\[EQN\_BC\]) is that seen by ZAMOs. Here, $\rho_e$ (and $n_w$) are normalized by factoring in the redshift factor. $I$ saturates against the horizon surface resistivity of $4\pi$: $4\pi I\sim \nu\Omega_H$, up to a logarithmic factor on the left-hand side. Hence, $\rho_e\sim \rho_{GJ}/2$, where $\rho_{GJ}=B\Omega_H/2\pi$ is the Goldreich-Julian charge density near the horizon. With curvature $R_B\sim \sqrt{2}M/\theta^2$ of the Wald–field, curvature radiation produces $n_w\gg\rho_e/e$ in momentum balance: $n_{w}2e^2\Gamma^4/3R_B^2\sim \rho_e E_\perp,$ where $E_\perp\sim\nu\Omega_H/Me$ is the equivalent electric field normal to the horizon as seen in Boyer-Lindquist coordinates. Note, however, that the magnetic field of the torus will have larger curvature than that of the Wald-field. Given energy balance of the outflow $n_w\Gamma m_e A_{cap}$ with the total power $IE_\perp L$ dissipated in the gap, where $L$ is the linear gap size, the solutions are governed by the unknown $L$. The gap size $L$ determines the degree to which the black hole luminosity is put to work in accelerating particles. The gap produces a radiation pressure $\propto L$, which acts on the interface with the force-free magnetosphere above. The interface is probably Rayleigh-Taylor unstable against this radiation pressure. Moreover, the gap itself may well widen due to this pressure. The arguments given above are intended as a first sketch towards the structure of the gaps, and it appears to be of interest to consider the gap size $L$ in the context of a detailed stability analysis.
A continuous outflow is established with appropriate current closure. Note that closure over the polar axis introduces Poynting flux with negative helicity, whereas current closure over a gap across the equator of the black hole and the bag of the torus (corresponding to the last field-line in pulsar magnetospheres) introduces Poynting flux with positive helicity - indicative of positive energy and angular momentum transport outwards. The latter is energetically favorable over the former, thereby leaving negligible Poynting flux over the axis of symmetry. A similar conclusion has been found in the case of current closure around neutron stars [@gol71; @gol71b]. It follows that the black hole-wind is pair-dominated with the property that $\sigma=L_S/L_p\sim 0$ within $\theta<\theta_{min}=\sqrt{B_c/3B}$, where $L_S$ and $L_p$ are the luminosities in Poynting flux and pairs, respectively [@mvp00b]. Figure 1 (b) sketches this wind–formation process, assuming $L$ to be large on the flux surfaces with $\theta\sim\theta_{min}$.
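As a quick numerical illustration of the opening angle $\theta_{min}=\sqrt{B_c/3B}$, using $B_c=4.4\times 10^{13}$ G from above and a field strength of $10^{15}$ G chosen purely for illustration (not a value quoted in the text):

```python
import math

B_c = 4.4e13   # QED field strength m_e^2/e in gauss
B = 1.0e15     # illustrative field strength in gauss (assumed, not from the text)

theta_min = math.sqrt(B_c / (3.0 * B))
print(f"theta_min ~ {theta_min:.3f} rad ~ {math.degrees(theta_min):.1f} deg")
# -> theta_min ~ 0.121 rad ~ 6.9 deg for B = 1e15 G
```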
It will be of interest to look for observational evidence in GRB–afterglow emissions for the presented ultra–relativistic, low-density, pair-dominated wind.\
\
**Acknowledgements**
\
This work is partially supported by NASA Grant 5-7012 and an MIT Reed Award. The author thanks Theoretical Astrophysics, Caltech, and the Korean Institute of Advanced Study (KIAS), where some of this work was performed, for their hospitality, and gratefully acknowledges stimulating discussions with P. Goldreich, E.S. Phinney, R.D. Blandford and K.S. Thorne.
Bardeen J.M., [*Nature*]{}, [**226**]{}, 64 (1970). Bethe H.A. & Brown G.E., astro-ph/9805355. Blandford R.D. & Znajek R.L. [*Mon. Not. R. Astron. Soc.*]{}, [**179**]{}:433–456 (1977). Beskin V.S. & Kuznetsova I.V., [*Il Nuovo Cimento*]{} (to appear, 2000). Chevalier R.A. and Li Z.-Y., [*astro-ph/9908272*]{} (1999). Damour T. [*in*]{} Proc. 1$^{st}$ Marcel Grossmann Meeting on Gen. Rel., ed.R. Ruffini (North Holland, Amsterdam, 1977), p459-482. Dokuchaev V.I. [*Sov. Phys. JETP*]{}, [**65**]{}(6):1079-1086 (1987). Gavrilov S.P. & Gitman D.M., [*Phys. Rev. D*]{}, [**53**]{}:7162 (1996). Gibbons G.W. [*MNRAS*]{}, [**177**]{}:37P (1976). Goldreich P. & Julian W.H., [*ApJ.*]{}, [**157**]{}:869 (1969). Goldreich P., [*in*]{} Publ Astron Soc Pacific, [**83**]{}(495):599 (1971). Goldreich P., [*in*]{} Accademia Nazionale Dei Lincei, N. 162, p151 (1971). Goldreich P. & Tremaine S., [*ApJ.*]{}, [**222**]{}:850 (1978). Hawking S.W. [*Commun. Math. Phys.*]{}, [**43**]{}:199 (1975). Kluzniak W. & Ruderman M., [*ApJ.*]{}, [**505**]{}:L113 (1998). Lin Q., hep-th/9810037 (1998). Paczyński B. [*ApJ.*]{}, [**494**]{}:45 (1998). Paczyński B. [*Acta Astron.*]{}, [**41**]{}: 257-267 (1991). Phinney E.S., [*in*]{} Astrophysical Jets (Reidel, Dordrecht 1983), p201. Punsly B. & Coroniti F.V., [*Phys. Rev. D.*]{}, [**40**]{}:3834 (1989); [*ibid.*]{} [*ApJ.*]{}, [**350**]{}:518 (1990). Thorne K.S., Price R.H. and Macdonald D.A. [*Black holes: the membrane paradigm*]{} (Yale Univ. Press, 1986). Treves A., Turolla R. & Popov S.B., astro-ph/0005508 (2000). van Putten M.H.P.M. and Wilson A. [*in*]{} ITP Conference on black holes:\
Theory confronts reality, http://www.itp.ucsb.edu/online/bhole$\mbox{}-$c99 (1999). van Putten M.H.P.M. [*Science*]{}, [**284**]{}:115-118 (1999). van Putten M.H.P.M., [*Phys. Rev. Lett.*]{}, [**84**]{}(17):3752 (2000). van Putten M.H.P.M., http://online.itp.ucsb.edu/online/astro99. Wald R.M., [*Phys. Rev. D.*]{}, [**10**]{}:1680–1684 (1974). Woosley S., [*ApJ.*]{}, [**405**]{}:273-277 (1993). Znajek R.L., [*Mon. Not. R. Astron. Soc.*]{}, [**179**]{}:457–472 (1977).
|
---
abstract: |
\
We use the Jeans equations for an ensemble of collisionless particles to describe the distribution of broad-line region (BLR) clouds in three classes: (A) non-disc, (B) disc-wind, and (C) pure disc structure. We propose that clumpy structures in the brightest quasars belong to class A, fainter quasars and brighter Seyferts belong to class B, and dimmer Seyfert galaxies and all low-luminosity AGNs (LLAGNs) belong to class C. We derive the virial factor, $f$, for disc-like structures and find a negative correlation between the inclination angle, $\theta_{0}$, and $f$. We find similar behaviour for $f$ as a function of the FWHM and of $\sigma_{z}$, the $z$ component of the velocity dispersion. For different values of $\theta_{0}$ we find that $ 1.0 \lesssim f \lesssim 9.0 $ in type 1 AGNs and $ 0.5 \lesssim f \lesssim 1.0 $ in type 2 AGNs. Moreover, we find $ 0.5 \lesssim f \lesssim 6.5 $ for different values of $\textsc{FWHM}$ and $ 1.4 \lesssim f \lesssim 1.8 $ for different values of $ \sigma_{z} $. We also find that $ f $ is relatively insensitive to variations of the bolometric luminosity and of the column density of each cloud, with a range of variation of $ f $ of the order of 0.01. Considering the wide range of $ f $, we argue that the use of a single average virial factor $ \langle f \rangle $ is not very safe. We therefore propose that the AGN community divide a sample into a few subsamples based on the values of $\theta_{0}$ and $\textsc{FWHM}$ of its members and calculate $ \langle f \rangle $ for each group separately, in order to reduce the uncertainty in black hole mass estimation.
author:
- |
Mohammad Ghayuri[^1]\
\
Independent Scholar, Mashhad, Iran\
title: 'Kinematics and Structure of Clumps in Broad-line Regions in Active Galactic Nuclei'
---
\[firstpage\]
galaxies: active - galaxies: nuclei - galaxies: Seyfert - galaxies: kinematics and dynamics - black hole physics
Introduction
============
It is now widely accepted that an active galactic nucleus (AGN) is a supermassive black hole surrounded by an accretion disc. Above the accretion disc, there is dense, rapidly-moving gas making up the so-called broad-line region (BLR), which emits broad emission lines by reprocessing the continuum radiation from the inner accretion disc (see @Gaskell09). The BLR is believed to consist of dense clumps of hot gas ($n_H > 10^9$ cm$^{-3}$; $T \sim 10^4$ K) in a much hotter, rarefied medium. The motions of the line-emitting clouds are the main cause of the broadening of the line profiles.
The profiles of the broad emission lines are a major source of information about the geometry and kinematics of the BLR. Profiles can be broadly categorized into two shapes: single-peaked and double-peaked. It is believed that double-peaked profiles are emitted from disc-like clumpy structures (e.g., [@Chen89b; @Chen89a; @Eracleous94; @Eracleous03; @Strateva03]). Although obvious double-peaked profiles are seen in only a small fraction of AGNs, this does not mean that such discs are absent in other AGNs. According to some studies, under specific conditions a disc-like clumpy structure can even produce single-peaked broad emission lines (e.g., [@Chen89a; @Dumont90a; @Kollatschny02; @Kollatschny03]). Spectropolarimetric observations (@Smith05) imply the presence of clumpy discs in the BLR. On the other hand, other authors have suggested a two-component model in which they consider a spherical distribution of clouds surrounding the central black hole in addition to their distribution in the midplane (e.g., [@Popovic04; @Bon09]). In this model, while the disc is responsible for the production of the broad wings, the spherical distribution is responsible for the narrow cores. The integrated emission-line profile is a combination of wings and cores.
Using the width of the broad Balmer emission lines, $\textsc{FWHM}$, and an effective BLR radius, $r_{BLR}$, which is obtained either by reverberation mapping (RM) (e.g., [@Blandford82; @Gaskell86; @Peterson93; @Peterson04]) or from the relationship between optical luminosity and $r_{BLR}$ ([@Dibai77; @Kaspi00; @Bentz06]), masses of black holes are estimated from the virial theorem, $M = fr_{BLR}\textsc{FWHM}^{2}/G$, where $G$ is the gravitational constant and $f$ is the “virial factor" depending on the geometry and kinematics of the BLR and the inclination angle, $\theta_{0}$. Unfortunately, with current technology it is impossible to directly observe the structure of the BLR. Therefore the true value of $ f $ for each object is unknown and we are required to use an average virial factor, $ \langle f \rangle $, to estimate the mass of the supermassive black hole. Comparison of virial masses with independent estimates of black hole masses using the $M - \sigma$ relationship (see @Kormendy13) has given empirical estimates of the value of $ \langle f \rangle $. However, each study has found a different value for $ \langle f \rangle $. For example, @Onken04 calculate $ \langle f \rangle = 5.5 \pm 1.8 $, @Woo10 calculate $ \langle f \rangle = 5.2 \pm 1.2 $, @Graham11 calculate $ \langle f \rangle = 3.8^{+0.7}_{-0.6} $ and @Grier13 calculate $ \langle f \rangle = 4.31 \pm 1.05 $. This is because each group takes a different sample. Different values for $ \langle f \rangle $ prevent us from obtaining a reliable value for the black hole mass.
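As a concrete illustration of how the virial theorem above is used in practice, the following minimal Python sketch evaluates $M = f\,r_{BLR}\,\textsc{FWHM}^{2}/G$ in CGS units. The input values of $r_{BLR}$, $\textsc{FWHM}$ and $\langle f \rangle$ are illustrative assumptions only (the adopted $\langle f \rangle = 4.3$ is close to the @Grier13 value quoted above).

```python
# Minimal sketch (illustrative, not from the paper): virial black hole mass
# estimate M = f * r_BLR * FWHM^2 / G, evaluated in CGS units.

G = 6.674e-8          # gravitational constant [cm^3 g^-1 s^-2]
M_SUN = 1.989e33      # solar mass [g]
LIGHT_DAY = 2.59e15   # one light day [cm]

def virial_mass(r_blr_lightdays, fwhm_kms, f=4.3):
    """Black hole mass in solar masses from the virial relation."""
    r = r_blr_lightdays * LIGHT_DAY   # BLR radius [cm]
    v = fwhm_kms * 1.0e5              # FWHM [cm/s]
    return f * r * v**2 / (G * M_SUN)

# Assumed example values: r_BLR = 10 light days, FWHM = 3000 km/s
print(f"M_BH ~ {virial_mass(10.0, 3000.0):.2e} M_sun")
```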
The situation for low-luminosity AGNs (LLAGNs) is somewhat different. The lack of broad optical emission lines in the faintest cases ($ 10^{-9} L_{Edd} < L < 10^{-6} L_{Edd}$) has led to two scenarios about the presence of the BLR in LLAGNs. The first, which is somewhat supported by theoretical models (e.g., [@Nicastro00; @Elitzur06]), simply says that the BLR is absent in such faint objects. However, there is clear evidence in favor of the presence of the BLR in some LLAGNs, at least those with $ L > 10^{-5} L_{Edd}$. This supports a second scenario which says that the BLR exists in LLAGNs but we cannot detect it in the faintest cases because the intensity of their broad emission lines is below the detection threshold set mostly by starlight in the host galaxy. In the Palomar survey, it was found that broad H$\alpha$ emission is present in a remarkably high fraction of LINERs (LLAGNs) (see @Ho08). Moreover, double-peaked broad emission lines have been found in some LINERs including NGC 7213 (@Filippenko84), NGC 1097 (@Storchi93), M81 (@Bower96), NGC 4450 (@Ho00) and NGC 4203 (@Shields00). Other studies have also shown the presence of variable broad emission lines for NGC 1097 (@Storchi93), M81 (@Bower96) and NGC 3065 (e.g., @Eracleous01). Recently, @Balmaverde14 found other LLAGNs with $ L=10^{-5} L_{Edd}$ showing BLRs.
Since the optical spectra of LLAGNs are severely contaminated by the host galaxy, some authors have suggested using the widths of the Paschen lines rather than the Balmer lines to determine the $\textsc{FWHM}$ ([@Landt11; @Landt13; @LaFranca15]). Also, in order to estimate the BLR radius, $r_{BLR}$, we can use the near-IR continuum luminosity at 1 $\mu m$ ([@Landt11; @Landt13]) or the X-ray luminosity ([@Greene10]) rather than the optical luminosity.
@Whittle86 used the Boltzmann equation to describe the kinematics of the BLR. More recently, @Wang12 showed that the BLR can be considered as a collisionless ensemble of particles. By considering the Newtonian gravity of the black hole and a quadratic drag force, they used the collisionless Boltzmann equation (CBE) to study the dynamics of the clouds for the case where magnetic forces are unimportant. Following this approach, some authors included the effect of magnetic fields on the dynamics of the clouds in the BLR (e.g., @Khajenabi14). In this paper, we use the CBE to describe the distribution of the clouds in the BLR. The structure of this paper is as follows: in the section (\[s2\]) we will establish our basic formalism and apply it in order to classify the clumpy structure of the BLR. In the section (\[s3\]) we will concentrate on LLAGNs and give more details about the distribution of the clouds in such systems. Moreover, we will derive the virial factor $f$ for them, and in the final section the conclusions are summarized.
Kinematic Equations of Clouds in the BLR {#s2}
========================================
This paper only considers axisymmetric, steady-state systems. In the subsection (\[ss21\]) we start by deriving the general form of the Jeans equations in cylindrical coordinates ($R, \phi, z$) for a system of particles subject to velocity-dependent forces as well as position-dependent forces. Then, by assuming axisymmetry ($\partial /\partial \phi =0$) and a steady state ($\partial /\partial t =0$), we simplify the Jeans equations and use them to describe the distribution of the clouds in the BLR.
Jeans equations {#ss21}
---------------
If we define the distribution function $F$ as $F=\Delta n/\Delta x \Delta y \Delta z \Delta^{3} v$, the continuity equation in the phase space (see [@Binney87 eq 4.11]) is given by $ \partial F/\partial t + \Sigma_{\alpha = 1}^{6} \partial (F \dot{\omega_{\alpha}})/\partial \omega_{\alpha} = 0 $. This equation can be rewritten as $DF/Dt+F(\partial a_{x}/\partial v_{x}+\partial a_{y}/\partial v_{y}+\partial a_{z}/\partial v_{z})=0 $ in Cartesian position-velocity space ($x,y,z,v_{x},v_{y},v_{z}$) where $DF/Dt$ and $\mathbf{a}$ are respectively the Lagrangian derivative in the phase space and the resultant acceleration vector. On the other hand by the chain rule, the partial derivatives with respect to the Cartesian components of the velocity are related to those respect to the cylindrical components by $ \partial/\partial v_{x}=\cos \phi (\partial/\partial v_{R}) - \sin \phi (\partial/\partial v_{\phi}) $, $ \partial/\partial v_{y}=\sin \phi (\partial/\partial v_{R}) + \cos \phi (\partial/\partial v_{\phi}) $ and $ \partial/\partial v_{z}=\partial/\partial v_{z} $. Also the relationship between the Cartesian and cylindrical components of the acceleration vector is $ a_{x}=a_{R}\cos \phi - a_{\phi} \sin \phi $, $ a_{y}=a_{R}\sin \phi + a_{\phi} \cos \phi $ and $ a_{z}=a_{z} $. Combining all of these equations, we obtain the extended form of the CBE in cylindrical position-velocity space ($R,\phi, z, v_{R}, v_{\phi}, v_{z}$) as $$\frac{\partial F}{\partial t}+v_{R} \frac{\partial F}{\partial R}+\frac{v_{\phi}}{R} \frac{\partial F}{\partial \phi}+v_{z} \frac{\partial F}{\partial z}+\left(a_{R}+\frac{v^{2}_{\phi}}{R}\right) \frac{\partial F}{\partial v_{R}}+$$$$$$ $$\label{eq1}
\left(a_{\phi}-\frac{v_{R}v_{\phi}}{R}\right) \frac{\partial F}{\partial v_{\phi}}+a_{z} \frac{\partial F}{\partial v_{z}}+F\left(\frac{\partial a_{R}}{\partial v_{R}}+\frac{\partial a_{\phi}}{\partial v_{\phi}}+\frac{\partial a_{z}}{\partial v_{z}}\right)=0.$$ As can be seen, in the absence of velocity-dependent forces, equation (\[eq1\]) agrees with the standard form (see [@Binney87 eq 4.15]). As is shown in the Appendix \[a1\], the Jeans equations derived from equation (\[eq1\]) can be written as $$\label{eq2}
\frac{\partial n}{\partial t} + \frac{1}{R} \frac{\partial}{\partial R} (nR\langle v_{R} \rangle)+\frac{1}{R}\frac{\partial}{\partial \phi} (n\langle v_{\phi} \rangle)+\frac{\partial}{\partial z} (n\langle v_{z} \rangle)=0,$$ and $$\frac{\partial}{\partial t} (n\langle v_{R} \rangle)+\frac{\partial}{\partial R} (n\langle v^{2}_{R} \rangle)+\frac{1}{R}\frac{\partial}{\partial \phi} (n\langle v_{R}v_{\phi} \rangle)+\frac{\partial}{\partial z} (n\langle v_{R}v_{z} \rangle)$$ $$\label{eq3}
+n\frac{\langle v^{2}_{R}\rangle -\langle v^{2}_{\phi}\rangle}{R}-n \langle a_{R} \rangle =0,$$ and $$\frac{\partial}{\partial t} (n\langle v_{\phi} \rangle)+\frac{\partial}{\partial R} (n\langle v_{R}v_{\phi} \rangle)+\frac{1}{R} \frac{\partial}{\partial \phi} (n\langle v^{2}_{\phi} \rangle)+\frac{\partial}{\partial z} (n\langle v_{\phi}v_{z} \rangle)$$ $$\label{eq4}
+\frac{2n}{R}\langle v_{\phi}v_{R}\rangle -n \langle a_{\phi} \rangle =0,$$ and $$\frac{\partial}{\partial t} (n\langle v_{z} \rangle)+\frac{\partial}{\partial R} (n\langle v_{R}v_{z} \rangle)+\frac{1}{R}\frac{\partial}{\partial \phi} (n\langle v_{\phi}v_{z} \rangle)+\frac{\partial}{\partial z} (n\langle v^{2}_{z} \rangle)$$ $$\label{eq5}
+\frac{n\langle v_{R}v_{z}\rangle}{R}-n \langle a_{z} \rangle =0,$$ where $n$ is the volume number density in position space. Equations (\[eq2\]) - (\[eq5\]) are an extended form of the Jeans equations describing a collisionless system of particles undergoing both velocity-dependent and position-dependent forces. Considering gravity as the dominant force for an axisymmetric system of particles, equations (\[eq2\]) - (\[eq5\]) reduce to the standard form of the Jeans equations (see equations 4.28 and 4.29 of [@Binney87]).
Dynamics and geometry of BLR {#ss22}
----------------------------
In this subsection, by considering a steady axisymmetric system, we include the Newtonian gravity of the black hole, the isotropic radiation pressure of the central source, and the drag force between the clouds and the ambient medium in the linear regime as the dominant forces. First we discuss the roles of radiation pressure and gravity and after that, through the analysis of the clouds near the midplane, we classify the distribution of the clouds in the BLR.
### Radiation pressure versus gravity
Assuming that the clouds are optically thick, the radiative force can be expressed as $$\label{eq6}
\mathbf{F}_{rad}=\frac{\sigma}{c}\mathcal{F}\mathbf{e}_{r},$$ where $\sigma $ and $ c $ are the cloud’s cross-section and the speed of light respectively, and the isotropic radiation flux, $\mathcal{F}$, is $$\label{eq7}
\mathcal{F}(r)=\frac{L}{4\pi r^{2}}.$$ $L$ is the bolometric luminosity of the central source and $ r=\sqrt{R^{2}+z^{2}}$ is the spherical radius. For the clouds near the midplane, $ z \ll R$, the radiative force per unit of mass can be written as $$\label{eq8}
\mathbf{a}_{Rad}=\Omega_{k,mid}^{2}\frac{3l}{2\mu \sigma_{T}N_{cl}}(R\mathbf{e}_{R}+z\mathbf{e}_{z}),$$ where $\Omega_{k,mid}=\sqrt{GM/R^{3}}$ is the Keplerian angular velocity in the midplane, $\mu $ is the mean molecular weight, $\sigma_{T}$ is the Thomson cross-section, $l$ is the Eddington ratio, and $ N_{cl}$ is the column density of each cloud. Following previous studies, we consider the clouds, with conserved mass $ m_{cl}$, in pressure equilibrium with the inter-cloud gas (e.g., [@Netzer10; @Krause11; @Khajenabi15]). Furthermore, the pressure of the ambient medium, and hence the gas density in individual clouds, $n_{gas}$, are assumed to have a power-law dependence on the (spherical) distance from the centre as $n_{gas} \propto r^{-s}$. As a result, since $r \approx R $ for the clouds near the midplane, the column density defined by $ N_{cl}=m_{cl}/R_{cl}^{2}$ finally becomes $$\label{eq9}
N_{cl}=N_{0}\left(\frac{R}{R_{0}}\right)^{-2s/3},$$ where $R_{0}$ is one light day and $N_{0}$ is the column density at $R_{0}$.
In addition to the gravitational and radiative forces, the drag force opposing relative movement of clouds and the ambient gas is another force that needs to be taken into consideration. Depending on the size of the clouds, there are two regimes for the drag force. These include the Epstein and the Stokes regimes (e.g., @Armitage13). In the Epstein regime, which dominates for the small clouds, the magnitude of the force is proportional to the relative velocity. In the Stokes regime the drag affecting the movement of the large clouds increases as the square of the relative velocity. We assume the clouds have spherical shapes, so the drag coefficient in the Stokes regime depends solely on the Reynolds number which is proportional to the relative velocity. @Shadmehri15 demonstrated that the Reynolds number in the inter-cloud gas is lower than unity. This means that: (1) we can consider the inter-cloud gas as having a laminar flow and (2) the drag coefficient in the Stokes regime is proportional to the inverse of the Reynolds number (e.g., @Armitage13). This means that the drag coefficient in the Stokes regime is proportional to the inverse of relative velocity. As a result, in both the Epstein and Stokes regimes, the magnitude of the drag force is proportional to the relative velocity and both small and large clouds are affected by a linear drag force as $\mathbf{F_{d}}=f_{l}(\mathbf{v}-\mathbf{w})$, where $\mathbf{w}$ is the velocity of the ambient medium and $f_{l}$ is the drag coefficient. The equations of motion of an individual cloud near the midplane are therefore given by
$$a_{R}=-\Omega_{k,mid}^{2}R\left[1-\left(\frac{R}{R_{c}}\right)^{2s/3}\right]-f_{l}(v_{R}-w_{R}),$$ $$a_{\phi}=-f_{l}(v_{\phi}-w_{\phi}),$$ $$\label{eq10}
a_{z}=-\Omega_{k,mid}^{2}z\left[1-\left(\frac{R}{R_{c}}\right)^{2s/3}\right]-f_{l}(v_{z}-w_{z}),$$ where $R_{c}$ is the critical radius defined as $$\label{eq11}
R_{c}=R_{0}\left( \frac{2\mu\sigma_{T}N_{0}}{3l}\right)^{\frac{3}{2s}}.$$ From equations (\[eq8\]) and (\[eq9\]), we see that at $R=R_{c}$ the attractive gravitational and repulsive radiative forces cancel each other and the magnitude of the resultant force is equal to zero. For $R > R_{c}$, the radiative force is stronger than gravity and accelerates the clouds away from the central black hole. For $R < R_{c}$, gravity overcomes the radiative force and pulls the clouds inwards. If we assume $ s=3/2 $, $ \mu=0.61 $, $ l=0.001 $, $ N_{0}\approx 10^{23} cm^{-2} $ and $ \sigma_{T}=6.7\times 10^{-25} cm^{2} $, the value of $ R_{c} $ becomes almost 27 light days, which is of the order of the BLR radius (e.g., @Krolik91). Assuming a steady state and axisymmetry, we substitute equations (\[eq10\]) into equations (\[eq3\]) - (\[eq5\]) to obtain
![Classification of clumpy BLR structure. The black filled circle on the left shows the supermassive black hole. The grey and white areas show the clumpy and non-clumpy regions in the BLR respectively. The first panel shows class A in which the clouds occupy all positions in the BLR. The second panel shows class B. In this class we have a disc for $R < R_{c}$ and a cloudy torus (wind region) for $R > R_{c}$. The third panel shows class C in which the clumpy structure in the BLR is disc-like.[]{data-label="figure1"}](panel1a.eps "fig:"){width="0.8\columnwidth"}\
![Classification of clumpy BLR structure. The black filled circle on the left shows the supermassive black hole. The grey and white areas show the clumpy and non-clumpy regions in the BLR respectively. The first panel shows class A in which the clouds occupy all positions in the BLR. The second panel shows class B. In this class we have a disc for $R < R_{c}$ and a cloudy torus (wind region) for $R > R_{c}$. The third panel shows class C in which the clumpy structure in the BLR is disc-like.[]{data-label="figure1"}](panel1b.eps "fig:"){width="0.8\columnwidth"}\
![Classification of clumpy BLR structure. The black filled circle on the left shows the supermassive black hole. The grey and white areas show the clumpy and non-clumpy regions in the BLR respectively. The first panel shows class A in which the clouds occupy all positions in the BLR. The second panel shows class B. In this class we have a disc for $R < R_{c}$ and a cloudy torus (wind region) for $R > R_{c}$. The third panel shows class C in which the clumpy structure in the BLR is disc-like.[]{data-label="figure1"}](panel1c.eps "fig:"){width="0.8\columnwidth"}
![(a) $\log \chi$ versus $\log L$, for different values of the column density, $N_{0}$. (b) $\log \chi$ versus $\log N_{0}$ for different values of the bolometric luminosity, $L$. In this Figure, $\log \chi < -2$ and $ -2 < \log \chi < 0$ and $\log \chi > 0$ represents classes A, B, and C respectively. []{data-label="figure2"}](panel2a.eps "fig:"){width="0.8\columnwidth"}\
![(a) $\log \chi$ versus $\log L$, for different values of the column density, $N_{0}$. (b) $\log \chi$ versus $\log N_{0}$ for different values of the bolometric luminosity, $L$. In this Figure, $\log \chi < -2$ and $ -2 < \log \chi < 0$ and $\log \chi > 0$ represents classes A, B, and C respectively. []{data-label="figure2"}](panel2b.eps "fig:"){width="0.8\columnwidth"}
$$\frac{\partial}{\partial R} (n\langle v^{2}_{R} \rangle)+\frac{\partial}{\partial z} (n\langle v_{R}v_{z} \rangle)+n\frac{\langle v^{2}_{R}\rangle -\langle v^{2}_{\phi}\rangle }{R}$$ $$\label{eq12}
+n\Omega_{k,mid}^{2}R\left[1-\left(\frac{R}{R_{c}}\right)^{2s/3}\right]+nf_{l}(\langle v_{R}\rangle -w_{R}) =0,$$ and $$\label{eq13}
\frac{\partial}{\partial R} (n\langle v_{R}v_{\phi} \rangle)+\frac{\partial}{\partial z} (n\langle v_{\phi}v_{z} \rangle)+\frac{2n}{R}\langle v_{\phi}v_{R}\rangle+nf_{l}(\langle v_{\phi}\rangle -w_{\phi}) =0,$$ and $$\frac{\partial}{\partial R} (n\langle v_{R}v_{z} \rangle)+\frac{\partial}{\partial z} (n\langle v^{2}_{z} \rangle)+\frac{n\langle v_{R}v_{z}\rangle}{R}+n\Omega_{k,mid}^{2}z\left[1-\left(\frac{R}{R_{c}}\right)^{2s/3}\right]$$ $$\label{eq14}
+nf_{l}(\langle v_{z}\rangle -w_{z}) =0.$$ In this paper, we assume that the drag coefficients are sufficiently large that the clouds and the ambient medium are strongly coupled to each other. Thus we can write $\langle v_{R} \rangle=w_{R}$, $\langle v_{\phi} \rangle=w_{\phi}$ and $\langle v_{z} \rangle=w_{z}$. For simplicity we also assume $\langle v_{z} \rangle =0$, $\langle v_{R}v_{z} \rangle =0 $ and $\langle v_{\phi}v_{z} \rangle =0$. Therefore equations (\[eq2\]), (\[eq12\]), (\[eq13\]) and (\[eq14\]) are respectively reduced to $$\label{eq15}
\frac{1}{R} \frac{\partial}{\partial R} (nR\langle v_{R} \rangle)=0,$$ $$\label{eq16}
\frac{\partial}{\partial R} (n\langle v^{2}_{R} \rangle)+n\frac{\langle v^{2}_{R}\rangle -\langle v^{2}_{\phi}\rangle }{R}+n\Omega_{k,mid}^{2}R\left[1-\left(\frac{R}{R_{c}}\right)^{2s/3}\right]=0,$$ $$\label{eq17}
\frac{\partial}{\partial R} (n\langle v_{R}v_{\phi} \rangle)+\frac{2n}{R}\langle v_{R}v_{\phi}\rangle =0,$$ and $$\label{eq18}
\langle v^{2}_{z} \rangle \frac{\partial n}{\partial z} +n\Omega_{k,mid}^{2}z\left[1-\left(\frac{R}{R_{c}}\right)^{2s/3}\right]=0.$$ For simplicity, $\langle v^{2}_{z} \rangle $ is assumed to be constant in equation (\[eq18\]). Equation (\[eq18\]) shows that, while for $R < R_{c}$ we have $\partial n/\partial z<0 $ and most of the clouds are distributed near the midplane, for $R>R_{c}$ we have $\partial n/\partial z>0 $ and the clouds tend to be located at higher altitudes. This result is in agreement with what we discussed above about $ R_{c} $.
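As a quick numerical check of equation (\[eq11\]), the following minimal Python sketch evaluates $R_{c}$ for the parameter values quoted after equation (\[eq11\]); it reproduces the value of roughly 27 light days. The snippet is illustrative only.

```python
# Minimal sketch: critical radius R_c from equation (11),
# R_c = R_0 * (2*mu*sigma_T*N_0 / (3*l))**(3/(2*s)), with R_0 = 1 light day.
# Parameter values are those quoted in the text (illustrative).

mu      = 0.61       # mean molecular weight
sigma_T = 6.7e-25    # Thomson cross-section [cm^2]
N_0     = 1.0e23     # column density at R_0 [cm^-2]
l       = 1.0e-3     # Eddington ratio
s       = 1.5        # density index

R_c = (2.0 * mu * sigma_T * N_0 / (3.0 * l)) ** (3.0 / (2.0 * s))  # in units of R_0
print(f"R_c ~ {R_c:.1f} light days")   # ~27 light days, of order the BLR radius
```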
### Classification of clumpy distribution
![The scale height $ h_{g}$ versus the radial distance, $R$. The panels (a), (b), (c) and (d) respectively show the effect of the central luminosity , the black hole mass, z-component of the velocity dispersion and the density index on the geometric shape and thickness of the clumpy disc.[]{data-label="figure3"}](panel3a.eps "fig:"){width="0.8\columnwidth"}\
![The scale height $ h_{g}$ versus the radial distance, $R$. The panels (a), (b), (c) and (d) respectively show the effect of the central luminosity , the black hole mass, z-component of the velocity dispersion and the density index on the geometric shape and thickness of the clumpy disc.[]{data-label="figure3"}](panel3b.eps "fig:"){width="0.8\columnwidth"}\
![The scale height $ h_{g}$ versus the radial distance, $R$. The panels (a), (b), (c) and (d) respectively show the effect of the central luminosity , the black hole mass, z-component of the velocity dispersion and the density index on the geometric shape and thickness of the clumpy disc.[]{data-label="figure3"}](panel3c.eps "fig:"){width="0.8\columnwidth"}\
![The scale height $ h_{g}$ versus the radial distance, $R$. The panels (a), (b), (c) and (d) respectively show the effect of the central luminosity , the black hole mass, z-component of the velocity dispersion and the density index on the geometric shape and thickness of the clumpy disc.[]{data-label="figure3"}](panel3d.eps "fig:"){width="0.8\columnwidth"}
![The distribution of BLR clouds for different values of the central luminosity. $ M=4.1 \times 10^{6}M_{\odot}$, $\sigma_{z}=10^{7} cm/s$, $ s=1 $ and $ n_{tot}=10^{6}$. (a) The surface number density versus radial distance (b) The volume number density versus radial distance at $ z=0, 0.2 $ and $ 0.5 $ light day (lt-d).[]{data-label="figure4"}](panel4a.eps "fig:"){width="0.8\columnwidth"} ![The distribution of BLR clouds for different values of the central luminosity. $ M=4.1 \times 10^{6}M_{\odot}$, $\sigma_{z}=10^{7} cm/s$, $ s=1 $ and $ n_{tot}=10^{6}$. (a) The surface number density versus radial distance (b) The volume number density versus radial distance at $ z=0, 0.2 $ and $ 0.5 $ light day (lt-d).[]{data-label="figure4"}](panel4b.eps "fig:"){width="0.8\columnwidth"}
![The distribution of BLR clouds for different density indices. $M=4.1 \times 10^{6}M_{\odot}$, $\sigma_{z}=10^{7} cm/s$, $ l=10^{-4}$ and $n_{tot}=10^{6}$. (a) The surface number density versus the radial distance, (b) The volume number density versus radial distance at $ z=0, 0.2 $ and $ 0.5 $ light days (lt-d).[]{data-label="figure5"}](panel5a.eps "fig:"){width="0.8\columnwidth"} ![The distribution of BLR clouds for different density indices. $M=4.1 \times 10^{6}M_{\odot}$, $\sigma_{z}=10^{7} cm/s$, $ l=10^{-4}$ and $n_{tot}=10^{6}$. (a) The surface number density versus the radial distance, (b) The volume number density versus radial distance at $ z=0, 0.2 $ and $ 0.5 $ light days (lt-d).[]{data-label="figure5"}](panel5b.eps "fig:"){width="0.8\columnwidth"}
![The distribution of BLR clouds for different values of the total number of the clouds. $M=4.1 \times 10^{6}M_{\odot}$, $\sigma_{z}=10^{7} cm/s$, $ l=10^{-2}$ and $ s=1 $. (a) Surface number density versus the radial distance (b) Volume number density versus the radial distance at $ z=0, 0.2 $ and $ 0.5 $ light days (lt-d).[]{data-label="figure6"}](panel6a.eps "fig:"){width="0.8\columnwidth"} ![The distribution of BLR clouds for different values of the total number of the clouds. $M=4.1 \times 10^{6}M_{\odot}$, $\sigma_{z}=10^{7} cm/s$, $ l=10^{-2}$ and $ s=1 $. (a) Surface number density versus the radial distance (b) Volume number density versus the radial distance at $ z=0, 0.2 $ and $ 0.5 $ light days (lt-d).[]{data-label="figure6"}](panel6b.eps "fig:"){width="0.8\columnwidth"}
In this part we compare $R_{c}$ with the innermost radius of the BLR $R_{in}$ and the outermost radius $R_{out}$ to show that there are three classes for clumpy distribution in the BLR:
**Class A**: $R_{c } < R_{in}$ and the clouds fill all positions in the intercloud gas (see the first panel in Figure 1).
**Class B**: $R_{in} < R_{c} < R_{out}$ and the clumpy structure is the combination of the inner disc extending from $R_{in}$ to $R_{c}$ and the outer cloudy torus (wind region) extending from $R_{c}$ to $R_{out}$ (see the second panel in Figure 1).
**Class C**: $R_{c}>R_{out}$ and the clumpy structure is disc-like (see the third panel in Figure 1).
Depending on the observational data available for each AGN, we suggest three methods to determine which class it belongs to.
1\) For some AGNs with a black hole mass independently estimated from the $M - \sigma$ relationship (e.g., [@Onken04; @Woo10; @Graham11; @Grier13]), we can determine the Eddington luminosity defined by $ L_{Edd}=1.3 \times 10^{38} M/M_{\odot} erg/s $. On the other hand, by measuring the bolometric luminosity, $L$, we have the Eddington ratio $ l=L/L_{Edd}$. Also, we assume $ 1.2 \times 10^{21}cm^{-2}< N_{0} < 1.5 \times 10^{24}cm^{-2}$ for the column density (e.g., @Marconi08) and $1 < s < 2.5$ (e.g., @Rees89). We can then calculate the value of $R_{c}$ from equation (\[eq11\]) and compare it with $R_{in}$ and $R_{out}$ specified by the reverberation mapping technique (e.g., @Krolik91) to determine which class each object belongs to.
2\) For cases for which reverberation mapping is not available, the $R-L$ relationship ([@Kaspi00; @Bentz06]) can help us to estimate the BLR radius and compare it with $R_{c}$ estimated by the method described above.
3\) For AGNs with unknown black hole mass, if we assume $R_{in}=10R_{Sch}$ and $R_{out}=1000R_{Sch}$ we can define the parameter $\chi$ as $\chi =R_{c}/R_{out}= (0.005c^{2}R_{0}/GM)(2\mu \sigma_{T}N_{0}/3l)^{3/2s}$. Clearly $ \chi < 0.01 $ implies $R_{c} < R_{in}$ and such systems belong to class A. Also, in the cases for which $0.01 < \chi < 1$ and $\chi > 1$, we see that they belong to the classes B and C respectively. Assuming that $s=3/2$ we can combine the black hole mass with the Eddington ratio to get $\chi$ as $$\label{eq19}
\chi = 4.89 \times 10^{21} \frac{\mu N_{0}}{L}.$$ The parameter $\chi$ derived from equation (\[eq19\]) can give the class of BLR structure for AGNs with unknown black hole mass. The top and bottom panels in Figure \[figure2\] show $\log \chi$ as a function of $\log L$ and $\log N_{0}$ respectively. In Figure 2 we can see that for quasars with luminosities ranging from $10^{44} erg/s $ to $ 10^{47} erg/s$, we have $ -2<\log \chi<0 $ for less luminous systems and $\log \chi<-2 $ for the brightest cases. We therefore expect that the BLR structure of these AGNs is similar to classes A and B for the higher and lower luminosity cases respectively. For Seyfert galaxies with $ 10^{41} erg/s < L < 10^{44} erg/s$, the expected distribution of clouds is as in classes B and C for higher and lower luminosity objects respectively. Finally, for all the LLAGNs with $ L<10^{41} erg/s$ we see that $\log \chi$ is positive and the BLR structure is as in class C (i.e., disc-like). Some studies, through a discussion of the shape of broad emission lines, have shown the existence of disc structure (class C) in LLAGNs (e.g., [@Eracleous03; @Storchi16]). Moreover, they have found the existence of a non-disc region (outer torus in class B) in the outer parts of the BLR in Seyfert galaxies.
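The classification by $\chi$ can be summarized in a few lines of code. The following minimal Python sketch is illustrative only: it evaluates equation (\[eq19\]) (which assumes $s=3/2$, $R_{in}=10R_{Sch}$ and $R_{out}=1000R_{Sch}$) and maps the result onto classes A, B and C; the example luminosities and column density are assumed values, not fits to any particular object.

```python
# Minimal sketch: BLR structure class from equation (19), which assumes
# s = 3/2, R_in = 10 R_Sch and R_out = 1000 R_Sch. Inputs: bolometric
# luminosity L [erg/s] and column density N_0 [cm^-2]. Illustrative only.

def blr_class(L, N_0, mu=0.61):
    chi = 4.89e21 * mu * N_0 / L      # chi = R_c / R_out, equation (19)
    if chi < 0.01:                    # R_c < R_in
        return chi, "A (non-disc)"
    elif chi < 1.0:                   # R_in < R_c < R_out
        return chi, "B (disc + wind)"
    return chi, "C (pure disc)"       # R_c > R_out

# Assumed example values: a bright quasar, a fainter quasar/bright Seyfert, an LLAGN
for L in (1e47, 1e44, 1e40):
    chi, cls = blr_class(L, N_0=1e22)
    print(f"L = {L:.0e} erg/s  ->  chi = {chi:.2e}, class {cls}")
```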
As the final step in this section, we solve equations (\[eq15\]) and (\[eq18\]) for the disc-like approximation to obtain the volume number density $n$. Because of the assumption of strong coupling between the BLR clouds and the hot intercloud medium we have $\langle v_{R} \rangle =w_{R}$. Furthermore, if we assume an advection-dominated accretion flow (ADAF) as the model describing the intercloud gas, then because of the self-similar solution we can write $ w_{R}=-\alpha c_{1}v_{k,mid}$ (e.g., @Narayan94), where $\alpha $ and $ c_{1}$ are two constants of order unity and $ v_{k,mid}$ is the Keplerian velocity in the midplane. On the other hand, from equation (\[eq15\]) we can see that $ nR\langle v_{R} \rangle $ does not depend on $R$, so
n(R,z)=-\frac{\Lambda(z)}{\alpha c_{1}\sqrt{GM}}R^{-\frac{1}{2}},$$ where $\Lambda(z)$ gives the vertical dependence of $n$. By substituting equation (\[eq20\]) into equation (\[eq18\]) and after some algebraic manipulations, equation (\[eq18\]) can be written as $$\label{eq21}
\langle v^{2}_{z} \rangle \frac{\partial \Lambda(z)}{\partial z} +\Omega_{k,mid}^{2}z\left[1-\left(\frac{R}{R_{c}}\right)^{2s/3}\right]\Lambda(z)=0.$$ If we substitute $\Lambda(z)$ derived by integrating equation (\[eq21\]) into equation (\[eq20\]), the volume number density is given by $$\label{eq22}
n(R,z)=-\frac{k_{0}}{\alpha c_{1}\sqrt{GM}}R^{-\frac{1}{2}}\exp \left(-\frac{z^{2}}{2h^{2}_{g}}\right),$$ where $k_{0}$ is the constant of integration and $h_{g}$ is the scale height which is $$\label{eq23}
h_{g}(R)=\frac{\sigma_{z}}{\Omega_{k,mid}}\sqrt{\frac{1}{1-(R/R_{c})^{\frac{2s}{3}}}}.$$ Integrating $n$ over all positions occupied by the clouds, we calculate $ k_{0}$ in terms of the total number of the cloud $ n_{tot}$, and substitute it into equation (\[eq22\]) to derive $n$ as $$\label{eq24}
n(R,z)=\frac{\sqrt{GM}}{(2\pi)^{3/2}}\frac{n_{tot}}{\sigma_{z}\gamma}R^{-1/2}\exp \left(-\frac{z^{2}}{2h^{2}_{g}}\right),$$ where $\sigma_{z}=\sqrt{\langle v^{2}_{z} \rangle}$ is the $z$-component of the velocity dispersion and $\gamma $ is defined by $\gamma=\int_{R_{in}}^{R_{out}}\sqrt{1/\left[1-(R/R_{c})^{\frac{2s}{3}}\right]}\,R^{2}dR$, and where $R_{in}$ and $R_{out}$ are assumed to be 1 and 10 light days respectively. To calculate the surface number density, $\Sigma$, we integrate equation (\[eq24\]) over $z$ to get $$\label{eq25}
\Sigma (R)=\frac{1}{2\pi}\frac{n_{tot}}{\gamma}R\sqrt{\frac{1}{1-(R/R_{c})^{\frac{2s}{3}}}}.$$ In all subsequent sections, we adopt $\mu =0.61$.
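The scale height of equation (\[eq23\]) and the surface number density of equation (\[eq25\]) are straightforward to evaluate numerically. The following minimal Python sketch does so under illustrative assumptions: the black hole mass, $\sigma_{z}$, $s$ and $n_{tot}$ loosely follow the parameter values used in Figures \[figure4\] to \[figure6\], while the adopted $R_{c}=100$ light days is simply an assumed value larger than $R_{out}$, i.e. a class C configuration.

```python
# Minimal sketch: scale height h_g(R) (eq. 23) and surface number density
# Sigma(R) (eq. 25), in CGS units. All numerical values are illustrative
# assumptions (M, sigma_z, s, n_tot loosely follow the figure captions;
# R_c = 100 light days is an assumed class C value, R_c > R_out).
import numpy as np
from scipy.integrate import quad

G, M_SUN, LD = 6.674e-8, 1.989e33, 2.59e15   # CGS constants; LD = 1 light day [cm]
M, sigma_z, s, n_tot = 4.1e6 * M_SUN, 1.0e7, 1.0, 1.0e6
R_in, R_out, R_c = 1.0 * LD, 10.0 * LD, 100.0 * LD

def shape(R):                 # sqrt(1 / (1 - (R/R_c)^(2s/3))), common to eqs 23 and 25
    return 1.0 / np.sqrt(1.0 - (R / R_c) ** (2.0 * s / 3.0))

def h_g(R):                   # equation (23)
    omega_k = np.sqrt(G * M / R**3)
    return sigma_z / omega_k * shape(R)

gamma = quad(lambda R: R**2 * shape(R), R_in, R_out)[0]

def sigma_surf(R):            # equation (25)
    return n_tot * R * shape(R) / (2.0 * np.pi * gamma)

R = 5.0 * LD
print(f"h_g/R ~ {h_g(R) / R:.3f},  Sigma(5 lt-d) ~ {sigma_surf(R):.2e} clouds cm^-2")
```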
VIRIAL FACTOR {#s3}
=============
Disc-like configurations {#ss31}
------------------------
In the last section we saw that in class C we have a disc-like distribution for the BLR. There are now two questions: what is the geometric shape of the clumpy disc, and is its thickness small compared to its radial size? In this section, we will answer these questions and explore the role of various physical parameters in setting the shape and thickness of the clumpy disc. According to equation (\[eq24\]), we consider the curved surface $ z=h_{g}(R)$ as the boundary between the clumpy and non-clumpy regions. By plotting the scale height profile as a function of the radial distance in Figure \[figure3\], we show the geometric shape of the clumpy disc.
In Figure \[figure3\], we see that there is a positive correlation between the radial distance and the scale height. Panels a and b show the effect of the central luminosity and of the black hole mass on the thickness of the clumpy disc respectively. From these we see that with an increase in the central luminosity, the disc thickness increases, and that an increase in the black hole mass leads to a decrease in thickness. This is because as $l$ increases, radiative pressure dominates and pushes the clouds away from the midplane. However, an increase in the black hole mass leads to an increase in the gravitational attraction pulling the clouds toward the midplane. Panels c and d show similar behaviour for $h_{g}$ as a function of $\sigma_{z}$ and $s$: with increases in both of them, $h_{g}$ increases as well. Figure (\[figure3\]) also shows that, as suggested by some authors (e.g., [@Dumont90b; @Goad12]), the clumpy disc is flared (bowl-shaped). The thickness of the clumpy disc is small compared to the radial size ($ h_{g} \approx 0.1R$). This point is important for what follows.
Obviously, as the central luminosity declines, the clouds exposed to this radiation reprocess less energy. As a result, the broad emission lines are weak in LLAGNs. However, in addition to this, the small thickness of the clumpy disc results in a small solid angle being covered by the clumpy disc, and this leads to little capture of the central radiation. The fraction of radiation captured is given by $ d\Omega /4\pi \approx \Theta^{2}/2 \approx h_{g}^{2}/2R^{2}$, which is of the order of $ 10^{-3}$, where $ d\Omega $ is the solid angle covered by the clumpy disc and $\Theta \approx h_{g}/R$. Therefore, all the clouds located in the BLR of an LLAGN can receive only about $ 0.001 $ of the central radiation, which is itself of the order of $ 10^{-4}-10^{-5}$ of the Eddington luminosity, and this leads to the presence of very weak broad emission lines in the spectra of LLAGNs. In the very lowest luminosity cases ($ L=10^{-8} - 10^{-9} L_{Edd}$), the small thickness of the clumpy disc together with the faintness of the central source can cause the broad emission lines to fall below observational detection thresholds, and this can explain the lack of detection of any broad emission lines in such faint objects.
Figures \[figure4\], \[figure5\] and \[figure6\] respectively clarify the effect of $l$, $s$ and $n_{tot}$ on the surface number density $\Sigma $ and the volume number density $n$, which are plotted versus the radius $R$. In these three figures, the first panels present the behaviour of $\Sigma $ versus $R$ and the second panels show the variations of $n$ with radius at $ z=0, 0.2 $ and $ 0.5 $ light day (lt-d). Values of the fixed parameters are specified in the captions. In these figures, the profiles of the surface number density show that most of the clouds are located in the outer parts of the BLR. However, the profiles of the volume number density are somewhat different. Whilst in the midplane most of the clouds are close to the central black hole, at $ z=0.2$ lt-d the value of $n$ in the central parts is almost zero; with increasing radius, it rises to a maximum and then, for larger radial distances, $n$ gradually declines. At $ z = 0.5$ lt-d the behaviour of the curve is similar to that at $z = 0.2$ lt-d, but the peak of the curve moves towards larger radii. Note that, considering the positive correlation between $ h_{g}$ and $R$, there is no contradiction between the behaviour of $\Sigma $ and $n$ as a function of $R$.
Panel 4a shows that as $l$ increases, the ratio of the number of clouds in the outer regions of the BLR to that in the inner regions increases as well. On the other hand, by assuming strong coupling between the clouds and the ambient medium, we have $\langle v_{\phi} \rangle=w_{\phi} \propto v_{k,mid}$ and $\langle v_{R} \rangle=w_{R} \propto v_{k,mid}$. Consequently, we conclude that, with increasing $l$, the number of slowly moving clouds in the outer parts increases and the number of quickly moving clouds in the inner parts is reduced. In other words, as obtained by @Netzer10, we expect that with increasing $l$ the width of the broad emission lines, $\textsc{FWHM}$, decreases. Panel 4b shows that an increase in $l$ leads to a reduction in the number of clouds near the midplane. This is because they are redistributed to higher altitudes. As is shown in Figure \[figure5\], the effect of the density index is similar to that of the central luminosity. Finally, in Figure \[figure6\] we see that an increase in the total number of clouds leads, as expected, to an increase in the values of $\Sigma $ and $n$.
![The panels (a), (b) and (c) show the virial factor as a function of inclination angle $\theta_{0}$, z-component of the velocity dispersion $\sigma_{z}$, and width of broad emission line $\textsc{FWHM}$ respectively. Values of other fixed parameter are listed on each panel.[]{data-label="figure7"}](panel7a.eps "fig:"){width="0.79\columnwidth"} ![The panels (a), (b) and (c) show the virial factor as a function of inclination angle $\theta_{0}$, z-component of the velocity dispersion $\sigma_{z}$, and width of broad emission line $\textsc{FWHM}$ respectively. Values of other fixed parameter are listed on each panel.[]{data-label="figure7"}](panel7b.eps "fig:"){width="0.79\columnwidth"} ![The panels (a), (b) and (c) show the virial factor as a function of inclination angle $\theta_{0}$, z-component of the velocity dispersion $\sigma_{z}$, and width of broad emission line $\textsc{FWHM}$ respectively. Values of other fixed parameter are listed on each panel.[]{data-label="figure7"}](panel7c.eps "fig:"){width="0.79\columnwidth"}
![The panels (a), (b), (c) and (d) show the virial factor as a function of bolometric luminosity ($L$), column density ($N_{0}$), $\beta$ and the density index ($s$) respectively. Values of other fixed parameter are listed on each panel.[]{data-label="figure8"}](panel8a.eps "fig:"){width="0.79\columnwidth"} ![The panels (a), (b), (c) and (d) show the virial factor as a function of bolometric luminosity ($L$), column density ($N_{0}$), $\beta$ and the density index ($s$) respectively. Values of other fixed parameter are listed on each panel.[]{data-label="figure8"}](panel8b.eps "fig:"){width="0.79\columnwidth"} ![The panels (a), (b), (c) and (d) show the virial factor as a function of bolometric luminosity ($L$), column density ($N_{0}$), $\beta$ and the density index ($s$) respectively. Values of other fixed parameter are listed on each panel.[]{data-label="figure8"}](panel8c.eps "fig:"){width="0.79\columnwidth"} ![The panels (a), (b), (c) and (d) show the virial factor as a function of bolometric luminosity ($L$), column density ($N_{0}$), $\beta$ and the density index ($s$) respectively. Values of other fixed parameter are listed on each panel.[]{data-label="figure8"}](panel8d.eps "fig:"){width="0.79\columnwidth"}
Calculation of Virial factor for disc-like structure {#ss32}
----------------------------------------------------
We can derive the virial factor, $f$, for the disc-like clumpy structure as a function of the kinematic parameters of the BLR and the inclination angle (the angle between the observer’s line of sight and the axis of symmetry of the thin disc). In Appendix \[a2\], we will show that the averaged squared line-of-sight velocity $\langle v_{n}^{2} \rangle _{avr}$ can be written as $$\label{eq26}
\langle v_{n}^{2} \rangle _{avr}=\frac{1+\beta}{2}\langle v_{R}^{2} \rangle \sin^{2} \theta_{0}+\langle v_{z}^{2} \rangle \cos^{2} \theta_{0},$$ where $\theta_{0}$ is the inclination angle and $\beta = \langle v_{\phi}^{2} \rangle/\langle v_{R}^{2} \rangle $ is taken to be constant. As in section (\[s2\]), if we assume $\langle v_{z} \rangle =0$, we have $\langle v_{z}^{2} \rangle=\sigma_{z}^{2}$, which is taken to be constant. However, $\langle v_{R}^{2} \rangle $ has to be derived by solving equation (\[eq16\]). Dividing equation (\[eq16\]) by $ n$, we can write $$\label{eq27}
\frac{\partial \langle v^{2}_{R} \rangle}{\partial R}+\left[\frac{\partial \ln (n)}{\partial R}+\frac{1-\beta}{R}\right]\langle v^{2}_{R}\rangle =-\Omega_{k,mid}^{2}R\left[1-\left(\frac{R}{R_{c}}\right)^{\frac{2s}{3}}\right],$$ where $\partial \ln (n)/\partial R $ is derived by the substitution of $ n(R,z)$, from equation (\[eq24\]). Equation (\[eq27\]), as a first order differential equation, then becomes $$\label{eq28}
\frac{\partial \langle v^{2}_{R} \rangle}{\partial R}+\left[\frac{1-2\beta}{2R}+\frac{z^{2}}{h_{g}^{3}}\frac{dh_{g}}{dR}\right]\langle v^{2}_{R}\rangle =-\Omega_{k,mid}^{2}R\left[1-\left(\frac{R}{R_{c}}\right)^{\frac{2s}{3}}\right] .$$ Finding the integrating factor, we can write the solution of equation (\[eq28\]) as $$\langle v^{2}_{R} \rangle =-GMR^{\frac{2\beta -1}{2}}\exp\left(\frac{z^{2}}{2h_{g}^{2}}\right)\int R^{-\frac{3+2\beta}{2}}\left[1-\left(\frac{R}{R_{c}}\right)^{\frac{2s}{3}}\right]$$ $$\label{eq29}
\times \exp\left(-\frac{z^{2}}{2h_{g}^{2}}\right)dR.$$ In this integral, the range of variation of $\exp(-z^{2}/2h_{g}^{2})$ as a function of $R$ is given by $ [\partial \exp(-z^{2}/2h_{g}^{2})/\partial R]\Delta R \approx \exp(-z^{2}/2h_{g}^{2})(z^{2}/h_{g}^{3})(h_{g}/R)\Delta R $, which is of the order of unity. This is because, in a thin disc, we have $ z \approx h_{g}$ and $\Delta R \approx R $. On the other hand, by assuming $R_{out}\approx 10R_{in}$, $\beta=3 $ and $ s=3/2$, we see that, as $R$ increases from $R_{in}$ to $R_{out}$, the value of the other terms inside the integral, $R^{-(3+2\beta)/2}[1-(R/R_{c})^{2s/3}]$, becomes smaller by a factor of 1000. We therefore assume that the value of $\exp(-z^{2}/2h_{g}^{2})$ remains constant and, by taking it out of the integral, $\langle v_{R}^{2} \rangle $ is given by $$\label{eq30}
\langle v_{R}^{2}\rangle = \frac{GM}{R}\left[\frac{2}{1+2\beta}+\frac{6}{4s-6\beta -3}\left(\frac{R}{R_{c}}\right)^{\frac{2s}{3}}\right]+c_{0},$$ where $ c_{0}$ is the constant of integration calculated as follows. As we discussed in the subsection (\[ss22\]), for the thin-disc structure, we have $R<R_{c}$. As a result, in order to find the value of $ c_{0}$, we suppose that $\langle v_{R}^{2} \rangle + \langle v_{\phi}^{2} \rangle = (1+\beta)\langle v_{R}^{2} \rangle \approx 2\textsc{FWHM}^{2}$ at $R=0.5R_{c}$. Finally, by substituting the constant of the integration into equation (\[eq30\]), $\langle v_{R}^{2} \rangle $ can be expressed as $$\langle v_{R}^{2}\rangle = \frac{GM}{R}\left\{\left[\frac{2}{1+2\beta}+\frac{6}{4s-6\beta -3}\left(\frac{R}{R_{c}}\right)^{2s/3}\right]-\left[\frac{4}{1+2\beta}\right.\right.$$ $$\label{eq31}
\left. \left. +\frac{12}{4s-6\beta -3}\left(\frac{1}{2}\right)^{2s/3}\left(\frac{R}{R_{c}}\right)\right]+\frac{2}{1+\beta}\frac{R}{GM}\textsc{FWHM}^{2}\right\}.$$
By substituting $\langle v_{R}^{2}\rangle $ into equation (\[eq26\]), the virial factor, defined by $ f=GM/\left(R\langle v_{n}^{2} \rangle _{avr}\right)$, is given by $$f=\left\{\left[\left(\frac{1+\beta}{1+2\beta}+\frac{3+3\beta}{4s-6\beta -3}\left(\frac{R}{R_{c}}\right)^{2s/3}\right)-\left(\frac{2+2\beta}{1+2\beta}\right.\right.\right.$$
\left.+\frac{R}{GM}\sigma_{z}^{2}\cos^{2}\theta_{0}\right\}^{-1} .$$ Finally, assuming $R \approx R_{out} \approx 1000R_{Sch}$, the virial factor becomes $$f=\left\{\left[\left(\frac{1+\beta}{1+2\beta}+\frac{3+3\beta}{4s-6\beta -3}\right)\chi^{-\frac{2s}{3}}-\left(\frac{2+2\beta}{1+2\beta}\right.\right.\right.$$ $$\left.\left.\left.\left.+\frac{6+6\beta}{4s-6\beta -3}\left(\frac{1}{2}\right)^{\frac{2s}{3}}\right)\chi^{-1}+2000\left(\frac{\textsc{FWHM}}{c}\right)^{2}\right]\sin^{2}\theta_{0}\right.\right.$$ $$\label{eq33}
\left.+2000\left(\frac{\sigma_{z}}{c}\right)^{2}\cos^{2}\theta_{0}\right\}^{-1},$$ where $\chi = R_{c}/R_{out}$ is defined by equation (\[eq19\]).
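For reference, the following minimal Python sketch evaluates equation (\[eq33\]) directly; it is illustrative only, and the adopted values of $\chi$, $\textsc{FWHM}$ and $\sigma_{z}$ are assumptions rather than fits to any object. It simply reproduces the qualitative trend discussed below, namely that $f$ decreases with increasing inclination.

```python
# Minimal sketch: virial factor f from equation (33). The parameter values
# used in the loop (chi, FWHM, sigma_z, beta, s) are illustrative assumptions.
import numpy as np

C_LIGHT = 3.0e10   # speed of light [cm/s]

def virial_factor(theta0_deg, fwhm, sigma_z, chi, beta=3.0, s=1.5):
    th = np.radians(theta0_deg)
    a = (1 + beta) / (1 + 2 * beta) + (3 + 3 * beta) / (4 * s - 6 * beta - 3)
    b = (2 + 2 * beta) / (1 + 2 * beta) \
        + (6 + 6 * beta) / (4 * s - 6 * beta - 3) * 0.5 ** (2 * s / 3)
    radial = a * chi ** (-2 * s / 3) - b / chi + 2000.0 * (fwhm / C_LIGHT) ** 2
    vertical = 2000.0 * (sigma_z / C_LIGHT) ** 2
    return 1.0 / (radial * np.sin(th) ** 2 + vertical * np.cos(th) ** 2)

# f versus inclination for assumed chi = 10, FWHM = 8000 km/s, sigma_z = 2000 km/s
for theta0 in (5, 20, 40, 60, 85):
    f = virial_factor(theta0, fwhm=8.0e8, sigma_z=2.0e8, chi=10.0)
    print(f"theta_0 = {theta0:2d} deg  ->  f = {f:.2f}")
```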
Figures (\[figure7\]) and (\[figure8\]) show the variation of the virial factor as a function of the various parameters. In spite of all the approximations we have used, we see that the value of $f$ is of the order of unity. This is in agreement with previous works (e.g., [@Onken04; @Woo10; @Graham11; @Grier13]). Figure (\[figure7\]) shows that the virial factor changes significantly with the inclination angle $\theta_{0}$, the width of the broad emission lines $\textsc{FWHM}$, and the $z$-component of the velocity dispersion $\sigma_{z}$. From the first panel of Figure (\[figure7\]), it can be seen that as $\theta_{0}$ increases from 0 to 40 degrees (type 1 AGNs), the value of $f$ rapidly falls from nearly 9.0 to 1.0, and with increasing $\theta_{0}$ from 40 to 90 degrees (type 2 AGNs) it gradually decreases from nearly 1.0 to 0.5. The negative correlation between $ f $ and $ \theta_{0} $ is similar to the finding derived by @Pancoast14 for five Seyfert galaxies, namely Arp 151, Mrk 1310, NGC 5548, NGC 6814 and SBS 1116+583A. In the second panel of Figure (\[figure7\]), showing similar behaviour for the virial factor as a function of $\textsc{FWHM}$, we see $ 0.5 \lesssim f \lesssim 6.5 $. The anticorrelation between $f$ and $\textsc{FWHM}$ has been confirmed by @Brotherton15. Finally, the third panel shows that the value of $ f $ lies roughly between 1.4 and 1.8. However, unlike in the first two panels, the slope of the curve in the $ f - \sigma_{z} $ diagram is shallow for lower values of $ \sigma_{z} $ and steep for higher values.
From Figure (\[figure8\]) we see that with increasing $\beta$, $s$ and $N_{0}$ the value of the virial factor, $ f $, decreases, and with increasing $L$ it increases. We have to note, however, that the range of variation of $ f $ is very small (of the order of 0.01 for the variation of $ L $ and 0.001 for the variation of the three other quantities). In other words, $f$ is relatively insensitive to variations of $L$, $\beta$, $s$ and $N_{0}$. The insensitivity of the virial factor to the bolometric luminosity is in agreement with the results found by @Netzer10. They found that while cloud orbits are strongly affected by radiation pressure, there is a relatively small change in $r_{BLR}\textsc{FWHM}^{2}$. This means that radiation pressure does not change the value of the virial factor significantly.
CONCLUSIONS {#s5}
===========
In this work, considering the clouds as a collisionless ensemble of particles, we employed the cylindrical form of the Jeans equations derived in section 2 to describe a geometric model for their distribution in the BLR. The effective forces in this study are the Newtonian gravity of the black hole, the isotropic radiative force arising from the central source, and the drag force in the linear regime. Taking them into account, we showed that there are three classes of BLR configuration: (A) non-disc, (B) disc-wind, and (C) pure disc structure (see Figure \[figure1\]). We also found that the distribution of BLR clouds in the brightest quasars belongs to class A, in the dimmer quasars and brighter Seyfert galaxies it belongs to class B, and in the fainter Seyfert galaxies and all LLAGNs (LINERs) it belongs to class C.
We then derived the virial factor, $ f $, for disc-like structures and found a negative correlation for $f$ as a function of the inclination angle, the width of the broad emission line and the $z$-component of the velocity dispersion. We also found $ 1.0 \lesssim f \lesssim 9.0 $ for type 1 AGNs and $ 0.5 \lesssim f \lesssim 1.0 $ for type 2 AGNs. Moreover, we saw that $ f $ varies approximately from 0.5 to 6.5 for different values of $\textsc{FWHM}$ and from 1.4 to 1.8 for different values of $ \sigma_{z} $. We also showed that $ f $ does not change significantly with variations of the bolometric luminosity, the column density of each cloud, the density index and $ \beta = \langle v_{\phi}^{2} \rangle/\langle v_{R}^{2} \rangle $; the maximum change in the value of $ f $ is of the order of 0.01.
In the Introduction, we mentioned that since each group takes a different sample of AGNs, they find different values for the average virial factor, $ \langle f \rangle $. These different values lead to significant uncertainties in the estimation of black hole masses. On the other hand, in this paper we saw that $ f $ changes significantly with the inclination angle $ \theta_{0} $ and with $ \textsc{FWHM} $ (Figure \[figure7\]). Therefore, in order to obtain more accurate estimates of black hole masses, we suggest that observational campaigns divide a sample of objects into a few subsamples based on the values of $ \theta_{0} $ and $ \textsc{FWHM} $ of the objects and then determine the value of $ \langle f \rangle $ for each subsample separately. In this way we will have several values of $ \langle f \rangle $. Finally, given the values of $ \theta_{0} $ and $ \textsc{FWHM} $ of each object with unknown black hole mass, we can use the appropriate value of $ \langle f \rangle $ in the virial theorem to obtain a more accurate estimate of the black hole mass.
Acknowledgements {#acknowledgements .unnumbered}
================
I am very grateful to the referee, Jian-Min Wang, for his very useful comments which improved the manuscript. I also thank Scott Tremaine for his useful suggestions that clarified some points about the extended form of the collisionless Boltzmann equation.
[99]{} Armitage P. J., 2013, Astrophysics of Planet Formation, Cambridge University Press Balmaverde B., Capetti A., 2014, A&A, 563, A119 Bentz M. C., Peterson B. M., Pogge R. W., Vestergaard M., Onken C. A., 2006, ApJ, 644, 133 Binney J., Tremaine S., 1987, Galactic dynamics, Princeton, NJ, Princeton University Press (BT) Blandford R. D., & McKee, C. F. 1982, ApJ, 255, 419 Bon E., Popovic L. C., Gavrilovic N., La Mura G., Mediavilla E., 2009, MNRAS, 400, 924 Brotherton M. S., Singh V., Runnoe J., 2015, MNRAS, 454, 3864 Bower G. A., Wilson A. S., Heckman T. M., & Richstone D. O. 1996, AJ, 111, 1901 Chen K. & Halpern, J.P. 1989, ApJ, 344, 115. Chen K., Halpern, J.P. & Filippenko, A.V. 1989, ApJ, 339, 742. Dibai E. A. 1977, Soviet Astron. Lett., 3, 1 Dumont A.M. & Collin-Souffrin, S. 1990a, A&AS 83, 71. Dumont, A. M., & Collin-Souffrin, S. 1990b, A&A, 229, 313 Elitzur M. & Shlosman, I. 2006, ApJ, 648, L101 Eracleous M. & Halpern, J.P. 1994, ApJS, 90, 1. Eracleous M., Halpern J. P., 2001, ApJ, 554, 240 Eracleous M. & Halpern, J.P. 2003, ApJ, 599, 886. Filippenko A. V., Halpern J. P., 1984, ApJ, 285, 458 Gaskell C. M., Sparke L. S., 1986, ApJ, 305, 175 Gaskell C.M., 2009, NewAR, 53, 140 Goad M. R., Korista, K. T., & Ruff, A. J. 2012, MNRAS, 426, 3086 Graham A. W., Onken C. A., Athanassoula E., & Combes F. 2011, MNRAS, 412, 2211 Greene J. E., Hood C. E., Barth A. J., Bennert V. N., Bentz M. C., Filippenko A. V., Gates E., Malkan M. A., Treu T., Walsh J. L., Woo J.-H., 2010, ApJ, 723, 409 Grier C.J., Martini P., Watson, L.C., Peterson B.M., Bentz M.C., Dasyra K.M., Dietrich M., Ferrarese L., Pogge R.W. & Zu, Y. 2013, ApJ, 773, 90 Ho L. C., Rix H.-W., Shields J. C., Rudnick G., McIntosh D. H., Filippenko A. V., & Sargent W. L. W. , & Eracleous, M. 2000, ApJ, 541, 120 Ho L. C., 2008, ARA&A, 46, 475 Kaspi S., Smith P.S., Netzer H., Maoz D., Jannuzi B.T., Giveon U., 2000, ApJ, 533, 631 Khajenabi F., Rahmani M., Abbassi S., 2014, MNRAS, 439, 2468 Khajenabi F., 2015, MNRAS, 446, 1848 Kollatschny W. & Bischoff, K. 2002, A&A, 386, L19. Kollatschny W. 2003, A&A, 407, 461 Kormendy J., Ho L. C., 2013, ARA&A, 51, 511 Krause M., Burkert A., Schartmann M., 2011, MNRAS, 411, 550 Krolik J. H., Horne Keith., Kallman T. R., Malkan M. A., Edelson R. A., Kriss, G. A. 1991, ApJ, 371, 541 La Franca F., Onori F., Ricci F., Sani E., Brusa M., Maiolino R., Bianchi S., Bongiorno A., Fiore F., Marconi A., Vignali C. 2015, MNRAS, 449, 1526 Landt H., Elvis M., Ward M. J., Bentz M. C., Korista K. T., Karovska M., 2011, MNRAS, 414, 218 Landt H., Ward M. J., Peterson B. M., Bentz M. C., Elvis M., Korista K. T., Karovska M., 2013, MNRAS, 432, 113 Marconi A., Axon D. J., Maiolino R., Nagao T., Pastorini G., Pietrini P., Robinson A., Torricelli G., 2008, ApJ, 678, 693 Narayan R., Yi I., 1994, ApJ, 428, L13 Netzer H., Marziani P., 2010, ApJ, 724, 318 Nicastro F. 2000, ApJ, 530, L65 Onken C. A., Ferrarese, L., Merritt, D., Peterson, B. M., Pogge, R. W., Vestergaard, M., & Wandel, A. 2004, ApJ, 615, 645 Pancoast Anna., Brewer Brendon J., Treu Tommaso., Park Daeseong., Barth Aaron J., Bentz Misty C., Woo Jong-Hak., 2014, MNRAS, 445, 3073 Peterson B. M. 1993, PASP, 105, 247 Peterson B. M., Ferrarese, L., Gilbert, K. M., et al. 2004, ApJ, 613, 682 Popovic L. C., Mediavilla E., Bon E., Ilic D., 2004, A&A, 423, 909 Rees M. J., Netzer H., Ferland G. J., 1989, ApJ, 347, 640 Shadmehri., 2015, MNRAS, 451.3671 Shields, J. C. et al. 2000, ApJ, 534, L27 Smith J. E., Robinson A., Young S., Axon D. J., Corbett E. 
A., 2005, MNRAS, 359, 846 Storchi-Bergmann T., Baldwin J. A., & Wilson A. S., 1993, ApJ, 410, L11 Storchi-Bergmann T., Schimoia J.S., Peterson B.M., Elvis M., Denney K.D., Eracleous M., Nemmen R. S., 2016, submitted to ApJ Strateva I.V., Strauss, M.A., Hao, L. et al. 2003, AJ, 126, 1720 Wang J.-M., Cheng C., Li Y.-R., 2012, ApJ, 748, 147 Whittle M., Saslaw W. C., 1986, ApJ, 310, 104 Woo J.-H., Treu, T., Barth, A. J., et al. 2010, ApJ, 716, 269
DERIVATION OF JEANS EQUATION {#a1}
============================
In this appendix, we derive the Jeans equations from the collisionless Boltzmann equation (CBE) for particles whose motions are affected by both position-dependent and velocity-dependent forces. First we establish a mathematical identity that will be used several times along the way. From the product rule for derivatives, we have $$\label{eqa1}
\int \psi \frac{\partial F}{\partial v_{i}}d^{3}v=\int \frac{\partial (\psi F)}{\partial v_{i}}d^{3}v - \int F\frac{\partial \psi}{\partial v_{i}}d^{3}v,$$ where $\psi$ is an arbitrary function and $F$ is the distribution function. Moreover, $ v_{i}$ represents the velocity components in cylindrical coordinates ($ v_{R}, v_{\phi}, v_{z}$). Since no particle has infinite velocity, $F$ vanishes for sufficiently large velocities (e.g., @Binney87); hence, by the divergence theorem, the first term on the right-hand side of equation (\[eqa1\]) vanishes and we have $$\label{eqa2}
\int \psi \frac{\partial F}{\partial v_{i}}d^{3}v=-\int F\frac{\partial \psi}{\partial v_{i}}d^{3}v.$$ On the other hand, we define the velocity-space average of an arbitrary quantity $X$ as $\langle X \rangle=n^{-1} \int X F d^{3}v $, where $n$ is the volume number density in position space, given by $ n=\int F d^{3}v $. Integrating the CBE over velocity space, we have $$\int \frac{\partial F}{\partial t}d^{3}v+\int v_{R}\frac{\partial F}{\partial R}d^{3}v+\int \frac{v_{\phi}}{R}\frac{\partial F}{\partial \phi}d^{3}v+\int v_{z}\frac{\partial F}{\partial z}d^{3}v$$ $$+\int \left(a_{R}+\frac{v_{\phi}^{2}}{R}\right)\frac{\partial F}{\partial v_{R}}d^{3}v+\int \left(a_{\phi}-\frac{v_{R}v_{\phi}}{R}\right)\frac{\partial F}{\partial v_{\phi}}d^{3}v$$ $$\label{eqa3}
+\int a_{z}\frac{\partial F}{\partial v_{z}}d^{3}v+\int F\frac{\partial a_{R}}{\partial v_{R}}d^{3}v+\int F\frac{\partial a_{\phi}}{\partial v_{\phi}}d^{3}v+\int F\frac{\partial a_{z}}{\partial v_{z}}d^{3}v=0.$$ Since the velocity components ($ v_{R}, v_{\phi}, v_{z}$) do not depend on $R$, $\phi $ and $z$, and the range of velocities over which we integrate depends on neither time nor space, the partial derivatives with respect to time ($\partial /\partial t$) and space ($\partial /\partial R, \partial /\partial \phi, \partial /\partial z$) in the first four terms on the left-hand side of equation (\[eqa3\]) can be taken outside the integrals. Applying equation (\[eqa2\]) to the fifth, sixth and seventh terms of equation (\[eqa3\]) then yields the first of the Jeans equations, $$\label{eqa4}
\frac{\partial n}{\partial t} + \frac{1}{R} \frac{\partial}{\partial R} (Rn\langle v_{R} \rangle)+\frac{1}{R}\frac{\partial}{\partial \phi} (n\langle v_{\phi} \rangle)+\frac{\partial}{\partial z} (n\langle v_{z} \rangle)=0.$$ The derivation of the remaining Jeans equations is similar to that of the first one. Multiplying the CBE by $ v_{R} , v_{\phi}$ and $ v_{z}$ respectively and integrating over velocity space, the other equations are given by $$\frac{\partial}{\partial t} (n\langle v_{R} \rangle)+\frac{\partial}{\partial R} (n\langle v^{2}_{R} \rangle)+\frac{1}{R}\frac{\partial}{\partial \phi} (n\langle v_{R}v_{\phi} \rangle)+\frac{\partial}{\partial z} (n\langle v_{R}v_{z} \rangle)$$ $$\label{eqa5}
+n\frac{\langle v^{2}_{R}\rangle -\langle v^{2}_{\phi}\rangle}{R}-n \langle a_{R} \rangle =0,$$ and $$\frac{\partial}{\partial t} (n\langle v_{\phi} \rangle)+\frac{\partial}{\partial R} (n\langle v_{R}v_{\phi} \rangle)+\frac{1}{R} \frac{\partial}{\partial \phi} (n\langle v^{2}_{\phi} \rangle)+\frac{\partial}{\partial z} (n\langle v_{\phi}v_{z} \rangle)$$ $$\label{eqa6}
+\frac{2n}{R}\langle v_{\phi}v_{R}\rangle -n \langle a_{\phi} \rangle =0,$$ and $$\frac{\partial}{\partial t} (n\langle v_{z} \rangle)+\frac{\partial}{\partial R} (n\langle v_{R}v_{z} \rangle)+\frac{1}{R}\frac{\partial}{\partial \phi} (n\langle v_{\phi}v_{z} \rangle)+\frac{\partial}{\partial z} (n\langle v^{2}_{z} \rangle)$$ $$\label{eqa7}
+\frac{n\langle v_{R}v_{z}\rangle}{R}-n \langle a_{z} \rangle =0.$$
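As a quick sanity check of the identity (\[eqa2\]), one may verify it symbolically for a toy one-dimensional distribution function. The sketch below uses sympy, with a Gaussian $F$ and the weight $\psi=v^{3}$; both choices are purely illustrative and are not taken from the model of this paper.

```python
# Symbolic check of the integration-by-parts identity (eqa2) for a toy 1-D case.
# The Gaussian F and the weight psi = v**3 are illustrative assumptions only.
import sympy as sp

v = sp.Symbol('v', real=True)
sigma = sp.Symbol('sigma', positive=True)

F = sp.exp(-v**2 / (2 * sigma**2))   # toy distribution function, F -> 0 as |v| -> infinity
psi = v**3                           # an arbitrary smooth weight function

lhs = sp.integrate(psi * sp.diff(F, v), (v, -sp.oo, sp.oo))
rhs = -sp.integrate(F * sp.diff(psi, v), (v, -sp.oo, sp.oo))
print(sp.simplify(lhs - rhs))        # 0, as required by the identity
```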
CALCULATION OF AVERAGED LINE OF SIGHT VELOCITY SQUARE {#a2}
=====================================================
In this appendix, we calculate the averaged square of the line-of-sight velocity. We assume that the central black hole is located at the origin of our coordinate system and that the $ x-y $ plane is the midplane of the BLR. In this configuration, the velocity of a cloud located at ($R, \phi, z$) can be expressed as $$\label{eqb1}
\mathbf{v}=v_{R}\hat{R}+v_{\phi}\hat{\phi}+v_{z}\hat{z},$$ where $\hat{R}$, $\hat{\phi}$ and $\hat{z}$ are $$\hat{R}=\cos \phi \hat{x}+\sin \phi \hat{y},$$ $$\hat{\phi}=-\sin \phi \hat{x}+\cos \phi \hat{y},$$ $$\label{eqb2}
\hat{z}=\hat{z}.$$ On the other hand, the unit vector pointing towards the observer can be written as $$\label{eqb3}
\hat{n}=\sin \theta_{0}\cos \phi_{0}\hat{x}+\sin \theta_{0}\sin \phi_{0}\hat{y}+\cos \theta_{0}\hat{z},$$ where $\theta_{0}$ is the inclination angle, and $\phi_{0}$ determines the direction of $\hat{n}$. Therefore the square of the line-of-sight velocity, defined by $ v_{n}^{2}=(\mathbf{v}\cdot\hat{n})^{2}$, becomes $$v_{n}^{2}=v_{R}^{2}\sin^{2}\theta_{0}\cos^{2}(\phi_{0} - \phi)+v_{\phi}^{2}\sin^{2}\theta_{0}\sin^{2}(\phi_{0} - \phi)+v_{z}^{2}\cos^{2}\theta_{0}$$ $$+2v_{R}v_{\phi}\sin^{2}\theta_{0}\sin(\phi_{0} - \phi)\cos (\phi_{0} - \phi)+2v_{R}v_{z}\sin\theta_{0}\cos \theta_{0}\cos (\phi_{0} - \phi)$$ $$\label{eqb4}
+2v_{\phi}v_{z}\sin \theta_{0}\cos \theta_{0} \sin(\phi_{0} - \phi).$$ As in subsection \[ss22\], we assume $\langle v_{R}v_{z}\rangle=\langle v_{\phi}v_{z}\rangle=0 $. Thus $\langle v_{n}^{2}\rangle $ is given by $$\langle v_{n}^{2}\rangle=\langle v_{R}^{2}\rangle \sin^{2}\theta_{0}\cos^{2}(\phi_{0} - \phi)+\langle v_{\phi}^{2}\rangle \sin^{2}\theta_{0}\sin^{2}(\phi_{0} - \phi)$$ $$\label{eqb5}
+\langle v_{z}^{2} \rangle \cos^{2}\theta_{0}+2\langle v_{R}v_{\phi} \rangle\sin^{2}\theta_{0}\sin(\phi_{0} - \phi)\cos (\phi_{0} - \phi).$$ Here we define the average of $\langle v_{n}^{2}\rangle $ over the $\phi $ coordinate as $\langle v_{n}^{2}\rangle_{avr}=(1/2\pi) \int_{0}^{2\pi}\langle v_{n}^{2}\rangle d\phi $. By taking the average of equation (\[eqb5\]) and by using $\beta = \langle v_{\phi}^{2}\rangle/\langle v_{R}^{2}\rangle$, we can finally write $\langle v_{n}^{2}\rangle_{avr}$ as $$\label{eqb6}
\langle v_{n}^{2} \rangle _{avr}=\frac{1+\beta}{2}\langle v_{R}^{2} \rangle \sin^{2} \theta_{0}+\langle v_{z}^{2} \rangle \cos^{2} \theta_{0}.$$
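The $\phi$-averaging step leading to equation (\[eqb6\]) can be verified symbolically. The following sympy sketch is an illustrative check, not part of the original derivation: it averages the right-hand side of equation (\[eqb5\]) over $\phi$ and compares the result with equation (\[eqb6\]).

```python
# Symbolic verification that averaging (eqb5) over phi yields (eqb6).
import sympy as sp

phi, phi0, th0 = sp.symbols('phi phi0 theta0', real=True)
vR2, vz2, beta = sp.symbols('vR2 vz2 beta', positive=True)
vphi2 = beta * vR2                 # <v_phi^2> = beta <v_R^2>
vRvphi = sp.Symbol('vRvphi')       # <v_R v_phi>; this term averages to zero

vn2 = (vR2 * sp.sin(th0)**2 * sp.cos(phi0 - phi)**2
       + vphi2 * sp.sin(th0)**2 * sp.sin(phi0 - phi)**2
       + vz2 * sp.cos(th0)**2
       + 2 * vRvphi * sp.sin(th0)**2 * sp.sin(phi0 - phi) * sp.cos(phi0 - phi))

avg = sp.integrate(vn2, (phi, 0, 2 * sp.pi)) / (2 * sp.pi)
expected = (1 + beta) / 2 * vR2 * sp.sin(th0)**2 + vz2 * sp.cos(th0)**2
print(sp.simplify(avg - expected))   # 0, reproducing equation (eqb6)
```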
[^1]: E-mail: mohammadghayuri@gmail.com
|
---
abstract: 'The purpose of this note is to give a simple description of a (complete) family of functions in involution on certain hermitian symmetric spaces. This family, obtained via the bi-hamiltonian approach using the Bruhat Poisson structure, is especially simple for projective spaces, for which we present explicit formulas in terms of the momentum map coordinates. We show how these functions are related to the Gelfand-Tsetlin coordinates. We also show how the Lenard scheme can be applied.'
author:
- Philip Foth
date: 'June 25, 2001'
title: 'Integrable systems associated with the Bruhat Poisson structures.'
---
Introduction.
=============
Let $K$ be a compact real form of a complex semi-simple Lie group $G$ and let $H\subset K$ be a subgroup of $K$ defined by $H=K\cap P$, where $P$ is a parabolic subgroup of $G$ containing a Borel subgroup $B\subset G$. The Bruhat Poisson structure $\pii$ on $X=K/H$, first introduced by Soibelman [@Soi] and Lu-Weinstein [@LW], has the property that its symplectic leaves are precisely the Bruhat cells in $X$. If $T=K\cap B$ is a maximal torus of $K$, then $\pii$ is $T$-invariant. Let $\o_s$ (respectively, $\pi_s$) stand for a $K$-invariant symplectic form (respectively, dual bi-vector field) on $X$, which we assume now to be a compact hermitian symmetric space. It was shown by Khoroshkin-Radul-Rubtsov in [@KKR] that the two Poisson structures, $\pii$ and $\pi_s$ are compatible, meaning that the Schouten bracket of $\pii$ and $\pi_s$ vanishes, $[\pii, \pi_s ]=0$. In particular, any bi-vector field of the form $\alpha \pii +\beta \pi_s$, $(\alpha,\beta )\in \R^2$, is Poisson. In this situation, one can introduce the following family $\{ f_k\}$ of functions: $$f_k:= (\pii^{\wedge k},
\o_s^{\wedge k}),$$ obtained by the duality pairing of exterior powers of $\o$ and $\pii$. If the (real) dimension of $X$ is equal to $2n$, then we have $n$ functions, $f_1$, ..., $f_n$, which may carry some useful information about $X$. These functions are in involution with respect to either of the two Poisson structures. We make explicit computations for $\CP^n$, since this is the only case where we can present an explicit coordinate approach. We show how the functions that we have obtained are related to the Gelfand-Tsetlin integrable systems studied by Guillemin and Sternberg [@GSGT]. Analogous statements for other hermitian symmetric spaces will appear elsewhere [@EF]. In the last part of the paper we make explicit computations using the Lenard scheme [@Magri].
[**Acknowledgments.**]{} I would like to thank Lu Jiang-Hua and Sam Evens for answering many questions regarding Bruhat-Poisson structures. Lu Jiang-Hua also provided simple proofs of Propositions 2.1 and 2.2. I thank Hermann Flaschka for conversations about integrable systems. I thank Yan Soibelman for historical remarks, and Ping Xu for discussions about Poisson-Nijenhuis manifolds.
Families of functions in involution.
====================================
Multi-hamiltonian structures are very important in the theory of integrable systems. Starting with the fundamental works of Magri [@Magri], bi- and multi-Hamiltonian structures found many interesting and fundamental applications, as in [@KSM], [@GZ], [@RST] and references therein.
Let $M$ be a manifold and let $\pi_b$ and $\pi_s$ be two Poisson structures on $M$ such that
1\. The Poisson structure $\pi_s$ is non-degenerate (so the subscript $s$ stands for symplectic).
2\. The Poisson structures $\pi_s$ and $\pi_b$ are compatible, meaning that the Schouten bracket $[\pi_s, \pi_b]$ vanishes. Or, equivalently, for any two real numbers $\alpha$ and $\beta$, the bi-vector field $\alpha\pi_s + \beta\pi_b$ defines a Poisson structure on $M$.
If ${\rm dim}(M) =2n$, then we can define $n$ functions $f_1$, ..., $f_n$ as follows:
$$f_j = {\pi_b^j \wedge \pi_s^{n-j}\over \pi_s^n}.$$ The operation of division by the top degree multi-vector field makes perfect sense, since $\pi_s$ is non-degenerate, and thus in any local coordinate system $(x_1,$ ... $x_{2n})$ the $2n$-vector field $\pi_s^{n}$ looks like $$\pi_s^{n}=h(x_1, ...,x_{2n}) \del_{x_1}\wedge \cdots\wedge\del_{x_{2n}},$$ for a non-vanishing function $h(x_1$, ..., $x_{2n})$. Equivalently, if $\o_s$ is the symplectic form dual to $\pi_s$, then one can define $$f_k:= (\pii^{\wedge k}, \o_s^{\wedge k}),$$ where we use the duality pairing $$\Gamma(M, \wedge^{2k}TM)\otimes \Gamma(M, \wedge^{2k}T^*M)\to C^{\infty}(M).$$
It turns out that this family of functions has the following property.
The functions $f_i$ defined above are in involution with respect to either Poisson structure, $\pi_b$ or $\pi_s$.
([**J.-H. Lu**]{}) Let $X_i=i_{df_i}\pi_b$ and let $Y_j=i_{df_j}\pi_s$. Consider the equality $f_k\pi_s^n = \pi_b^k\wedge\pi_s^{n-k}$ and compute $L_{X_l}$ of both sides to arrive at the following identity: $${n-k\over k+1} \{ f_{k+1}, f_l\}_s = -\{ f_k, f_l\}_b+nf_k\{ f_1, f_l \}_s,$$ where the subscripts $s$ or $b$ indicate with respect to which Poisson structure the Poisson bracket is taken. Finally, use induction on $l$. $\bigcirc$
[*Remark.*]{} The approach that we have followed here is intimately related to the Poisson-Nijenhuis structures, that were studied by Magri and Morosi [@MM], Kosmann-Schwarzbach and Magri [@KSM], Vaisman [@Vais] and others. The set of our functions $\{ f_j\}$ can be expressed, polynomially, through the traces of powers of the intertwining operator corresponding to the Nijenhuis tensor.
Now let us take $M=X$ to be a coadjoint orbit in $\kk^*$, which we assume to be a compact hermitian symmetric space. We take $\pi_s=\pi$ - the Kirillov-Kostant-Souriau symplectic structure and $\pi_b=\pii$ - the Bruhat-Poisson structure, which is obtained via an identification of $X$ with $K/(P\cap K)$ as in Introduction. Under this identification, $\pi$ is $K$-invariant. The following was first proved in [@KKR]:
If $X$ is a hermitian symmetric space as above, then the Poisson structures $\pi$ and $\pii$ are compatible.
([**J.-H. Lu**]{}) Let $X$ be a generating vector field for the $K$-action. Clearly, the $K$-invariance of $\pi$ implies that $L_X\pi=0$. Since $\pii$ came from $\kk\wedge\kk$ by applying left and right actions of $K$, $L_X\pii$ is obtained from $\delta(X)$ by applying the $K$-action. Here, $\delta(X)$ is the co-bracket of $X$, which is an element of $\kk\otimes \kk$, since we can view $X$ as an element of $\kk$. Therefore, $L_X\pii$ is a sum of wedges of generating vector fields for the action of $K$. Accordingly, $$[L_X\pii, \pi]=0,$$ which in turn implies that $$L_X[\pii, \pi]=0,$$ and thus $[\pii, \pi]$ is a $K$-invariant 3-vector field on $X$. When $X$ is a hermitian symmetric space, there are none such (since the nil-radical of the corresponding parabolic group is abelian), so it must be zero. $\bigcirc$
Therefore, we have the following
Let $X$ be a coadjoint orbit in $\kk^*$. Assume that $X$ is a hermitian symmetric space of complex dimension $n$. The above recipe yields $n$ functions $(f_1$, ..., $f_n)$ on $X$, which are in involution with respect to either $\pii$ or $\pi$.
The functions $(f_1$, ..., $f_n)$ that we have constructed turn out to be related to the Gelfand-Tsetlin coordinates in the case when $K=SU(n)$, as we will see later on. In the next section we will carry explicit computations of these functions on the projective spaces.
Computations for the projective spaces.
=======================================
Let $\CP^n$ be a complex projective space of (complex) dimension $n$, and let $[Z_0:Z_1:...:Z_n]$ be a homogeneous coordinate system on it. We use the standard Fubini-Study form $\o$ for $\o_s$ and the following description of $\pii$ obtained by Lu Jiang-Hua in [@Lu2] and [@Lu3]. First, we need Lu’s coordinates on the largest Bruhat cell, where $Z_0\ne 0$ and we let $z_i=Z_i/Z_0$: $$y_i:= {z_i\over\sqrt{1+|z_{i+1}|^2 +\cdots + |z_n|^2}}, \ \ 1\le i\le n.$$
Lu’s coordinates are not holomorphic, but convenient for the Bruhat Poisson structure, which now assumes the following form $$\pii = \i \sum_{i=1}^n (1+|y_i|^2)\d_{y_i}\wedge \d_{\bar{y}_i}.$$ In order to be able to compute with $\o$ and $\pii$, we need to move to the polar variables $r_i, \phi_j$ defined by $z_i=r_ie^{\i\phi_i}$ and eventually to the momentum map variables $x_i, \phi_j$ defined by $$x_i=\delta_{1,i}-{r_i^2\over 1+r_1^2+\cdots + r_n^2}.$$ These variables are just a slight distortion (for later convenience) of the standard coordinates on $\R^n$ for the momentum map associated with the maximal compact torus action on $\CP^n$. One of the advantages of using this coordinate system is that the Fubini-Study symplectic structure has the following simple form: $$\o = \sum_{i=1}^n dx_i\wedge d\phi_i.$$ In fact, the simplest form for the Bruhat Poisson structure is also achieved in this coordinate system.
The Bruhat Poisson structure $\pii$ on $\CP^n$ can be written in the coordinate system $(x_j, \phi_i)$ as $$\pii=\sum_{i=1}^n \T_i\wedge \d_{\phi_i},$$ where $$\Theta_i=(x_1+\cdots +x_i)\d_{x_i} +\sum_{j=i+1}^n x_j\d_{x_j}.$$
The proof of this statement is purely computational. One can introduce auxiliary variables $q_i=\log(1+|y_i|^2)$, and use those to write $$\pii = \sum_{i=1}^n \d_{q_i}\wedge \d_{\phi_i},$$ which is the action-angle form for $\pii$. Eventually, one can establish the following relations: $x_1=e^{-q_1}$, and for $j>1$, $$x_j=e^{-(q_1+\cdots +q_j)}-e^{-(q_1+\cdots +q_{j-1})}.$$ The rest is straightforward. $\bigcirc$
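Since the vanishing of the Schouten bracket $[\pi_s,\pii]$ is equivalent to every $\alpha\pi_s+\beta\pii$ being Poisson, one can sanity-check the compatibility directly in these coordinates. The following sympy sketch (an illustration, not code from the paper) does this for $n=2$: it checks the Jacobi identity of the pencil bracket on the coordinate functions, which suffices because the Jacobiator of a bivector field is itself a trivector field.

```python
# Minimal check, for n = 2, that alpha*pi_s + beta*pi_b is Poisson for symbolic
# alpha, beta, with pi_s dual to omega = dx1^dphi1 + dx2^dphi2 and pi_b as in the
# proposition above.  Purely illustrative; the n = 2 restriction is an assumption.
import itertools
import sympy as sp

x1, x2, p1, p2, alpha, beta = sp.symbols('x1 x2 phi1 phi2 alpha beta')
coords = [x1, x2, p1, p2]

def Theta1(f): return x1 * sp.diff(f, x1) + x2 * sp.diff(f, x2)      # Theta_1
def Theta2(f): return (x1 + x2) * sp.diff(f, x2)                     # Theta_2

def bracket_s(f, g):   # canonical bracket dual to the Fubini-Study form
    return sum(sp.diff(f, x) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, x)
               for x, p in [(x1, p1), (x2, p2)])

def bracket_b(f, g):   # bracket of pi_b = Theta_1 ^ d/dphi1 + Theta_2 ^ d/dphi2
    return (Theta1(f) * sp.diff(g, p1) - sp.diff(f, p1) * Theta1(g)
            + Theta2(f) * sp.diff(g, p2) - sp.diff(f, p2) * Theta2(g))

def bracket(f, g):     # a generic member of the pencil
    return alpha * bracket_s(f, g) + beta * bracket_b(f, g)

def jacobiator(f, g, h):
    return (bracket(bracket(f, g), h) + bracket(bracket(g, h), f)
            + bracket(bracket(h, f), g))

print(all(sp.simplify(jacobiator(f, g, h)) == 0
          for f, g, h in itertools.combinations(coords, 3)))   # True
```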
Now, one can see that the simple linear and triangular form of $\pii$ makes the computation of the functions $\{ f_i\}$ extremely simple. We introduce the following linear change of variables on $\R^n$: $$c_k=\sum_{i=1}^k x_i.$$ In these variables, the set of functions $\{ f_i\}$ looks as follows.
The integrals $f_i$ (up to constant multiples) arising from the bihamiltonian structure $(\pi_s, \pii)$ on $\CP^n$ are given by the elementary polynomials in $(c_1, ..., c_n)$: $$f_1=c_1+\cdots + c_n,$$ $$f_j= \sum_{i_1< ... <i_j}c_{i_1}\cdots c_{i_j},$$ $$f_n=c_1\cdots c_n.$$ \[Thwe\]
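For $n=2$ this statement can be checked directly with a short computation. Under the convention $\pi=\sum_{\mu<\nu}P^{\mu\nu}\,\d_\mu\wedge\d_\nu$ one has $\pi^{2}=2\,{\rm Pf}(P)\,\d_{x_1}\wedge\d_{x_2}\wedge\d_{\phi_1}\wedge\d_{\phi_2}$, so that $\sum_j\binom{2}{j}t^jf_j={\rm Pf}(tP_b+P_s)/{\rm Pf}(P_s)$. The following sympy sketch (an illustration, not from the paper) carries this out and recovers the elementary symmetric polynomials in $(c_1,c_2)$ up to constant multiples.

```python
# Check, for n = 2, that the quotients f_j = pi_b^j ^ pi_s^(2-j) / pi_s^2 are the
# elementary symmetric polynomials in c_1 = x1, c_2 = x1 + x2 up to constants.
import sympy as sp

t, x1, x2 = sp.symbols('t x1 x2')
c1, c2 = x1, x1 + x2

def pf4(M):
    """Pfaffian of a 4x4 antisymmetric matrix."""
    return M[0, 1]*M[2, 3] - M[0, 2]*M[1, 3] + M[0, 3]*M[1, 2]

def skew(entries):
    """Antisymmetric 4x4 matrix from its strictly upper-triangular entries."""
    M = sp.zeros(4, 4)
    for (i, j), v in entries.items():
        M[i, j], M[j, i] = v, -v
    return M

# coordinate order (x1, x2, phi1, phi2)
P_s = skew({(0, 2): 1, (1, 3): 1})                       # pi_s, dual to omega
P_b = skew({(0, 2): x1, (1, 2): x2, (1, 3): x1 + x2})    # pi_b = Theta_1^dphi1 + Theta_2^dphi2

poly = sp.expand(pf4(t * P_b + P_s) / pf4(P_s))
# expected: 1 + (c1 + c2) t + c1 c2 t^2, i.e. 2 f_1 = c1 + c2 and f_2 = c1 c2
print(sp.expand(poly - (1 + (c1 + c2)*t + c1*c2*t**2)))  # 0
```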
The explicit nature of these integrals is essential in examining the relation with certain natural flows [@Toda]. The hamiltonian $f_1$ in terms of the momentum map variables is given by $$f_1= nx_1 +(n-1)x_2 +\cdots + 2x_{n-1}+x_n.$$ Then the gradient in the momentum simplex has coordinates $\l_i=n+1-i$. These numbers are also the weights assigned to the vertices (which correspond to the centers of the Bruhat cells). Thus we arrive at
The above flow on $\CP^n$, whose eigenvalues are the consecutive integers from $1$ to $n$, determines the standard Bruhat cell decomposition.
Relation with Gelfand-Tsetlin coordinates.
==========================================
When $X=Gr(k)$ - the grassmannian of $k$-planes in $\C^{n+1}$, we have obtained $k(n-k+1)$ functions in involution on $X$. Let us recall the standard embedding $$\Psi: \ \ F_n \hookrightarrow Gr(1)\times \cdots \times Gr(n),$$ where $F_n$ is the manifold of full flags in $\C^{n+1}$, and the locus of the embedding is given by the incidence relations. This embedding respects the KKS Kaehler structures on the manifolds involved, if we would like to view them as coadjoint orbits in $\kk^*$. Moreover, this embedding is equivariant with respect to the $K=SU(n+1)$-action.
Recall the Gelfand-Tsetlin system on $F_n$. We fix the orbit type of $F_n$, i.e. we fix the eigenvalues $\i \l_i$ and order them, so $\l_1 > \l_2 > \cdots > \l_{n+1}$. For convenience and easier visualization, we will assume that $\l_{n+1}=0$ (so all the eigenvalues are non-negative), which will correspond to working with $\u_{n+1}^*$ rather than with $\su_{n+1}^*$. For convenience, we also identify the Lie algebra $\u_k$ with its dual via $-{\rm Tr}(AB)$. The Gelfand-Tsetlin system looks like [@GS5], [@GSGT]:
$$\begin{array}{ccccccccc}
\l_1 & > & \l_2 & > & \cdots & > & \l_n & > & 0 \\
{} & \mu_1^1 & \ge & \mu_2^1 & \ge & \cdots & \ge & \mu_n^1 & {}\\
{} & {} & {} & \cdots & \cdots & \cdots & {} & {} & {} \\
{} & {} & {} & \mu_1^{n-1} & \ge & \mu_2^{n-1} & {} & {} & {} \\
{} & {} & {} & {} & \mu_1^n & {} & {} & {} & {}
\end{array},$$ where the $i$-th row corresponds to the projection $\u_{n+1}^*\to \u_{n+2-i}^*$, which is dual to the embedding $U(n+2-i)\hookrightarrow U(n+1)$ in the left upper corner. The eigenvalues $\mu_i^j$ together with $\l_i$’s satisfy the interlacing property.
The picture above can be adapted to any orbit, in particular to $Gr(k)$, where we would take $\l_1=\cdots =\l_k>0$, and other $\l$’s equal to zero. When $k$ varies from $1$ to $n$, the picture above acquires more and more non-zero elements. At each step, while going from level $k$ to $k+1$, we will get new integrals on $Gr(k+1)$, which we can pull back to $F_n$ using $\Psi$.
Our goal is to relate the integrals $f_j$, that we obtained in Section 3 using the bi-hamiltonian approach on hermitian symmetric spaces, and the Gelfand-Tsetlin coordinates. We will start working with $M = \CP^n$, the complex projective space.
Let $B$ be the $(n+1)\times (n+1)$ matrix representing an element of $\u(n+1)^*$ such that the only non-zero element of $B$ is $\i\l$, located in the very left upper place. The coadjoint orbit $\O_B$ of $B$ is isomorphic to $\CP^n$, where the identification goes as follows. Any element in the coadjoint orbit of $B$ can be viewed as $ABA^{-1}$, where $A\in U(n+1)$. Let $(a_{ij})$ be the entries of $A$. Then the identification $$w: \ \O_B\to \CP^n$$ is given by $$w(ABA^{-1}) = [a_{11}: a_{21}: \cdots : a_{n+1,1}],$$ in terms of a homogeneous coordinate system $[Z_0: \cdots : Z_n]$ on $\CP^n$. We suspect that the following is well known; in any case, it is not hard to compute that the Gelfand-Tsetlin coordinates are: $$\mu_r^k=0 \ \ {\rm for} \ \ r\ne 1,$$ $$\mu_1^{k} = \lambda (x_1+ \cdots + x_{n-k+1}),$$ where $(x_1, ..., x_n)$ are the momentum map coordinates that we used in the previous section. We arrive at the conclusion that the Gelfand-Tsetlin coordinates $\{ \mu_1^k\}$ coincide (up to a multiple of $\lambda$, which we can assume equal to one) with the coordinates $\{ c_k \}$ introduced in the previous section. Now, it remains to notice that Theorem \[Thwe\] from the previous section immediately yields
The complete family of integrals in involution $\{ f_i \}$ on $\CP^n$ obtained using the bi-hamiltonian approach with respect to the Bruhat Poisson structure and an invariant symplectic structure are expressed by the elementary polynomials in the Gelfand-Tsetlin coordinates.
We prove a similar result for other hermitian symmetric spaces in a forthcoming paper [@EF].
Comparison to the Lenard scheme.
================================
Recall the following result [@Magri]. If $\alpha\pi_0 + \beta\pi_1$ is a Poisson pencil on a manifold $M$, and $V$ is a vector field preserving this pencil, then there exists a sequence of smooth functions $\{ g_j\}$ on $M$ such that $g_1$ is the Hamiltonian of $V$ with respect to $\pi_0$ and the vector field of the $\pi_0$-hamiltonian $g_j$ is the same as the vector field of the $\pi_1$-hamiltonian $g_{j+1}$: $$i_{dg_j}\pi_0 = i_{dg_{j+1}}\pi_1.$$ Moreover, the functions in the family $\{ g_j\}$ are in involution with respect to both $\pi_0$ and $\pi_1$.
Our goal in this section is to show that if we start with $M=\CP^n$, and take the pencil $(\pi_s, \pii)$ as before, then there is a natural choice of $V$ on $\CP^n$ leading to a completely integrable system, and the integrals $\{ g_j \}$ in question can be easily expressed in terms of the coordinates $(c_1, ..., c_n)$ that we introduced in Section 3.
It is a matter of a simple computation that if we start with a hamiltonian $g_1 = a_1 x_1+\cdots +a_n x_n$, where $(x_1, ..., x_n)$ are the momentum map coordinates as before, then the corresponding initial vector field $V$ is given by $$V = i_{dg_1}\pii = \sum_j [a_j(x_1+\cdots x_j)+a_{j+1}x_{j+1}+\cdots +
a_nx_n]\partial_{\phi_j}.$$ From this, one can compute $\displaystyle{g_2=\sum_j {a_j\over 2}x_j^2 +
\sum_{l < k} a_kx_lx_k},$ etc. An interesting choice for $g_1$ turns out to be $$g_1=c_1+\cdots + c_n = nx_1+(n-1)x_2 +\cdots + x_n,$$ which coincides with $f_1$ from Section 3. The reason for this choice is
The Lenard scheme associated to the Poisson pencil $(\pii$, $\pi_s)$ on $\CP^n$ which starts with $g_1=c_1+\cdots + c_n$ and $$V=\sum_j[(n-j+1)(x_1+\cdots + x_j)+(n-j)x_{j+1}+\cdots + 2x_{n-1} +
x_n]\partial_{\phi_j},$$ yields $$g_k=c_1^k+c_2^k+\cdots + c_n^k,$$ which determines a completely integrable bi-hamiltonian system on $\CP^n$.
With all the explicit formulas that we have presented in this paper, the proof is a simple computation. $\bigcirc$
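As a small consistency check of the displayed formula for the initial vector field $V=i_{dg_1}\pii$ given before the theorem, one can compute the $\phi_j$-components $\Theta_j(g_1)$ symbolically for a particular $n$ and symbolic coefficients $a_1,\dots,a_n$. The sketch below is an illustration only (the choice $n=3$ is an assumption) and is not code from the paper.

```python
# Compare the phi_j-components of i_{dg_1} pi_b with the bracketed expression
#   a_j (x_1+...+x_j) + a_{j+1} x_{j+1} + ... + a_n x_n
# for n = 3 and symbolic a_1, ..., a_n.
import sympy as sp

n = 3
x = sp.symbols('x1:%d' % (n + 1))
a = sp.symbols('a1:%d' % (n + 1))
g1 = sum(a[i] * x[i] for i in range(n))

def Theta(i, f):
    """Vector field Theta_i of Section 3 applied to a function of x only."""
    return (sum(x[k] for k in range(i + 1)) * sp.diff(f, x[i])
            + sum(x[j] * sp.diff(f, x[j]) for j in range(i + 1, n)))

# phi_j-component of i_{dg_1} pi_b is Theta_j(g_1), since g_1 is independent of phi
computed = [sp.expand(Theta(j, g1)) for j in range(n)]
claimed = [sp.expand(a[j] * sum(x[k] for k in range(j + 1))
                     + sum(a[l] * x[l] for l in range(j + 1, n)))
           for j in range(n)]
print(computed == claimed)   # True
```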
We should remark that the constants $(a_1, ...,a_n)$ for the first hamiltonian in the Lenard scheme have to be chosen with care, for two reasons. First, the computations are not simple for an arbitrary choice. Second, as the next example shows, we do not always arrive at a completely integrable system.
[**Example.**]{} If one takes $g_1=x_1+\cdots + x_n$, and $V=\sum_j (x_1+\cdots + x_n)\partial_{\phi_j}$, then applying the above scheme, one would obtain $$g_k=(x_1+\cdots + x_n)^k = (g_1)^k.$$ The differentials of all functions in this family are clearly linearly dependent.
Department of Mathematics\
University of Arizona\
Tucson, AZ 85721-0089\
foth@math.arizona.edu
[*AMS subj. class.*]{}: primary 58F07, secondary 53B35, 53C35.
|
---
abstract: 'This paper is motivated by recent applications of Diophantine approximation in electronics, in particular, in the rapidly developing area of Interference Alignment. Some remarkable advances in this area give substantial credit to the fundamental Khintchine–Groshev Theorem and, in particular, to its far reaching generalisation for submanifolds of a Euclidean space. With a view towards the aforementioned applications, here we introduce and prove quantitative explicit generalisations of the Khintchine–Groshev Theorem for non–degenerate submanifolds of ${\mathbb{R}}^n$. The importance of such quantitative statements is explicitly discussed in Jafar’s monograph [@J2010 §4.7.1].'
address: 'Department of Mathematics, University of York, York, YO10 5DD, UK '
author:
- 'F. Adiceam, V. Beresnevich, J. Levesley, S. Velani'
- 'E. Zorin'
title: Diophantine Approximation and applications in Interference Alignment
---
[^1]
[*Dedicated to Maurice Dodson* ]{}
[2000 [*Mathematics Subject Classification*]{}: Primary 11J83; Secondary 11J13, 11K60, 94A12]{}
[Keywords: Metric Diophantine approximation, Khintchine–Groshev Theorem, non-degenerate manifolds.]{}
Introduction
============
The present paper is motivated by a recent series of publications, including [@GMK2010; @J2010; @MOGMAK; @MOGMAK2; @MMK2010; @NW2012; @WuShamaiVerdu; @XU2010; @ZamanighomiWang], which utilize the theory of metric Diophantine approximation to develop new approaches in interference alignment, a concept within the field of wireless communication networks. This new link is both surprising and striking. The key ingredient from the number theoretic side is the fundamental Khintchine–Groshev Theorem and its variations. In this paper we seek to address certain problems in Diophantine approximation which crop up, or impinge upon, the applications to interference alignment. The results obtained represent quantitative refinements of the Khintchine–Groshev Theorem that are relevant to the applications mentioned above. Indeed, the desirability of such quantitative statements is explicitly alluded to in Jafar’s monograph [@J2010 §4.7.1].
Although the main emphasis will be on the Khintchine–Groshev Theorem for submanifolds of ${\mathbb{R}}^n$, we begin by considering the classical theory for systems of linear forms of independent variables. This approach has two benefits. Firstly, we are able to introduce the key ideas without too much technical machinery obscuring the picture. Secondly, the refinements of the classical theory produce effective results with much better constants.
In order to recall Khintchine’s theorem we first define the set $ {\mathcal W}(\psi) $ of *$\psi$-well approximable numbers*. To this end, denote by ${{\mathbb{R}}_+}$ the set of non–negative real numbers. Given a real positive function $\psi: {{\mathbb{R}}_+}\to {{\mathbb{R}}_+}$ with $ \psi(r)\rightarrow0$ as $r\rightarrow\infty$, let $${\mathcal W}(\psi):=\left\{ x \in {\mathbb{R}}: |qx-p|<\psi(q)\text{ for i.m. }(q,p)\in{\mathbb{N}}\times{\mathbb{Z}}\right\},$$ where ‘i.m.’ reads ‘infinitely many’. For obvious reasons the function $\psi$ is often referred to as an *approximating function*. The points $x$ in ${\mathcal W}(\psi)$ are characterized by the property that they admit approximation by rational points $p/q$ with error at most $\psi(q)/q$.
A simple ‘volume’ argument together with the Borel–Cantelli Lemma from probability theory implies that $$|{\mathcal W}(\psi)|= 0 \quad {\rm \ if \ } \quad \sum_{q=1}^\infty \,\,\psi(q) \; <\infty \,,$$ where $|X|$ stands for the Lebesgue measure of $X\subset{\mathbb{R}}$. The above convergence statement represents the easier part of the following beautiful result due to Khintchine which gives a criterion for the size of the set ${\mathcal W}(\psi)$ in terms of Lebesgue measure. In what follows, we say that $X\subset{\mathbb{R}}$ is *full* in ${\mathbb{R}}$ and write $|X| = \text{\rm F\small{ULL}}$ if $|{\mathbb{R}}\setminus X | = 0 $; that is, the complement of $X$ in ${\mathbb{R}}$ is of Lebesgue measure zero. The following is a slightly more general version of Khintchine, see [@Beresnevich-Dickinson-Velani-06:MR2184760].
\[thmA\] Let $\psi$ be an approximating function. Then $$|{\mathcal W}(\psi)| \ = \ \begin{cases} 0
&{\displaystyle}\text{if } \quad \sum_{q=1}^\infty \psi(q)<\infty \, , \\[3ex]
\text{\rm F\small{ULL}}
&{\displaystyle}\text{if } \quad \sum_{q=1}^\infty \psi(q)=\infty
\ \text{ and $\psi$ is
monotonic}.
\end{cases}$$
Thus, given any monotonic approximating function $\psi$, for almost all[^2] $x \in{\mathbb{R}}$ the inequality $|x-p/q | < \psi(q)/q$ holds for infinitely many rational numbers $p/q$ if and only if the sum $\sum_{q=1}^\infty\psi(q)$ diverges.
There are various generalisations of Khintchine’s theorem to higher dimensions — see [@BBDV09] for an overview. Here we shall consider the case of systems of linear forms which originates from a paper by Groshev in 1938. In what follows, $m$ and $n$ will denote positive integers and ${M_{m,n}}$ will stand for the set of $m\times n$ matrices over ${\mathbb{R}}$. Given a function $\Psi:{\mathbb{Z}}^n\to{{\mathbb{R}}_+}$, let $${\mathcal W}_{m,n}(\Psi):=\left\{ {{\mathbf{X}}}=(x_{i,j}) \in {M_{m,n}}: \|{{\mathbf{X}}}{{\mathbf{a}}}\|<\Psi({{\mathbf{a}}})\text{ for i.m. }{{\mathbf{a}}}\in{\mathbb{Z}}^n\setminus\{{{\mathbf{0}}}\}\right\},$$ where ${{\mathbf{a}}}=(a_1,\dots,a_n)$, $$\|{{\mathbf{X}}}{{\mathbf{a}}}\| := \max_{1\le i\le m}\|x_{i,1}a_1 + \ldots + x_{i,n}a_n\|$$ and $ \| x\| := \min \{ | x - k | : k \in {\mathbb{Z}}\}$ is the distance of $ x \in {\mathbb{R}}$ from the nearest integer. Given a subset $X$ in ${M_{m,n}}$, we will write $|X|_{mn}$ for its ambient (i.e. $mn$–dimensional) Lebesgue measure. It is easily seen that ${\mathcal W}_{1,1}(\Psi)$ coincides with ${\mathcal W}(\psi)$ when $\Psi(q)=\psi(|q|)$. Therefore the following result is the natural extension of Theorem \[thmA\] to higher dimensions. Notice that there is no monotonicity assumption on the approximating function.
\[thmB\] Let $m,n\in{\mathbb{N}}$ with $nm>1$, $\psi:{\mathbb{N}}\to{{\mathbb{R}}_+}$ be an approximating function and $$\label{theorem_one_def_S}
\Sigma_\psi:=\sum_{q=1}^\infty q^{n-1}\psi(q)^m\,.$$ Let $\Psi:{\mathbb{Z}}^n\to{{\mathbb{R}}_+}$ be given by $\Psi({{\mathbf{a}}}):=\psi(|{{\mathbf{a}}}|)$ for ${{\mathbf{a}}}=(a_1,\dots,a_n)\in{\mathbb{Z}}^n\setminus\{{{\mathbf{0}}}\}$, where $|{{\mathbf{a}}}|=\max_{1\le i\le n}|a_i|$. Then $$|{\mathcal W}_{m,n}(\Psi)|_{mn} \ = \ \begin{cases} 0
&{\displaystyle}\text{if } \quad \Sigma_\psi<\infty \, , \\[1ex]
\text{\rm F\small{ULL}}
&{\displaystyle}\text{if } \quad \Sigma_\psi=\infty\,.
\end{cases}$$
Theorem B was first obtained by Groshev under the assumption that $q^n\psi(q)^m$ is monotonic in the case of divergence. The redundancy of the monotonicity condition for $n\ge 3$ follows from Schmidt’s paper [@Schmidt-1960 Theorem 2] and for $ n = 1 $ from Gallagher’s paper [@Ga65]. Theorem B as stated was eventually proved in [@BV10] where the remaining case of $n=2$ was addressed. The convergence case of Theorem B is a relatively simple application of the Borel–Cantelli Lemma and it holds for arbitrary functions $\Psi$. Thus together with Theorem A, we have the following extremely general statement in the case of convergence.
\[thmC\] Let $m,n\in{\mathbb{N}}$ and $\Psi:{\mathbb{Z}}^n\to{{\mathbb{R}}_+}$ be any function such that the sum $$\label{S}
\Sigma_\Psi:=\sum_{{{\mathbf{a}}}\in{\mathbb{Z}}^n\setminus\{{{\mathbf{0}}}\}}\Psi({{\mathbf{a}}})^m$$ converges. Then $$|{\mathcal W}_{m,n}(\Psi)|_{mn} = 0\,.$$
An immediate consequence of Theorem \[thmC\] is the following statement.
\[cor1\] Let $\Psi$ be as in Theorem \[thmC\]. Then, for almost every ${{\mathbf{X}}} \in {M_{m,n}}$ there exists a constant $\kappa ({{\mathbf{X}}}) >0 $ such that $$\label{vb1}
\|{{\mathbf{X}}}{{\mathbf{a}}} \| \ > \ \kappa({{\mathbf{X}}}) \, \Psi({{\mathbf{a}}}) \qquad \forall \ \ {{\mathbf{a}}} \in {\mathbb{Z}}^n\setminus\{{{\mathbf{0}}}\}\,.$$
In recent years estimates of this kind have become an important ingredient in the study of the achievable number of degrees of freedom in various Interference Alignment schemes in communication electronics — see, e.g., [@MOGMAK]. The applications typically require that $\kappa({{\mathbf{X}}})$ is independent of ${{\mathbf{X}}}$. Unfortunately, this is impossible to guarantee with probability 1, that is, on a set of full Lebesgue measure. To demonstrate this claim, let us define the following set: $$\label{def_B}
\mathcal{B}_{m,n}(\Psi,\kappa):=\Big\{{{\mathbf{X}}} \in {M_{m,n}}: \|{{\mathbf{X}}}{{\mathbf{a}}}\| > \kappa\Psi({{\mathbf{a}}}) \ \ \ \forall\ {{\mathbf{a}}} \in {\mathbb{Z}}^n\setminus\{{{\mathbf{0}}}\}\Big\}\,.$$ Then, for any $\kappa$ and $\Psi$, the set $\mathcal{B}_{1,n}(\Psi,\kappa)$ does not intersect the strip $$[-\kappa\Psi({{\mathbf{a}}}),\kappa\Psi({{\mathbf{a}}})]\times{\mathbb{R}}^{n-1}$$ with ${{\mathbf{a}}}=(1,0,\dots,0)$, and this excluded set is of positive probability. In the light of this example it becomes highly desirable to address the following problem:
**Problem.** *Investigate the dependence between $\kappa$ and the probability of $\mathcal{B}_{m,n}(\Psi,\kappa)$.*
As the first step to understanding this problem we obtain the following straightforward consequence of Theorem C.
\[theorem\_linear\_forms\] Let $m,n\in{\mathbb{N}}$ and $\mu$ be a probability measure on ${M_{m,n}}$ that is absolutely continuous with respect to Lebesgue measure on ${M_{m,n}}$. Let $\Psi:{\mathbb{Z}}^n\to{{\mathbb{R}}_+}$ be any function such that (\[S\]) converges. Then for any $\delta\in(0,1)$ there is a constant $\kappa>0$ depending only on $\mu$, $\Psi$ and $\delta$ such that $$\label{theorem_linear_forms_ie_statement}
\mu\left(\mathcal{B}_{m,n}(\Psi,\kappa)\right)\ge 1-\delta.$$
Prior to giving a proof of this theorem recall that a measure $\mu$ on ${M_{m,n}}$ is *absolutely continuous with respect to Lebesgue measure* if there exists a Lebesgue integrable function $f:{M_{m,n}}\rightarrow{{\mathbb{R}}_+}$ such that for every Lebesgue measurable subset $A$ of ${M_{m,n}}$, one has that $$\label{vb2}
\mu(A)=\int_Af,$$ where $\int_A{}f$ is the Lebesgue integral of $f$ over $A$. The function $f$ is often referred to as the *distribution (or density) of $\mu$*.
Since $\mu$ is absolutely continuous with respect to Lebesgue measure, Theorem C implies that $\mu({\mathcal W}_{m,n}(\Psi))=~0$. Hence $\mu(M_{m,n}\setminus{\mathcal W}_{m,n}(\Psi))=\mu(M_{m,n})=1$. Note that $$\bigcup_{\kappa>0}\mathcal{B}_{m,n}(\Psi,\kappa)=M_{m,n}\setminus{\mathcal W}_{m,n}(\Psi)\,.$$ Theorem \[theorem\_linear\_forms\] now follows on using the continuity of measures.
In view of our previous discussion we have that $\kappa\to0$ as $\delta\to0$. Then, the above problem specialises to the explicit understanding of the dependence of $\kappa$ on $\delta$. This will be the main content of the next section. Subsequent sections will be devoted to obtaining a similar effective version of the convergence Khintchine–Groshev Theorem for non–degenerate submanifolds of ${\mathbb{R}}^n$. This constitutes the main substance of the paper. The results are obtained by exploiting the techniques of Bernik, Kleinbock and Margulis [@Bernik-Kleinbock-Margulis-01:MR1829381] originating from the seminal work of Kleinbock and Margulis [@Kleinbock-Margulis-98:MR1652916] on the Baker–Sprindžuk conjecture.
The theory for independent variables
====================================
To begin with we give an alternative proof of Theorem \[theorem\_linear\_forms\] which introduces an explicit construction that will be utilized for quantifying the dependence of $\kappa$ on $\delta$. Indeed, in the case that $\mu$ is a uniform distribution on a unit cube the proof already identifies the required dependence.
Theorem \[theorem\_linear\_forms\] revisited {#2.1}
--------------------------------------------
By a unit cube in ${M_{m,n}}$ we will mean a subset of ${M_{m,n}}$ given by $$\big\{(x_{i,j})\in{M_{m,n}}:\alpha_{i,j}\le x_{i,j}<\alpha_{i,j}+1\big\}$$ for some fixed matrix $(\alpha_{i,j})\in{M_{m,n}}$. Given ${{\mathbf{a}}}\in{\mathbb{Z}}^n\setminus\{{{\mathbf{0}}}\}$ and ${\varepsilon}>0$, let $\mathcal{W}({{\mathbf{a}}},{\varepsilon})$ denote the set of ${{\mathbf{X}}}\in{M_{m,n}}$ such that $$\label{theorem_linear_forms_reciproque_inequality}
\|{{\mathbf{X}}}{{\mathbf{a}}}\| \ \le {\varepsilon}\,.$$ It is easily seen that $\mathcal{W}({{\mathbf{a}}},{\varepsilon})$ is invariant under additive translations by an integer matrix; that is, $$\mathcal{W}({{\mathbf{a}}},{\varepsilon})+{{\mathbf{B}}}=\mathcal{W}({{\mathbf{a}}},{\varepsilon})$$ for any ${{\mathbf{B}}}\in{M_{m,n}}({\mathbb{Z}})$, where ${M_{m,n}}({\mathbb{Z}})$ denotes the set of $m\times n$ matrices with integer entries. Furthermore, we have that $$\label{v103}
|\mathcal{W}({{\mathbf{a}}},{\varepsilon})\cap P|_{mn}=(2{\varepsilon})^m$$ for any $0\le {\varepsilon}\le\tfrac12$ and any unit cube $P$ in ${M_{m,n}}$. This follows, for example, from [@Sprindzuk-1979-Metrical-theory Chapter 1, Lemma 8]. Then, since $$\label{sv111} \Sigma_\Psi:=\sum_{{{\mathbf{a}}}\in{\mathbb{Z}}^n\setminus\{{{\mathbf{0}}}\}}\Psi({{\mathbf{a}}})^m <\infty \,$$ we must have that $$\label{v105}
M_\Psi:=\sup\{\Psi({{\mathbf{a}}}):{{\mathbf{a}}}\in{\mathbb{Z}}^n\setminus\{{{\mathbf{0}}}\}\}<\infty.$$ In what follows we will assume that $$\label{v104}
2\kappa M_\Psi\le 1\,.$$ This condition ensures that we can apply (\[v103\]) with ${\varepsilon}=\kappa\Psi({{\mathbf{a}}})$.
Fix a unit cube $P_{{{\mathbf{0}}}}$ in ${M_{m,n}}$ and for each $\Delta\in{M_{m,n}}({\mathbb{Z}})$, let $$P_\Delta:=P_{{{\mathbf{0}}}}+\Delta$$ denote the additive translation of $P_{{{\mathbf{0}}}}$ by $\Delta$. Clearly, $P_\Delta$ itself is a unit cube. Furthermore, $$\label{v111}
{M_{m,n}}=\bigcup_{\Delta\in{M_{m,n}}({\mathbb{Z}})}^{\circ} P_\Delta \, .$$ Note that the union is disjoint. Using and the fact that $${M_{m,n}}\setminus \mathcal{B}_{m,n}(\Psi,\kappa)=\bigcup_{{{\mathbf{a}}}\in{\mathbb{Z}}^n\setminus\{{{\mathbf{0}}}\}}\mathcal{W}({{\mathbf{a}}},\kappa\Psi({{\mathbf{a}}})) \, ,$$ we obtain that for each $\Delta\in{M_{m,n}}({\mathbb{Z}})$, $$\begin{aligned}
\label{v100}
|P_{\Delta}\setminus \mathcal{B}_{m,n}(\Psi,\kappa)|_{mn} & \le & \sum_{{{\mathbf{a}}}\in{\mathbb{Z}}^n\setminus\{{{\mathbf{0}}}\}}|\mathcal{W}({{\mathbf{a}}},\kappa\Psi({{\mathbf{a}}}))\cap P_{\Delta}|_{mn} \nonumber \\[2ex]
& = & \sum_{{{\mathbf{a}}}\in{\mathbb{Z}}^n\setminus\{{{\mathbf{0}}}\}}(2\kappa\Psi({{\mathbf{a}}}))^m
\ = \ (2\kappa)^m\Sigma_\Psi\,.\end{aligned}$$ Since $\mu$ is a probability measure, it follows from (\[v111\]) that there exists a finite subset $A\subset{M_{m,n}}({\mathbb{Z}})$ such that $$\label{v106}
\mu\left(\bigcup_{\Delta\in A} P_{\Delta}\right)>1-\delta/2\,.$$ Let $N=\#A$ be the number of elements in $A$. Since $\mu$ is absolutely continuous with respect to Lebesgue measure, for every $\Delta\in A$ and any ${\varepsilon}_1>0$, there exists ${\varepsilon}_2$ such that for any measurable subset $X$ of $P_\Delta$, $$\label{v101}
|X|_{mn}<{\varepsilon}_2\quad\Rightarrow\quad \mu(X)<{\varepsilon}_1\,.$$ In view of (\[v100\]), applying (\[v101\]) to $X=P_{\Delta}\setminus \mathcal{B}_{m,n}(\Psi,\kappa)$ and ${\varepsilon}_1=\delta/(2N)$ implies the existence of $${\varepsilon}_2={\varepsilon}_2(\Delta,\delta,N)>0$$ such that $$\label{v102}
\mu(P_{\Delta}\setminus \mathcal{B}_{m,n}(\Psi,\kappa))<\delta/(2N)\qquad\text{if}\qquad(2\kappa)^m\Sigma_\Psi\le {\varepsilon}_2(\Delta,\delta,N)\,.$$ In particular, the second inequality in (\[v102\]) holds if $$\kappa\le \kappa_\Delta:=\frac12\left(\frac{{\varepsilon}_2(\Delta,\delta,N)}{\Sigma_\Psi}\right)^{1/m}\,.$$ Since $A$ is finite, there exists $\kappa$ satisfying (\[v104\]) and $$0<\kappa\le \min_{\Delta\in A} \kappa_\Delta\,.$$ Clearly, for such a choice of $\kappa$ the first inequality in (\[v102\]) holds for any $\Delta\in A$. Hence, by (\[v106\]) and the additivity of $\mu$ we obtain that $$\begin{aligned}
\mu({M_{m,n}}\setminus \mathcal{B}_{m,n}(\Psi,\kappa)) & \le & \frac\delta2+
\sum_{\Delta\in A}\mu(P_{\Delta}\setminus \mathcal{B}_{m,n}(\Psi,\kappa))
\\[1ex]
& \le & \frac\delta2+
\sum_{\Delta\in A}\frac\delta{2N} \ = \ \frac\delta2+N\frac\delta{2N} \ =\ \delta\,.\end{aligned}$$ The upshot of this is that $$\label{v108}
\mu(\mathcal{B}_{m,n}(\Psi,\kappa)) = 1-\mu({M_{m,n}}\setminus \mathcal{B}_{m,n}(\Psi,\kappa))\ge1-\delta\, ,$$ which completes the proof of Theorem \[theorem\_linear\_forms\].
Quantifying the dependence of $\kappa$ on $\delta$ \[explicitind\]
------------------------------------------------------------------
We now turn our attention to quantifying the dependence of $\kappa$ on $\delta$ within the context of Theorem \[theorem\_linear\_forms\]. To this end, we will make use of the $L_p$ norm. Given a Lebesgue measurable function $f:{M_{m,n}}\rightarrow{{\mathbb{R}}_+}$, a measurable subset $X$ of ${M_{m,n}}$ and $p\ge1$, we write $f\in L_p(X)$ if the Lebesgue integral $$\int_X |f|^p:=\int_{{M_{m,n}}}|f|^p\chi_X$$ exists and is finite. Here $\chi_X$ is the characteristic function of $X$. For $f\in L_p(X)$, the $L_p$ norm of $f$ on $X$ is defined by $$\|f\|_{p,X} \, := \, \left(\int_X |f|^p\right)^{1/p}\,.$$ In the case $p=\infty$, the $L_\infty$–norm on $X$ is defined as the essential supremum of $|f|$ on $X$; that is, $$\|f\|_{\infty,X}:=\inf\left\{c\in{\mathbb{R}}: |f(x)|\le c\text{ for almost all }x\in X\right\}\,.$$ If $\|f\|_{\infty,X}<\infty$, then we write $f\in L_\infty(X)$. For example, if $f$ is continuous and $X$ is a non–empty open subset of ${M_{m,n}}$, then $\|f\|_{\infty,X}$ is simply the supremum of $f$ on $X$. The following lemma gathers together two well-known facts regarding the $L_p$ norm.
\[l1\] \
1. For any $p\ge1$ and any measurable subsets $X\subset Y$, $$\|f\|_{p,X}\le \|f\|_{p,Y}\,.$$\
2. $($Hölder’s inequality$)$ For any $1\le p,q\le\infty$ satisfying $\frac1p+\frac1q=1$, $$\left|\int_Xfg\right|\le\|f\|_{p,X}\|g\|_{q,X}\,.$$
The next lemma is a corollary of Lemma \[l1\].
\[lemma\_passage\_Lebesgue\_to\_nu\_via\_v\] Let $p>1$ and $\mu$ be a probability measure on ${M_{m,n}}$ with density $f$. Let $X$ be a Lebesgue measurable subset of ${M_{m,n}}$. If $f\in L_p(X)$, then $$\mu(X)\le \|f\|_{p,X}|X|_{mn}^{1-1/p}\,.$$
By definition, we have that $$\mu(X)=\int_X f \,.$$ Define $q$ by the equation $\tfrac1p+\tfrac1q=1$. Then by Hölder’s inequality, we have that $$\mu(X)=\int_X f\times 1 \ \le \ \|f\|_{p,X}\|1\|_{q,X} \ =\ \|f\|_{p,X}\left(\int_X1^q \right)^{1/q}
\ \le \ \|f\|_{p,X}|X|_{mn}^{1-1/p}$$ as required.
We are now in a position to provide an effective version of Theorem \[theorem\_linear\_forms\]. Let $P_{{{\mathbf{0}}}}$ and $A$ be the same as in §\[2.1\]. In particular, assume that (\[v106\]) holds. Furthermore, assume that there exists some $p>1$ such that for every $\Delta\in A$, the density $f$ of $\mu$ has finite $L_p$ norm on $P_\Delta$.
Let $\kappa$ be such that (\[v104\]) is satisfied. In this case, (\[v100\]) holds for every $P_\Delta$ with $\Delta\in A$. By Lemmas \[l1\] and \[lemma\_passage\_Lebesgue\_to\_nu\_via\_v\], $$\mu(P_{\Delta}\setminus \mathcal{B}_{m,n}(\Psi,\kappa))\le\|f\|_{p,P_\Delta}\cdot|P_{\Delta}\setminus \mathcal{B}_{m,n}(\Psi,\kappa)|_{mn}^{1-1/p}
\,.$$ Using (\[v100\]), we obtain that $$\label{v107}
\mu(P_{\Delta}\setminus \mathcal{B}_{m,n}(\Psi,\kappa))\le\|f\|_{p,P_\Delta}\cdot\Big((2\kappa)^m\Sigma_\Psi\Big)^{1-1/p}
\,$$ where $\Sigma_\Psi$ is given by (\[sv111\]). It follows that $$\begin{aligned}
\mu({M_{m,n}}\setminus \mathcal{B}_{m,n}(\Psi,\kappa)) & \le & \frac\delta2+
\sum_{\Delta\in A}\mu(P_{\Delta}\setminus \mathcal{B}_{m,n}(\Psi,\kappa)) \\[2ex]
& \le & \frac\delta2+
\Big((2\kappa)^m\Sigma_\Psi\Big)^{1-1/p} \Sigma_f \ \le \ \delta\end{aligned}$$ if $$\kappa\le \frac12\left(\Sigma_\Psi^{-1}\left(\frac{\delta}{2\Sigma_f}\right)^{\frac p{p-1}}\right)^{1/m}\,,$$ where $$\label{vb300}
\Sigma_f:=\sum_{\Delta\in A}\|f\|_{p,P_\Delta} \, .$$ Since $A$ is finite, the quantity $\Sigma_f$ is also finite. The upshot of the above discussion is the following statement.
\[theorem\_linear\_forms+\] Let $m,n\in{\mathbb{N}}$, $\mu$, $\Psi$ be as in Theorem \[theorem\_linear\_forms\], let $M_\Psi$ be given by (\[v105\]) and let $f$ denote the density of $\mu$. Furthermore, let $P_{{{\mathbf{0}}}}$ be any unit cube in ${M_{m,n}}$ and $A$ be any finite subset of ${M_{m,n}}({\mathbb{Z}})$ satisfying (\[v106\]). Assume there exists $p>1$ such that $f\in L_p(P_\Delta)$ for any $\Delta\in A$, and let the quantity $\Sigma_f$ be given by (\[vb300\]). Then, for any $\delta\in(0,1)$, inequality (\[theorem\_linear\_forms\_ie\_statement\]) holds with $$\label{theorem_linear_forms_def_kappa_one}
\kappa:=\frac12\min\left\{\frac{1}{M_\Psi},\ \left(\Sigma_\Psi^{-1}\left(\frac{\delta}{2\Sigma_f}\right)^{\frac p{p-1}}\right)^{1/m}
\right\}\,.$$ In this formula, the quotient $p/(p-1)$ should be taken as equal to 1 when $p=\infty$.
In the case when $\Psi$ is even, that is $\Psi(-{{\mathbf{a}}})=\Psi({{\mathbf{a}}})$ for all ${{\mathbf{a}}}\in{\mathbb{Z}}^n\setminus\{{{\mathbf{0}}}\}$, one can improve formula (\[theorem\_linear\_forms\_def\_kappa\_one\]) for $\kappa$ by replacing $\Sigma_\Psi$ with $\tfrac12\Sigma_\Psi$. This is an obvious consequence of the fact that in this case the sets $\mathcal{W}({{\mathbf{a}}},\kappa\Psi({{\mathbf{a}}}))$ and $\mathcal{W}(-{{\mathbf{a}}},\kappa\Psi(-{{\mathbf{a}}}))$ coincide and therefore do not have to be counted twice within the proof.
There are various simplifications and specialisations of Theorem \[theorem\_linear\_forms+\] when we have extra information regarding the measure $\mu$. The following is a natural corollary which is particularly relevant for probability measures $\mu$ with bounded distribution $f$ and mean value about the origin.
\[cor2\] Let $m,n\in{\mathbb{N}}$, $\mu$, $\Psi$, $M_\Psi$ be as in Theorem \[theorem\_linear\_forms\] and let the density $f$ of $\mu$ be bounded above by a constant $K>0$. Furthermore, let $T$ be the smallest positive integer such that $$\label{theorem_one_defN}
\mu\left([-T,T)^{mn}\right)\ge 1-\delta/2.$$ Then, for any $\delta\in(0,1)$, inequality (\[theorem\_linear\_forms\_ie\_statement\]) holds with $$\label{v110}
\kappa:=\frac12\min\left\{\frac{1}{M_\Psi},\ \left(\frac{\delta}{2K(2T)^{mn}\Sigma_\Psi}\right)^{1/m}
\right\}\,.$$
With respect to Theorem \[theorem\_linear\_forms+\], let $p=\infty$ and let $A$ be the collection of cubes $P_{\Delta}$ that exactly tile $[-T,T)^{mn}$. Then, $\#A=(2T)^{mn}$ and thus $\Sigma_f\le K(2T)^{mn}$. Now, (\[v110\]) follows trivially from (\[theorem\_linear\_forms\_def\_kappa\_one\]).
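To make the corollary concrete, here is a small numerical sketch (not from the paper). The specific choices — $m=n=1$, $\mu$ the uniform distribution on $[0,1)$ (so $K=1$ and $T=1$ suffice for any $\delta$), and a finitely supported $\Psi$ with $\Psi(q)=1/Q$ for $0<|q|\le Q$ — are assumptions made purely for illustration. The script evaluates $\kappa$ from (\[v110\]) and then estimates $\mu(\mathcal{B}_{1,1}(\Psi,\kappa))$ by Monte Carlo.

```python
# Illustration of Corollary [cor2] with toy choices of mu and Psi (assumptions,
# not from the paper): kappa from (v110), then a Monte Carlo estimate of mu(B).
import numpy as np

rng = np.random.default_rng(0)
m = n = 1
Q, delta = 50, 0.1
K, T = 1.0, 1                       # density bound and support parameter for U[0,1)
M_Psi, Sigma_Psi = 1.0 / Q, 2.0     # sup Psi and sum of Psi(a)^m over a != 0

kappa = 0.5 * min(1.0 / M_Psi,
                  (delta / (2 * K * (2 * T) ** (m * n) * Sigma_Psi)) ** (1.0 / m))

def in_B(x):
    """True if ||q x|| > kappa * Psi(q) for every 0 < |q| <= Q (negative q are symmetric)."""
    q = np.arange(1, Q + 1)
    dist = np.abs(q * x - np.round(q * x))     # distance to the nearest integer
    return np.all(dist > kappa / Q)

samples = rng.random(50_000)
freq = np.mean([in_B(x) for x in samples])
print(kappa, freq, freq >= 1 - delta)          # kappa = delta/16 here; freq well above 1 - delta
```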
Numerical examples
------------------
In what follows, we will use the standard Gaussian error function $$\operatorname{erf}(x):=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x}\e^{-t^2/2}\dd t\,.$$ It is readily verified that the function $\operatorname{erf}$ is continuous, strictly increasing and that $$\begin{aligned}
\lim_{x\rightarrow-\infty}\operatorname{erf}(x)=0\qquad\text{and}\qquad
\lim_{x\rightarrow+\infty}\operatorname{erf}(x)=1.\end{aligned}$$ As usual, for $0<y<1$, define $\operatorname{erf^{-1}}(y)$ to be the unique value $x\in{\mathbb{R}}$ such that $\operatorname{erf}(x)=y$. Furthermore, define formally $\operatorname{erf^{-1}}(0):=-\infty$ and $\operatorname{erf^{-1}}(1):=+\infty$.
Consider now Corollary \[cor2\] in the case when $m=n=1$ and when $\mu$ follows the standard Gaussian distribution ${\mathcal{N}}(0,1)$. It can then be verified that Corollary \[cor2\] implies that inequality (\[theorem\_linear\_forms\_ie\_statement\]) holds with $$\label{theorem_one_example_one_kappa_one}
\kappa=\frac{\delta\sqrt{2\pi}}{8\cdot N\cdot \Sigma_\Psi} \, ,$$ where $N:=\lceil\operatorname{erf^{-1}}\left(1-\delta/4\right)\rceil$. Here $\lceil x\rceil$ is the “ceiling” of $x$, that is the smallest integer that is bigger than or equal to $x\in{\mathbb{R}}$. We now consider explicit approximating functions. First, let $\Psi$ be the function given by $\Psi(q)=0$ if $q\leq 0$, $$\Psi(q):=\frac{1}{2 q \cdot\log^2q} \quad {\rm if} \quad q \ge 2 \quad {\rm and } \quad \Psi(1):=1/2 \, .$$ Then $\Sigma_\Psi<1.555$ and on making use of (\[theorem\_one\_example\_one\_kappa\_one\]) we obtain the following table for values of $N$ and $\kappa$:
$\delta$ $0.5$ $0.25$ $0.1$ $0.01$ $10^{-3}$ $10^{-5}$
---------- -------- --------- ---------- ------------------ ------------------ ------------------
$N$ 2 2 3 4 4 5
$\kappa$ $0.05$ $0.025$ $0.0067$ $5\cdot 10^{-4}$ $5\cdot 10^{-5}$ $4\cdot 10^{-7}$
It follows for instance from this set of data that for $99\%$ of the values of the random variable $x$ with normal distribution $\mathcal{N}(0,1)$, one has that $$\|q x \| \ > \ \frac{1}{2000}\cdot\Psi(q) \qquad {\rm \ for \ all \ } \ q \in {\mathbb{N}}.$$
In the next example, we fix a $Q\in{\mathbb{N}}$ and consider the approximating function $\Psi$ given by $$\Psi(q):=\begin{cases}
\frac{1}{Q} & \text{ if } 1\le q\le Q,\\
0 & \text{ otherwise}.
\end{cases}$$ Then $\Sigma_\Psi=1$ and one can readily verify the following two claims (a small numerical sanity check is sketched after this list):
1. for at least $75\%$ of the values of the random variable $x$ with normal distribution $\mathcal{N}(0,1)$, one has that $$\| qx \| > \frac{1}{13 Q} \qquad {\rm for \ all \ } \ q \in [-Q,Q],\, q\neq 0,$$\
2. for at least $90\%$ of the values of the random variable $x$ with normal distribution $\mathcal{N}(0,1)$, one has that $$\| qx \| > \frac{1}{50 Q} \qquad {\rm for \ all \ } \ q \in [-Q,Q],\, q\neq 0 .$$
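The two claims above can be probed empirically. The following Monte Carlo sketch is an illustration only: it fixes one particular value $Q=100$ (an assumption; the claims are stated for every $Q$) and estimates the relevant probabilities by sampling $x\sim\mathcal{N}(0,1)$.

```python
# Monte Carlo sanity check of the two claims for a single choice of Q.
import numpy as np

rng = np.random.default_rng(1)
Q = 100
q = np.arange(1, Q + 1)                    # q and -q give the same value of ||q x||
x = rng.standard_normal(20_000)

prod = np.outer(x, q)
dist = np.abs(prod - np.round(prod))       # ||q x|| for every sample and every q
min_dist = dist.min(axis=1)

for c, target in [(13, 0.75), (50, 0.90)]:
    frac = np.mean(min_dist > 1.0 / (c * Q))
    print(c, round(frac, 3), frac >= target)   # both comparisons come out True
```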
Diophantine approximation on manifolds
======================================
The aim is to establish an analogue of Theorem \[theorem\_linear\_forms+\] for submanifolds ${\mathcal M}$ of ${\mathbb{R}}^n$. More precisely, we consider the set ${\mathcal{B}}_{n}(\Psi,\kappa)\cap{\mathcal M}$, where $${\mathcal{B}}_{n}(\Psi,\kappa):={\mathcal{B}}_{1,n}(\Psi,\kappa)\,.$$ The fact that the coordinates of the points of interest are dependent variables, reflecting that they lie on $ {\mathcal M}$, introduces major difficulties in attempting to describe the measure theoretic structure of ${\mathcal{B}}_{n}(\Psi,\kappa)\cap{\mathcal M}$.
[*Non–degenerate manifolds.* ]{} In order to make any reasonable progress with the above problems it is not unreasonable to assume that the manifolds $ {\mathcal M}$ under consideration are [*non–degenerate*]{}. Essentially, these are smooth submanifolds of ${\mathbb{R}}^n$ which are sufficiently curved so as to deviate from any hyperplane. Formally, a manifold ${\mathcal M}$ of dimension $d$ embedded in ${\mathbb{R}}^n$ is said to be *non–degenerate* if it arises from a non–degenerate map ${{\mathbf{f}}}: {{\mathbf{U}}}\to {\mathbb{R}}^n$ where ${{\mathbf{U}}}$ is an open subset of ${\mathbb{R}}^d$ and ${\mathcal M}:={{\mathbf{f}}}({{\mathbf{U}}})$. The map ${{\mathbf{f}}}: {{\mathbf{U}}}\to
{\mathbb{R}}^n,{{\mathbf{x}}}\mapsto {{\mathbf{f}}}({{\mathbf{x}}})=(f_1({{\mathbf{x}}}),\dots,f_n({{\mathbf{x}}}))$ is said to be *$l$–non–degenerate at* ${{\mathbf{x}}}\in {{\mathbf{U}}}$, where $l\in{\mathbb{N}}$, if ${{\mathbf{f}}}$ is $l$ times continuously differentiable on some sufficiently small ball centred at ${{\mathbf{x}}}$ and the partial derivatives of ${{\mathbf{f}}}$ at ${{\mathbf{x}}}$ of orders up to $l$ span ${\mathbb{R}}^n$. The map ${{\mathbf{f}}}$ is *non–degenerate* at ${{\mathbf{x}}}$ if it is $l$–non–degenerate at ${{\mathbf{x}}}$ for some $l\in{\mathbb{N}}$. As is well known, any real connected analytic manifold not contained in any hyperplane of ${\mathbb{R}}^n$ is non–degenerate at every point [@Kleinbock-Margulis-98:MR1652916].
Observe that if the dimension of the manifold ${\mathcal M}$ is strictly less than $n$ then we have that $|{\mathcal{B}}_n(\Psi,\kappa)\cap{\mathcal M}|_n=0$ irrespective of the approximating function $\Psi$ and $\kappa$. Thus, when referring to the Lebesgue measure of the set $ {\mathcal{B}}_n(\Psi,\kappa)\cap{\mathcal M}$ it is always with reference to the induced Lebesgue measure on ${\mathcal M}$. More generally, given a subset $S$ of ${\mathcal M}$ we shall write $|S|_{{\mathcal M}} $ for the measure of $S$ with respect to the induced Lebesgue measure on ${\mathcal M}$. Without loss of generality, we will assume that $|{\mathcal M}|_{\mathcal M}=1$ as otherwise the measure can be re–normalized accordingly.
The following statement is a straightforward consequence of the main result of Bernik, Kleinbock and Margulis in [@Bernik-Kleinbock-Margulis-01:MR1829381].
Let ${\mathcal M}$ be a non–degenerate submanifold of ${\mathbb{R}}^n$. Let $\Psi:{\mathbb{Z}}^n\to{{\mathbb{R}}_+}$ be monotonically decreasing in each variable and such that $$\label{def_S_Psi}
\Sigma_\Psi:=\sum_{{{\mathbf{q}}}\in{\mathbb{Z}}^n\setminus\{0\}}\Psi({{\mathbf{q}}})<\infty\,.$$ Then, for any $\delta\in(0,1)$, there is a constant $\kappa>0$ depending on ${\mathcal M}$, $\Sigma_\Psi$ and $\delta$ only such that
$$\label{theorem_linear_forms_ie_statement_v2}
|{\mathcal{B}}_n(\Psi,\kappa)\cap{\mathcal M}|_{\mathcal M}\ge 1-\delta.$$
Theorem BKM holds for arbitrary probability measures supported on ${\mathcal M}$ that are absolutely continuous with respect to the induced Lebesgue measure on ${\mathcal M}$, thus giving an analogue of Theorem \[theorem\_linear\_forms+\] for manifolds. As in the case of Theorem \[theorem\_linear\_forms\], the more general result follows from the Lebesgue statement.
It is worth pointing out that the main result in [@Bernik-Kleinbock-Margulis-01:MR1829381] actually implies that the union $\bigcup_{\kappa>0}{\mathcal{B}}_{n}(\Psi,\kappa)\cap{\mathcal M}$ has full measure on ${\mathcal M}$. Theorem BKM as stated above follows from [@Bernik-Kleinbock-Margulis-01:MR1829381 Theorem 1.1][^3] on using the continuity of measures. Our main goal is to quantify the dependence of $\kappa$ on $\delta$. Theorem \[theorem\_expicit\_1\_1\] of §\[6\] below explicitly quantifies this dependence. However, the statement is rather technical and we prefer to state for now a cleaner result that shows that the dependency between $\kappa$ and $\delta$ is polynomial.
\[soft\] Let $l\in{\mathbb{N}}$ and let ${\mathcal M}$ be a compact $d$–dimensional $C^{l+1}$ submanifold of ${\mathbb{R}}^n$ that is $l$–non–degenerate at every point. Let $\mu$ be a probability measure supported on ${\mathcal M}$ absolutely continuous with respect to $|\, .\, |_{{\mathcal M}} $. Let $\Psi:{\mathbb{Z}}^n\to{{\mathbb{R}}_+}$ be a monotonically decreasing function in each variable satisfying (\[def\_S\_Psi\]). Then there exist positive constants $\kappa_0,C_0,C_1$ depending on $\Psi$ and ${\mathcal M}$ only such that for any $0<\delta<1$, the inequality $$\label{theorem_linear_forms_ie_statement_v2++}
\mu({\mathcal{B}}_n(\Psi,\kappa)\cap{\mathcal M})\ge 1-\delta$$ holds with $$\label{kappavb}
\kappa:=\min\left\{\kappa_0,\ C_0\Sigma_\Psi^{-1}\delta,\ C_1\delta^{d(n+1)(2l-1)}\right\}\,.$$
Diophantine approximation on manifolds and wireless technology
==============================================================
In short, interference alignment is a linear precoding technique that attempts to align signals in time, frequency, or space. The following exposition is an attempt to illustrate at a basic level the role of Diophantine approximation in implementing this technique. We stress that this section is not meant for the “electronics” experts. We consider two examples. The first basic example brings into play the theory of Diophantine approximation while the second slightly more complicated example also brings into play the manifold theory.
[EXAMPLE]{} 1. There are two people (*users*) $S_1$ and $S_2$ who wish to send (*transmit*) a message (*signal*) $u \in \{ 0,1\} $ and $v \in \{ 0,1\}$ respectively along a single communication channel (could be a cable or radio channel) to a person (*receiver*) $R$. Suppose there is a certain degree of fading (*channel coefficients*) associated with the messages during transmission along the channel. This for instance could be dependent on the distance of the users to the receiver and in the case of a radio channel, the reflection caused by obstacles such as buildings in the path of the signal. It is worth stressing that this aspect of “fading” associated with a signal should not be confused with the more familiar aspect of a signal being corrupted by “noise” that will be discussed a little later. Let $h_1$ and $h_2$ denote the fading factors associated with the messages being sent by $S_1$ and $S_2$ respectively. These are strictly positive numbers and assume their sum is one. Also, assume that the channel is additive. That is to say that $R$ receives the message: $$\label{nonoise}
y = h_1 x_1 + h_2 x_2 \qquad {\rm where \ } \ \ x_1= u \quad {\rm and } \quad x_2= v \, .$$ Specifically, the outcomes of $y$ are $$\label{nonoiseout}
y=\begin{cases}
0 &\text{ if } \quad u=v=0 \\[1ex]
h_1 &\text{ if }\quad u=0 \text{ and } v=1\\[1ex]
h_2 &\text{ if } \quad u=1 \text{ and } v=0\\[1ex]
1= h_1+h_2 &\text{ if } \quad u=v=1 \, ,
\end{cases}$$ and if $h_1 \neq h_2$, the receiver is obviously able to recover the messages $u$ and $v$. Moreover, the greater the mutual separation of the above four outcomes in the unit interval $I=[0,1]$, the better the tolerance for error (*noise*) during the transmission of the signal. The noise can be a combination of various factors, but often the largest contributing factor is the interference caused by other communication channels. If $z$ denotes the noise, then instead of (\[nonoise\]), in practice $R$ receives the message: $$\label{withnoise}
y = h_1 x_1 + h_2 x_2 + z \qquad {\rm where \ } \ \ x_1= u \quad {\rm and } \quad x_2= v \, .$$
Now let $d$ denote the minimum distance between the four outcomes of $y\in I$ which are explicitly given by . Then as long as the absolute value $|z|$ of the noise is strictly less than $d/2$, the receiver is able to recover the messages $u$ and $v$. This is simply due to the fact that intervals of radius $d/2$ centered at the four outcomes of $y$ are disjoint. In this basic example, it is easy to see that the maximum separation between the four outcomes is attained when $h_1 = 1/3$ and $h_2 = 2/3$. In this case $d = 1/3$, and we are able to recover the messages $u$ and $v$ as long as $|z| < 1/6$. The upshot is that the closer the real numbers $h_1$ and $h_2$ are to $1/3$ and $2/3$ the better the tolerance for noise. Hence, at the most fundamental level we are interested in the simultaneous approximation property of real numbers by rational numbers. In practice, it is the probabilistic aspect of the approximation property that is important – knowing that the numbers $h_1$ and $h_2$ lie within a ‘desirable’ neighbourhood of the points $1/3$ and $2/3$ with reasonably high probability is key. This naturally brings into play the theory of metric Diophantine approximation.
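For completeness, here is the short computation behind this claim (assuming, without loss of generality, that $h_1<h_2$, so that $h_1<\tfrac12$). Since $h_2=1-h_1$, the four outcomes are $0<h_1<1-h_1<1$ and the three gaps between consecutive outcomes are $h_1$, $1-2h_1$ and $h_1$. Hence $$d=\min\{h_1,\;1-2h_1\}\;\le\;\frac13\,,$$ with equality precisely when $h_1=1-2h_1$, that is when $(h_1,h_2)=(1/3,2/3)$; in that case the intervals of radius $d/2=1/6$ centred at the four outcomes are disjoint, which is the tolerance $|z|<1/6$ stated above.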
*[Figure: the unit interval $I=[0,1]$ with the four outcomes $0$, $h_1$, $h_2$ and $1$ marked (here $h_1$ close to $1/3$ and $h_2$ close to $2/3$), each surrounded by an interval of radius $z$ within which the received signal can still be decoded correctly.]*
Note that from a probabilistic point of view, the chance that $h_1= h_2$ is zero, so this case is insignificant. Furthermore, within the context of this basic example, by weighting (*precoding*) the messages $u$ and $v$ appropriately before the transmission stage it is possible to ensure optimal separation ($d = 1/3$) at the receiver regardless of the values of $h_1$ and $h_2$. Indeed, suppose $x_1= \frac13 h_1^{-1} u $ and $ x_2 = \frac23 h_2^{-1} v $ are transmitted instead of $u $ and $v$. Then, without taking noise into consideration, $R$ receives the message $$\label{codingeg1}
y = h_1 x_1 + h_2 x_2 = \textstyle{\frac13} u + \textstyle{ \frac23} v$$ and so the specific outcomes are $$\label{codingeg1out}
y=\begin{cases}
0 &\text{ if } \quad u=v=0 \\[1ex]
1/3 &\text{ if }\quad u=0 \text{ and } v=1\\[1ex]
2/3 &\text{ if } \quad u=1 \text{ and } v=0\\[1ex]
1 &\text{ if } \quad u=v=1 \, .
\end{cases}$$
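Thus, with this precoding the four outcomes are always $0$, $1/3$, $2/3$ and $1$, so that $$\min_{\substack{y\neq y'\\ y,\,y'\in\{0,\,1/3,\,2/3,\,1\}}}|y-y'|\;=\;\frac13\,,$$ and the optimal noise tolerance $|z|<1/6$ is achieved irrespective of the actual values of $h_1$ and $h_2$.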
[EXAMPLE]{} 2. There are two users $S_1$ and $S_2$ as before but this time there are also two receivers $R_1$ and $R_2$. Suppose $S_1$ wishes to simultaneously transmit independent signals $u_1$ and $v_1$ as a single signal, say $x_1=u_1 + v_1$ where $u_1$ is intended for $R_1$ and $v_1$ for $R_2$. Similarly, suppose $S_2$ wishes to simultaneously transmit independent signals $u_2$ and $v_2$ as a single signal, say $x_2=u_2 + v_2$ where $u_2$ is intended for $R_1$ and $v_2$ for $R_2$. As in the first example, for the sake of simplicity, we can assume that the signals $u_1, u_2, v_1, v_2 \in \{ 0,1\} $. Now let $h_{11}$ and $h_{21}$ denote the channel coefficients associated with signals being sent by $S_1$ to $R_1$ and $ R_2 $ respectively. Similarly, let $h_{12}$ and $h_{22}$ denote the channel coefficients associated with signals being sent by $S_2$ to $R_1$ and $R_2$. Assume that the channel is additive and let $y_1$ (respectively $y_2$) denote the signal at receiver $R_1 $ (respectively $R_2$). Thus, $$\begin{aligned}
\label{2nonoise}
y_1 & = & h_{11} x_1 + h_{12} x_2 \\ [1ex]
y_2 & = & h_{21} x_1 + h_{22} x_2 \label{21nonoise} \,\end{aligned}$$ where $$\label{2nonoiseChoice1}
x_1= u_1 + v_1 \quad {\rm and } \quad x_2= u_2+ v_2 \, .$$
*[Figure: the two–user, two–receiver channel of Example 2. The users transmit $x_1$ (carrying $u_1,v_1$) and $x_2$ (carrying $u_2,v_2$); the channels cross, and each receiver observes a linear combination of the two transmitted signals, $y_1$ at $R_1$ and $y_2$ at $R_2$.]*
Recall that $R_1$ (respectively $R_2$) only cares about recovering the signals $ u_1 $ and $u_2 $ (respectively $ v_1 $ and $v_2 $) from $y_1$ (respectively $y_2$). For the moment, let us just concentrate on the signal received by $R_1$; namely $$\label{2nonoisee}
y_1 = h_{11} u_1 + h_{12} u_2 + h_{11} v_1 + h_{12} v_2 \, .$$ It is easily seen that this corresponds to a received signal in Example 1 modified to incorporate four users and one receiver. This time there are potentially 16 different outcomes. In short, the more users, the more outcomes and therefore the smaller the mutual separation between them and in turn the smaller the tolerance for noise. Now there is one aspect of the setup in this example that we have not yet exploited. The receiver $R_1$ is not interested in the signals $ v_1 $ and $v_2 $. So if they could be deliberately aligned via precoding into a single component $v_1 + v_2 $, then $y_1$ would look like a received signal associated with just 3 users rather than 4. With this in mind, suppose instead of transmitting $x_1$ and $x_2$ given by , $S_1$ and $S_2$ transmit the signals $$\label{2nonoiseChoice2}
x_1= h_{22} u_1 + h_{12}v_1 \quad {\rm and } \quad x_2= h_{21} u_2+ h_{11} v_2 \,$$ respectively. Then, it can be verified that the received signals given by and can be written as $$\begin{aligned}
\label{2nonoise22}
y_1 & = & (h_{11} h_{22}) u_1 + (h_{21} h_{12}) u_2 + (h_{11} h_{12}) (v_1 + v_2) \\ [1ex]
y_2 & = & (h_{21} h_{12}) v_1 + (h_{11} h_{22}) v_2 + (h_{21} h_{22}) (u_1 + u_2) \, .\end{aligned}$$
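For the reader's convenience, here is the (purely mechanical) verification for $y_1$; the computation for $y_2$ is identical. Substituting the precoded signals $x_1$ and $x_2$ into the expression for $y_1$ gives $$y_1 \;=\; h_{11}\big(h_{22} u_1 + h_{12}v_1\big)+h_{12}\big(h_{21} u_2+ h_{11} v_2\big)\;=\;(h_{11} h_{22})\, u_1 + (h_{21} h_{12})\, u_2 + (h_{11} h_{12})\, (v_1 + v_2)\,,$$ in which the two terms that $R_1$ does not care about have indeed collapsed into the single aligned component $v_1+v_2$.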
In other words, the unwanted, interfering signals at either receiver are aligned to a one–dimensional subspace of a four–dimensional space. Notice that in the above equations the six coefficients are functions of only four variables, namely $h_{ij}$, $i,j=1,2$, and thus represent dependent quantities. This, together with our findings from Example 1, naturally brings into play the manifold theory of metric Diophantine approximation.
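To see the dependence in the simplest possible terms, note that the products appearing as coefficients above are not free to vary independently: they satisfy, identically in the channel coefficients, the relation $$(h_{11} h_{22})\,(h_{21} h_{12}) \;=\; (h_{11} h_{12})\,(h_{21} h_{22})\,.$$ Consequently the coefficient vector never fills out an open subset of the ambient coordinate space but is confined to a proper submanifold, and it is precisely the metric theory of Diophantine approximation restricted to such submanifolds that becomes relevant.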
Example 2 is a simplified version of Example 3 appearing in [@MOGMAK §III]. For a deeper and more practical understanding of the link between interference alignment and metric Diophantine approximation on manifolds the reader is urged to look at [@MOGMAK] and [@J2010 §4.7].
Preliminaries for Theorem \[soft\] {#section_preliminaries}
==================================
Localisation and parameterisation {#loc}
---------------------------------
Since ${\mathcal M}$ is non–degenerate everywhere, we can restrict ourselves to considering a sufficiently small neighbourhood of an arbitrary point on ${\mathcal M}$. By compactness, ${\mathcal M}$ can then be covered by a finite subcollection of such neighbourhoods. Therefore, in view of the finiteness of the cover, the existence of $\kappa_0$, $C_0$ and $C_1$ satisfying Theorem \[soft\] globally will follow from the existence of these parameters for every neighbourhood in the finite cover: $\kappa_0$, $C_0$ and $C_1$ should be taken to be the minimum of their local values.
Now, as we can work with ${\mathcal M}$ locally, we can parameterize it with some map ${{\mathbf{f}}}:{{{{\mathbf{U}}}}}\to{\mathbb{R}}^n$ defined on a ball ${{{{\mathbf{U}}}}}$ in ${\mathbb{R}}^d$, where $d=\dim{\mathcal M}$. Note that ${{\mathbf{f}}}$ must be at least $C^{2}$ in order to ensure that ${\mathcal M}$ is non–degenerate. Without loss of generality we assume that $${\mathcal M}=\{{{\mathbf{f}}}({{\mathbf{x}}}):{{\mathbf{x}}}\in {{{{\mathbf{U}}}}}\}\,.$$ Furthermore, using the Implicit Function Theorem if necessary, we can take ${{\mathbf{f}}}$ to be a Monge parametrisation, that is ${{\mathbf{f}}}({{\mathbf{x}}})=(x_1,\dots,x_d,f_{d+1}({{\mathbf{x}}}),\dots,f_n({{\mathbf{x}}}))$, where ${{\mathbf{x}}}=(x_1,\dots,x_d)$. Note that ${{\mathbf{f}}}$ can be assumed to be bi–Lipschitz on ${{{{\mathbf{U}}}}}$. This readily follows from the fact that ${{\mathbf{f}}}$ is $C^{1}$ but possibly requires a further shrinking of ${{{{\mathbf{U}}}}}$.
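A standard example to keep in mind (it is not needed for the argument, but it illustrates both the Monge form and the notion of non–degeneracy) is the Veronese curve $${\mathcal M}=\big\{(x,x^2,\dots,x^n)\,:\,x\in {{{{\mathbf{U}}}}}\big\}\subset{\mathbb{R}}^n,\qquad {{\mathbf{f}}}(x)=(x,x^2,\dots,x^n),\qquad d=1\,,$$ for which the derivatives ${{\mathbf{f}}}'(x),\dots,{{\mathbf{f}}}^{(n)}(x)$ form a triangular, hence linearly independent, system at every point $x$, so that the curve is $n$–non–degenerate everywhere.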
Let $\mathcal{B}_n(\Psi,\kappa,{\mathcal M})$ denote the orthogonal projection of ${\mathcal{B}}_n(\Psi,\kappa)\cap{\mathcal M}$ onto the set of parameters ${{{{\mathbf{U}}}}}$. Thus, $$\label{def_B_nanifold}
\mathcal{B}_n(\Psi,\kappa,{\mathcal M}):=\left\{{{\mathbf{x}}} \in {{{{\mathbf{U}}}}}: \|{{\mathbf{a}}}\cdot {{\mathbf{f}}}({{\mathbf{x}}}) \| > \kappa\Psi({{\mathbf{a}}}) \text{ ~for all~ } {{\mathbf{a}}} \in {\mathbb{Z}}^n,\ {{\mathbf{a}}}\neq{{\mathbf{0}}} \right\}.$$ The set ${\mathcal{B}}_n(\Psi,\kappa)\cap{\mathcal M}$ and its projection $\mathcal{B}_n(\Psi,\kappa,{\mathcal M})$ are related by the bi–Lipschitz map ${{\mathbf{f}}}$. Since bi–Lipschitz maps only affect the Lebesgue measure of a set by a multiplicative constant (in this case the constant will depend on ${{\mathbf{f}}}$ only), it suffices to prove Theorem \[soft\] for the projected set. More precisely, Theorem \[soft\] is equivalent to showing that there exist positive constants $\kappa_0$, $C_0$ and $C_1$ depending on $\Psi$ and ${{\mathbf{f}}}$ only such that for any $0<\delta<1$, $$\label{theorem_linear_forms_ie_statement_v2+}
|{\mathcal{B}}_n(\Psi,\kappa,{\mathcal M})|_d\ge (1-\delta)|{{{{\mathbf{U}}}}}|_d$$ holds with $\kappa$ given by .
Auxiliary statements
--------------------
We will denote the standard $L_1$ (resp. Euclidean, infinity) norm on ${\mathbb{R}}^d$ by $\left\|\,. \, \right\|_1$ (resp. $\left\|\, . \, \right\|_2$, $\left\|\,. \, \right\|_{\infty}$). Also as before, given an $x\in{\mathbb{R}}$, $\|x\|$ will denote the distance of $x$ from the nearest integer. The notation $B\!\left({{\mathbf{x}}}, r\right)$ will refer to the Euclidean open ball of radius $r>0$ centered at ${{\mathbf{x}}}$ and ${\mathbb{S}}^{d-1}$ will denote the unit sphere in dimension $d\ge 1$ (with respect to the Euclidean norm). Furthermore, throughout $$V_d := \frac{\pi^{d/2}}{\Gamma \left(1+d/2\right)}$$ is the volume of the $d$–dimensional unit ball and $N_d$ denotes the Besicovitch covering constant.
\[besicocst\] For further details on the Besicovitch covering constant, cf. [@furedi_loeb]. We will only need in what follows the inequality $N_d\le 5^d$ satisfied by this constant.
The proof of Theorem \[soft\] involves two separate cases that take into consideration the relative size of the gradient of ${{\mathbf{f}}}({{\mathbf{x}}})\cdot{{\mathbf{q}}}$, where ${{\mathbf{q}}}=(q_1,\dots,q_n)\in{\mathbb{Z}}^n\setminus\{{{\mathbf{0}}}\}$ and ${{\mathbf{f}}}({{\mathbf{x}}})\cdot{{\mathbf{q}}}=f_1({{\mathbf{x}}})q_1+\dots+f_n({{\mathbf{x}}})q_n$ is the standard inner product of ${{\mathbf{f}}}({{\mathbf{x}}})$ and ${{\mathbf{q}}}$. The first case of ‘big gradient’ is considered within the next result and is an adaptation of [@Bernik-Kleinbock-Margulis-01:MR1829381 Theorem 1.3].
In what follows, $\partial_\beta$ will denote partial derivation with respect to a multi–index $\beta=(\beta_1,\dots,\beta_d)\in{\mathbb{N}}_0^d$, where ${\mathbb{N}}_0$ will stand for the set of non–negative integers, that is ${\mathbb{N}}_0:=\{0,1,2,\dots\}$. Furthermore, by $|\beta|$ we will mean the order of derivation, that is $|\beta|:=\beta_1+\dots+\beta_d$. Also, $\partial_i^k$ will denote the differential operator corresponding to the $k^{\textrm{th}}$ derivative with respect to the $i^{\textrm{th}}$ variable, that is, $\partial_i^k :=\partial^k/\partial x_i^k$.
\[theo\_explicit\_1\_3\] Let ${{{{\mathbf{U}}}}}\subset{\mathbb{R}}^d$ be a ball of radius $r$ and ${{\mathbf{f}}}\in C^2({2{{{\mathbf{U}}}}})$, where ${2{{{\mathbf{U}}}}}$ is the ball with the same centre as ${{{{\mathbf{U}}}}}$ and radius $2r$. Let $$\label{theo_explicit_1_3_def_L}
L^*:=\sup_{|\beta|=2,\ {{\mathbf{x}}}\in 2{{{{\mathbf{U}}}}}}\|\partial_{\beta}{{\mathbf{f}}}({{\mathbf{x}}})\|_{\infty}\qquad\text{and}\qquad
L:=\max\left(L^*,\frac{1}{4r^2}\right).$$ Then, for every $\delta'>0$ and every ${{\mathbf{q}}}\in{\mathbb{Z}}^n\setminus\{0\}$, the set of ${{\mathbf{x}}}\in {{{{\mathbf{U}}}}}$ such that $$\|{{\mathbf{f}}}({{\mathbf{x}}})\cdot{{\mathbf{q}}}\|<\delta'$$ and $$\label{theo_explicit_1_3_gradient_big}
\|\nabla{{\mathbf{f}}}({{\mathbf{x}}}){{\mathbf{q}}}\|_\infty\ge\sqrt{ndL\|{{\mathbf{q}}}\|_\infty}$$ has measure at most $K_d\delta'|{{{{\mathbf{U}}}}}|_d$, where $\nabla{{\mathbf{f}}}({{\mathbf{x}}}){{\mathbf{q}}}$ is the gradient of ${{\mathbf{f}}}({{\mathbf{x}}})\cdot{{\mathbf{q}}}$ and $$\label{def_K_d}
K_d:=\frac{4^{2d+1}d^{d/2}N_d}{V_d}$$ is a constant depending on $d$ only.
The proof of Theorem \[theo\_explicit\_1\_3\] follows on appropriately applying [@Bernik-Kleinbock-Margulis-01:MR1829381 Lemma 2.2]. For convenience we refer to this lemma as L2.2. We take $M$ in L2.2 to be equal to the quantity $ndL$, where $L$ is defined by . We set $\delta$ in L2.2 to be equal to $\delta'$ appearing in Theorem \[theo\_explicit\_1\_3\]. Then, in view of $\eqref{theo_explicit_1_3_def_L}$ and the fact that $n,d,\|{{\mathbf{q}}}\|_\infty\geq 1$, it follows that the hypotheses of L2.2 (namely [@Bernik-Kleinbock-Margulis-01:MR1829381 Eq(2.1a) & Eq(2.1b)]) are satisfied. Thus, the conclusion of L2.2 implies Theorem \[theo\_explicit\_1\_3\] with the constant $C_d$ in L2.2 equal to $K_d$ appearing in Theorem \[theo\_explicit\_1\_3\]. The explicit value of $K_d$ is calculated by ‘tracking’ the values of the auxiliary constants $C'_d$, $C''_d$ and $C_d'''$ appearing in [@Bernik-Kleinbock-Margulis-01:MR1829381]. Namely[^4], $$C_d'=\frac{V_d}{2^{2d}d^{d/2}},\qquad
C_d'' = 2^{d+2},\qquad
C_d'''=\frac{C_d''}{C_d'}\,,$$ and then $$K_d=2^dC_d'''N_d=\frac{2^dC_d''N_d}{C_d'} = \frac{2^{4d+2}d^{d/2}N_d}{V_d}\cdotp$$
Next in Theorem \[theo\_explicit\_1\_4\] below we consider the case of ‘small gradient’. This is an explicit version of [@Bernik-Kleinbock-Margulis-01:MR1829381 Theorem 1.4]. First we introduce auxiliary constants.
Given a $C^l$ map ${{\mathbf{f}}}:{{{{\mathbf{U}}}}}\to{\mathbb{R}}^n$ defined on a ball ${{{{\mathbf{U}}}}}$ in ${\mathbb{R}}^d$, the supremum of $s\in{\mathbb{R}}$ such that for any ${{\mathbf{x}}}\in {{{{\mathbf{U}}}}}$ and any ${{\mathbf{v}}}\in{\mathbb{S}}^{n-1}$ there exists an integer $k$, $0<k\le l$, and a unit vector ${{\mathbf{u}}}\in{\mathbb{S}}^{d-1}$ satisfying $$\label{def_s_lb}
\left|\frac{\partial^k ({{\mathbf{f}}}\cdot {{\mathbf{v}}})}{\partial {{\mathbf{u}}}^k}({{\mathbf{x}}})\right|\ge s$$ will be called *the measure of $l$–non–degeneracy of $({{\mathbf{f}}},{{{{\mathbf{U}}}}})$* and will be denoted by $s(l;{{\mathbf{f}}},{{{{\mathbf{U}}}}})$. Here and elsewhere for a unit vector ${{\mathbf{u}}}\in{\mathbb{S}}^{d-1}$, $\partial^k/\partial{{\mathbf{u}}}^k $ will denote the derivative in direction ${{\mathbf{u}}}$ of order $k$.
As in Theorem \[theo\_explicit\_1\_3\], the radius of the ball ${{{{\mathbf{U}}}}}$ will be denoted by $r$. Throughout, we let ${{\mathbf{x}}}_0$ denote the centre of ${{{{\mathbf{U}}}}}$. Also, given a real number $\lambda > 0$, we let $\lambda {{{{\mathbf{U}}}}}$ denote the scaled ball of radius $\lambda \, r$ and with the same centre ${{\mathbf{x}}}_0$ as that of ${{{{\mathbf{U}}}}}$. With this in mind, consider the balls $$\label{def_tcU}
\begin{aligned}
{{{{\mathbf{U}}}}^+}&:=3^{d+1}{{{{\mathbf{U}}}}},\\
{\tilde{{{\mathbf{U}}}}}\, \, ~ &:=3^{n+1}{{{{\mathbf{U}}}}},\\
{\tilde{{{\mathbf{U}}}}^+}&:=3^{n+d+2}{{{{\mathbf{U}}}}}.
\end{aligned}$$ For technical reasons, that will soon become apparent, in order to deal with the ‘small gradient’ case we make the following assumption on the map ${{\mathbf{f}}}:{{{{\mathbf{U}}}}}\to{\mathbb{R}}^n$.
[**Assumption 1.** ]{} [ *The map ${{\mathbf{f}}}=(f_1,\dots,f_n)$ is an $n$–tuple of $C^{l+1}$ functions defined on the closure of ${\tilde{{{\mathbf{U}}}}^+}$ which is $l$–non–degenerate everywhere on the closure of ${\tilde{{{\mathbf{U}}}}^+}$.*]{}
*Remark.* In view of the discussion of §\[loc\], there is no loss of generality in imposing Assumption 1 within the context of Theorem \[soft\].
We denote by $s_0$ the measure of non–degeneracy of ${{\mathbf{f}}}$ on ${\tilde{{{\mathbf{U}}}}^+}$. Note that Assumption 1 ensures that $$\label{def_s0}
s_0:=s(l;{{\mathbf{f}}},{\tilde{{{\mathbf{U}}}}^+}) > 0.$$ Also, notice that it ensures the existence of a constant $M\ge 1$ such that for all $k\le l+1$ and all ${{\mathbf{u}}}_1, \dots , {{\mathbf{u}}}_k\in{\mathbb{S}}^{d-1}$, $$\label{eqn:def_N}
\underset{{{\mathbf{x}}}\in {\tilde{{{\mathbf{U}}}}^+}}{\sup} \left\| \frac{\partial^k {{\mathbf{f}}}}{\partial {{\mathbf{u}}}_1 \dots \partial{{\mathbf{u}}}_k}({{\mathbf{x}}}) \right\|_{2} \le M\,,$$ where $\partial{{\mathbf{u}}}_i$ means differentiation in direction ${{\mathbf{u}}}_i$. Note that the left–hand side of is the length of the projection of $\partial^k{{\mathbf{f}}}({{\mathbf{x}}})/\partial{{\mathbf{u}}}^k$ on the line passing through ${{\mathbf{v}}}$ and hence it is no bigger than $M$. This implies that $$s_0 \ \leq \ M \, .$$
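Spelling the last implication out: for any ${{\mathbf{x}}}\in{\tilde{{{\mathbf{U}}}}^+}$, any $0<k\le l$ and any unit vectors ${{\mathbf{u}}}\in{\mathbb{S}}^{d-1}$ and ${{\mathbf{v}}}\in{\mathbb{S}}^{n-1}$, the Cauchy–Schwarz inequality gives $$\left|\frac{\partial^k ({{\mathbf{f}}}\cdot {{\mathbf{v}}})}{\partial {{\mathbf{u}}}^k}({{\mathbf{x}}})\right| =\left|\frac{\partial^k {{\mathbf{f}}}}{\partial {{\mathbf{u}}}^k}({{\mathbf{x}}})\cdot{{\mathbf{v}}}\right| \le\left\|\frac{\partial^k {{\mathbf{f}}}}{\partial {{\mathbf{u}}}^k}({{\mathbf{x}}})\right\|_{2}\le M\,,$$ so every value of $s$ admissible in the definition of $s(l;{{\mathbf{f}}},{\tilde{{{\mathbf{U}}}}^+})$ is at most $M$, whence $s_0\le M$.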
Without loss of generality, we will assume that the radius $r$ of the ball ${{{{\mathbf{U}}}}}$ satisfies $$\label{slv2}
r\le \min\left\{\frac{s_0 \cdot \sigma(l,d)}{2\cdot 3^{n+d+2}\sqrt{d}M}, \frac{\eta s_0}{4\cdot10^7\cdot 3^{n+d+2}\, d M l^{l+2}(l+1)!}\right\},$$ where $$\label{def_r_V}
\eta:= \min\left\{\frac{1}{16} , \left( \frac{V_d}{2^{d+2}dl(l+1)^{1/l}5^d}\right)^{d(2l-1)(2l-2)}\right\}$$ and where $$\label{def_sigma_l_d}
\sigma(l,d):= \frac{1}{2^{3l(d-1)/2}}\cdot \phi\left(\left( \sqrt{2}\cdot(2l)^{2+l(l-1)/2}\cdot(l+1)!\right)^{-1}, 2, l \right)^{d-1}$$ with the quantity $\phi(\omega, B, k)$ defined as $$\label{def_phi_deltaBk}
\phi(\omega, B, k):= \frac{\omega^{k(k-1)/2}}{2\sqrt{2}\cdot B^k\cdot (k+1)!}$$ for any integer $k\ge 1$ and any real numbers $\omega, B>0$.
Furthermore, define the following constants determined by ${{\mathbf{f}}}$ and ${{{{\mathbf{U}}}}}$: $$\label{def_rho_one_first}
\rho_1:=\frac{s_0}{4l^l(l+1)!\sqrt{d}}(2r)^l\,,$$ $$\tau:= \frac{r^l s_0}{4l^l (l+1)!},$$ and $$\label{def_rho_two_first}
\rho_2:=\frac{s_0}{4l^l(l+1)!}\left( \frac{\tau}{M}\right)^{l-1}\frac{\left(\tau\left( 1-1/\sqrt{2}\right)\right)^2}{\sqrt{\left(\frac{s_0}{4l^l (l+1)!}\left(\frac{\tau}{M}\right)^l \right)^2+\left(\tau\left(1+\frac{1}{\sqrt{2}}\right)\right)^2}}\,\cdotp$$ Finally, let $$\label{deffinalerho_first}
\rho:=\frac{\rho_1\rho_2}{\sqrt{\rho_1^2+(\rho_2+2 M^2)^2}}\cdotp$$
\[theo\_explicit\_1\_4\] Let ${{{{\mathbf{U}}}}}\subset{\mathbb{R}}^d$ be a ball and ${{\mathbf{f}}}=(f_1,\dots,f_n)$ be an $n$–tuple of $C^{l+1}$ functions satisfying Assumption 1. Then, for any $0<\delta'\le{}1$, any $n$-tuple ${{\mathbf{T}}}=(T_1, \dots, T_n)$ of real numbers $\ge{}1$ and any $K>0$ satisfying $$\label{theo_explicit_1_4_restrictions}
\frac{\delta' K T_1\cdot\dots\cdot T_n}{\max_{i=1,\dots,n}T_i}\le 1,$$ define the set $A(\delta',K,{{\mathbf{T}}})$ to be $$\label{theo_explicit_1_4_set}
A(\delta',K, {{\mathbf{T}}}):=\left\{
{{\mathbf{x}}}\in {{{{\mathbf{U}}}}}: \exists \ {{\mathbf{q}}}\in{\mathbb{Z}}^n\setminus\{0\} \text{ such that }
\begin{cases}
&\|{{\mathbf{f}}}({{\mathbf{x}}})\cdot{{\mathbf{q}}}\|<\delta'\\
&\|\nabla{{\mathbf{f}}}({{\mathbf{x}}}){}{{\mathbf{q}}}\|_\infty<K\\
&|q_i|<T_i,i=1,\dots,n
\end{cases}
\right\}.$$ Then $$\left| A(\delta', K, {{\mathbf{T}}})\right|_d \le E\left(\sqrt{n+d+1}\cdot\varepsilon_1\right)^{1/d(2l-1)}|{{{{\mathbf{U}}}}}|_d \, ,$$ where $$\label{theo_explicit_1_4_def_e1}
\varepsilon_1:=\max\left(\delta',\left(\frac{\delta' K T_1\cdot\dots\cdot T_n}{\max\limits_{1\le i \le n}T_i}\right)^{\frac{1}{n+1}}\right)\,$$ and $$\label{theo_explicit_1_4_def_E}
E:=C(n+1)(3^dN_d)^{n+1}\rho^{-1/d(2l-1)}\,,$$
in which $\rho$ is given by and $C$ is the constant explicitly given by below.
At first glance the statement of Theorem \[theo\_explicit\_1\_4\] looks very similar to [@Bernik-Kleinbock-Margulis-01:MR1829381 Theorem 1.4]. We stress that the key difference is that in our statement the constants are made fully explicit. The proof of Theorem \[theo\_explicit\_1\_4\] is rather involved and will be the subject of §\[xmas\].
A strengthening and proof of Theorem \[soft\] \[6\]
===================================================
In view of the discussion of §\[loc\], Theorem \[soft\] will follow immediately on establishing a stronger result (Theorem \[theorem\_expicit\_1\_1\] below), which explicitly characterizes the dependence on $\Psi$ and ${\mathcal M}$ of the constants $\kappa_0$, $C_0$ and $C_1$ appearing within the statement of Theorem \[soft\]. In the case that the function ${{\mathbf{f}}}$ defining the manifold under consideration is explicitly given, the values of these constants may be improved by following the methodology of the proof of Theorem \[theorem\_expicit\_1\_1\] as many computations will then be made simpler.
Let $$\label{slv3}
C_{\Psi}:=\sup_{{{\mathbf{q}}}=(q_1,\dots,q_n)\in{\mathbb{Z}}^n\setminus\{{{\mathbf{0}}}\}}\Psi({{\mathbf{q}}})\prod_{i=1}^nq_i^+,\qquad\text{where }q_i^+:=\max\{1,|q_i|\}\,.$$ It is a well known fact that, under the assumption that $\Psi$ is monotonically decreasing in each variable, relation implies that $0<C_\Psi<\infty$. Also define the constant $$S_n:=\sum_{{{\mathbf{t}}}\in {\mathbb{Z}}^n}2^{-\frac{\|{{\mathbf{t}}}\|_{\infty}}{2d(2l-1)(n+1)}},$$ which is clearly finite and positive as the sum converges.
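Indeed, grouping the terms of the sum according to the value of $m=\|{{\mathbf{t}}}\|_{\infty}$ and using the crude bound $\#\{{{\mathbf{t}}}\in{\mathbb{Z}}^n:\|{{\mathbf{t}}}\|_{\infty}=m\}\le(2m+1)^n$, we get $$S_n\le\sum_{m=0}^{\infty}(2m+1)^n\,2^{-\frac{m}{2d(2l-1)(n+1)}}<\infty\,,$$ the exponentially decaying factor dominating the polynomial one.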
\[theorem\_expicit\_1\_1\] Let ${{{{\mathbf{U}}}}}\subset{\mathbb{R}}^d$ be a ball whose radius satisfies and let ${{\mathbf{f}}}=(f_1,\dots,f_n)$ be an $n$–tuple of $C^{l+1}$ functions satisfying Assumption 1. Let $\Sigma_{\Psi}$, $L$, $K_d$ and $E$ be given by, , and respectively and let $$K_0=E\left(\sqrt{n+d+1}\cdot
\left(C_{\Psi}2^{n-1/2}\sqrt{ndL}\right)^{\frac{1}{n+1}}\right)^{1/d(2l-1)}\,.$$
Given any $\delta>0$, let $$\kappa:=\min\left\{\frac{1}{C_{\Psi}2^{n-1/2}\sqrt{ndL}},\ \frac{\delta}{2K_d \Sigma_{\Psi}},\left(\frac{\delta}{2K_0S_n}\right)^{d(n+1)(2l-1)}\right\}.$$ Then $$\label{theorem_expicit_1_1_conclusion_Lebesgue++}
\left|\mathcal{B}_n(\Psi,\kappa,{\mathcal M})\right|_d\ge (1-\delta)|{{{{\mathbf{U}}}}}|_d.$$
Clearly the above is an explicit version of Theorem \[soft\] in the case when $\mu$ is Lebesgue measure. The arguments given in the proof of Theorem \[theorem\_linear\_forms+\] are easily adapted to deal with the general situation.
Proof of Theorem \[theorem\_expicit\_1\_1\] modulo Theorem \[theo\_explicit\_1\_4\]
-----------------------------------------------------------------------------------
For $\kappa>0$ and any ${{\mathbf{q}}}\in{\mathbb{Z}}^n$, define
$$A(\kappa;{{\mathbf{q}}}):=\left\{ {{\mathbf{x}}}\in {{{{\mathbf{U}}}}}: \|{{\mathbf{f}}}({{\mathbf{x}}})\cdot{{\mathbf{q}}}\|<\kappa \Psi({{\mathbf{q}}}) ~ \& ~\eqref{theo_explicit_1_3_gradient_big}\text{ holds}\right\}$$ and $$A^c(\kappa;{{\mathbf{q}}}):=\left\{ {{\mathbf{x}}}\in {{{{\mathbf{U}}}}}: \|{{\mathbf{f}}}({{\mathbf{x}}})\cdot{{\mathbf{q}}}\|<\kappa \Psi({{\mathbf{q}}}) ~ \& ~\eqref{theo_explicit_1_3_gradient_big}\text{ does not hold}\right\}\,.$$ Clearly it suffices to prove that $$\label{theorem_expicit_1_1_conclusion_Lebesgue}
\left|\bigcup_{{{\mathbf{q}}}\in{\mathbb{Z}}^n\setminus\{0\}}A(\kappa;{{\mathbf{q}}})\right|_d\le \frac\delta2|{{{{\mathbf{U}}}}}|_d\qquad\text{and}\qquad
\left|\bigcup_{{{\mathbf{q}}}\in{\mathbb{Z}}^n\setminus\{0\}}A^c(\kappa;{{\mathbf{q}}})\right|_d\le \frac\delta2|{{{{\mathbf{U}}}}}|_d\,.$$
By Theorem \[theo\_explicit\_1\_3\] with $\delta'=\kappa\Psi({{\mathbf{q}}})$, we immediately have that $|A(\kappa;{{\mathbf{q}}})|_d\le K_d\kappa\Psi({{\mathbf{q}}})|{{{{\mathbf{U}}}}}|_d$. Then, summing over all ${{\mathbf{q}}}\in{\mathbb{Z}}^n\setminus\{0\}$ gives $$\label{theorem_expicit_1_1_first_case_result}
\left|\bigcup_{{{\mathbf{q}}}\in{\mathbb{Z}}^n\setminus\{0\}}A(\kappa;{{\mathbf{q}}})\right|_d\le K_d\Sigma_\Psi\kappa|{{{{\mathbf{U}}}}}|_d\le\frac{\delta}{2}\,|{{{{\mathbf{U}}}}}|_d.$$ Now to establish the second inequality in , given an $n$–tuple ${{\mathbf{t}}}=(t_1,\dots,t_n)\in{\mathbb{N}}_0^n$, define the set $$\label{theorem_expicit_1_1_union}
A^c_{{{\mathbf{t}}}}:=\bigcup_{\substack{{{\mathbf{q}}}=(q_1,\dots,q_n)\in {\mathbb{Z}}^n\setminus\{{{\mathbf{0}}}\}\\[1ex] 2^{t_i}\le q_i^+< 2^{t_i+1}}} \!\!\! A^c(\kappa;{{\mathbf{q}}})\,,$$ where $q_i^+=\max\{1,|q_i|\}$. Observe that $$\label{theorem_expicit_1_1_union_two}
\bigcup_{{{\mathbf{q}}}\in {\mathbb{Z}}^n}A^c(\kappa;{{\mathbf{q}}})=\bigcup_{{{\mathbf{t}}}\in{\mathbb{N}}_0^n}A^c_{{{\mathbf{t}}}}\,.$$ By and the monotonicity of $\Psi$ in each variable, for every ${{\mathbf{q}}}=(q_1,\dots,q_n)\in {\mathbb{Z}}^n\setminus\{{{\mathbf{0}}}\}$ satisfying the inequalities $2^{t_i}\le q_i^+< 2^{t_i+1}$, we have that $$\Psi({{\mathbf{q}}})\le C_\Psi \left(\prod_{i=1}^nq_i^+\right)^{-1}\le C_\Psi \prod_{i=1}^n2^{-t_i}=C_\Psi 2^{-\sum_{i=1}^nt_i}$$ and $$\|{{\mathbf{q}}}\|_\infty\le 2^{\max_it_i+1}\,.$$ Now let $$\delta'=\kappa C_{\Psi}2^{-\sum_{i=1}^n t_i},\quad K=\sqrt{ndL 2^{\max_it_i+1}}\quad\text{and}\quad T_i=2^{t_i+1}\ (1\le i\le n)\,.$$ Then, $A^c_{{{\mathbf{t}}}}$ is easily seen to be contained in the set $A(\delta', K, {{\mathbf{T}}})$ defined within Theorem \[theo\_explicit\_1\_4\]. Clearly $T_1, \dots, T_n\ge 1$ and $K>0$. Since $\kappa<C_{\Psi}^{-1}$, we have that $0<\delta'<1$. Finally, is satisfied, since $$\begin{aligned}
\nonumber \frac{\delta' K T_1\cdot\dots\cdot T_n}{\max_{i=1,\dots,n}T_i}
&=\frac{\kappa C_{\Psi}2^{-\sum_{i=1}^n t_i}\sqrt{ndL 2^{\max_it_i+1}} \prod_i2^{t_i+1}}{2^{\max_i t_i+1}}\\[1ex]
\label{slv4}&=\frac{\kappa C_{\Psi}2^{n}\sqrt{ndL}}{2^{(\max_i t_i+1)/2}}
\; = \; \frac{\kappa C_{\Psi}2^{n-1/2}\sqrt{ndL}}{2^{\|{{\mathbf{t}}}\|_\infty/2}} \; < \;1\,,\end{aligned}$$ where the last inequality follows from the definition of $\kappa$. Therefore, Theorem \[theo\_explicit\_1\_4\] is applicable and it follows that $$|A^c_{{{\mathbf{t}}}}|_d\le |A(\delta', K, {{\mathbf{T}}})|_d\le
E\left(\sqrt{n+d+1}\cdot\varepsilon_1\right)^{1/d(2l-1)}|{{{{\mathbf{U}}}}}|_d\,,$$ where $E$ is given by and where, from , the definition of $\delta'$ and the fact that $\kappa C_{\Psi}<1$, $$\varepsilon_1=\max\left(\kappa C_{\Psi}2^{-\sum_{i=1}^n t_i},\left(\frac{\kappa C_{\Psi}2^{n-1}\sqrt{ndL}}{2^{\|{{\mathbf{t}}}\|_\infty/2}}\right)^{\frac{1}{n+1}}\right)=
\left(\frac{\kappa C_{\Psi}2^{n-1}\sqrt{ndL}}{2^{\|{{\mathbf{t}}}\|_\infty/2}}\right)^{\frac{1}{n+1}}.$$ Then, using and summing over all ${{\mathbf{t}}}\in{\mathbb{N}}_0^n$, we find that $$\begin{aligned}
\left|\bigcup_{{{\mathbf{q}}}\in {\mathbb{Z}}^n}A^c(\kappa;{{\mathbf{q}}})\right|_d & \le &
\sum_{{{\mathbf{t}}}\in{\mathbb{N}}_0^n}E\left(\sqrt{n+d+1}\cdot
\left(\frac{\kappa C_{\Psi}2^{n-1}\sqrt{ndL}}{2^{\|{{\mathbf{t}}}\|_\infty/2}}\right)^{\frac{1}{n+1}}\right)^{1/d(2l-1)}\!\!\! |{{{{\mathbf{U}}}}}|_d \\[1ex]
& \le & K_0S_n\kappa^{1/d(n+1)(2l-1)} |{{{{\mathbf{U}}}}}|_d \ \le \ \frac\delta2|{{{{\mathbf{U}}}}}|_d\, ,\end{aligned}$$ where the latter inequality follows from the definition of $\kappa$. This establishes the second inequality in and thus completes the proof of Theorem \[theorem\_expicit\_1\_1\] modulo Theorem \[theo\_explicit\_1\_4\].
Proof of Theorem \[theo\_explicit\_1\_4\] \[xmas\] {#section_quantitative_BKM}
==================================================
To establish Theorem \[theo\_explicit\_1\_4\], we will follow the basic strategy set out in the proof of [@Bernik-Kleinbock-Margulis-01:MR1829381 Theorem 1.4]. We stress that non–trivial modifications and additions are required to make the constants explicit. To begin with, we state a simplified form of [@Bernik-Kleinbock-Margulis-01:MR1829381 Theorem 6.2] and, to this end, various notions are now introduced.
Given a finite dimensional real vector space $W$, $\nu$ will denote a submultiplicative function on the exterior algebra $\bigwedge W$; that is, $\nu$ is a continuous function from $\bigwedge W$ to ${{\mathbb{R}}_+}$ such that $$\label{newnew}
\nu(t{{\mathbf{w}}})=|t| \, \nu({{\mathbf{w}}}) \quad \rm{ and} \quad \nu({{\mathbf{u}}}\wedge{{\mathbf{w}}})\le \nu({{\mathbf{u}}})\nu({{\mathbf{w}}})$$ for any $t\in{\mathbb{R}}$ and ${{\mathbf{u}}},{{\mathbf{w}}}\in\bigwedge W$. Given a discrete subgroup $\Lambda$ of $W$ of rank $k\ge 1$, let $\nu(\Lambda):=\nu({{\mathbf{v}}}_1\wedge\dots\wedge{{\mathbf{v}}}_k)$, where ${{\mathbf{v}}}_1,\dots,{{\mathbf{v}}}_k$ is a basis of $\Lambda$ (this definition makes sense from the first equation in ). Also, ${\mathcal{L}}(\Lambda)$ will denote the set of all non–zero primitive subgroups of $\Lambda$. Furthermore, given $C,\alpha>0$ and $V\subset{\mathbb{R}}^d$, a function $f: {{\mathbf{x}}}\in V\mapsto{}f({{\mathbf{x}}})\in {\mathbb{R}}$ is said to be $(C,\alpha)$–good on $V$ if for any open ball $B\subset{}V$ and any $\epsilon>0$, $$|\{ {{\mathbf{x}}}\in{}B: |f({{\mathbf{x}}})| < \epsilon \sup_{{{\mathbf{x}}}\in{}B} | f({{\mathbf{x}}}) |\}|_d \ \le\ C\epsilon^\alpha|B|_d.$$
([@Bernik-Kleinbock-Margulis-01:MR1829381 Theorem 6.2]) \[theo\_BKM\] Let $W$ be a $d+n+1$ dimensional real vector space and $\Lambda$ be a discrete subgroup of $W$ of rank $k$. Let a ball $B=B({{\mathbf{x_0}}},r)\subset{\mathbb{R}}^d$ and a map $H:{\hat{B}}\rightarrow \mathrm{GL}(W)$ be given, where ${\hat{B}}:= 3^{k}B$. Take $C,\alpha>0$, $0<\tilde{\rho}<1$ and let $\nu$ be a submultiplicative function on $\bigwedge W$. Assume that for any $\Gamma\in{\mathcal{L}}(\Lambda)$,
1. \[theo\_BKM\_condition\_Ca\] the function ${{\mathbf{x}}}\to\nu(H({{\mathbf{x}}})\Gamma)$ is $(C,\alpha)$–good on ${\hat{B}}$,\
2. \[theo\_BKM\_condition\_rho\] there exists ${{\mathbf{x}}}\in B$ such that $\nu(H({{\mathbf{x}}})\Gamma)\ge\tilde{\rho}$, and\
3. \[theo\_BKM\_condition\_finite\] for all ${{\mathbf{x}}}\in{\hat{B}}$, $\#\{\Gamma\in{\mathcal{L}}(\Lambda)\mid\nu(H({{\mathbf{x}}})\Gamma)<\tilde{\rho}\}<\infty$.
Then, for any positive $\varepsilon\le\tilde{\rho}$, one has $$\label{slv1}
\left|\left\{{{\mathbf{x}}}\in B:\exists~ {{\mathbf{v}}}\in\Lambda\setminus\{0\} \text{ such that }\nu(H({{\mathbf{x}}}){{\mathbf{v}}})<\varepsilon\right\}\right|_d~\le~ k(3^dN_d)^kC\left(\frac{\varepsilon}{\tilde{\rho}}\right)^{\alpha}|B|_d.$$
Theorem \[theo\_explicit\_1\_4\] is now deduced from Theorem \[theo\_BKM\] in the following manner. With respect to the parameters appearing in Theorem \[theo\_BKM\], we let $$W= {\mathbb{R}}^{n+d+1}$$ and $$\nu_{*} \mbox{ be the submultiplicative function introduced in~\cite[\S7]{Bernik-Kleinbock-Margulis-01:MR1829381}}.$$
There is nothing to gain in formally recalling the definition of $\nu_{*}$. All we need to know is that $\nu_{*}$ as given in [@Bernik-Kleinbock-Margulis-01:MR1829381] has the property that $$\label{property_nu}
\nu_{*}({{\mathbf{w}}})\le\|{{\mathbf{w}}}\|_2 \qquad \forall \ \ {{\mathbf{w}}}\in \textstyle{\bigwedge W}$$ and that its restriction to $W$ coincides with the Euclidean norm. Next, the discrete subgroup $\Lambda$ appearing in Theorem \[theo\_BKM\] is defined as $$\label{def_Lambda}
\Lambda:=\left\{\begin{pmatrix}p\\{{\mathbf{0}}}\\{{\mathbf{q}}}\end{pmatrix}\in {\mathbb{R}}^{n+d+1} : p\in{\mathbb{Z}}, {{\mathbf{q}}} \in {\mathbb{Z}}^n \right\}.$$ Note that it has rank $k=n+1$, therefore the ball ${\hat{B}}$ appearing in the statement of Theorem \[theo\_BKM\] coincides with the ball ${\tilde{{{\mathbf{U}}}}}$ defined by . Finally, we let the map $H$ send ${{\mathbf{x}}}\in {\tilde{{{\mathbf{U}}}}}$ to the product of matrices $$\label{def_H}
H({{{\mathbf{x}}}}):=DU_{{{\mathbf{x}}}},$$ where $$\label{def_Ux}
U_{{{\mathbf{x}}}}:=\begin{pmatrix}1&0&{{\mathbf{f}}}( {{\mathbf{x}}})\\0&I_d&\nabla{{\mathbf{f}}}({{\mathbf{x}}})\\0&0&I_n\end{pmatrix}\in \mathrm{SL}_{n+d+1}({\mathbb{R}})$$ and $D$ is the diagonal matrix $$\label{def_D}
D:={\operatorname{diag}}\Big(\frac{\varepsilon_1}{\delta'},\underbrace{\frac{\varepsilon_1}{K},\dots,\frac{\varepsilon_1}{K}}_{d \textrm{ times}},
\frac{\varepsilon_1}{T_1},\dots,\frac{\varepsilon_1}{T_n}\Big)$$ defined via the constants $\delta'$, $K$, $T_1,\dots,T_n$, and $\varepsilon_1$ appearing in Theorem \[theo\_explicit\_1\_4\].
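Before proceeding, let us record how $H({{\mathbf{x}}})$ acts on an element ${{\mathbf{w}}}=(p,{{\mathbf{0}}},{{\mathbf{q}}})^T\in\Lambda$; this short computation makes the containment claimed in the next paragraph immediate. By the block structure of $U_{{{\mathbf{x}}}}$ and $D$, $$H({{\mathbf{x}}}){{\mathbf{w}}}=DU_{{{\mathbf{x}}}}{{\mathbf{w}}} =\Big(\frac{\varepsilon_1}{\delta'}\big(p+{{\mathbf{f}}}({{\mathbf{x}}})\cdot{{\mathbf{q}}}\big),\ \frac{\varepsilon_1}{K}\,\nabla{{\mathbf{f}}}({{\mathbf{x}}}){{\mathbf{q}}},\ \frac{\varepsilon_1}{T_1}q_1,\dots,\frac{\varepsilon_1}{T_n}q_n\Big)^T.$$ Hence, if ${{\mathbf{x}}}\in A(\delta',K,{{\mathbf{T}}})$ with witness ${{\mathbf{q}}}$ and if $p\in{\mathbb{Z}}$ is chosen so that $|p+{{\mathbf{f}}}({{\mathbf{x}}})\cdot{{\mathbf{q}}}|=\|{{\mathbf{f}}}({{\mathbf{x}}})\cdot{{\mathbf{q}}}\|$, then every coordinate of $H({{\mathbf{x}}}){{\mathbf{w}}}$ is smaller than $\varepsilon_1$ in absolute value, and therefore, in view of the property of $\nu_{*}$ recorded above, $\nu_{*}(H({{\mathbf{x}}}){{\mathbf{w}}})\le\|H({{\mathbf{x}}}){{\mathbf{w}}}\|_2<\varepsilon_1\sqrt{n+d+1}$.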
With the above choice of parameters, on using , it is easily verified that the set $A(\delta',K, {{\mathbf{T}}})$ defined by within the context of Theorem \[theo\_explicit\_1\_4\] is contained in the set on the left–hand side of with $\varepsilon:=\varepsilon_1\sqrt{n+d+1}$. The upshot is that Theorem \[theo\_explicit\_1\_4\] follows from Theorem \[theo\_BKM\] on verifying conditions (i), (ii) and (iii) therein with appropriate constants $C$, $\alpha$ and $\rho$. With this in mind, we note that condition (iii) is already established in [@Bernik-Kleinbock-Margulis-01:MR1829381 §7] for any $\tilde{\rho}\le1$. In §\[cond1-3\] below, we will verify the remaining conditions (i) & (ii) with the following explicit constants : $$\label{def_C}
C:=\left(\frac{d(n+2)(d(n+1)+2)}{2}\right)^{\alpha/2}\max\left(C_1^*,2C_{d,l}\right),$$ where $$C_1^*:=\max\left(\frac{2 M}{s_0 \cdot\sigma(l,d)} ,\, \frac{2^{d+2}}{V_d} \cdot dl(l+1)\cdot\frac{M}{s_0}\cdot\left(\frac{2l^l+1}{\sigma(l,d)}\right)^{1/l}\right)
\label{def_C_3.5}$$ (here, $\sigma(l,d)$ is the quantity defined in ), $$\label{def_Cdl}
C_{d,l}:= \frac{2^{d+1}dl(l+1)^{1/l}}{V_d},$$ $$\label{def_alpha}
\alpha:= \frac{1}{d(2l-1)}$$ and $\tilde{\rho} = \rho$ as defined by (note that $\rho<1$). This will establish Theorem \[theo\_explicit\_1\_4\].
Verifying conditions (i) & (ii) of Theorem \[theo\_BKM\]\[cond1-3\]
===================================================================
Unless stated otherwise, throughout this section, $\Lambda$ will be the discrete subgroup given by and $\Gamma\in{\mathcal{L}}(\Lambda)$ will be a primitive subgroup of $\Lambda$. Verifying condition (i) of Theorem \[theo\_BKM\] is based on two separate cases: the case when the rank of $\Gamma$ is one and the case when the rank is $\ge 2$.
Rank one case of condition (i) \[900\]
---------------------------------------
The key to verifying condition (i) in the case that $\Gamma$ is of rank one is the following explicit version of [@Bernik-Kleinbock-Margulis-01:MR1829381 Proposition 3.4]. Notice that it and its corollary are themselves independent of the rank and indeed of $\Gamma$.
\[proposition\_explicit\_3\_4\] Let ${{{{\mathbf{U}}}}}\subset{\mathbb{R}}^d$ be a ball, $\mathcal{F}\subset \mathcal{C}^{l+1}\left( {\tilde{{{\mathbf{U}}}}^+}\right)$ be a family of real valued functions and $\lambda$ and $\gamma$ be positive real numbers such that :
- the set $\left\{ \nabla{f}\, : \, f\in\mathcal{F}\right\}$ is compact in $\mathcal{C}^{l-1}\left( {\tilde{{{\mathbf{U}}}}^+}\right)$,\
- $\sup\limits_{f\in\mathcal{F}}\underset{{{\mathbf{x}}}\in {\tilde{{{\mathbf{U}}}}^+}}{\sup}\left|\partial_{{{\mathbf{\beta}}}}f({{\mathbf{x}}})\right| \le \lambda$ for any multi–index ${{\mathbf{\beta}}} \in {\mathbb{N}}_0^d$ with $1\le \left|{{\mathbf{\beta}}}\right| \le l+1$,\
- $\underset{f\in\mathcal{F}}{\inf} \;\underset{{{\mathbf{u}}} \in {\mathbb{S}}^{d-1}}{\sup} \; \underset{1\le k \le l}{\sup} \;\left| \frac{\partial^k f}{\partial{{\mathbf{u}}}^k}({{\mathbf{x}}}_0) \right| \ge \gamma$, where ${{\mathbf{x}}}_0$ is the centre of ${{{{\mathbf{U}}}}}$,\
- $
\displaystyle\frac{\gamma\cdot \sigma(l,d)}{2\cdot3^{n+d+2}\sqrt{d}\lambda}\ge r\,,
$ where $r$ is the radius of ${{{{\mathbf{U}}}}}$ as defined in and $\sigma(l,d)$ is defined in .
Then, for any $f\in\mathcal{F}$, we have that\
- $f$ is $\left(C_1, \frac{1}{dl} \right)$–good on ${\tilde{{{\mathbf{U}}}}}$,
- $\left\|\nabla{f}\right\|_{\infty}$ is $\left(C_1, \frac{1}{d(l-1)} \right)$–good on ${\tilde{{{\mathbf{U}}}}}$,
where $$\label{C_explicit_3.4}
C_1=C_1(\gamma,\lambda):=\max\left(\frac{2\lambda}{\gamma\cdot\sigma(l,d)} , \frac{2^{d+2}}{V_d} \, dl(l+1)\frac{\lambda}{\gamma}\left(\frac{2l^l+1}{\sigma(l,d)}\right)^{1/l}\right).$$
*Remark.* Hypothesis (2) is additional to those made in [@Bernik-Kleinbock-Margulis-01:MR1829381 Proposition 3.4]. In short, it is this “extra” hypothesis that yields an explicit formula for the constant $C_1$. Note that by the definition of $C_1^* $ as given by , we have that $$C_1^* =C_1(s_0,M) \, .$$
Using the explicit constant $C_1$ appearing in Proposition \[proposition\_explicit\_3\_4\], it is possible to adapt the proof of [@Bernik-Kleinbock-Margulis-01:MR1829381 Corollary 3.5] to give the following statement.
\[explicoro3.5\] Let ${{{{\mathbf{U}}}}}\subset{\mathbb{R}}^d$ be a ball and ${{\mathbf{f}}}=(f_1,\dots,f_n)$ be an $n$–tuple of $C^{l+1}$ functions satisfying Assumption 1. With reference to Proposition \[proposition\_explicit\_3\_4\], let $$\gamma:= s_0\qquad \textrm{and }\qquad \lambda:=M.$$ Then, for any linear combination $f=c_0+\sum_{i=1}^nc_i f_i$ with $c_0,\dots,c_n\in{\mathbb{R}}$, we have that
- $f$ is $\left(C_1^*, \frac{1}{dl} \right)$–good on ${\tilde{{{\mathbf{U}}}}}$,
- $\|\nabla f\|_\infty$ is $\left(C_1^*, \frac{1}{d(l-1)} \right)$–good on ${\tilde{{{\mathbf{U}}}}}$.
Corollary \[explicoro3.5\] allows us to verify condition (i) of Theorem \[theo\_BKM\] in the case that $\Gamma$ is a primitive subgroup of $\Lambda$ of rank 1. Indeed, in view of and of the discussions following equations and , $\nu_*(H({{\mathbf{x}}})\Gamma)$ is the Euclidean norm of $H({{\mathbf{x}}}){{\mathbf{w}}}=DU_{{{\mathbf{x}}}}{{\mathbf{w}}}$, where ${{\mathbf{w}}}$ is a basis vector of $\Gamma$. It is readily seen that the coordinate functions of $H({{\mathbf{x}}}){{\mathbf{w}}}$ are either constants, or $f({{\mathbf{x}}})$, or $\partial f({{\mathbf{x}}})/\partial x_i$ for some $f=c_0+\sum_{i=1}^nc_i f_i$ with $c_0,\dots,c_n\in{\mathbb{R}}$. Hence, by Corollary \[explicoro3.5\] and [@Bernik-Kleinbock-Margulis-01:MR1829381 Lemma 3.1 (b,d)] we obtain that the function $\|H(\cdot)\Gamma\|_\infty$ is $\left(C_1^*, \alpha\right)$–good on ${\tilde{{{\mathbf{U}}}}}$, where $\alpha$ is given by . In turn, on using [@Bernik-Kleinbock-Margulis-01:MR1829381 Lemma 3.1(c)] and the fact that $$\frac{1}{\sqrt{n+d+1}}\leq\frac{\|H({{\mathbf{x}}}){{\mathbf{w}}}\|_{\infty}}{\|H({{\mathbf{x}}}){{\mathbf{w}}}\|_2}\leq 1,$$ we have that $ \nu_{*} \left(H(\, . \, )\Gamma\right) \ {\rm \ is \ } \left(C_1^*(n+d+1)^{\alpha/2},\alpha\right)-{\rm good \ on\ }\tilde{{{{\mathbf{U}}}}}.
$ It then follows from [@Bernik-Kleinbock-Margulis-01:MR1829381 Lemma 3.1(d)] that $$\nu_{*} \left(H(\, . \, )\Gamma\right) \ {\rm \ is \ } \left(C,\alpha\right)-{\rm good \ on\ }\tilde{{{{\mathbf{U}}}}}.$$
In view of [@Bernik-Kleinbock-Margulis-01:MR1829381 Lemma 3.1.a], it suffices to prove the corollary under the assumption that $\left\|\left(c_1, \dots, c_n\right)\right\|_2 =1$. Thus, with reference to Proposition \[proposition\_explicit\_3\_4\], define $$\mathcal{F}:= \left\{ c_0+\sum_{i=1}^{n}c_i f_i \, : \, \left\|\left(c_1, \dots, c_n\right)\right\|_2 =1\right\} \, .$$ The corollary will follow on verifying the four hypotheses of Proposition \[proposition\_explicit\_3\_4\]. Thus, hypothesis (1) is easily seen to be satisfied. Hypothesis (2) is a consequence of and of the Cauchy–Schwarz inequality while hypothesis (3) follows straightforwardly from the definition of the measure of non–degeneracy $s_0$ in and . Finally, hypothesis (4) is guaranteed by and the choices of $\gamma$ and $\lambda$.
Proof of Proposition \[proposition\_explicit\_3\_4\]
----------------------------------------------------
The proof of Proposition \[proposition\_explicit\_3\_4\] relies on the following lemma :
\[cornerstoneproofprop34\] Let $f$ be a real–valued function of class $C^k$ ($k\ge 1$) defined in a neighbourhood of ${{\mathbf{x}}}\in {\mathbb{R}}^d$ ($d\ge 1$). Assume that there exists an index $1\le i_0\le d$ and a real number $\mu>0$ such that $$\left|\frac{\partial^k f}{\partial x_{i_0}^k}({{\mathbf{x}}})\right|\ge \mu.$$ Then there exists a rotation $S : {\mathbb{R}}^d\rightarrow{\mathbb{R}}^d$ such that $$\left|\frac{\partial^k \left(f\circ S\right)}{\partial x_i^k}({{\mathbf{x}}})\right|\,\ge \, \mu\cdot \sigma(k,d)$$ for all indices $1\le i\le d$, where the quantity $\sigma(k,d)$ is defined in.
As the proof of Lemma \[cornerstoneproofprop34\] is lengthy, before giving it, we show how to deduce Proposition \[proposition\_explicit\_3\_4\] from it.
*Deduction of Proposition \[proposition\_explicit\_3\_4\] from Lemma \[cornerstoneproofprop34\].* Let ${{\mathbf{x}}}_0=(v_1, v_2, \dots, v_d )$ denote the centre of ${{{{\mathbf{U}}}}}$. Hypothesis (3) of Proposition \[proposition\_explicit\_3\_4\] implies that for any $f\in\mathcal{F}$, there exists a unit vector ${{\mathbf{u}}}\in{\mathbb{S}}^{d-1}$ and an index $1\le k \le l$ such that $$\left|\frac{\partial^k f}{\partial {{\mathbf{u}}}^k}({{\mathbf{x}}}_0)\right|\ge \gamma.$$ Even if it means applying a first rotation to the coordinate system that brings the $x_1$ axis onto the line spanned by the vector ${{\mathbf{u}}}$, it may be assumed, without loss of generality, that the above inequality reads as $$\left| \partial_1^k f ({{\mathbf{x}}}_0)\right|\ge \gamma.$$ From Lemma \[cornerstoneproofprop34\], up to another rotation of the coordinate system, one can guarantee that $$\left| \partial_i^k f ({{\mathbf{x}}}_0)\right|\ge \gamma\cdot \sigma(k,d)\,:=\, C_2$$ for all indices $1\le i \le d$. Now, for a fixed index $i$, it follows from a Taylor expansion at $ {{\mathbf{x}}}_0$ that, for any ${{\mathbf{x}}}=(x_1, \dots, x_d ) \in {\tilde{{{\mathbf{U}}}}^+}$, $$\partial_i^k f\left({{\mathbf{x}}}\right) = \partial_i^k f\left({{\mathbf{x}}}_0\right) + \sum_{j=1}^{d} R_j({{\mathbf{x}}};{{\mathbf{x}}}_0) \left(x_j - v_{j}\right),$$ where, by hypothesis (2), $R_j({{\mathbf{x}}};{{\mathbf{x}}}_0)$ satisfies the inequality $$\left|R_j({{\mathbf{x}}};{{\mathbf{x}}}_0)\right| \, \le\, \underset{{{\mathbf{x}}}\in {\tilde{{{\mathbf{U}}}}^+}}{\sup}\left|\left(\partial_{j}\circ \partial_i^k\right)f({{\mathbf{x}}})\right| \le \lambda.$$ In view of hypothesis (4), we have furthermore that $$\left\| {{\mathbf{x}}} - {{\mathbf{x}}}_0\right\|_2\leq 3^{n+d+2}r\leq \frac{\gamma\cdot \sigma(l,d)}{2\sqrt{d}\lambda}\leq\frac{C_2}{2\sqrt{d}\lambda}\cdotp$$ Thus, for all indices $1\le i\le d$, $$\begin{aligned}
\left|\partial_i^k f\left({{\mathbf{x}}}\right)\right| \, &\ge \, C_2 - \sum_{j=1}^{d}\left|x_j - v_j\right| \lambda \, = \, C_2-\lambda\left\|{{\mathbf{x}}} - {{\mathbf{x}}}_0\right\|_1 \nonumber\\
&\ge \, C_2 - \lambda \sqrt{d} \left\| {{\mathbf{x}}} - {{\mathbf{x}}}_0\right\|_2 \
\ge \, \frac{C_2}{2}\cdotp\label{vb1001}\end{aligned}$$
Next, observe that any cube circumscribed about ${\tilde{{{\mathbf{U}}}}}$ lies inside of ${\tilde{{{\mathbf{U}}}}^+}$. It then follows on applying [@Bernik-Kleinbock-Margulis-01:MR1829381 Lemma 3.3] with $A_1 =\lambda$ and $A_2= C_2/2$ that the function $f$ is $\left(C', \frac{1}{dk}\right)$–good on ${\tilde{{{\mathbf{U}}}}}$, where $$\begin{aligned}
C' &:= \frac{2^d}{V_d}dk (k+1)\left(\frac{2\lambda}{\gamma\cdot \sigma(k,d)}(k+1)\left(2k^k+1\right) \right)^{1/k} \\[1ex]
&\le\, \frac{2^{d+2}}{V_d}dk(k+1)\frac{\lambda}{\gamma}\left(\frac{2k^k+1}{\sigma(k,d)} \right)^{1/k}.\end{aligned}$$ A computation then shows that $$\begin{aligned}
C' \le\, \frac{2^{d+2}}{V_d}dl(l+1)\frac{\lambda}{\gamma}\left(\frac{2l^l+1}{\sigma(l,d)} \right)^{1/l}.\end{aligned}$$ Part (a) of Proposition \[proposition\_explicit\_3\_4\] is now a consequence of [@Bernik-Kleinbock-Margulis-01:MR1829381 Lemma 3.1.d]. Regarding part (b), the proof is essentially the same as that of [@Bernik-Kleinbock-Margulis-01:MR1829381 Proposition 3.4.b] with the constant $C$ replaced with the explicit constant $C_1$ given by .\
$\square$
We now proceed with the proof of Lemma \[cornerstoneproofprop34\] which requires several intermediate results. The first one is rather intuitive.
\[cubeslice\] Let $C>0$ be a real number and $p\ge 1$ be an integer. Then every section of the cube $[0, C]^p$ with a $(p-1)$–dimensional subspace of ${\mathbb{R}}^p$ has a volume at most $\sqrt{2}C^{p-1}$.
See [@cubeslicing Theorem 4].
\[lemma\_intermed\] Let $k\ge 1$ denote an integer and let ${{\mathbf{w}}}:= (w_0, \dots, w_k)\in{\mathbb{R}}^{k+1}$. Let $\omega, B>0$ be real numbers. Furthermore, assume that the $k+1$ real numbers $0<t_0<\dots<t_k$ satisfy the following two assumptions :
- $\underset{0\le i\neq j\le k}{\min}\, \left|t_i - t_j\right|\,\ge\, \omega$,\
- $\underset{0\le i \le k}{\max}\, |t_i|^k\,\le\, B$.\
Then, there exists an index $0\le j \le k$ such that $$\left|\sum_{i=0}^{k} w_i t_j^i\right|\,\geq\, \phi(\omega, B, k)\cdot\left\|{{\mathbf{w}}}\right\|_2,$$ where $\phi(\omega, B, k)$ is the quantity defined in .
The following notation will be used in the course of the proof of Lemma \[lemma\_intermed\] : given a point ${{\mathbf{x}}}\in {\mathbb{R}}^n$ and a set $A \subset{\mathbb{R}}^n $, $\textrm{dist}({{\mathbf{x}}},A)$ will denote the quantity $$\label{def_dist}
\textrm{dist}({{\mathbf{x}}},A) := \inf \{ \|{{\mathbf{x}}}-{{\mathbf{a}}}\|_2 : {{\mathbf{a}}} \in A \}.$$
Let ${{\mathbf{X}}}:=({{\mathbf{x}}}_1, \cdots, {{\mathbf{x}}}_{k+1})\in M_{k+1, k+1}$ denote the matrix defined by the following $k+1$ column vectors in ${\mathbb{R}}^{k+1}$ : $$\begin{aligned}
{{\mathbf{x}}}_1\, &:=\, (1, t_0, \dots, t_0^k)^T, \\[1ex]
&\vdots\\[1ex]
{{\mathbf{x}}}_{k+1}\, &:=\, (1, t_k, \dots, t_k^k)^T.\end{aligned}$$ Together with the origin, these points form a simplex $\mathcal{S}({{\mathbf{X}}})$ in ${\mathbb{R}}^{k+1}$ whose volume $\left|\mathcal{S}({{\mathbf{X}}})\right|_{k+1}$ satisfies the well–known equation $$\left|\mathcal{S}({{\mathbf{X}}})\right|_{k+1}\,=\,\frac{1}{(k+1)!}\det \begin{pmatrix}{{\mathbf{x}}}_1 & \dots & {{\mathbf{x}}}_{k+1} & {{\mathbf{0}}} \\ 1&\dots&1&1\end{pmatrix}.$$ The formula for the determinant of a Vandermonde matrix together with hypothesis (1) then yields the inequality $$\left|\mathcal{S}({{\mathbf{X}}})\right|_{k+1}\, \ge \, \frac{\omega^{k(k-1)/2}}{(k+1)!}\cdotp$$ Note that hypothesis (2) implies that all the vectors ${{\mathbf{x}}}_1, \cdots, {{\mathbf{x}}}_{k+1}$ lie in the hypercube $\mathcal{B}:=[0,B]^{k+1}$. As a consequence, the volume of the section of the simplex $\mathcal{S}({{\mathbf{X}}})$ with any hyperplane does not exceed the volume of the section of $\mathcal{S}({{\mathbf{X}}})$ with $\mathcal{B}$ which, from Lemma \[cubeslice\], is at most $\sqrt{2}B^k$. Also, given a hyperplane $\mathcal{P}$, it should be clear that $$\left|\mathcal{S}({{\mathbf{X}}})\right|_{k+1}\,\le \, 2\cdot\underset{1\le j \le k+1}{\max}\textrm{dist}\left({{\mathbf{x}}}_j, \mathcal{P}\right)\cdot\left|\mathcal{P}\cap\mathcal{S}({{\mathbf{X}}})\right|_{k}.$$ The upshot of this discussion is that the following inequality holds : $$\begin{aligned}
\label{maxdist}
\underset{1\le j \le k+1}{\max}\, \textrm{dist}\left({{\mathbf{x}}}_j, \mathcal{P}\right) \, \ge \, \frac{\omega^{k(k-1)/2}}{2\sqrt{2}B^k(k+1)!}:= \phi(\omega, B, k).\end{aligned}$$ Consider now the hyperplane $\mathcal{P}={{\mathbf{w}}}^{\perp}$ and let $j$ be one of the indices realizing the maximum in . The conclusion of the lemma (with the index $j-1$ in its statement) is then a direct consequence of the equation $$\textrm{dist}\left({{\mathbf{x}}}_j, {{\mathbf{w}}}^{\perp}\right)\,=\,\frac{\left|\sum_{i=0}^{k}w_i t_{j-1}^i\right|}{\left\|{{\mathbf{w}}}\right\|_2}\cdotp$$
The next result contains the main substance of the proof of Lemma \[cornerstoneproofprop34\].
\[mainsubslemmacornerstone\] Let $f$ be a real valued function of class $C^k$ ($k\ge 1$) defined in a neighbourhood of $(x_0, y_0)\in{\mathbb{R}}^2$. Let $c>0$ be a real number such that $$\left|\frac{\partial^k f}{\partial x^k}(x_0, y_0)\right|\ge c.$$ Then, there exist two orthonormal vectors ${{\mathbf{u}}}, {{\mathbf{v}}} \in {\mathbb{S}}^{1}$ such that $$\begin{aligned}
\min\left\{\left|\frac{\partial^k f}{\partial {{\mathbf{u}}}^k}(x_0, y_0)\right|, \left|\frac{\partial^k f}{\partial {{\mathbf{v}}}^k}(x_0, y_0)\right| \right\}\, &\ge \, \frac{c}{2^{3k/2}}\cdot \phi\left(\left( \sqrt{2}(2k)^{2+k(k-1)/2}(k+1)!\right)^{-1}, 2, k \right)\\[1ex]
& = \, c\cdot \sigma(k,2).\end{aligned}$$
Set $${{\mathbf{w}}}= (w_0, \dots, w_k):= \left(\binom{k}{j} \frac{\partial^k f}{\partial x^{k-j}\partial y^j}(x_0, y_0)\right)_{0\le j\le k}\in {\mathbb{R}}^{k+1}.$$ It readily follows from the assumptions of the lemma that $$\left\|{{\mathbf{w}}}\right\|_2\ge c.$$ Let $\lambda>0$ be a real number such that, for all indices $0\le j\le k$, $$\left|\frac{\partial^k f}{\partial x^{k-j}\partial y^j}(x_0, y_0)\right|\le \lambda.$$ We thus have the inequality $$\begin{aligned}
\label{lambdaauxiliarum}
\left\|{{\mathbf{w}}}\right\|_2 \le 2^k (k+1)\lambda.\end{aligned}$$
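Indeed, since $\binom{k}{j}\le 2^k$ for every $0\le j\le k$, each coordinate of ${{\mathbf{w}}}$ is at most $2^k\lambda$ in absolute value, and therefore $$\left\|{{\mathbf{w}}}\right\|_2\le\sqrt{k+1}\;\max_{0\le j\le k}|w_j|\le\sqrt{k+1}\;2^k\lambda\le 2^k(k+1)\lambda\,.$$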
Define now $k+1$ real numbers $t_0, \dots, t_k$ as follows : $$t_i := \frac{1}{2}+\frac{i}{2k},$$ where $i=0,\dots, k$.
With the choices of the parameters $\omega:=1/(2k)$ and $B=1$, Lemma \[lemma\_intermed\] applied to the vector ${{\mathbf{w}}}$ and to the system of points $(t_i)_{0\le i \le k}$ yields the existence of a point $t_j\in [1/2, 1]$ such that $$\begin{aligned}
\label{linterinproof}
\left|\sum_{i=0}^{k}w_i t_j^i\right|\, \ge \, \left\|{{\mathbf{w}}}\right\|_2\cdot \phi\left( \frac{1}{2k}, 1, k\right).\end{aligned}$$
Let $$\begin{aligned}
\label{linterinproofbis}
\epsilon := \frac{1}{2k}\cdot \phi\left( \frac{1}{2k}, 1, k\right)\, \le \, \frac{1}{2k}\end{aligned}$$ denote a constant and $$g : t\in [0,1 ]\mapsto \sum_{i=0}^{k}w_i t^i$$ a function. Note that for all $t\in [0, 1]$, $$\left|g'(t)\right| = \left|\sum_{i=1}^{k}w_i i t^{i-1}\right|\, \le \, 2^k\lambda \cdot \frac{k(k+1)}{2}\, = \, 2^{k-1}\lambda k(k+1).$$
This implies that for all $t\in [t_j-\epsilon, t_j+\epsilon]$, where $t_j$ is the constant appearing in , the following inequalities hold : $$\begin{aligned}
\left|g(t)\right| = \left|\sum_{i=0}^{k}w_i t^i\right| &\ge\left|g(t_j)\right| - \left|g(t)-g(t_j)\right|\\[1ex]
&\ge\left|g(t_j)\right| -\epsilon \cdot 2^{k-1}\lambda k(k+1)\\[1ex]
&\underset{\eqref{linterinproof} \& \eqref{linterinproofbis}}{\ge} \phi\left( \frac{1}{2k}, 1, k\right)\cdot \left( \left\|{{\mathbf{w}}}\right\|_2 - 2^{k-2}\lambda (k+1)\right)\\[1ex]
&\underset{\eqref{lambdaauxiliarum}}{\ge} \frac{\left\|{{\mathbf{w}}}\right\|_2}{2}\cdot \phi\left( \frac{1}{2k}, 1, k\right).\end{aligned}$$
Consider now the image $[a, b]\subset [1, 2]$ of the interval $[t_j-\epsilon, t_j+\epsilon]\cap [1/2,1]$ under the map $t\mapsto 1/t$. It is then readily verified that $$|b-a|\,\ge \,\epsilon.$$ With the choices of the parameters $$\omega := \frac{\epsilon}{k}\, =\, \frac{1}{2k^2}\cdot\phi\left(\frac{1}{2k}, 1, k \right)$$ and $B=2$, apply once more Lemma \[lemma\_intermed\], this time to the vector $\left( (-1)^i w_i \right)_{0\le i \le k}$ and to the set of points $$\tilde{t}_i = a+\frac{b-a}{k}\cdot i, \qquad 0\le i \le k.$$ This yields the existence of $\tilde{t}_j\in [a, b]$ such that $$\left|\sum_{i=0}^{k} (-1)^i w_i \tilde{t}_j^i\right|\, \ge \, \left\|{{\mathbf{w}}}\right\|_2\cdot \phi\left(\frac{\epsilon}{k}, 2, k\right).$$
The upshot of this is that, when considering the point $s:= 1/\tilde{t}_j$, the following two inequalities hold simultaneously : $$\begin{aligned}
\left|\sum_{i=0}^{k}w_i s^i\right|\, &\ge \, \frac{\left\|{{\mathbf{w}}}\right\|_2}{2}\cdot \phi\left( \frac{1}{2k}, 1, k\right)\end{aligned}$$ and $$\begin{aligned}
\left|\sum_{i=0}^{k} (-1)^i w_i s^{-i}\right|\, &\ge \, \left\|{{\mathbf{w}}}\right\|_2\cdot \phi\left(\frac{1}{2k^2}\cdot\phi\left(\frac{1}{2k}, 1, k\right), 2, k\right).\end{aligned}$$ Since $s\in [1/2,1]$, it is easily seen that one can find a unit vector $(u_1, u_2)\in{\mathbb{S}}^1$ such that $s=u_2/u_1$ and $u_1, u_2\in [1/(2\sqrt{2}), 1]$. Let ${{\mathbf{u}}}\in{\mathbb{S}}^1$ and ${{\mathbf{v}}}\in{\mathbb{S}}^1$ denote the two orthonormal vectors defined as ${{\mathbf{u}}}:=(u_1, u_2)$ and ${{\mathbf{v}}}:=(u_2, -u_1)$.
Note then that $$\begin{aligned}
\left|\sum_{i=0}^{k}w_i u_1^{k-i}u_2^i\right|\, = \, u_1^k\left|\sum_{i=0}^{k}w_is^i\right|\, &\ge \, \frac{1}{2^{1+3k/2}}\cdot\left\|{{\mathbf{w}}}\right\|_2\cdot \phi\left(\frac{1}{2k}, 1, k\right)\\[1ex]
&\ge \, \frac{c}{2^{1+3k/2}}\cdot \phi\left(\frac{1}{2k}, 1, k\right)\end{aligned}$$ and, similarly, $$\begin{aligned}
\left|\sum_{i=0}^{k}(-1)^iw_i u_2^{k-i}u_1^i\right|\, = \, u_2^k\left|\sum_{i=0}^{k}(-1)^i w_i s^{-i}\right|\, &\ge \, \frac{c}{2^{3k/2}}\cdot\phi\left(\frac{1}{2k^2}\cdot\phi\left(\frac{1}{2k}, 1, k\right), 2, k\right).\end{aligned}$$ Since, from the definition of the vector ${{\mathbf{w}}}$, $$\frac{\partial^k f}{\partial {{\mathbf{u}}}^k}(x_0,y_0)\,=\,\sum_{i=0}^{k}w_i u_1^{k-i}u_2^i$$ and $$\frac{\partial^k f}{\partial {{\mathbf{v}}}^k}(x_0,y_0)\,=\, \sum_{i=0}^{k}(-1)^iw_i u_2^{k-i}u_1^i,$$ this completes the proof of the lemma from the definition of $\phi$ in .
We now have all the ingredients at our disposal to prove Lemma \[cornerstoneproofprop34\].
*Proof of Lemma \[cornerstoneproofprop34\].* Denote the coordinates of the vector ${{\mathbf{x}}}\in{\mathbb{R}}^d$ as ${{\mathbf{x}}} = (x_1, \dots, x_d)$. Even if it means relabeling the axes, assume furthermore without loss of generality that $i_0=1$ in the statement of the lemma. The proof then goes by induction on $d\ge 1$, the conclusion being trivial when $d=1$. When $d=2$, Lemma \[cornerstoneproofprop34\] reduces to Lemma \[mainsubslemmacornerstone\]. Assume therefore that $d\ge 3$. It then readily follows from the induction hypothesis applied to the function $(x_1, \dots, x_{d-1})\in{\mathbb{R}}^{d-1}\mapsto f(x_1, \dots, x_{d-1}, x_d)$ that there exists a rotation $S_1 : {\mathbb{R}}^d \rightarrow {\mathbb{R}}^d$ such that, for all indices $1\le i\le d-1$, $$\left|\frac{\partial^k (f\circ S_1)}{\partial x_i^k}({{\mathbf{x}}})\right|\,\ge \, \mu\cdot \sigma(k, d-1).$$ Consider now the function $(x_1, x_d)\in{\mathbb{R}}^2\mapsto (f\circ S_1)(x_1, \dots, x_{d-1}, x_d)$. Applying Lemma \[mainsubslemmacornerstone\] to this function with $c=\mu\cdot\sigma(k,d-1)$ therein provides the existence of a rotation $S_2 : {\mathbb{R}}^d\rightarrow {\mathbb{R}}^d$ acting on the plane $(x_1, x_d)$ and leaving its orthogonal complement unchanged such that $$\min \left\{\left|\frac{\partial^k (f\circ S_1 \circ S_2)}{\partial x_1^k}({{\mathbf{x}}})\right|, \, \left|\frac{\partial^k (f\circ S_1 \circ S_2)}{\partial x_d^k}({{\mathbf{x}}})\right| \right\}\,\ge \, \mu\cdot \sigma(k,d-1)\cdot\sigma(k,2)\,=\,\mu\cdot\sigma(k,d).$$ The lemma follows upon setting $S=S_1\circ S_2$. $\square$
Higher rank case of condition (i)
---------------------------------
The key to verifying condition (i) of Theorem \[theo\_BKM\] in the case when $\Gamma$ is of rank greater than one is Proposition \[proposition\_explicit\_4\_1\] below. In short, it is an explicit version of [@Bernik-Kleinbock-Margulis-01:MR1829381 Proposition 4.1] in the particular case when the set $\mathcal{G}$ appearing therein is given by $$\label{defg}
\mathcal{G}:=\left\{\left({{\mathbf{u}}}_1 \cdot {{\mathbf{f}}} , {{\mathbf{u}}}_2 \cdot {{\mathbf{f}}}+u_0\right) \; : \; u_0\in{\mathbb{R}},\, {{\mathbf{u}}}_1, {{\mathbf{u}}}_2\in{\mathbb{R}}^n, \, {{\mathbf{u}}}_1\perp{{\mathbf{u}}}_2 \right\}.$$ The statement is concerned with the *skew gradient* of a map as defined in [@Bernik-Kleinbock-Margulis-01:MR1829381 §4]. We recall the definition. Let ${{\mathbf{g}}}=(g_1,g_2):{\tilde{{{\mathbf{U}}}}^+}\rightarrow{\mathbb{R}}^2$ be a differentiable function. The skew gradient $\widetilde{\nabla}{{\mathbf{g}}}:{\tilde{{{\mathbf{U}}}}^+}\rightarrow{\mathbb{R}}^2 $ is defined by $$\widetilde{\nabla}{{\mathbf{g}}}({{\mathbf{x}}}):=g_1({{\mathbf{x}}})\nabla g_2( {{\mathbf{x}}})-g_2({{\mathbf{x}}})\nabla g_1({{\mathbf{x}}}).$$ If we write ${{\mathbf{g}}}({{\mathbf{x}}})$ in terms of polar coordinates; i.e. via the usual functions $\rho({{\mathbf{x}}})$ and $\theta({{\mathbf{x}}})$, it is then readily verified that $$\begin{aligned}
\label{latest}
\widetilde{\nabla}{{\mathbf{g}}}({{\mathbf{x}}}) = \rho^2({{\mathbf{x}}}) \nabla \theta({{\mathbf{x}}}).\end{aligned}$$ Essentially, the skew gradient measures how different the pair of functions $g_1 $ and $g_2$ are from being proportional to each other.
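The last identity can be checked directly: writing $g_1=\rho\cos\theta$ and $g_2=\rho\sin\theta$ and applying the product rule, $$g_1\nabla g_2-g_2\nabla g_1 =\rho\cos\theta\,\big(\sin\theta\,\nabla\rho+\rho\cos\theta\,\nabla\theta\big) -\rho\sin\theta\,\big(\cos\theta\,\nabla\rho-\rho\sin\theta\,\nabla\theta\big) =\rho^{2}\,\nabla\theta\,,$$ the terms involving $\nabla\rho$ cancelling and the remaining ones combining via $\cos^2\theta+\sin^2\theta=1$.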
\[proposition\_explicit\_4\_1\] Let ${{{{\mathbf{U}}}}}\subset{\mathbb{R}}^d$ be a ball and ${{\mathbf{f}}}=(f_1,\dots,f_n)$ be an $n$–tuple of $C^{l+1}$ functions satisfying Assumption 1. Let $\rho_2$, $C_{d,l}$ and ${\mathcal{G}}$ be given by $\eqref{def_rho_two_first} $, $ \eqref{def_Cdl}$ and $ \eqref{defg} $ respectively. Then,
- for all ${{\mathbf{g}}}\in\mathcal{G}$, $$\|\tilde{\nabla}{{\mathbf{g}}}\|_2 \ {\rm \ is \ } \left(2C_{d,l}, \frac{1}{d(2l-1)} \right)-{\rm good \ on \ } {\tilde{{{\mathbf{U}}}}}$$\
- for all ${{\mathbf{g}}}\in\mathcal{G}$, $$\label{inegrhogb}
\underset{{{\mathbf{x}}}\in {{{{\mathbf{U}}}}}}{\sup} \; \| \tilde{\nabla}{{\mathbf{g}}}({{\mathbf{x}}})\|_2 \, \ge \, \rho_2 \, .$$
This proposition together with Corollary \[explicoro3.5\] and the basic properties of $(C, \alpha)$–good functions given in [@Bernik-Kleinbock-Margulis-01:MR1829381 Lemma 3.1] enables us to deduce the following statement, which establishes condition (i) in the higher rank case.
Let ${{{{\mathbf{U}}}}}\subset{\mathbb{R}}^d$ be a ball and ${{\mathbf{f}}}=(f_1,\dots,f_n)$ be an $n$–tuple of $C^{l+1}$ functions satisfying Assumption 1. Let $\Lambda$ be the discrete subgroup given by and $\Gamma\in{\mathcal{L}}(\Lambda)$ be a primitive subgroup of $\Lambda$. Furthermore, let $H$ be the map given by . Then, the function $$\label{ez1}
{{\mathbf{x}}} \mapsto \nu_{*} \left(H({{\mathbf{x}}})\Gamma\right)$$ is $\left(C, \alpha\right)$–good on the ball ${\tilde{{{\mathbf{U}}}}}$ with constants $C$ and $\alpha$ given by and respectively.
Let $k$ denote the rank of $\Gamma $. The case $k=1$ has already been established as a consequence of Corollary \[explicoro3.5\] in §\[900\]. Assume therefore that $ k \geq 2$. It is shown in [@Bernik-Kleinbock-Margulis-01:MR1829381 §7 Eq(7.3)] that there exist real numbers $a,b,\mu\in{\mathbb{R}}$ such that, for all ${{\mathbf{x}}}\in {\tilde{{{\mathbf{U}}}}}$, $\nu_{*} \left(H({{\mathbf{x}}} )\Gamma\right) $ given by can be expressed as the Euclidean norm of a vector ${{\mathbf{w}}} ({{\mathbf{x}}})$. Furthermore, there exists an orthonormal system of vectors of the form $\mathcal{S}=\left\{{{\mathbf{e}}}_0, {{\mathbf{e}}}_1^*, \dots, {{\mathbf{e}}}_d^*, {{\mathbf{v}}}_1, \dots, {{\mathbf{v}}}_{k-1} \right\}$ when $k\le n$ or of the form $\mathcal{S}=\left\{{{\mathbf{e}}}_0, {{\mathbf{e}}}_1^*, \dots, {{\mathbf{e}}}_d^*, {{\mathbf{v}}}_0, \dots, {{\mathbf{v}}}_{k-1} \right\}$ when $k=n+1$ such that ${{\mathbf{w}}} ({{\mathbf{x}}})$ is a linear combination of $$L_d(k):= \frac{(k+1)(dk+2)}{2}$$ skew products of elements of $\mathcal{S}$ whose coefficients are each of one of the following forms: $$\begin{aligned}
\label{ez_class_1} a+b{{\mathbf{f}}}\cdot {{\mathbf{v}}}_0 \qquad \\[1ex]
\label{ez_class_2} b \qquad \\[1ex]
\label{ez_class_3} b\,{{\mathbf{f}}}\cdot {{\mathbf{v}}}_i\qquad & (1\le i\le k-1)\\[1ex]
\label{ez_class_4} b\,\mu\,\partial_s({{\mathbf{f}}}\cdot{{\mathbf{v}}}_i)\qquad & (1\le i \le k, \; 1\le s \le d)\\[1ex]
\label{ez_class_5} \mu \, X(i,s) \qquad &(1\le i\le k-1,\; 1\le s \le d)\\[1ex]
\label{ez_class_6} b\,\mu \,Y(i,j,s) \qquad &(1\leq i < j\leq k-1,\; 1\le s \le d),\end{aligned}$$ where $$X(i,s):= \left({{\mathbf{f}}}\cdot{{\mathbf{v}}}_i\right) \partial_s\left(a+b{{\mathbf{f}}}\cdot{{\mathbf{v}}}_0\right) - \left(a+b{{\mathbf{f}}}\cdot{{\mathbf{v}}}_0\right) \partial_s \left({{\mathbf{f}}}\cdot{{\mathbf{v}}}_i\right)$$ and $$Y(i,j,s):= \left({{\mathbf{f}}}\cdot{{\mathbf{v}}}_i\right) \partial_s\left({{\mathbf{f}}}\cdot{{\mathbf{v}}}_j\right) - \left({{\mathbf{f}}}\cdot{{\mathbf{v}}}_j\right) \partial_s\left({{\mathbf{f}}}\cdot{{\mathbf{v}}}_i\right).$$
1. It follows from part (a) of Corollary \[explicoro3.5\] and [@Bernik-Kleinbock-Margulis-01:MR1829381 Lemma 3.1(a,d)] that the coordinate functions given by , and are $(C',\alpha)$–good, where $$C':= \frac{C}{\left(L_d(n+1)\cdot d\right)^{\alpha/2}}\cdotp$$
2. It follows from part (b) of Corollary \[explicoro3.5\] and [@Bernik-Kleinbock-Margulis-01:MR1829381 Lemma 3.1(a,d)] that, when the index $i$ is fixed, the maximum over $s$ of the coordinate functions given by , that is, the quantity $\|b\, \mu\, \nabla({{\mathbf{f}}}\cdot{{\mathbf{v}}}_i)\|_{\infty}$, is $(C',\alpha)$–good.
3. It follows from Proposition \[proposition\_explicit\_4\_1\] and [@Bernik-Kleinbock-Margulis-01:MR1829381 Lemma 3.1(a,d)] that, for fixed indices $i$ and $j$, the Euclidean norm over $s$ of the coordinate functions given by and , that is, the quantities $\| \mu\, \tilde{\nabla}({{\mathbf{f}}}\cdot{{\mathbf{v}}}_i, a+b{{\mathbf{f}}}\cdot{{\mathbf{v}}}_0)\|_{2}$ and $\|b\, \mu\, \tilde{\nabla}({{\mathbf{f}}}\cdot{{\mathbf{v}}}_i, {{\mathbf{f}}}\cdot{{\mathbf{v}}}_j)\|_{2}$ respectively, are $(C',\alpha)$–good. On using the relation $$\frac{1}{\sqrt{d}}\leq\frac{\|\cdot\|_{\infty}}{\|\cdot\|_2}\leq 1$$ valid in ${\mathbb{R}}^d$ and [@Bernik-Kleinbock-Margulis-01:MR1829381 Lemma 3.1(c)], it follows that $\| \mu\, \tilde{\nabla}({{\mathbf{f}}}\cdot{{\mathbf{v}}}_i, a+b{{\mathbf{f}}}\cdot{{\mathbf{v}}}_0)\|_{\infty}$ and $\|b\, \mu\, \tilde{\nabla}({{\mathbf{f}}}\cdot{{\mathbf{v}}}_i, {{\mathbf{f}}}\cdot{{\mathbf{v}}}_j)\|_{\infty}$ are $(d^{\alpha/2}C',\alpha)$–good.
The upshot of the above together with [@Bernik-Kleinbock-Margulis-01:MR1829381 Lemma 3.1(b)] is that the maximum of the coordinate functions – is $(d^{\alpha/2}C',\alpha)$–good. In turn, on using the relation $$\frac{1}{\sqrt{L_d(k)}}\leq\frac{\|\cdot\|_{\infty}}{\|\cdot\|_2}\leq 1$$ valid in ${\mathbb{R}}^{L_d(k)}$ and [@Bernik-Kleinbock-Margulis-01:MR1829381 Lemma 3.1(c)], we have that $$\nu_{*} \left(H(\, . \, )\Gamma\right) \ {\rm \ is \ } \left(C'(d\cdot L_d(k))^{\alpha/2},\alpha\right)-{\rm good}.$$ As $k\leq n+1$, the desired statement follows.
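Spelling out the final step for clarity (a one-line check using only the definition of $C'$ and the fact that $L_d(k)=\frac{(k+1)(dk+2)}{2}$ is increasing in $k$): since $k\le n+1$, $$C'\left(d\cdot L_d(k)\right)^{\alpha/2}\,\le\, C'\left(d\cdot L_d(n+1)\right)^{\alpha/2}\,=\,\frac{C}{\left(L_d(n+1)\cdot d\right)^{\alpha/2}}\left(d\cdot L_d(n+1)\right)^{\alpha/2}\,=\,C,$$ and a $(C_1,\alpha)$–good function with $C_1\le C$ is in particular $(C,\alpha)$–good.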
Modulo the proof of Proposition \[proposition\_explicit\_4\_1\], we have completed the task of verifying condition (i) of Theorem \[theo\_BKM\]. The proof of the proposition is rather lengthy and therefore is postponed till after we have verified condition (ii) of Theorem \[theo\_BKM\].
Verifying condition (ii) of Theorem \[theo\_BKM\] modulo Proposition \[proposition\_explicit\_4\_1\]
----------------------------------------------------------------------------------------------------
The following lemma, although not explicitly stated, is essentially proved in [@Bernik-Kleinbock-Margulis-01:MR1829381 §7]; see [@Bernik-Kleinbock-Margulis-01:MR1829381 Eq(7.5)] and onwards. The key difference is that we make use of Proposition \[proposition\_explicit\_4\_1\] in place of [@Bernik-Kleinbock-Margulis-01:MR1829381 Proposition 4.1] and so are able to give explicit values of $\rho_1$ and $\rho$.
\[tech\_lemma\_one\] Let ${{{{\mathbf{U}}}}}\subset{\mathbb{R}}^d$ be a ball and ${{\mathbf{f}}}=(f_1,\dots,f_n)$ be an $n$–tuple of $C^{l+1}$ functions satisfying Assumption 1. Let $\rho_1, \rho >0$ be given by and respectively and assume that for any ${{\mathbf{v}}} \in {\mathbb{S}}^{n-1}$ and $p\in{\mathbb{R}}$ we have that $$\label{tech_lemma_one_def_rho_one}
\sup_{{{\mathbf{x}}}\in{{{{\mathbf{U}}}}}}|{{\mathbf{f}}}({{\mathbf{x}}}){{\mathbf{\cdot}}}{{\mathbf{v}}}+p|\ge\rho_1\; \text{ and } \;\sup_{{{\mathbf{x}}}\in{{{{\mathbf{U}}}}}}\left\|\nabla\left({{\mathbf{f}}}({{\mathbf{x}}}){{\mathbf{\cdot}}}{{\mathbf{v}}}\right)\right\|_{\infty}\ge\rho_1 .$$ Furthermore, let $\Lambda$ be the discrete subgroup given by , $\Gamma\in{\mathcal{L}}(\Lambda)$ be a primitive subgroup of $\Lambda$ and $H$ be the map given by . Then $$\sup_{{{\mathbf{x}}}\in{{{{\mathbf{U}}}}}}\nu_{*}(H({{\mathbf{x}}})\Gamma)\ge\rho .$$
The following statement immediately verifies condition (ii) of Theorem \[theo\_BKM\]. It is the above lemma without the assumptions made in (\[tech\_lemma\_one\_def\_rho\_one\]).
\[tech\_corollary\_one\] Let ${{{{\mathbf{U}}}}}\subset{\mathbb{R}}^d$ be a ball and ${{\mathbf{f}}}=(f_1,\dots,f_n)$ be an $n$–tuple of $C^{l+1}$ functions satisfying Assumption 1. Let $\Lambda$ be the discrete subgroup given by and $\Gamma\in{\mathcal{L}}(\Lambda)$ be a primitive subgroup of $\Lambda$. Furthermore, let $H$ be the map given by . Then $$\label{tech_corollary_one_result}
\sup_{{{\mathbf{x}}}\in {{{{\mathbf{U}}}}}}\nu_{*}(H({{\mathbf{x}}})\Gamma)\ge\rho,$$ where $\rho$ is given by .
The desired statement follows directly from Lemma \[tech\_lemma\_one\] on verifying the inequalities associated with (\[tech\_lemma\_one\_def\_rho\_one\]). Let ${{\mathbf{v}}} \in {\mathbb{S}}^{n-1}$. By the definition of $s_0:=s(l;{{\mathbf{f}}},{\tilde{{{\mathbf{U}}}}^+})$, there exists a ${{\mathbf{u}}} \in {\mathbb{S}}^{d-1}$ and $1\le k\le l$ such that $$\label{tech_corollary_one_lb_def_s}
\left|\frac{\partial^k({{\mathbf{f}}}\cdot{{\mathbf{v}}})}{\partial{{\mathbf{u}}}^k}({{\mathbf{x}}}_0)\right|\ge s_0.$$ Recall that ${{\mathbf{x}}}_0$ is the centre of ${{{{\mathbf{U}}}}}$. It follows that for any ${{\mathbf{x}}}\in{{{{\mathbf{U}}}}}$, we have that $$\begin{aligned}
\label{minvgksx}
\left|{{\mathbf{v}}}\cdotp\frac{\partial^k {{\mathbf{f}}}}{\partial{{\mathbf{u}}}^k}({{\mathbf{x}}}) \right|
& \ge \, \left|{{\mathbf{v}}}\cdotp\frac{\partial^k {{\mathbf{f}}}}{\partial{{\mathbf{u}}}^k}({{\mathbf{x}}}_0) \right| - \left|{{\mathbf{v}}}\cdotp\left(\frac{\partial^k {{\mathbf{f}}}}{\partial{{\mathbf{u}}}^k}({{\mathbf{x}}}_0) - \frac{\partial^k {{\mathbf{f}}}}{\partial{{\mathbf{u}}}^k}({{\mathbf{x}}})\right)\right| \nonumber\\[2ex]
&\ge \, s_0 - \left\| \frac{\partial^k {{\mathbf{f}}}}{\partial{{\mathbf{u}}}^k}({{\mathbf{x}}}_0) - \frac{\partial^k {{\mathbf{f}}}}{\partial{{\mathbf{u}}}^k}({{\mathbf{x}}})\right\|_{2}.\end{aligned}$$ Let ${{\mathbf{s}}}'$ denote the unit vector $${{\mathbf{s}}}':= \frac{{{\mathbf{x}}} - {{\mathbf{x}}}_0}{\left\|{{\mathbf{x}}} - {{\mathbf{x}}}_0\right\|_2}\cdotp$$ By Lagrange’s Theorem, there exists ${{\mathbf{x}}}'$ between ${{\mathbf{x}}}_0$ and ${{\mathbf{x}}}$ such that $$\frac{\partial^k {{\mathbf{f}}}}{\partial{{\mathbf{u}}}^k}({{\mathbf{x}}}_0) \, = \, \frac{\partial^k {{\mathbf{f}}}}{\partial{{\mathbf{u}}}^k}({{\mathbf{x}}}) + \left\|{{\mathbf{x}}}- {{\mathbf{x}}}_0\right\|_2\frac{\partial}{\partial {{\mathbf{s}}}'}\left(\frac{\partial^k {{\mathbf{f}}}}{\partial{{\mathbf{u}}}^k}\right)({{\mathbf{x}}}').$$ It then follows from (\[minvgksx\]) and the definition of $M$ in that $$\left|{{\mathbf{v}}}\cdotp\frac{\partial^k {{\mathbf{f}}}}{\partial{{\mathbf{u}}}^k}({{\mathbf{x}}}) \right|\, \ge \, s_0 - M\left\|{{\mathbf{x}}}- {{\mathbf{x}}}_0\right\|_{2} \, .$$ This together with the fact that $r<s_0/(2M)$ — a direct consequence of (\[slv2\]) — implies that $$\label{tech_corollary_one_lb_one}
\left|\frac{\partial^k({{\mathbf{f}}}\cdot{{\mathbf{v}}})}{\partial{{\mathbf{u}}}^k}({{\mathbf{x}}})\right|\ge s_0/2 \qquad \forall \ {{\mathbf{x}}}\in{{{{\mathbf{U}}}}}\, .$$
The upshot is that the hypotheses of [@Bernik-Kleinbock-Margulis-01:MR1829381 Lemma 3.6] are satisfied. A straightforward application of [@Bernik-Kleinbock-Margulis-01:MR1829381 Lemma 3.6] together with implies that $$\sup_{x\in{{{{\mathbf{U}}}}}}\left|{{\mathbf{f}}}({{\mathbf{x}}})\cdot{{\mathbf{v}}}+p\right|\ge\frac{s_0/2}{2k^k(k+1)!}(2r)^k\ge\frac{s_0}{4l^l(l+1)!}(2r)^l=\rho_1\sqrt{d}\geq\rho_1$$ for any $p\in{\mathbb{R}}$. Thus the first inequality appearing in is established.
It remains to prove the second inequality in ; that is, that for any ${{\mathbf{v}}} \in {\mathbb{S}}^{n-1}$, $$\label{tech_corollary_one_aim_two}
\sup_{{{\mathbf{x}}}\in{{{{\mathbf{U}}}}}}\left\|\nabla\left({{\mathbf{f}}}({{\mathbf{x}}}){{\mathbf{\cdot}}}{{\mathbf{v}}}\right)\right\|_{\infty}\ge\rho_1=\frac{s_0}{4l^l(l+1)!\sqrt{d}}(2r)^l.$$ Recall from above that for any ${{\mathbf{v}}} \in {\mathbb{S}}^{n-1}$ we can find a vector ${{\mathbf{u}}} \in {\mathbb{S}}^{d-1}$ such that and hold. Furthermore, observe that $$\label{tech_corollary_one_equality_one}
\frac{\partial({{\mathbf{f}}}\cdot{{\mathbf{v}}})}{\partial{{\mathbf{u}}}}({{\mathbf{x}}})={{\mathbf{u}}}^t\cdot\nabla\left({{\mathbf{f}}}({{\mathbf{x}}})\cdot{{\mathbf{v}}}\right).$$
We proceed by considering two cases, depending on whether or not $k=1$ in .
1. Suppose $k=1$ in . Then it follows from and that $$\left|{{\mathbf{u}}}^t\cdot\nabla\left({{\mathbf{f}}}({{\mathbf{x}}})\cdot{{\mathbf{v}}}\right)\right|\ge \frac{s_0}{2} \qquad \forall \ {{\mathbf{x}}}\in{{{{\mathbf{U}}}}}\, .$$ On applying the Cauchy–Schwarz inequality, we obtain that $$\left\|\nabla\left({{\mathbf{f}}}({{\mathbf{x}}})\cdot{{\mathbf{v}}}\right)\right\|_{2}\ge \frac{s_0}{2} \qquad \forall \ {{\mathbf{x}}}\in{{{{\mathbf{U}}}}}\, .$$ This together with the fact that $\left\|\, . \,\right\|_{2}\le \sqrt{d}\left\|\, . \,\right\|_{\infty}$ implies the second inequality appearing in .
2. Suppose $k\ge 2$ in . Consider the function $g({{\mathbf{x}}}):=\frac{\partial({{\mathbf{f}}}\cdot{{\mathbf{v}}})}{\partial{{\mathbf{u}}}}({{\mathbf{x}}})$ defined on ${{{{\mathbf{U}}}}}$. Then by , we have that $$\left|\frac{\partial^{k-1}g}{\partial{{\mathbf{u}}}^{k-1}}({{\mathbf{x}}})\right|\ge \frac{s_0}{2} \qquad \forall \ {{\mathbf{x}}}\in{{{{\mathbf{U}}}}}\, .$$ Thus, the hypotheses of [@Bernik-Kleinbock-Margulis-01:MR1829381 Lemma 3.6] are satisfied for the function $g({{\mathbf{x}}} )$ and a straightforward application of that lemma together with implies that $$\label{tech_corollary_one_ie_two}
\sup_{{{\mathbf{x}}}\in{{{{\mathbf{U}}}}}}|g({{\mathbf{x}}})|\ge\frac{s_0}{4(l-1)^{l-1}l!}(2r)^{l-1}>\rho_1\sqrt{d}.$$ Now the Cauchy–Schwarz inequality and imply that $$\left\|\nabla\left({{\mathbf{f}}}({{\mathbf{x}}})\cdot{{\mathbf{v}}}\right)\right\|_{2}\ge\left|{{\mathbf{u}}}^t\cdot\nabla\left({{\mathbf{f}}}({{\mathbf{x}}})\cdot{{\mathbf{v}}}\right)\right| = |g({{\mathbf{x}}} )| \qquad \forall \ {{\mathbf{x}}}\in{{{{\mathbf{U}}}}}\, .$$ This together with and the fact that $\left\|\, . \,\right\|_{2}\le \sqrt{d}\left\|\, . \,\right\|_{\infty}$ implies the desired statement; namely that $$\sup_{{{\mathbf{x}}}\in{{{{\mathbf{U}}}}}}\left\|\nabla\left({{\mathbf{f}}}({{\mathbf{x}}})\cdot{{\mathbf{v}}}\right)\right\|_{\infty}\ge \rho_1 \ .$$
The upshot of §\[cond1-3\] is that we have verified conditions (i) & (ii) of Theorem \[theo\_BKM\] as desired modulo Proposition \[proposition\_explicit\_4\_1\].
Proof of Proposition \[proposition\_explicit\_4\_1\] \[xmas2\]
==============================================================
In order to prove Proposition \[proposition\_explicit\_4\_1\], we first establish an explicit version of [@Bernik-Kleinbock-Margulis-01:MR1829381 Lemma 4.3]. Throughout this section, the notation introduced in will be used.
\[explilem4.3\] Let $B\subset {\mathbb{R}}^d$ be a ball of radius 1 and let $B_{\infty}$ denote the hypercube circumscribed around $B$ with edges parallel to the coordinate axes. Assume further that ${{\mathbf{p}}}=(p_1, p_2)~: B \rightarrow {\mathbb{R}}^2$ is a polynomial map of degree at most $l\ge 1$ such that $$\label{diamimage}
\underset{{{\mathbf{x}}}, {{\mathbf{y}}} \in B_{\infty}}{\sup} \left\|{{\mathbf{p}}}({{\mathbf{x}}}) - {{\mathbf{p}}}({{\mathbf{y}}}) \right\|_2 \, \le \, 2$$ and $$\label{distdroite}
\underset{{{\mathbf{x}}}\in B}{\sup}\; \textrm{\emph{dist}} \left(\mathcal{L}, {{\mathbf{p}}}({{\mathbf{x}}}) \right) \, \ge \, \frac{1}{8}$$ for any straight line $\mathcal{L}\subset {\mathbb{R}}^2$. Then, $$\label{a}
\underset{{{\mathbf{x}}}\in B}{\sup} \|\widetilde{\nabla}{{\mathbf{p}}}({{\mathbf{x}}}) \|_2 \, \ge \, \frac{1}{86\,016\,\sqrt{10}}\left(1+\underset{{{\mathbf{x}}}\in B}{\sup} \left\|{{\mathbf{p}}}({{\mathbf{x}}})\right\|_2 \right)$$ and $$\label{b}
\underset{{{\mathbf{x}}}\in B, \, i=1,2}{\sup} \left\|\nabla p_i({{\mathbf{x}}}) \right\|_2 \, \le \, 2l^2\sqrt{d}.$$
Regarding , if we assume that ${\sup}_{{{\mathbf{x}}} \in B} \left\|{{\mathbf{p}}}({{\mathbf{x}}})\right\|_2 > 6$, the argument used to prove [@Bernik-Kleinbock-Margulis-01:MR1829381 Lemma 4.3] gives the stronger inequality in which the constant factor $1/(86\,016\sqrt{10})$ is replaced by $1/64$. Thus, without loss of generality, assume that $$\label{dec1}
{\sup}_{{{\mathbf{x}}} \in B} \left\|{{\mathbf{p}}}({{\mathbf{x}}})\right\|_2 \le 6 \, .$$ It is easily inferred from (\[distdroite\]) that there exists ${{\mathbf{x}}}_1\in \overline{B}$, the closure of $B$, such that $\left\|{{\mathbf{p}}}\left({{\mathbf{x}}}_1\right)\right\|_2 \ge 1/8$. Working in polar coordinates and choosing the straight line $\mathcal{L}_1$ joining the origin to ${{\mathbf{p}}}\left({{\mathbf{x}}}_1\right)$ to be the polar axis, let $\left(\rho\!\left({{\mathbf{x}}}\right), \theta\!\left({{\mathbf{x}}}\right)\right)$ denote the polar coordinates of a vector ${{\mathbf{x}}}\in{\mathbb{R}}^2$. Thus, $\rho({{\mathbf{p}}}({{\mathbf{x}}}_1))\ge 1/8$. Furthermore, from (\[distdroite\]), there exists ${{\mathbf{x}}}_2\in\overline{B}$ such that $\mbox{dist}\left(\mathcal{L}_1, {{\mathbf{p}}}({{\mathbf{x}}}_2)\right)\ge 1/8$ and therefore, together with (\[dec1\]), we have that $$\begin{aligned}
\label{dec2}
\left|\theta\left({{\mathbf{p}}}\left({{\mathbf{x}}}_2\right)\right) \right| \, & \ge \, \left|\sin \theta\left( {{\mathbf{p}}}\left({{\mathbf{x}}}_2\right)\right) \right| \, = \, \frac{\mbox{dist}\left(\mathcal{L}_1, {{\mathbf{p}}}({{\mathbf{x}}}_2)\right)}{\rho\left({{\mathbf{p}}}({{\mathbf{x}}}_2)\right)} \nonumber \\
& \ge \, \frac{1/8}{6} \, =\, \frac{1}{48}\cdotp\end{aligned}$$
Now let $\Delta$ be the straight line joining ${{\mathbf{p}}}({{\mathbf{x}}}_1)$ and ${{\mathbf{p}}}({{\mathbf{x}}}_2)$. Furthermore, let $\mathcal{L}_2$ denote the $x$-coordinate axis, $(x_1, y_1)$ the Cartesian coordinates of ${{\mathbf{p}}}({{\mathbf{x}}}_2)$ and $(\rho({{\mathbf{p}}}({{\mathbf{x}}}_1)),0)$ the Cartesian coordinates of ${{\mathbf{p}}}({{\mathbf{x}}}_1)$. Then the Cartesian equation of $\Delta$ is $$\Delta~: \, y_1 x -(x_1-\rho({{\mathbf{p}}}({{\mathbf{x}}}_1)))y - \rho({{\mathbf{p}}}({{\mathbf{x}}}_1))y_1 =0.$$ It follows from the choice of the points ${{\mathbf{x}}}_1$ and ${{\mathbf{x}}}_2$ together with (\[diamimage\]), (\[distdroite\]) and (\[dec1\]) that $$\frac{1}{8}\le \left|y_1\right| \le 6, \ \; \rho({{\mathbf{p}}}({{\mathbf{x}}}_1))\ge \frac{1}{8} \ \; \mbox{ and } \ \; \left|x_1 - \rho({{\mathbf{p}}}({{\mathbf{x}}}_1))\right|\le 2.$$ Therefore, the distance from the origin $O$ to $\Delta$ satisfies the inequality $$\label{dec3}
\mbox{dist}\left(\Delta, O \right) \, = \, \frac{\left|\rho({{\mathbf{p}}}({{\mathbf{x}}}_1))\, y_1\right|}{\sqrt{y_1^2+\left(x_1 - \rho({{\mathbf{p}}}({{\mathbf{x}}}_1)) \right)^2}}\, \ge \, \frac{(1/8)^2}{\sqrt{6^2+2^2}}\, = \, \frac{1}{128\sqrt{10}}\cdotp
Let $J$ denote the straight line segment $[{{\mathbf{x}}}_1, {{\mathbf{x}}}_2]$ and let ${{\mathbf{u}}} $ be the unit vector $${{\mathbf{u}}}:= \frac{{{\mathbf{x}}}_2 - {{\mathbf{x}}}_1}{\left\|{{\mathbf{x}}}_2 - {{\mathbf{x}}}_1\right\|_2} \cdotp$$ Restricting ${{\mathbf{p}}}$ to $J$, Lagrange’s Theorem guarantees the existence of ${{\mathbf{y}}}\in \left({{\mathbf{x}}}_1, {{\mathbf{x}}}_2\right)$ such that $$\theta\left({{\mathbf{p}}}({{\mathbf{x}}}_2)\right) = \frac{\partial \theta}{\partial {{\mathbf{u}}}}\left({{\mathbf{y}}}\right)\left| J \right|.$$ It then follows via (\[latest\]), (\[dec2\]) and (\[dec3\]) that
$$\begin{aligned}
\|\tilde{\nabla}{{\mathbf{p}}}({{\mathbf{y}}})\|_2 \, &\ge \, |{{\mathbf{u}}}\cdot\tilde{\nabla}{{\mathbf{p}}}({{\mathbf{y}}}) |\, = \, \rho^2\left({{\mathbf{y}}}\right)\left|\frac{\partial \theta}{\partial {{\mathbf{u}}}}\left({{\mathbf{y}}}\right)\right|\\[3ex]
&\ge \, \mbox{dist}\left(\Delta, O\right) \frac{\left| \theta({{\mathbf{p}}}({{\mathbf{x}}}_2))\right|}{\left|J\right|}\, =\, \mbox{dist}\left(\Delta, O\right) \frac{\left| \theta({{\mathbf{p}}}({{\mathbf{x}}}_2))\right|}{\left\|{{\mathbf{x}}}_2 -{{\mathbf{x}}}_1\right\|_2}\\[3ex]
&\ge \, \frac{1}{128\sqrt{10}\times 48 \times 2}\, =\, \frac{1}{12\,288\,\sqrt{10}}\cdotp\end{aligned}$$
Thus, $$\underset{{{\mathbf{x}}}\in B}{\sup} \|\tilde{\nabla}{{\mathbf{p}}}({{\mathbf{x}}}) \|_2 \, \ge \, \frac{1}{12\,288\,\sqrt{10}} = \frac{7}{86\,016\, \sqrt{10}} \, \ge \, \frac{1}{86\,016\, \sqrt{10}}\left(1+ \underset{{{\mathbf{x}}}\in B}{\sup}\left\|{{\mathbf{p}}}({{\mathbf{x}}})\right\|_2\right).$$
This completes the proof of (\[a\]). We now turn our attention to .
Let $i\in\left\{1,2\right\}$. It may be assumed without loss of generality that $p_i\left(0, \dots, 0\right)=0$ and that the ball $B$ is centered at the origin. Then, for given $x_2, \dots, x_d$ in ${\mathbb{R}}$, consider the polynomial in one variable $p\left(x\right):= p_i\left(x, x_2, \dots, x_d \right)$, which is of degree at most $l$. It follows from (\[diamimage\]) that $$\underset{\left|x\right|\le 1}{\sup} \left|p\left(x\right) \right|\,\le\, 2.$$ Hence by Markov’s inequality for polynomials, we have that $$\underset{\left|x\right|\le 1}{\sup} \left|\frac{\textrm{d} p}{\textrm{d} x}\left(x\right) \right| \, = \, \underset{\left|x_1\right|\le 1}{\sup} \left|\frac{\partial p_i}{\partial x_1}\left(x_1, x_2, \dots, x_d\right)\right|\,\le\, 2l^2 \, .$$ This together with the fact that $\left\|\, . \,\right\|_{2}\le \sqrt{d}\left\|\, . \,\right\|_{\infty}$ implies that $$\max \left\{ \underset{{{\mathbf{x}}}\in B}{\sup} \left\|\nabla p_1({{\mathbf{x}}}) \right\|_2 , \underset{{{\mathbf{x}}}\in B}{\sup} \left\|\nabla p_2({{\mathbf{x}}}) \right\|_2 \right\} \, \le \, 2l^2\sqrt{d}$$ and therefore completes the proof of the lemma.
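For the reader’s convenience, the form of Markov’s inequality invoked above is the following standard statement: if $q$ is a polynomial of degree at most $l$ in one real variable, then $$\underset{\left|x\right|\le 1}{\sup} \left|\frac{\textrm{d} q}{\textrm{d} x}\left(x\right) \right| \,\le\, l^2 \underset{\left|x\right|\le 1}{\sup} \left|q\left(x\right) \right|.$$ Applied to $p$, for which $\sup_{|x|\le1}|p(x)|\le 2$, this yields the bound $2l^2$ used in the display above.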
We now have all the ingredients in place to prove Proposition \[proposition\_explicit\_4\_1\].
The proposition is an explicit version of [@Bernik-Kleinbock-Margulis-01:MR1829381 Proposition 4.1]. Within our setup in which $\mathcal{G}$ is given by , the starting point for the proof of part (a) of [@Bernik-Kleinbock-Margulis-01:MR1829381 Proposition 4.1] corresponds to the existence of positive constants $\delta$, $c$, and $\alpha$ with
$$\label{tree}
0<\delta<1/8 \qquad {\rm and } \qquad 2C_{d,l}N_d\delta^{1/(d(2l-1)(2l-2))}\le 1$$
such that for every $ {{\mathbf{g}}} \in\mathcal{G} $ one has
$$\label{eq_4_5a}
\forall \, {{\mathbf{v}}}\in{\mathbb{S}}^{1} \quad \exists \, {{\mathbf{u}}}\in{\mathbb{S}}^{d-1}\quad \exists \, k\leq l \; \; :\;\; \inf_{{{\mathbf{x}}}\in{\tilde{{{\mathbf{U}}}}}}\left|{{\mathbf{v}}}\cdot\frac{\partial^k {{\mathbf{g}}}}{\partial {{\mathbf{u}}}^k}({{\mathbf{x}}})\right|\geq c$$
and $$\label{eq_4_5b}
\sup_{{{\mathbf{x}}}, {{\mathbf{y}}}\in{\tilde{{{\mathbf{U}}}}}}\|\partial_{\beta}{{\mathbf{g}}}({{\mathbf{x}}})-\partial_{\beta}{{\mathbf{g}}}({{\mathbf{y}}})\|\leq\frac{\delta c \alpha}{8 \xi l^l (l+1)!}\,=\, \frac{\delta c \alpha}{16 l^{l+2} (l+1)!\sqrt{d}}$$ for all multi–indices $\beta$ with $|\beta|=l$. Here, $\xi = 2l^2\sqrt{d}$ is the quantity on the right–hand side of and the real number $\alpha$ is required to be less than the constant appearing on the right–hand side of , that is, $$\label{eq_4_5c}
\alpha\le\frac{1}{86\,016\sqrt{10}}\cdotp$$
The statements and correspond exactly to [@Bernik-Kleinbock-Margulis-01:MR1829381 Eq(4.5a) & Eq(4.5b)] with $V$ replaced by ${\tilde{{{\mathbf{U}}}}}$.
The proof of part (a) of Proposition \[proposition\_explicit\_4\_1\] follows from the existence of the constants $\delta$, $c$, and $\alpha$ as established in the proof of part (a) of [@Bernik-Kleinbock-Margulis-01:MR1829381 Proposition 4.1]. It remains for us to show that, given the definition of $r$ in , it is indeed possible to choose such constants in such a way that the relations – hold.
With this in mind, set $$\delta:=\eta,$$ where $\eta$ is defined by . It follows from the definition of $\eta$ and the well known bound $N_d\le 5^d$ for the Besicovitch constant (cf. Remark \[besicocst\] p.) that is satisfied with $\delta=\eta$. We proceed with verifying and .
Regarding , let ${{\mathbf{g}}}=\left({{\mathbf{u}}}_1 \cdot {{\mathbf{f}}}, {{\mathbf{u}}}_2 \cdot {{\mathbf{f}}}+u_0\right)\in\mathcal{G}$. Also, let ${{\mathbf{u}}}:=\left({{\mathbf{u}}}_1, {{\mathbf{u}}}_2\right)$ with ${{\mathbf{u}}}_1,{{\mathbf{u}}}_2\in{\mathbb{S}}^{n-1}$ and let ${{\mathbf{v}}}:=\left(v_1, v_2\right)\in{\mathbb{S}}^{1}$. Furthermore, let ${{\mathbf{w}}}$ denote the vector ${{\mathbf{w}}}:=v_1{{\mathbf{u}}}_1+v_2{{\mathbf{u}}}_2$. Since by the definition of $\mathcal{G}$, the vectors ${{\mathbf{u}}}_1$ and ${{\mathbf{u}}}_2$ are orthogonal, it follows that ${{\mathbf{w}}}\in{\mathbb{S}}^{n-1}$. Now observe that for any multi–index ${{\mathbf{\beta}}}$ such that $\left|{{\mathbf{\beta}}}\right|\le l$, $${{\mathbf{v}}}\cdot\partial_{{{\mathbf{\beta}}}}{{\mathbf{g}}} = {{\mathbf{w}}}\cdot\partial_{{{\mathbf{\beta}}}}{{\mathbf{f}}}.$$ By the definition of $s_0=s(l; {{\mathbf{f}}}, {\tilde{{{\mathbf{U}}}}^+})$, there exists ${{\mathbf{s}}}\in{\mathbb{S}}^{d-1}$ and $1 \le k\le l$ such that $$\left|{{\mathbf{v}}}\cdotp\frac{\partial^k {{\mathbf{g}}}}{\partial{{\mathbf{s}}}^k}({{\mathbf{x}}}_0) \right|\, = \, \left|{{\mathbf{w}}}\cdotp\frac{\partial^k {{\mathbf{f}}}}{\partial{{\mathbf{s}}}^k}({{\mathbf{x}}}_0) \right|\, \ge \, s_0 \, .$$ As per usual, ${{\mathbf{x}}}_0$ denotes here the centre of ${{{{\mathbf{U}}}}}$. It follows that for any ${{\mathbf{x}}}\in{\tilde{{{\mathbf{U}}}}^+}$, we have that $$\begin{aligned}
\left|{{\mathbf{v}}}\cdotp\frac{\partial^k {{\mathbf{g}}}}{\partial{{\mathbf{s}}}^k}({{\mathbf{x}}}) \right|
&\ge \, \left|{{\mathbf{w}}}\cdotp\frac{\partial^k {{\mathbf{f}}}}{\partial{{\mathbf{s}}}^k}({{\mathbf{x}}}_0) \right| - \left|{{\mathbf{w}}}\cdotp\left(\frac{\partial^k {{\mathbf{f}}}}{\partial{{\mathbf{s}}}^k}({{\mathbf{x}}}_0) - \frac{\partial^k {{\mathbf{f}}}}{\partial{{\mathbf{s}}}^k}({{\mathbf{x}}})\right)\right| \nonumber\\[2ex]
&\ge \, s_0 - \left\| \frac{\partial^k {{\mathbf{f}}}}{\partial{{\mathbf{s}}}^k}({{\mathbf{x}}}_0) - \frac{\partial^k {{\mathbf{f}}}}{\partial{{\mathbf{s}}}^k}({{\mathbf{x}}})\right\|_{2}.\end{aligned}$$ The same arguments as those used to prove can be employed to show that $$\left|{{\mathbf{v}}}\cdot\frac{\partial^k{{\mathbf{g}}}}{\partial{{\mathbf{s}}}^k}\left({{\mathbf{x}}}\right) \right|\, \ge \, \frac{s_0}{2} \qquad \forall \ {{\mathbf{x}}}\in{\tilde{{{\mathbf{U}}}}^+}\, .$$ This proves with $$c:= \frac{s_0}{2}\cdotp$$
We now turn our attention to . With $ {{\mathbf{g}}} $ and ${{\mathbf{u}}}$ as above, first note that for any ${{\mathbf{x}}}\in{\tilde{{{\mathbf{U}}}}^+}$ and for any multi–index ${{\mathbf{\beta}}}$ such that $|\beta|=l$, we have that $$\begin{aligned}
\label{ez_first}
\!\!\!\! \|\partial_{{{\mathbf{\beta}}}}{{\mathbf{g}}}\left({{\mathbf{x}}}\right) & - \partial_{{{\mathbf{\beta}}}}{{\mathbf{g}}}\left({{\mathbf{x}}}_0\right)\|_2 = \left\|\left({{\mathbf{u}}}_1\cdot\left(\partial_{{{\mathbf{\beta}}}}{{\mathbf{f}}}({{\mathbf{x}}})-\partial_{{{\mathbf{\beta}}}}{{\mathbf{f}}}({{\mathbf{x}}}_0)\right),\,{{\mathbf{u}}}_2\cdot\left(\partial_{{{\mathbf{\beta}}}}{{\mathbf{f}}}({{\mathbf{x}}})-\partial_{{{\mathbf{\beta}}}}{{\mathbf{f}}}({{\mathbf{x}}}_0)\right)\right)\right\|_2 \, .\end{aligned}$$ Next, note that from the Cauchy–Schwarz inequality, we have that, for $i=1,2$, $$\label{ez_Cauchy--Schwartz}
\left({{\mathbf{u}}}_i\cdot\left(\partial_{{{\mathbf{\beta}}}}{{\mathbf{f}}}({{\mathbf{x}}})-\partial_{{{\mathbf{\beta}}}}{{\mathbf{f}}}({{\mathbf{x}}}_0)\right)\right)^2 \le \|\partial_{{{\mathbf{\beta}}}}{{\mathbf{f}}}({{\mathbf{x}}})-\partial_{{{\mathbf{\beta}}}}{{\mathbf{f}}}({{\mathbf{x}}}_0)\|_2^2.$$ On combining and , we find that $$\left\|\partial_{{{\mathbf{\beta}}}}{{\mathbf{g}}}\left({{\mathbf{x}}}\right)-\partial_{{{\mathbf{\beta}}}}{{\mathbf{g}}}\left({{\mathbf{x}}}_0\right)\right\|_2\leq \sqrt{2}\|\partial_{{{\mathbf{\beta}}}}{{\mathbf{f}}}({{\mathbf{x}}})-\partial_{{{\mathbf{\beta}}}}{{\mathbf{f}}}({{\mathbf{x}}}_0)\|_2.$$ Now, since ${{\mathbf{f}}}$ satisfies Assumption 1, in view of, we obtain that $$\|\partial_{{{\mathbf{\beta}}}}{{\mathbf{f}}}({{\mathbf{x}}})-\partial_{{{\mathbf{\beta}}}}{{\mathbf{f}}}({{\mathbf{x}}}_0)\|_2\leq M\sqrt{2} \left\|{{\mathbf{x}}}-{{\mathbf{x}}}_0\right\|_2 \, .$$ Hence, for any ${{\mathbf{x}}}, {{\mathbf{y}}}\in{\tilde{{{\mathbf{U}}}}^+}$ we have that $$\label{ez_g}
\left\|\partial_{{{\mathbf{\beta}}}}{{\mathbf{g}}}\left({{\mathbf{x}}}\right)-\partial_{{{\mathbf{\beta}}}}{{\mathbf{g}}}\left({{\mathbf{y}}}\right)\right\|_2 \,\le \, 2M \left(\left\|{{\mathbf{x}}}-{{\mathbf{x}}}_0\right\|_2+\left\|{{\mathbf{y}}}-{{\mathbf{x}}}_0\right\|_2\right).$$ In view of , we also have that $$\begin{aligned}
\label{ez_x}
\left\|{{\mathbf{x}}}-{{\mathbf{x}}}_0\right\|_2+\left\|{{\mathbf{y}}}-{{\mathbf{x}}}_0\right\|_2&\leq & 2\cdot 3^{n+d+2}\cdot r \nonumber \\[2ex] &\leq & 2\cdot 3^{n+d+2}\cdot\frac{\eta s_0}{4\cdot10^7 3^{n+d+2}\, d M l^{l+2}(l+1)!} \nonumber \\[2ex]
&= & \frac{\eta s_0}{2\cdot10^7\, d M l^{l+2}(l+1)!}\cdotp\end{aligned}$$ The upshot is that $$\label{ez_f}
\underset{{{\mathbf{x}}},{{\mathbf{y}}}\in{\tilde{{{\mathbf{U}}}}^+}}{\sup} \left\|\partial_{{{\mathbf{\beta}}}}{{\mathbf{g}}}\left({{\mathbf{x}}}\right)-\partial_{{{\mathbf{\beta}}}}{{\mathbf{g}}}\left({{\mathbf{y}}}\right)\right\|_2\, \le \frac{\eta s_0}{10^7\, d l^{l+2}(l+1)!}$$ for any multi–index ${{\mathbf{\beta}}}$ with $\left|{{\mathbf{\beta}}}\right|=l$. This proves with $$\alpha:=\frac{32}{10^7\sqrt{d}}\, ,$$ which clearly satisfies .
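As a quick arithmetic check of the last claim: $$\alpha\,=\,\frac{32}{10^7\sqrt{d}}\,\le\,\frac{32}{10^7}\,=\,3.2\times 10^{-6}\,<\,\frac{1}{86\,016\sqrt{10}}\,\approx\,3.68\times 10^{-6},$$ so the bound in (\[eq\_4\_5c\]) holds for every $d\ge 1$.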
To prove part (b) of Proposition \[proposition\_explicit\_4\_1\], we closely follow the proof of part (b) of [@Bernik-Kleinbock-Margulis-01:MR1829381 Proposition 4.1]. The new ingredient in our proof is the calculation of explicit constants at appropriate places. With this in mind, let ${{\mathbf{g}}} = (g_1,g_2) \in{\mathcal{G}}$ and take $B$ appearing at the start of the proof of [@Bernik-Kleinbock-Margulis-01:MR1829381 Proposition 4.1(b)] to be ${{{{\mathbf{U}}}}}$, so that $\hat B=\frac{1}{2} {{\mathbf{U}}} $. We claim that there exists a point ${{\mathbf{y}}}\in \hat B$ such that $$\label{ez_gp}
\left\|{{\mathbf{g}}}({{\mathbf{y}}})\right\|_2 \, \ge \, \tau:=\frac{r^l s_0}{4l^l(l+1)!}\cdotp$$ To see that this is so, take ${{\mathbf{v}}}:=(1,0)\in{\mathbb{S}}^1$. In view of , there exists a vector ${{\mathbf{u}}}\in{\mathbb{S}}^{d-1}$ and $ 1 \leq k\leq l$ such that $$\left|{{\mathbf{v}}}\cdot\frac{\partial^k{{\mathbf{g}}}}{\partial{{\mathbf{u}}}^k}\left({{\mathbf{x}}}\right)\right| \; =\; \left|\frac{\partial^k g_1}{\partial{{\mathbf{u}}}^k}\left({{\mathbf{x}}}\right) \right| \; \ge \; c:=\frac{s_0}{2} \quad \forall \ {{\mathbf{x}}}\in {{{{\mathbf{U}}}}}\, .$$ Thus, on applying [@Bernik-Kleinbock-Margulis-01:MR1829381 Lemma 3.6] to the function $g_1$ and the ball $\hat B$, we obtain that $$\underset{{{\mathbf{x}}}, {{\mathbf{y}}}\in\hat B}{\sup} \left|g_1({{\mathbf{x}}}) - g_1({{\mathbf{y}}}) \right| \, \ge \, \frac{r^l c}{l^l(l+1)!}=\frac{r^l s_0}{2l^l(l+1)!}\cdotp$$ This implies the existence of a point ${{\mathbf{y}}}\in \hat B$ such that $$\label{ez_a}
\left\|{{\mathbf{g}}}({{\mathbf{y}}})\right\|_2 \, \ge \, \left|g_1\left({{\mathbf{y}}}\right)\right|\,\ge \, \tau$$ as claimed. Next, observe that for any ${{\mathbf{w}}} \in{\mathbb{S}}^{d-1}$ and any ${{\mathbf{x}}}\in{{{{\mathbf{U}}}}}$, $$\frac{\partial {{\mathbf{g}}}}{\partial {{\mathbf{w}}}}({{\mathbf{x}}}) \, = \,
\begin{pmatrix}
{{\mathbf{u}}}_1\cdot\frac{\partial {{\mathbf{f}}}}{\partial {{\mathbf{w}}}}({{\mathbf{x}}})\\[2ex]
{{\mathbf{u}}}_2\cdot\frac{\partial {{\mathbf{f}}}}{\partial {{\mathbf{w}}}}({{\mathbf{x}}})
\end{pmatrix}.$$ Therefore, on using the Cauchy–Schwarz inequality, we obtain via that
$$\label{ez81}
\left\| \frac{\partial {{\mathbf{g}}}}{\partial {{\mathbf{w}}}}({{\mathbf{x}}})\right\|_2 \, \le\, \sqrt{2}\left\|\frac{\partial {{\mathbf{f}}}}{\partial {{\mathbf{w}}}}({{\mathbf{x}}})\right\|_2 \, \le \, \sqrt{2}M \, .$$
Now, observe that, in view of , of the definition of $\tau$ and of the fact that $s_0 \le M$, we have that $$\tau \le r \, M.$$ Consider the ball $B'\subset B={{{{\mathbf{U}}}}}$ with radius $\tau/(2M) \leq r/2$ centred at ${{\mathbf{y}}}$, where ${{\mathbf{y}}}$ satisfies . Take a vector ${{\mathbf{v}}}\in{\mathbb{S}}^1$ orthogonal to ${{\mathbf{g}}}({{\mathbf{y}}})$. In view of , there exists a vector ${{\mathbf{u}}}\in{\mathbb{S}}^{d-1}$ and $ 1 \leq k\leq l$ such that $$\left|{{\mathbf{v}}}\cdot\frac{\partial^k{{\mathbf{g}}}}{\partial{{\mathbf{u}}}^k}\left({{\mathbf{x}}}\right)\right| \; \ge \; c=\frac{s_0}{2} \quad \forall \ {{\mathbf{x}}}\in{{{{\mathbf{U}}}}}\; .$$ Thus, on applying [@Bernik-Kleinbock-Margulis-01:MR1829381 Lemma 3.6] to the function $ {{\mathbf{x}}} \to {{\mathbf{v}}} \cdot {{\mathbf{g}}}({{\mathbf{x}}})$ and the ball $B'$, we obtain that $$\label{ez_w}
\sup_{{{\mathbf{x}}}\in B'}|{{\mathbf{v}}}\cdot{{\mathbf{g}}}({{\mathbf{x}}})|\geq\frac{s_0}{4l^l(l+1)!}\left(\frac{\tau}{M}\right)^l.$$
On the other hand, the upper bound implies that $$\label{ez_delta}
\sup_{{{\mathbf{x}}}\in B'}\|{{\mathbf{g}}}({{\mathbf{x}}})-{{\mathbf{g}}}({{\mathbf{y}}})\|_2\leq \, \frac{\tau}{2M} \, \sqrt{2}M \,= \, \frac{\tau}{\sqrt{2}}\cdotp$$ The upshot of and is that we are able to apply [@Bernik-Kleinbock-Margulis-01:MR1829381 Lemma 4.2] to the map ${{\mathbf{g}}}:B'\rightarrow{\mathbb{R}}^2$ to yield and thereby complete the proof of part (b) of Proposition \[proposition\_explicit\_4\_1\]. For ease of comparison, we point out that the quantities $a$, $\delta$ and $w$ appearing in the statement of [@Bernik-Kleinbock-Margulis-01:MR1829381 Lemma 4.2] correspond to $\tau$, $\tau/\sqrt{2}$ and the right–hand side of respectively.
[**Acknowledgements.**]{} The main catalysts for this paper were the ‘York-BT Mathematics of MIMO’ meeting at BT, Adastral Park on 14 June 2013 and the subsequent international ‘Workshop on interactions between number theory and wireless communication’ at the University of York between 9 and 23 May 2014. The aim of these meetings was to explore potential applications to wireless technology of current research on Diophantine approximation. We would like to take this opportunity to thank the ‘electronic’ participants for putting up with our naive questions and often nonsensical ramblings. In particular, we would like to thank Alistar Burr (Dept. of Electronics, York) and Keith Briggs (BT Labs) for their enthusiastic support in making sure that the meetings actually happened – they were our main link to the electronics world!
All five authors of this paper are number theorists and we have all benefited hugely from the presence of Maurice Dodson in our lives at some stage in our careers in some form or another. Not only does he have a wonderful mind and personality but his intellectual curiosity knows no boundaries. We would like to think that some of his curiosity has rubbed off onto us and for this we are forever in his debt. There is so much to gain in engaging in dialogue with researchers in other disciplines and attempting to overcome ‘language’ barriers. In short, thank you Maurice for being a genuine intellectual.
SV would like to thank his teenage daughters Iona and Ayesha for still being a joyous addition in his life but would strongly urge them to improve their taste in music. Also a massive thanks to Bridget Bennett for sticking around even at fifty – congratulations! EZ is enormously grateful to Aljona who not only recently gave birth to their daughter Alyssa but looked after her during his obsession with this project. Infinite thanks to Alyssa herself, just for her simple existence and for all the joy and happiness she has already brought to his life.
[99]{}
K. Ball. Cube slicing in ${\mathbb{R}}^n$. , 97: 465-473, 1986.
V. Beresnevich. A Groshev type theorem for convergence on manifolds. , 94 (1–2):99–130, 2002.
V. Beresnevich, V. Bernik, M. Dodson, S. Velani. Classical metric Diophantine approximation revisited. *Analytic number theory*, 38–61, Cambridge Univ. Press, Cambridge, 2009.
V. Beresnevich, D. Dickinson, S. Velani. Measure theoretic laws for limsup sets. , 179(846):x+91, 2006.
V. Beresnevich, D. Dickinson, S. Velani. Diophantine approximation on planar curves and the distribution of rational points. , 166(2):367–426, 2007.
V. Beresnevich, S. Velani. Classical metric Diophantine approximation revisited: the Khintchine—Groshev Theorem. *Int. Math. Res. Not. IMRN* 2010, no. 1, 69–86.
V. Beresnevich, E. Zorin, Explicit bounds for rational points near planar curves and metric Diophantine approximation. *Adv. Math.* 2010, 3064–3087.
V. I. Bernik, D. Kleinbock, G. A. Margulis. Khintchine—type theorems on manifolds: the convergence case for standard and multiplicative versions, : 453–486, 2001.
Z. Füredi, P.A. Loeb. On the best constant for the Besicovitch covering theorem. 121 (4): 1063–1073, 1994.
P.X. Gallagher : Metric Simultaneous Diophantine Approximation II. [*Mathematika*]{}, 12 (1965) 123–127.
A. Ghasemi, A. S. Motahari, A. K. Khandani. Interference alignment for the K user MIMO interference channel. , 2010.
G. Harman. . Volume 18 of [*LMS Monographs New Series*]{}, 1998.
S. A. Jafar. Interference Alignment – A New Look at Signal Dimensions in a Communication Network. , Vol. 7, No. 1, 2010.
D. Y. Kleinbock, G. A. Margulis. [Flows on homogeneous spaces and [D]{}iophantine approximation on manifolds]{}, [*Ann. of Math.*]{} (2), 148 (1998), 339–360.
A. S. Motahari, S. Oveis-Gharan, M.A. Maddah-Ali, A. K. Khandani. Real interference alignment: exploiting the potential of single antenna systems, [*IEEE Trans. Inform. Theory*]{} 60 (2014), no. 8, 4799–4810.
A. S. Motahari, S. Oveis–Gharan, M.A. Maddah–Ali, A. K. Khandani. , 2010.
S. H. Mahboubi, A. S. Motahari, A. K. Khandani. Layered Interference Alignment: Achieving the total DOF of MIMO X–channels. , 2010
U. Niesen, P. Whiting. . Vol. 58, no. 8, 2012.
W. M. Schmidt. A metrical theorem in diophantine approximation, [*Canad. J. Math.*]{} 12 (1960), 619–631.
V. Sprindžuk. . John Wiley & Sons, New York–Toronto–London, 1979. (English transl.).
R. Vaughan, S. Velani. Diophantine approximation on planar curves: the convergence theory. , 166(1):103–124, 2006.
Y. Wu, S. Shamai, S. Verdú. Information dimension and the degrees of freedom of the interference channel. [*IEEE Trans. Inform. Theory*]{} 61 (2015), no. 1, 256-279.
J. Xie, S. Ulukus. Real Interference Alignment for the K–User Gaussian Interference Compound Wiretap Channel. 2010.
M. Zamanighomi, Z. Wang. Multiple-antenna interference channels with real interference alignment and receive antenna joint processing based on simultaneous Diophantine approximation. [*IEEE Trans. Inform. Theory*]{} 60 (2014), no. 8, 4757-4769.
[^1]: FA research is supported by EPSRC Programme Grant: EP/J018260/1. VB and SV research is supported in part by EPSRC Programme Grant: EP/J018260/1.
[^2]: ‘For almost all’ means for all except from a set of Lebesgue measure zero.
[^3]: Throughout, results and page numbers within [@Bernik-Kleinbock-Margulis-01:MR1829381] are with reference to the arXiv version: math/0210298v1
[^4]: There are two typos in the proof of L2.2 that one should be aware of when verifying the values of the constants given here. On page 6 line -2, the inclusion regarding $U(x)$ is the wrong way round, it should read $U(x)\supset B(x,\frac{\rho}{4\sqrt{d}})$. Next, on page 7 line 11, in the rightmost term of the displayed set of inequalities the quantity $\delta$ is missing, it should read $C_d'''\delta |U(x)|_d$. These typos do not affect the validity of the proof given in [@Bernik-Kleinbock-Margulis-01:MR1829381].
|
---
abstract: 'The effective quantum pseudospin-$1/2$ model for interacting rare-earth magnetic moments, which are locally described with atomic doublets, is studied theoretically for magnetic pyrochlore oxides. It is derived microscopically for localized Pr$^{3+}$ $4f$ moments in Pr$_2TM_2$O$_7$ ($TM=$Zr, Sn, Hf, and Ir) by starting from the atomic non-Kramers magnetic doublets and performing the strong-coupling perturbation expansion of the virtual electron transfer between the Pr $4f$ and O $2p$ electrons. The most generic form of the nearest-neighbor anisotropic superexchange pseudospin-$1/2$ Hamiltonian is also constructed from the symmetry properties, which is applicable to Kramers ions Nd$^{3+}$, Sm$^{3+}$, and Yb$^{3+}$ potentially showing large quantum effects. The effective model is then studied by means of a classical mean-field theory and the exact diagonalization on a single tetrahedron and on a 16-site cluster. These calculations reveal appreciable quantum fluctuations leading to quantum phase transitions to a quadrupolar state as a melting of spin ice for the Pr$^{3+}$ case. The model also shows a formation of cooperative quadrupole moment and pseudospin chirality on tetrahedrons. A sign of a singlet quantum spin ice is also found in a finite region in the space of coupling constants. The relevance to the experiments is discussed.'
author:
- Shigeki Onoda
- Yoichi Tanaka
title: |
Quantum fluctuations in the effective pseudospin-$1/2$ model\
for magnetic pyrochlore oxides
---
Introduction {#sec:intro}
============
Quantum fluctuations and geometrical frustration are a couple of key ingredients in realizing nontrivial spin-disordered ground states without a magnetic dipole long-range order (LRO) in three spatial dimensions [@anderson:56; @anderson; @wen:89; @palee:09]. The pyrochlore lattice structure is a typical example where the geometrical frustration plays a crucial role in preventing the LRO [@anderson:56; @reimers:91; @moessner:98]. Of our particular interest in this paper is a so-called dipolar spin ice [@harris:97; @ramirez:99; @bramwell:01; @gardner:10], such as Dy$_2$Ti$_2$O$_7$ and Ho$_2$Ti$_2$O$_7$, and related systems. The dipolar spin ice provides a classical magnetic analogue of a cubic water ice [@pauling] and is characterized by the emergent U(1) gauge field mediating the Coulomb interaction between monopole charges [@hermele:04; @castelnovo:08] as well as the dipolar spin correlation showing a pinch-point singularity [@isakov:04; @henley:05]. Introducing quantum effects to the classical spin ice may produce further nontrivial states of matter. Evidence of quantum effects has recently been observed with inelastic neutron-scattering experiments on the spin-ice related compounds, Tb$_2$Ti$_2$O$_7$ [@gardner:99; @gardner:01; @mirebeau:07], Tb$_2$Sn$_2$O$_7$ [@mirebeau:07], and Pr$_2$Sn$_2$O$_7$ [@zhou:08]. Exploiting a weak coupling of the rare-earth magnetic moments to conduction electrons, a chiral spin state [@wen:89] has been detected through the anomalous Hall effect [@ahe] at zero magnetic field without magnetic dipole LRO in another related compound Pr$_2$Ir$_2$O$_7$ [@machida:09]. Vital roles of the planar components have also been experimentally observed in Yb$_2$Ti$_2$O$_7$ and Er$_2$Ti$_2$O$_7$ [@cao:09]. Obviously, quantum fluctuations enrich the otherwise classical properties of the spin ice. They may drive it to other states of matter, including quadrupolar states and chiral spin states [@onoda:09]. The aims of this paper are to provide a comprehensive derivation of a realistic effective quantum model for these spin-ice related materials and to understand its basic properties including nontrivial quantum effects.
Classical dipolar spin ice
--------------------------
Let us briefly review the classical (spin) ice. The low-energy properties of water and spin ices are described by Ising degrees of freedom that represent whether proton displacements (electric dipoles) and magnetic dipoles, respectively, point inwards (“in”) to or outwards (“out”) from the center of the tetrahedron. The interaction among the Ising variables favors nearest-neighbor pairs of “in” and “out” and thus suffers from geometrical frustration. This produces a so-called ice rule [@bernal:33; @pauling] stabilizing “2-in, 2-out” configurations on each tetrahedron. Macroscopic degeneracy of this ice-rule manifold produces Pauling’s residual entropy $\frac{1}{2}R\ln\frac{3}{2}$ [@pauling].
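For orientation, Pauling’s estimate can be recovered by the standard counting argument (sketched here for convenience): among the $2^4=16$ Ising configurations of a single tetrahedron, $6$ obey the ice rule, and $N$ spins form $N/2$ tetrahedra, so the number of ice-rule configurations is estimated as $$W\,\approx\,2^{N}\left(\frac{6}{16}\right)^{N/2}\,=\,\left(\frac{3}{2}\right)^{N/2},\qquad S\,=\,k_B\ln W\,\approx\,\frac{1}{2}N k_B\ln\frac{3}{2},$$ which is $\frac{1}{2}R\ln\frac{3}{2}$ per mole of spins.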
In the dipolar spin ice [@harris:97; @ramirez:99; @bramwell:01], a rare-earth magnetic moment $\hat{\bm{m}}_{\bm{r}}=g_J \mu_B \hat{\bm{J}}_{\bm{r}}$ located at a vertex $\bm{r}$ of tetrahedrons plays the role of the Ising variable because of the large crystalline electric field (CEF), which is often [*approximately*]{} modeled by $$\hat{H}_{\mathrm{Ising}}=-D_{\mathrm{Ising}}\sum_{\bm{r}}(\bm{n}_{\bm{r}}\cdot\hat{\bm{J}}_{\bm{r}}/J)^2,
\label{eq:H_Delta}$$ with the Landé factor $g_J$ and $D_{\mathrm{Ising}}>0$. Here, $J$ is the quantum number for the total angular momentum $\hat{\bm{J}}_{\bm{r}}$, and $\bm{n}_{\bm{r}}$ defines a unit vector at a pyrochlore-lattice site $\bm{r}$ that points outwards from the center of the tetrahedron belonging to one fcc sublattice of the diamond lattice and inwards to that belonging to the other sublattice. The amplitude of the rare-earth magnetic moment is so large that the interaction between the magnetic moments is dominated by the magnetic dipolar interaction [@rossat-mignod:83; @hertog:00], $$\hat{H}_{\mathrm{D}}=\frac{\mu_0}{4\pi}\sum_{\langle\bm{r},\bm{r}'\rangle}\left[\frac{\hat{\bm{m}}_{\bm{r}}\cdot\hat{\bm{m}}_{\bm{r}'}}{({\mit\Delta}r)^3}-3\frac{(\hat{\bm{m}}_{\bm{r}}\cdot{\mit\Delta}\bm{r})({\mit\Delta}\bm{r}\cdot\hat{\bm{m}}_{\bm{r}'})}{({\mit\Delta}r)^5}\right],
\label{eq:H_D}$$ with ${\mit\Delta}\bm{r}=\bm{r}-\bm{r}'$ and the summation $\sum_{\langle\bm{r},\bm{r}'\rangle}$ over all the pairs of atomic sites. This yields a ferromagnetic coupling $D_{\mathrm{n.n.}}=\frac{5}{3}\frac{\mu_0}{4\pi}\frac{m^2}{(a/2\sqrt{2})^3}\sim2.4$ K between the nearest-neighbor magnetic moments for Ho$_2$Ti$_2$O$_7$ and Dy$_2$Ti$_2$O$_7$ with the moment amplitude $m\sim10\mu_B$ and the lattice constant $a\sim10.1$ Å [@bramwell:01], providing a main driving force of the ice rule. It prevails over the nearest-neighbor superexchange interaction which is usually [*assumed*]{} to take the isotropic Heisenberg form $$\hat{H}_{\mathrm{H}}=-3J_{\mathrm{n.n.}}\sum_{\langle\bm{r},\bm{r}'\rangle}^{\mathrm{n.n.}}\hat{\bm{J}}_{\bm{r}}\cdot\hat{\bm{J}}_{\bm{r}'}/J^2.
\label{eq:H_H}$$ In the limit of $D_{\mathrm{Ising}}\to\infty$, $\hat{H}_{\mathrm{DSI}}=\hat{H}_{\mathrm{Ising}}+\hat{H}_{\mathrm{D}}+\hat{H}_{\mathrm{H}}$ is reduced to an Ising model [@hertog:00], which can explain many magnetic properties experimentally observed at temperatures well below the crystal-field excitation energy [@bramwell:01; @castelnovo:08; @jaubert:09; @gardner:10]. Because of the ferromagnetic effective nearest-neighbor coupling $J_{\mathrm{eff}}=D_{\mathrm{n.n.}}+J_{\mathrm{n.n.}}>0$, creating “3-in, 1-out” and “1-in, 3-out” configurations out of the macroscopically degenerate “2-in, 2-out” spin-ice manifold costs an energy, and can be regarded as defects of magnetic monopoles and anti-monopoles with a unit magnetic charge [@moessner:98]. Then, the spin ice is described as the Coulomb phase of magnetic monopoles where the emergent U(1) gauge fields mediate the Coulomb interaction between monopole charges [@hermele:04; @castelnovo:08]. The density of magnetic monopoles is significantly suppressed to lower the total free energy at a temperature $T<J_{\mathrm{eff}}\sim$ a few Kelvin. Simultaneously, the reduction of the monopole density suppresses spin-flip processes, for instance, due to a quantum tunneling [@ehlers:03], that change the configuration of monopoles. Hence, the relaxation time to reach the thermal equilibrium shows a rapid increase. These phenomena associated with a thermal quench of spin ice have been experimentally observed [@snyder:01; @snyder:04] and successfully mimicked by classical Monte-Carlo simulations on the Coulomb gas model of magnetic monopoles [@jaubert:09; @castelnovo:10]. This indicates that the quantum effects are almost negligible in the dipolar spin ice. It has been shown that the emergent gapless U(1) gauge excitations together with a power-law decay of spin correlations can survive against a weak antiferroic exchange interaction that exchanges the nearest-neighbor pseudospin-$1/2$ variables (“in” and “out”) [@hermele:04]. This $U(1)$ spin liquid [@hermele:04] can be viewed as a quantum version of the spin ice, though the macroscopic degeneracy of the ice-rule manifold should eventually be lifted in the ideal case under the equilibrium.
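Before turning to quantum effects, the nearest-neighbor dipolar scale $D_{\mathrm{n.n.}}\sim2.4$ K quoted above can be reproduced by a back-of-the-envelope evaluation. The short Python sketch below is only a sanity check of that order-of-magnitude estimate; the physical constants are standard SI values, and the inputs $m\sim10\mu_B$ and $a\sim10.1$ Å are the values quoted in the text.

```python
# Order-of-magnitude check of D_nn = (5/3) (mu_0/4pi) m^2 / r_nn^3, in kelvin.
mu0_over_4pi = 1.0e-7        # SI, T*m/A
mu_B = 9.274e-24             # Bohr magneton, J/T
k_B = 1.381e-23              # Boltzmann constant, J/K
a = 10.1e-10                 # lattice constant ~10.1 angstrom, in m
m = 10.0 * mu_B              # Ho3+/Dy3+ moment ~10 mu_B, in J/T
r_nn = a / (2.0 * 2.0**0.5)  # nearest-neighbor distance a/(2*sqrt(2))

D_nn = (5.0 / 3.0) * mu0_over_4pi * m**2 / r_nn**3 / k_B
print(f"D_nn ~ {D_nn:.2f} K")  # ~2.3 K with these inputs
```

The result, $\approx2.3$ K, is consistent with the $\sim2.4$ K quoted above; the exact value depends on the precise moment amplitude and lattice constant used.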
Quantum effects
---------------
At a first glance, one might suspect that quantum fluctuations should be significantly suppressed by a large total angular momentum $J$ of the localized rare-earth magnetic moment and its strong single-spin Ising anisotropy $D_{\mathrm{Ising}}>0$, since they favor a large amplitude of the quantum number for $\hat{J}_{\bm{r}}^z\equiv\hat{\bm{J}}_{\bm{r}}\cdot\bm{n}_{\bm{r}}$, either $M_J=J$ or $-J$. Namely, in the effective Hamiltonian, $\hat{H}_{\mathrm{DSI}}=\hat{H}_{\mathrm{Ising}}+\hat{H}_{\mathrm{D}}+\hat{H}_{\mathrm{H}}$, a process for successive flips of the total angular momentum from $M_J=J$ to $-J$ at one site and from $-J$ to $J$ at the adjacent site is considerably suppressed at a temperature $T\ll D_{\mathrm{Ising}}$. The coupling constant for this pseudospin-flip interaction is of order of $|J_{\mathrm{n.n.}}|(|J_{\mathrm{n.n.}}|/D_{\mathrm{Ising}})^{2J}$, and becomes negligibly small compared to the Ising coupling $J_{\mathrm{eff}}$.
In reality, however, because of the $D_{3d}$ crystalline electric field (CEF) acting on rare-earth ions \[Fig. \[fig:crystal\]\], the conservation of $\hat{J}^z$, which is implicitly assumed in the above consideration, no longer holds in the atomic level. Eigenstates of the atomic Hamiltonian including the $LS$ coupling and the CEF take the form of a superposition of eigenstates of $\hat{J}^z$ whose eigenvalues are different by integer multiples of three. Obviously, this is advantageous for the quantum spin exchange to efficiently work.
Attempts to include quantum effects have recently been in progress. It has been argued that the presence of a low-energy crystal-field excited doublet above the ground-state doublet in Tb$_2$Ti$_2$O$_7$ [@gardner:99; @gardner:10] enhances quantum fluctuations and possibly drives the classical spin ice into a quantum spin ice composed of a quantum superposition of “2-in, 2-out” configurations [@molavian:07].
We have proposed theoretically an alternative scenario, namely, a quantum melting of spin ice [@onoda:09]. A quantum entanglement among the degenerate states lifts the macroscopic degeneracy, suppresses the spin-ice freezing, and thus leads to another distinct ground state. Actually, the quantum-mechanical spin-exchange Hamiltonian mixes “2-in, 2-out” configurations with “3-in, 1-out” and “1-in, 3-out”, leading to a failure of the strict ice rule and [*a finite density of monopoles at the quantum-mechanical ground state*]{}. Namely, the quantum-mechanically proliferated monopoles can modify the dipolar spin-ice ground state, while a spatial profile of short-range spin correlations still resembles that of the dipolar spin ice [@onoda:09]. They may appear in bound pairs or in condensates. We have reported that there appears a significantly large anisotropic quantum-mechanical superexchange interaction between Pr magnetic moments in Pr$_2TM_2$O$_7$ [@onoda:09] ($TM$ = Zr, Sn, Hf, and Ir) [@subramanian:83]. This anisotropic superexchange interaction drives [*quantum phase transitions among the spin ice, quadrupolar states having nontrivial chirality correlations, and the quantum spin ice*]{}, as we will see later.
Actually, among the rare-earth ions available for magnetic pyrochlore oxides [@subramanian:83; @gardner:10], the Pr$^{3+}$ ion could optimally exhibit the quantum effects because of the following two facts. (i) A relatively small magnitude of the Pr$^{3+}$ localized magnetic moment, whose atomic value is given by $3.2\mu_B$, suppresses the magnetic dipolar interaction, which is proportional to the square of the moment size. Then, for Pr$_2TM_2$O$_7$, one obtains $D_{\mathrm{n.n.}}\sim0.1$ K, which is an order of magnitude smaller than 2.4 K for Ho$_2$Ti$_2$O$_7$ and Dy$_2$Ti$_2$O$_7$. Similarly, quantum effects might appear prominently also for Nd$^{3+}$, Sm$^{3+}$, and Yb$^{3+}$ ions because of their small moment amplitudes, $3.3\mu_B$, $0.7\mu_B$, and $4\mu_B$, respectively, for isolated cases. (ii) With fewer $4f$ electrons, the $4f$-electron wavefunction becomes less localized at atomic sites. This enhances the overlap with the O $2p$ orbitals at the O1 site \[Fig. \[fig:crystal\] (a)\], and thus the superexchange interaction which is also further increased by a near resonance of Pr $4f$ and O $2p$ levels. Moreover, this superexchange interaction appreciably deviates from the isotropic Heisenberg form because of the highly anisotropic orbital shape of the $f$-electron wavefunction and the strong $LS$ coupling. Since the direct Coulomb exchange interaction is even negligibly small [@rossat-mignod:83], this superexchange interaction due to virtual $f$-$p$ electron transfers is expected to be the leading interaction.
Recent experiments on Pr$_2$Sn$_2$O$_7$ [@matsuhira:02], Pr$_2$Zr$_2$O$_7$ [@matsuhira:09], and Pr$_2$Ir$_2$O$_7$ [@nakatsuji:06] have shown that the Pr$^{3+}$ ion provides the $\langle111\rangle$ Ising moment described by a non-Kramers magnetic doublet. They show similarities to the dipolar spin ice. (i) No magnetic dipole LRO is observed down to a partial spin-freezing temperature $T_f\sim0.1$-$0.3$ K [@matsuhira:02; @matsuhira:09; @nakatsuji:06; @machida:09; @zhou:08; @maclaughlin:08]. (ii) Pr$_2$Ir$_2$O$_7$ shows a metamagnetic transition at low temperatures only when the magnetic field is applied in the \[111\] direction [@machida:09], indicating the ice-rule formation due to the effective ferromagnetic coupling $2J_{\mathrm{eff}}\sim 2J_{\mathrm{n.n.}}\sim1.4$ K [@machida:09]. On the other hand, substantially different experimental observations from the dipolar spin ice have also been made. The Curie-Weiss temperature $T_{CW}$ is antiferromagnetic for the zirconate [@matsuhira:09] and iridate [@nakatsuji:06], unlike the spin ice. The stannate shows a significant level of low-energy short-range spin dynamics in the energy range up to a few Kelvin [@zhou:08], which is absent in the classical spin ice. Furthermore, the iridate shows the Hall effect at zero magnetic field without magnetic dipole LRO [@machida:09], suggesting an onset of a chiral spin-liquid phase [@wen:89] at $T_H\sim1.5$ K.
The discovery of this chiral spin state endowed with a broken time-reversal symmetry on a macroscopic scale in Pr$_2$Ir$_2$O$_7$ without apparent magnetic LRO [@machida:09] has increased the variety of spin liquids. One might speculate that this is caused mainly by a Kondo coupling to Ir conduction electrons and thus the RKKY interaction [@rkky]. However, the low-temperature thermodynamic properties are common in this series of materials, Pr$_2TM_2$O$_7$, except that a small partial reduction ($\sim10\%$) of Pr magnetic moments probably due to conduction electrons affects the resistivity and the magnetic susceptibility in Pr$_2$Ir$_2$O$_7$ [@nakatsuji:06]. Furthermore, the onset temperature $T_H\sim1.5$ K for the emergent anomalous Hall effect is comparable to the ferromagnetic coupling $2J_{\mathrm{eff}}\sim1.4$ K [@machida:09]. Therefore, it is natural to expect that a seed of the chiral spin state below $T_H$ exists in the Pr moments interacting through the superexchange interaction and possibly the state is stabilized by the conduction electrons. Another intriguing observation here is that without appreciable quantum effects, the chiral manifold of classical ice-rule spin configurations [@machida:09] that has been invented to account for the emergent anomalous Hall effect will result from a magnetic dipole LRO or freezing, which is actually absent down to $T_f$. This points to a significant level of nontrivial quantum effects.
![(Color online) (a) Pr$^{3+}$ ions form tetrahedrons (dashed lines) centered at O$^{2-}$ ions (O1), and are surrounded by O$^{2-}$ ions (O2) in the $D_{3d}$ symmetry as well as by transition-metal ions ($TM$). Each Pr magnetic moment (bold arrow) points to either of the two neighboring O1 sites. $(\bm{x}_{\bm{r}},\bm{y}_{\bm{r}},\bm{z}_{\bm{r}})$ denotes the local coordinate frame. (b) The local coordinate frame from the top. The upward and the downward triangles of the O$^{2-}$ ions (O2) are located above and below the hexagon of the $TM$ ions. []{data-label="fig:crystal"}](fig1){width="\columnwidth"}
In this paper, we develop a realistic effective theory for frustrated magnets Pr$_2TM_2$O$_7$ on the pyrochlore lattice and provide generic implications on quantum effects in spin-ice related materials, giving a comprehensive explanation of our recent Letter [@onoda:09]. In Sec. \[sec:model\], the most generic nearest-neighbor pseudospin-$1/2$ Hamiltonian for interacting magnetic moments on the pyrochlore lattice is derived on a basis of atomic magnetic doublets for both non-Kramers and Kramers ions. In particular, it is microscopically derived from strong-coupling perturbation theory in the Pr$^{3+}$ case. We analyze the model for the non-Kramers case by means of a classical mean-field theory in Sec. \[sec:MFT\], which reveals spin-ice, antiferroquadrupolar, and noncoplanar ferroquadrupolar phases at low temperatures. Then, we perform exact-diagonalization calculations for the quantum pseudospin-$1/2$ case on a single tetrahedron in Sec. \[sec:single\] and on the 16-site cube in Sec. \[sec:ED\]. We have found within the 16-site cluster calculations a cooperative ferroquadrupolar phase, which is accompanied by crystal symmetry lowering from cubic to tetragonal and can then be categorized into a magnetic analog of a smectic or crystalline phase [@degenne]. This provides a scenario of the quantum melting of spin ice and can explain the experimentally observed magnetic properties, including powder neutron-scattering experiments on Pr$_2$Sn$_2$O$_7$ and the magnetization curve on Pr$_2$Ir$_2$O$_7$. We also reveal a possible source of the time-reversal symmetry breaking observed in Pr$_2$Ir$_2$O$_7$. It takes the form of the solid angle subtended by four pseudospins on a tetrahedron, each of which is composed of the Ising dipole magnetic moment and the planar atomic quadrupole moment, and shows a nontrivial correlation because of a geometrical frustration associated with the fcc sublattice structure. A possible sign of a singlet quantum spin-ice state has also been obtained within the 16-site numerical calculations in another finite region of the phase diagram. Sec. \[sec:summary\] is devoted to discussions and the summary.
Derivation of the effective model {#sec:model}
=================================
In this section, we will give a microscopic derivation of the effective pseudospin-$1/2$ Hamiltonian in a comprehensive manner. Though we focus on $4f$ localized moments of Pr$^{3+}$ ions, the form of our nearest-neighbor anisotropic pseudospin-$1/2$ Hamiltonian is most generic for atomic non-Kramers magnetic doublets. We will also present the generic form of the nearest-neighbor Hamiltonian for Kramers doublets of Nd$^{3+}$, Sm$^{3+}$, and Yb$^{3+}$.
Atomic Hamiltonian for Pr$^{3+}$ {#sec:model:local}
--------------------------------
### Coulomb repulsion {#sec:model:Coulomb}
The largest energy scale of the problem should be the local Coulomb repulsion among Pr $4f$ electrons. Photoemission spectroscopy on Pr$_2TM_2$O$_7$, which would provide a reliable estimate, is not available yet. A typical value obtained from Slater integrals for Pr$^{3+}$ ions is of the order of 3-5 eV [@norman:95]. The Coulomb energy cost is $0$, $U$, and $3U$ for the occupation of one, two, and three $f$ electrons, respectively \[Fig. \[fig:atomic\]\]. For Pr$^{3+}$ ions, the O $2p$ electron level $\Delta$ at the O1 site should be higher than the $f^1$ level and lower than the $f^3$ level \[Fig. \[fig:atomic\]\]. Then, it would be a reasonably good approximation to start from localized $f^2$ states for Pr$^{3+}$ configurations and then to treat the other effects as perturbations.
![(Color online) Local level scheme for $f$ and $p$ electrons, and the local quantization axes $\bm{z}_{\bm{r}}$ and $\bm{z}_{\bm{r'}}$.[]{data-label="fig:atomic"}](fig2){width="\columnwidth"}
### $LS$ coupling for $f^2$ configurations {#sec:mode:LS}
We introduce operators $\hat{\bm{J}}$, $\hat{\bm{L}}$, and $\hat{\bm{S}}$ for the total, orbital, and spin angular momenta of $f^2$ electron states of Pr$^{3+}$. Within this $f^2$ manifold, the predominant $LS$ coupling $\lambda_{LS}>0$ in $\hat{H}_{LS}=\lambda_{LS}\hat{\bm{L}}\cdot\hat{\bm{S}}$ gives the ground-state manifold ${}^3H_4$ with the quantum numbers $J=4$, $L=5$, and $S=1$ for the total, orbital, and spin angular momenta, respectively.
### Crystalline electric field {#sec:model:CEF}
The ninefold degeneracy of the ground-state manifold $^3H_4$ is partially lifted by the local crystalline electric field (CEF), which has the $D_{3d}$ symmetry about the $\langle111\rangle$ direction toward the O1 site. We define the local quantization axis $\bm{z}_{\bm{r}}$ as this $\langle111\rangle$ direction. Then, the Hamiltonian for the CEF, $$\hat{H}_{\mathrm{CEF}} = \sum_{m_l,m_l'=-3}^3V_{\mathrm{CEF}}^{m_l,m_l'}\sum_{\bm{r}}\sum_{\sigma=\pm}f^\dagger{}_{\bm{r},m_l,\sigma}f_{\bm{r},m_l',\sigma},
\label{eq:H_cry}$$ contains not only diagonal components with $m_l=m_l'$ but also off-diagonal components with $m_l-m_l'=\pm3$ and $\pm6$, all of which become real if we take $x$ and $y$ axes as $\bm{x}_{\bm{r}}$ and $\bm{y}_{\bm{r}}$ depicted in Figs. \[fig:crystal\] (a) and (b). Here, $f^{}_{\bm{r},m_l,\sigma}$ and $f^\dagger_{\bm{r},m_l,\sigma}$ denote the annihilation and creation operators of an $f$ electron with the $z$ components $m_l$ and $m_s=\sigma/2$ of the orbital and spin angular momenta, respectively, in the local coordinate frame at the Pr site $\bm{r}$. The formal expressions for $V_{\mathrm{CEF}}^{m_l,m_l'}$ within the point-charge analysis are given in Appendix \[app:CEF\]. In the rest of Sec. \[sec:model:local\], we drop the subscript for the site $\bm{r}$ for brevity.
We perform the first-order degenerate perturbation theory, which replaces Eq. with $\hat{P}({}^3H_4)\hat{H}_{\mathrm{CEF}}\hat{P}({}^3H_4)$, where $\hat{P}({}^3H_4)$ is the projection operator onto the ${}^3H_4$ manifold. First, let us introduce a notation of $|L,M_L;S,M_S\rangle$ for the $f^2$ eigenstate corresponding to the orbital and spin quantum numbers $(L, M_L)$ and $(S,M_S)$ in the local coordinate frame. It is straightforward to express the eigenstates $\{|M_J\rangle\}_{M_J=-J,\cdots,J}$ of $\hat{J}^z$ in terms of $|5,M_L;1,M_S\rangle$ and then in terms of $f$-electron operators, $$\begin{aligned}
|M_J\rangle
&=&\sum_{M_L,M_S} C_{M_J,M_L,M_S}|5,M_L;1,M_S\rangle
\nonumber\\
&=&\sum_{M_L,M_S} \tilde{C}_{M_J,m,m',\sigma,\sigma'}f^\dagger_{m,\sigma}f^\dagger_{m',\sigma'}|0\rangle,
\label{eq:J^z=M_J}\end{aligned}$$ as explicitly written in Appendix \[app:f\].
Finally, we obtain the following representation of Eq. in terms of Eq.
$$\begin{aligned}
\langle M_J|\hat{H}_{\mathrm{CEF}}|M_J'\rangle
&=&\sum_{m_l,m_l',m_l''=-3}^3V_{\mathrm{CEF}}^{m_l,m_l'}
\sum_{\sigma,\sigma'=\pm}\left[
\tilde{C}_{M_J,m_l,m_l'',\sigma,\sigma'}\tilde{C}_{M_J',m_l',m_l'',\sigma,\sigma'}
-\tilde{C}_{M_J,m_l'',m_l,\sigma,\sigma'}\tilde{C}_{M_J',m_l',m_l'',\sigma',\sigma}
\right.\nonumber\\
&&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
-\tilde{C}_{M_J,m_l,m_l'',\sigma,\sigma'}\tilde{C}_{M_J',m_l'',m_l',\sigma',\sigma}
+\tilde{C}_{M_J,m_l'',m_l,\sigma,\sigma'}\tilde{C}_{M_J',m_l'',m_l',\sigma,\sigma'}
\right].\ \ \ \end{aligned}$$
The CEF favors $M_J=\pm 4$ configurations that are linearly coupled to $M_J=\pm1$ and $\mp2$ because of the $D_{3d}$ CEF. This leads to the atomic non-Kramers magnetic ground-state doublet, $$|\sigma\rangle_D=\alpha|4\sigma\rangle+\beta\sigma|\sigma\rangle-\gamma|-2\sigma\rangle,
\label{eq:local}$$ with small real coefficients $\beta$ and $\gamma$ as well as $\alpha=\sqrt{1-\beta^2-\gamma^2}$. For Pr$_2$Ir$_2$O$_7$, the first CEF excited state is a singlet located at 168 K and the second is a doublet at 648 K [@machida:phd]. They are located at 210 K and 430 K for Pr$_2$Sn$_2$O$_7$ [@zhou:08]. These energy scales are two orders of magnitude larger than that of our interest, $2J_{\mathrm{n.n.}}\sim1.4$ K. Hence it is safe to neglect these CEF excitations for our purpose. Then, it is convenient to introduce the Pauli matrix vector $\hat{\bm{\sigma}}_{\bm{r}}$ for the pseudospin-$1/2$ representing the local doublet at each site $\bm{r}$, so that Eq. is the eigenstate of $\hat{\sigma}^z_{\bm{r}}=\hat{\bm{\sigma}}_{\bm{r}}\cdot\bm{z}_{\bm{r}}$ with the eigenvalue $\sigma$.
Note that in the case of Tb$^{3+}$, the first CEF excited state is a doublet at a rather low energy $\sim18.7$ K [@gingras:00], and the effects of the first excited doublet cannot be ignored [@molavian:07] when a similar analysis is performed. Nevertheless, since this CEF excitation in Tb$^{3+}$ is an order of magnitude larger than $J_{\mathrm{eff}}$, it could be integrated out [@molavian:07]. Then, the model reduces to a form similar to the effective pseudospin-$1/2$ Hamiltonian that is derived in Ref.  and will also be discussed below, though, as far as we know, its explicit form has not been presented.
Dipole and quadrupole moments {#sec:model:moment}
-----------------------------
Only the $z$ component $\hat{\sigma}^z_{\bm{r}}=\hat{\bm{\sigma}}_{\bm{r}}\cdot\bm{z}_{\bm{r}}$ of the pseudospin contributes to the [*magnetic dipole moment*]{} represented as either “in” or “out”, while the transverse components $\hat{\sigma}^x_{\bm{r}}=\hat{\bm{\sigma}}_{\bm{r}}\cdot\bm{x}_{\bm{r}}$ and $\hat{\sigma}^y_{\bm{r}}=\hat{\bm{\sigma}}_{\bm{r}}\cdot\bm{y}_{\bm{r}}$ correspond to the [*atomic quadrupole moment*]{}, i.e., the orbital degree of freedom. This can be easily shown by directly calculating the Pr $4f$ magnetic dipole and quadrupole moments in terms of the pseudospin. We first take the projection of the total angular momentum $\hat{\bm{J}}=(\hat{J}^x,\hat{J}^y,\hat{J}^z)$ onto the subspace of the local non-Kramers magnetic ground-state doublet described by Eq. (\[eq:local\]). It yields $$\begin{aligned}
_D\langle\sigma|\hat{J}^z|\sigma'\rangle_D&=&(4\alpha^2+\beta^2-2\gamma^2)\sigma\delta_{\sigma,\sigma'},
\label{eq:Jz}\\
_D\langle\sigma|\hat{J}^\pm|\sigma'\rangle_D&=&0,
\label{eq:J+-}\end{aligned}$$ with $\hat{J}^\pm=(\hat{J}^x\pm i\hat{J}^y)$. With the Landé factor $g_J=4/5$ and the Bohr magneton $\mu_B$, the atomic magnetic dipole moment is given by $$\hat{\bm{m}}_{\bm{r}}=g_J\mu_B(4\alpha^2+\beta^2-2\gamma^2)\hat{\sigma}^z_{\bm{r}}\bm{z}_{\bm{r}}.
\label{eq:m}$$ Note that $\hat{\sigma}_{\bm{r}}^\pm$ cannot linearly couple to neutron spins without resorting to higher CEF levels. On the other hand, the quadrupole moments are given by $$_D\langle\sigma|\{\hat{J^z},\hat{J}^\pm\}|\sigma'\rangle_D=-36\beta\gamma\delta_{\sigma,-\sigma'}.
\label{eq:JxJ+-}$$
This is a general consequence of the so-called non-Kramers magnetic doublet and not restricted to the Pr$^{3+}$ ion. Namely, when the atomic ground states of the non-Kramers ions having an even number of $f$ electrons and thus an integer total angular momentum $J$ are described by a magnetic doublet, only $\hat{\sigma}^z_{\bm{r}}$ contributes to the [*magnetic dipole moment*]{}, while $\hat{\sigma}^{x,y}_{\bm{r}}$ corresponds to the [*atomic quadrupole moment*]{}. This sharply contrasts with the following two cases: (i) In the case of Kramers doublets, all three components of $\hat{\bm{\sigma}}_{\bm{r}}$ may contribute to the magnetic dipole moment while their coefficients can be anisotropic. (ii) In the case of non-Kramers non-magnetic doublets, $\hat{\bm{\sigma}}_{\bm{r}}$ corresponds to a quadrupole moment or even a higher-order multipole moment which is time-reversal invariant.
Superexchange interaction {#sec:model:superexchange}
-------------------------
Now we derive the superexchange Hamiltonian through the fourth-order strong-coupling perturbation theory. Keeping in mind the local level scheme of Pr $4f$ electrons and O $2p$ electrons at O1 sites, which has been explained in Sec. \[sec:model:local\], we consider nonlocal effects introduced by the electron transfer between the Pr $4f$ orbital and the O $2p$ orbital.
### Local coordinate frames {#sec:model:frames}
In order to symmetrize the final effective Hamiltonian, it is convenient to choose a set of local coordinate frames so that the set is invariant under $180^\circ$ rotations of the whole system about three axes that include an O1 site and are parallel to the global $X$, $Y$, or $Z$ axes, which belong to the space group $Fd\bar{3}m$ of the present pyrochlore system. We can start from the local coordinate frame previously defined in Sec. \[sec:model:CEF\] and in Fig. \[fig:crystal\] for a certain site and generate the other three local frames by applying the above three rotations.
For instance, we can adopt
$$\begin{aligned}
\bm{x}_0&=&\frac{1}{\sqrt{6}}\left(1,1,-2\right),
\nonumber\\
\bm{y}_0&=&\frac{1}{\sqrt{2}}\left(-1,1,0\right),
\nonumber\\
\bm{z}_0&=&\frac{1}{\sqrt{3}}(1,1,1),
\label{eq:xyz0}
\end{aligned}$$
for the Pr sites at $\bm{R}+\bm{a}_0$ with $\bm{a}_0=-\frac{a}{8}(1,1,1)$, $$\begin{aligned}
\bm{x}_1&=&\frac{1}{\sqrt{6}}\left(1,-1,2\right),
\nonumber\\
\bm{y}_1&=&\frac{1}{\sqrt{2}}\left(-1,-1,0\right),
\nonumber\\
\bm{z}_1&=&\frac{1}{\sqrt{3}}(1,-1,-1),
\label{eq:xyz1}
\end{aligned}$$ for the Pr sites at $\bm{R}+\bm{a}_1$ with $\bm{a}_1=\frac{a}{8}(-1,1,1)$, $$\begin{aligned}
\bm{x}_2&=&\frac{1}{\sqrt{6}}\left(-1,1,2\right),
\nonumber\\
\bm{y}_2&=&\frac{1}{\sqrt{2}}\left(1,1,0\right),
\nonumber\\
\bm{z}_2&=&\frac{1}{\sqrt{3}}(-1,1,-1),
\label{eq:xyz2}
\end{aligned}$$ for the Pr sites at $\bm{R}+\bm{a}_2$ with $\bm{a}_2=\frac{a}{8}(1,-1,1)$, and $$\begin{aligned}
\bm{x}_3&=&\frac{1}{\sqrt{6}}\left(-1,-1,-2\right),
\nonumber\\
\bm{y}_3&=&\frac{1}{\sqrt{2}}\left(1,-1,0\right),
\nonumber\\
\bm{z}_3&=&\frac{1}{\sqrt{3}}(-1,-1,1),
\label{eq:xyz3}
\end{aligned}$$ for the Pr sites at $\bm{R}+\bm{a}_3$ with $\bm{a}_3=\frac{a}{8}(1,1,-1)$, \[eq:xyz\]
where $\bm{R}$ represents a fcc lattice vector $\bm{R}=\sum_{i=1,2,3}n_i\bm{R}_i$ spanned by $\bm{R}_1=(0,a/2,a/2)$, $\bm{R}_2=(a/2,0,a/2)$, and $\bm{R}_3=(a/2,a/2,0)$ with integers $(n_1,n_2,n_3)$, and $a$ is the lattice constant, i.e., the side length of the unit cube. In particular, all the local $z$ directions attached to the Pr sites belonging to the tetrahedron centered at the O1 site $\bm{R}$ point inwards, and they satisfy the relation $$\sum_{i=0}^3(\bm{x}_i,\bm{y}_i,\bm{z}_i)=(\bm{0},\bm{0},\bm{0}).
\label{eq:sum_xyz}$$ Actually, other sets of local coordinate frames which are obtained by threefold and sixfold rotations about $[111]$ yield exactly the same expression for the effective Hamiltonian for Kramers and non-Kramers cases, respectively.
These local coordinate frames are related to the following rotations of the global coordinate frame,
$$\begin{aligned}
R^{(r)}(\varphi_i,\vartheta_i)
&=&\left(
{}^t\bm{x}_i,
{}^t\bm{y}_i,
{}^t\bm{z}_i
\right),
\\
\varphi_0=\pi/4,&\ \ \ &
\vartheta_0=\arccos\left(1/\sqrt{3}\right),
\label{eq:angles0}\\
\varphi_1=3\pi/4,&\ \ \ &
\vartheta_1=-\pi+\arccos\left(1/\sqrt{3}\right),
\label{eq:angles1}\\
\varphi_2=-\pi/4,&\ \ \ &
\vartheta_2=-\pi+\arccos\left(1/\sqrt{3}\right),
\label{eq:angles2}\\
\varphi_3=-3\pi/4,&\ \ \ &
\vartheta_3=\arccos\left(1/\sqrt{3}\right).
\label{eq:angles3}
\end{aligned}$$
\[eq:R\^r\]
Note that the coordinate frame for the spins is always attached to that for the orbital space in each case. The rotation of $\bm{j}=\bm{l}+\bm{s}$ with the orbital $\bm{l}$ and the spin $\bm{s}$ of a single electron takes the form $$\hat{R}_{\bm{r}}=\exp\left[-i\varphi_i\hat{j}^z\right]\exp\left[-i\vartheta_i\hat{j}^y\right].
\label{eq:R}$$
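As a quick numerical consistency check of the local frames and rotation angles given above, the short sketch below (an illustration only, assuming standard numpy conventions and the composition $R^{(r)}=R_Z(\varphi_i)R_Y(\vartheta_i)$ of elementary rotations) verifies that the four frames are orthonormal and right-handed, that the quoted Euler angles reproduce them column by column, and that they obey the sum rule of Eq. .

```python
import numpy as np

# Local frames of the four Pr sublattices, rows = (x_i, y_i, z_i), before normalization
frames = [
    np.array([[ 1,  1, -2], [-1,  1, 0], [ 1,  1,  1]], float),
    np.array([[ 1, -1,  2], [-1, -1, 0], [ 1, -1, -1]], float),
    np.array([[-1,  1,  2], [ 1,  1, 0], [-1,  1, -1]], float),
    np.array([[-1, -1, -2], [ 1, -1, 0], [-1, -1,  1]], float),
]
frames = [np.array([v / np.linalg.norm(v) for v in f]) for f in frames]

# Euler angles (varphi_i, vartheta_i) quoted in the text
t0 = np.arccos(1 / np.sqrt(3))
angles = [(np.pi / 4, t0), (3 * np.pi / 4, -np.pi + t0),
          (-np.pi / 4, -np.pi + t0), (-3 * np.pi / 4, t0)]

def Rz(a):
    return np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])

def Ry(a):
    return np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])

for f, (phi, theta) in zip(frames, angles):
    x, y, z = f
    assert np.allclose(f @ f.T, np.eye(3))             # orthonormal frame
    assert np.allclose(np.cross(x, y), z)              # right-handed
    R = Rz(phi) @ Ry(theta)                            # assumed Z-Y composition
    assert np.allclose(R, np.column_stack([x, y, z]))  # columns are (x_i, y_i, z_i)

assert np.allclose(np.sum(frames, axis=0), 0)          # sum rule: the four frames add up to zero
print("local coordinate frames are consistent")
```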
### $f$-$p$ hybridization {#sec:model:hybridization}
The $4f$ electrons occupying the atomic ground-state doublet, Eq. , or $4f$ holes can hop to the O $2p$ levels at the neighboring O1 site. Because of the symmetry, the $f$-$p$ electron transfer along the local $\bm{z}$ axis is allowed only for the $pf\sigma$ bonding ($m_l=0$) between $f_{z(5z^2-3r^2)}$ and $p_z$ orbitals and the $pf\pi$ bondings ($m_l=\pm1$) between $f_{x(5z^2-r^2)}$ and $p_x$ orbitals and between $f_{y(5z^2-r^2)}$ and $p_y$ orbitals in the local coordinate frame defined in Eqs. \[Fig. \[fig:perturbation\] (a)\]. Their amplitudes are given by two Slater-Koster parameters [@sharma:79] $V_{pf\sigma}$ and $V_{pf\pi}$, respectively. Then, the Hamiltonian for the $f$-$p$ hybridization reads
$$\hat{H}_{\mathrm{t}}=\sum_{\bm{R}\in fcc}\sum_{\tau=\pm}\,\sum_{m_l,m_l'=0,\pm1}V_{m_l}
\sum _{\sigma,\sigma' =\pm}
\hat{f}^{\dagger }{}_{\bm{R}+\bm{a}_i,m_l,\sigma}
\left(R^\dagger_{\bm{R}+\bm{a}_i}\right)_{m_l,m_l';\sigma,\sigma'}
\hat{p}_{\bm{R}+(1+\tau)\bm{a}_i,m_l',\sigma'}
+h.c.
\label{eq:H_t}$$
with $V_{\pm1}=V_{pf\pi}$, $V_0=V_{pf\sigma}$, and an index $\tau=\pm$ for the two fcc sublattices of the diamond lattice, where $\hat{p}_{\bm{R}+(1+\tau)\bm{a}_i,m_l,\sigma}$ represents the annihilation operator of a $2p$ electron at the O1 site $\bm{R}+(1+\tau)\bm{a}_i$ with the orbital and spin quantum numbers $m_l$ and $m_s=\sigma/2$, respectively, in the global coordinate frame. Here, $R^\dagger_{\bm{R}+\bm{a}_i}$ transforms the representation from the global frame for $\hat{p}_{\bm{R}+(1+\tau)\bm{a}_i,m_l',\sigma'}$ to the local frame for $\hat{f}^{\dagger }{}_{\bm{R}+\bm{a}_i,m_l,\sigma}$.
![(Color online) (a) Two $f$-$p$ transfer integrals; $V_{pf\sigma}$ between $p_z$ and $f_{(5z^2-3r^2)z}$ orbitals, and $V_{pf\pi}$ between $p_x$/$p_y$ and $f_{x(5z^2-r^2)}$/$f_{y(5z^2-r^2)}$. (b) $f$-$p$ virtual electron hopping processes. $n$ ($n'$) and $\ell$ in the state $f^np^\ell f^{n'}$ represent the number of $f$ electrons at the Pr site $\bm{r}$ ($\bm{r}'$) and that of $p$ electrons at the O1 site.[]{data-label="fig:perturbation"}](fig3){width="\columnwidth"}
### Strong-coupling perturbation theory {#sec:mode:perturbation}
Now we are ready to perform the strong-coupling perturbation expansion in $V_{pf\pi}$ and $V_{pf\sigma}$. Hybridization between these Pr $4f$ electrons and O $2p$ electrons at the O1 site, which is located at the center of the tetrahedron, couples $f^2$ states having the local energy $U$ with $f^1$ and $f^3$ states having the local energy levels 0 and $3U$, respectively \[Fig. \[fig:atomic\]\]. Here, the $LS$ coupling has been ignored in comparison with $U$ for simplicity. Creating a virtual $p$ hole decreases the total energy by $\Delta$, which is the $p$ electron level measured from the $f^1$ level.
First, the second-order perturbation in $V_{pf\sigma}$ and $V_{pf\pi}$ produces only local terms. They only modify the CEF from the result of the point-charge analysis with renormalized parameters for the effective ionic charges and radii. Nontrivial effects appear in the fourth order in $V_{pf\sigma}$ and $V_{pf\pi}$. Taking into account the virtual processes shown in Fig. \[fig:perturbation\] (b), the fourth-order perturbed Hamiltonian in $V_{pf\sigma}$ and $V_{pf\pi}$ is obtained as
$$\begin{aligned}
\hat{H}_{ff}&=&\frac{2}{(2U-\Delta)^2}\sum_{\langle\bm{r},\bm{r}'\rangle}^{\mathrm{n.n.}}\sum_{m_1,m_2}\sum_{m_1',m_2'}\sum_{\sigma_1,\sigma_2}\sum_{\sigma_1',\sigma_2'}
V_{m_1}V_{m_1'}V_{m_2}V_{m_2'}
\hat{f}^\dagger{}_{\bm{r},m_1,\sigma_1}\hat{f}_{\bm{r},m_2,\sigma_2}
\hat{f}^\dagger{}_{\bm{r}',m_1',\sigma_1'}\hat{f}_{\bm{r}',m_2',\sigma_2'}
\nonumber\\
&&\times\biggl[-\frac{1}{2U-\Delta}\delta_{m_1,m_2}\delta_{m_1',m_2'}\delta_{\sigma_1,\sigma_2}\delta_{\sigma_1',\sigma_2'}
+(\frac{1}{2U-\Delta}+\frac{1}{U})\left(R^\dagger_{\bm{r}}R_{\bm{r}'}\right)_{m_1,m_2';\sigma_1,\sigma_2'}\left(R^\dagger_{\bm{r}'}R_{\bm{r}}\right)_{m_1',m_2;\sigma_1',\sigma_2}
\biggr].
\label{eq:H_sex}\end{aligned}$$
Effective pseudospin-$1/2$ model {#sec:model:hamiltonian}
--------------------------------
![(Color online) The pyrochlore lattice structure. The phase $\phi_{\bm{r},\bm{r}'}$ appearing in Eq. (\[eq:H\_eff\]) takes $-2\pi/3$, $2\pi/3$, and $0$ on the blue, red, green bonds, respectively, in our choice of the local coordinate frames \[Eqs. \[eq:xyz\]\].[]{data-label="fig:pyrochlore"}](fig4){width="\columnwidth"}
Next we project the superexchange Hamiltonian Eq. onto the subspace of doublets given by Eq. (\[eq:local\]). For this purpose, we have only to calculate for a site $\bm{r}$ the matrix elements of the operators $\hat{f}^\dagger{}_{\bm{r},m_l,\sigma}\hat{f}_{\bm{r},m_l',\sigma'}$ with $m_l,m_l'=0,\pm1$ and $\sigma,\sigma'=\pm1$, in terms of $|M_J\rangle$ that is explicitly represented with $f$-electron operators in Appendix \[app:f\], and then in terms of the atomic doublet $|\sigma\rangle_D$, Eq. . Then, we finally obtain the effective quantum pseudospin-$1/2$ Hamiltonian; $$\begin{aligned}
\hat{H}_{\mathrm{eff}}&=&J_{\mathrm{n.n.}}\sum_{\langle\bm{r},\bm{r}'\rangle}^{\mathrm{n.n.}}\left[\hat{\sigma}_{\bm{r}}^z\hat{\sigma}_{\bm{r}'}^z+2\delta\left(\hat{\sigma}_{\bm{r}}^+\hat{\sigma}_{\bm{r}'}^-+\hat{\sigma}_{\bm{r}}^-\hat{\sigma}_{\bm{r}'}^+\right)
\right.
\nonumber\\
&&\left.+2q\left(e^{2i\phi_{\bm{r},\bm{r}'}}\hat{\sigma}_{\bm{r}}^+\hat{\sigma}_{\bm{r}'}^++h.c.\right)\right],
\label{eq:H_eff}\end{aligned}$$ with $\hat{\sigma}^\pm_{\bm{r}}\equiv(\hat{\sigma}^x_{\bm{r}}\pm i\hat{\sigma}^y_{\bm{r}})/2$, where $\hat{\bm{\sigma}}_{\bm{r}}$ represents a vector of the Pauli matrices for the pseudospin at a site $\bm{r}$. The phase [@phase] $\phi_{\bm{r},\bm{r}'}$ takes $-2\pi/3$, $2\pi/3$, and $0$ for the bonds shown in blue, red, and green colors in Fig. \[fig:pyrochlore\], in the local coordinate frames defined in Eq. . This phase cannot be fully gauged away, because of the noncollinearity of the $\langle111\rangle$ magnetic moment directions and the threefold rotational invariance of $(\bm{r}, \bm{\sigma}_{\bm{r}})$ about the \[111\] axes. Equation gives the most generic nearest-neighbor pseudospin-$1/2$ Hamiltonian for non-Kramers magnetic doublets of rare-earth ions such as Pr$^{3+}$ and Tb$^{3+}$ that is allowed by the symmetry of the pyrochlore system. Note that the bilinear coupling terms of $\hat{\sigma}^z_{\bm{r}}$ and $\hat{\sigma}^\pm_{\bm{r}'}$ are prohibited by the non-Kramers nature of the moment; namely $\hat{\sigma}^z_{\bm{r}}$ changes the sign under the time-reversal operation, while $\hat{\sigma}^{x,y}_{\bm{r}}$ does not.
![(Color online) A phase diagram for the sign of the dimensionless Ising coupling $\tilde{J}$, defined through Eq. , as functions of $\gamma$ and $V_{pf\pi}/V_{pf\sigma}$ for $\beta=\gamma/3$, $\gamma/6$, $0$, $-\gamma/6$, and $-\gamma/3$. In each case, $\tilde{J}$ is positive in the shaded region, and negative otherwise.[]{data-label="fig:J"}](fig5rev){width="\columnwidth"}
The dependence of the Ising coupling constant on $U$, $\Delta$, $V_{pf\sigma}$ and $V_{pf\pi}$ takes the form, $$J_{\mathrm{n.n.}}=\frac{V_{pf\sigma}^4}{(2U-\Delta)^2}\left(\frac{1}{U}+\frac{1}{2U-\Delta}\right)\tilde{J}(\beta,\gamma,V_{pf\pi}/V_{pf\sigma}),
\label{eq:J}$$ where $\tilde{J}$ contains the dependence on the remaining dimensionless variables $(\beta,\gamma,V_{pf\pi}/V_{pf\sigma})$. We show the sign of $\tilde{J}$ as functions of $\gamma$ and $V_{pf\pi}/V_{pf\sigma}$ for several choices of $\beta/\gamma$ in Fig. \[fig:J\]. In particular, for $-0.37\lesssim V_{pf\pi}/V_{pf\sigma}\lesssim-0.02$ which includes a realistic case of $V_{pf\pi}/V_{pf\sigma}\sim-0.3$, $\tilde{J}$ is found to be positive. Since the prefactor in Eq. is positive, the Ising coupling $J_{\mathrm{n.n.}}$ is also positive, namely, antiferroic for pseudospins, in this case. Taking account of the tilting of the two neighboring local $z$ axes by $\theta=\arccos(-1/3)$, this indicates a ferromagnetic coupling between the physical $\langle111\rangle$ magnetic moments and provides a source of the ice rule.
The $D_{3d}$ CEF produces two quantum-mechanical interactions in the case of non-Kramers ions; the pseudospin-exchange and pseudospin-nonconserving terms. The ratios $\delta$ and $q$ of their coupling constants to the Ising one are insensitive to $U/V_{pf\sigma}$ and $\Delta/V_{pf\sigma}$ but strongly depend on $\beta$ and $\gamma$. Figures \[fig:coupling\] (a) and (b) show $\delta$ and $q$, respectively, as functions of $\gamma$ that characterizes the $D_{3d}$ CEF for a typical choice of parameters, $U/V_{pf\sigma}=5$, $\Delta/V_{pf\sigma}=4$, and $V_{pf\pi}/V_{pf\sigma}=-0.3$, in the cases of $\beta/\gamma=0, \pm1/3, \pm1/6$. Henceforth, we adopt $\beta=7.5\%$ and $\beta/\gamma=1/3$, following the point-charge analysis of the inelastic powder neutron-scattering data on Pr$_2$Ir$_2$O$_7$ \[Ref. \]. Actually, these estimates of $\beta$ and $\gamma$ lead to the local moment amplitude, $$M_0=g_J\mu_B(4\alpha^2+\beta^2-2\gamma^2)\approx2.9\mu_B,
\label{eq:M_0}$$ according to Eq. , which reasonably agrees with the experimental observation on Pr$_2$Ir$_2$O$_7$ [@nakatsuji:06] and Pr$_2$Zr$_2$O$_7$ [@matsuhira:09]. Then, we obtain $\delta\sim0.51$ and $q\sim0.89$, indicating an appreciable quantum nature. In general, however, the values of $\delta$ and $q$ may vary depending on the transition-metal ion and crystal parameters. We note that a finite $q$ has not been taken into account seriously in the literature.
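As a simple arithmetic check of Eq. , the following snippet (a sketch; $\beta=0.075$ and $\gamma=3\beta$ are the values quoted above) reproduces $M_0\approx2.9\mu_B$.

```python
g_J, beta = 4 / 5, 0.075         # Lande factor and the CEF admixture quoted above
gamma = 3 * beta                 # beta/gamma = 1/3
alpha2 = 1 - beta**2 - gamma**2  # alpha^2 = 1 - beta^2 - gamma^2
M0 = g_J * (4 * alpha2 + beta**2 - 2 * gamma**2)  # in units of the Bohr magneton
print(M0)                        # ~2.94, i.e. M0 is approximately 2.9 mu_B
```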
It is instructive to rewrite the $q$ term as $$\begin{aligned}
q
\left((\hat{\vec{\sigma}}_{\bm{r}}\cdot\vec{n}_{\bm{r},\bm{r}'})(\hat{\vec{\sigma}}_{\bm{r}'}\cdot\vec{n}_{\bm{r},\bm{r}'})
-(\hat{\vec{\sigma}}_{\bm{r}}\cdot\vec{n}'_{\bm{r},\bm{r}'})(\hat{\vec{\sigma}}_{\bm{r}'}\cdot\vec{n}'_{\bm{r},\bm{r}'})\right),
\label{eq:q2}\end{aligned}$$ where we have introduced a two-dimensional vector composed of the planar components of the pseudospin, $\hat{\vec{\sigma}}_{\bm{r}}=(\hat{\sigma}^x_{\bm{r}},\hat{\sigma}^y_{\bm{r}})$, and two orthonormal vectors
$$\begin{aligned}
\vec{n}_{\bm{r},\bm{r}'}&=&(\cos\phi_{\bm{r},\bm{r}'},-\sin\phi_{\bm{r},\bm{r}'}),
\label{eq:n}\\
\vec{n}'_{\bm{r},\bm{r}'}&=&(\sin\phi_{\bm{r},\bm{r}'},\cos\phi_{\bm{r},\bm{r}'}).
\label{eq:n'}\end{aligned}$$
Then, it is clear that the sign of $q$ can be absorbed by rotating all the pseudospins $\bm{\sigma}_{\bm{r}}$ about the local $\bm{z}_{\bm{r}}$ axes by $\pi/2$. Furthermore, in the particular case $\delta=|q|$ or $-|q|$, the planar ($\hat{\vec{\sigma}}_{\bm{r}}$) part, namely, the sum of the $\delta$ and $q$ terms, of the Hamiltonian Eq. is reduced to the antiferroic or ferroic pseudospin $120^\circ$ Hamiltonian [@chern:10].
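The equivalence of the two forms of the $q$ term can also be verified directly at the operator level. The sketch below (illustrative only; the two-site construction and random parameters are our own) compares $2q(e^{2i\phi}\hat{\sigma}^+_{\bm{r}}\hat{\sigma}^+_{\bm{r}'}+h.c.)$ with the right-hand side of Eq.  using explicit Pauli matrices.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sp = (sx + 1j * sy) / 2
I2 = np.eye(2)

def pair(a, b):
    """Operator a on site r times operator b on the neighboring site r'."""
    return np.kron(a, b)

rng = np.random.default_rng(0)
q, phi = rng.normal(), rng.uniform(0, 2 * np.pi)

# original form: 2q (e^{2i phi} sigma+_r sigma+_r' + h.c.)
lhs = np.exp(2j * phi) * pair(sp, sp)
lhs = 2 * q * (lhs + lhs.conj().T)

# rewritten form with the orthonormal vectors n and n'
n_vec  = (np.cos(phi), -np.sin(phi))
n_perp = (np.sin(phi),  np.cos(phi))

def planar(vec, site):
    """vec . (sigma^x, sigma^y) acting on site 0 (= r) or 1 (= r')."""
    op = vec[0] * sx + vec[1] * sy
    return pair(op, I2) if site == 0 else pair(I2, op)

rhs = q * (planar(n_vec, 0) @ planar(n_vec, 1) - planar(n_perp, 0) @ planar(n_perp, 1))
print(np.allclose(lhs, rhs))   # True
```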
![(Color online) The coupling constants (a) $\delta$ and (b) $q$ as functions of $\gamma$ for several choices of $\beta/\gamma=1/3$, $1/6$, $0$, $-1/6$, and $-1/3$. We have adopted $U/V_{pf\sigma}=5$, $\Delta/V_{pf\sigma}=4$, and $V_{pf\pi}/V_{pf\sigma}=-0.3$.[]{data-label="fig:coupling"}](fig6){width="\columnwidth"}
For Kramers ions such as Nd$^{3+}$, Er$^{3+}$, and Yb$^{3+}$, there appears another coupling constant [@comm; @onoda:11] for an additional interaction term $$\begin{aligned}
\hat{H}_{\mathrm{K}}=K\sum_{\langle\bm{r},\bm{r}'\rangle}^{\mathrm{n.n.}}\left[
\hat{\sigma}^z_{\bm{r}}\left(\hat{\vec{\sigma}}_{\bm{r}'}\cdot\vec{n}_{\bm{r},\bm{r}'}\right)
+\left(\hat{\vec{\sigma}}_{\bm{r}}\cdot\vec{n}_{\bm{r},\bm{r}'}\right) \hat{\sigma}^z_{\bm{r}'}\right],
\label{eq:H_K}\end{aligned}$$ whose form has been obtained so that it satisfies the threefold rotational symmetry about $\langle111\rangle$ axes, i.e., $\bm{z}_{\bm{r}}$, the mirror symmetry about the planes spanned by $\bm{z}_{\bm{r}}$ and $\bm{z}_{\bm{r}'}$ for all the pairs of nearest-neighbor sites $\bm{r}$ and $\bm{r}'$, and the twofold rotational symmetry about $\bm{X}$, $\bm{Y}$, and $\bm{Z}$ axes. This reflects the fact that all the components of $\hat{\bm{\sigma}}_{\bm{r}}$ change the sign under the time-reversal, $\hat{\bm{\sigma}}_{\bm{r}}\to-\hat{\bm{\sigma}}_{\bm{r}}$, and hence $\hat{H}_{\mathrm{K}}$ respects the time-reversal symmetry for Kramers ions. Equation and the additional term Eq. appearing only for Kramers doublets define the most generic form of the nearest-neighbor bilinear interacting pseudospin-$1/2$ Hamiltonian that is allowed by the symmetry of the magnetic pyrochlore system. Throughout this paper, we restrict ourselves to the case of non-Kramers ions, for which $K$ vanishes and $\hat{H}_{\mathrm{K}}$ does not appear.
Classical mean-field theory {#sec:MFT}
===========================
We start with a classical mean-field analysis of our effective Hamiltonian Eq. along the strategy of Ref. . We look for the instability with decreasing temperature, and then consider candidates for the mean-field ground state in the space of the two coupling constants $\delta$ and $q$. We restrict ourselves to the nearest-neighbor model, though it is known that longer-range interactions can lift the degeneracy at least partially [@reimers:91]. Since $\hat{\sigma}^z_{\bm{r}}$ and $\hat{\sigma}^{\pm}_{\bm{r}}$ are decoupled in the Hamiltonian at the classical level, we proceed by requiring that $\langle\hat{\sigma}^z_{\bm{r}}\rangle$ and/or $\langle\hat{\sigma}^{x,y}_{\bm{r}}\rangle$ be finite. This reveals three distinct mean-field instabilities in the case of $J_{\mathrm{n.n.}}>0$, which also provide candidate mean-field ground states under the constraint $|\langle\hat{\bm{\sigma}}_{\bm{r}}\rangle|\le1$. Note that the decoupling approximation of the Ising and planar components becomes inaccurate near the SU(2)-symmetric point of the Heisenberg antiferromagnet.
Ising states $\langle\sigma^z_{\bm{r}}\rangle\ne0$ {#sec:MFT:dipole}
--------------------------------------------------
Let us introduce a vector, $${}^t\hat{d}_{\bm{R}}= \left(\hat{\sigma}^z_{\bm{R}+\bm{a}_0},\hat{\sigma}^z_{\bm{R}+\bm{a}_1},\hat{\sigma}^z_{\bm{R}+\bm{a}_2},\hat{\sigma}^z_{\bm{R}+\bm{a}_3}\right),
\label{eq:d}$$ where $\bm{R}$ is a fcc lattice vector and $\{\bm{a}_i\}_{i=0,\cdots,3}$ have been defined in Sec. \[sec:model:frames\]. In the mean-field approximation, magnetic dipolar states characterized by a nonzero $\langle\sigma^z_{\bm{r}}\rangle$ are obtained as the states having the minimum eigenvalue of the following mean-field Hamiltonian, $$\begin{aligned}
{\cal H}^z_{\mathrm{MF}}
&=&N_T\sum_{\bm{q}}\langle\hat{d}_{\bm{q}}^\dagger\rangle h^z_{\bm{q}}\langle\hat{d}_{\bm{q}}\rangle
\label{eq:H^d_MF}\\
h^z_{\bm{q}}&=&2J_{\mathrm{n.n.}}\left(\begin{array}{cccc}
0 & f_{q_y + q_z} & f_{q_z + q_x} & f_{q_x + q_y}\\
f_{q_y + q_z} & 0 & f_{q_x - q_y} & f_{q_z - q_x}\\
f_{q_z + q_x} & f_{q_x - q_y} & 0 & f_{q_y - q_z}\\
f_{q_x + q_y} & f_{q_x - q_z} & f_{q_y - q_z} & 0
\end{array}\right),
\label{eq:h^d_MF}\end{aligned}$$ where $f_q=\cos(qa/4)$, and $$\hat{d}_{\bm{q}}\equiv\frac{1}{N_T}\sum_{\bm{R}}\left(\begin{array}{c}
\hat{\sigma}^z_{\bm{R}+\bm{a}_0}e^{-i\bm{q}\cdot(\bm{R}+\bm{a}_0)}
\\
\hat{\sigma}^z_{\bm{R}+\bm{a}_1}e^{-i\bm{q}\cdot(\bm{R}+\bm{a}_1)}
\\
\hat{\sigma}^z_{\bm{R}+\bm{a}_2}e^{-i\bm{q}\cdot(\bm{R}+\bm{a}_2)}
\\
\hat{\sigma}^z_{\bm{R}+\bm{a}_3}e^{-i\bm{q}\cdot(\bm{R}+\bm{a}_3)}
\end{array}\right),
\label{eq:d_q}$$ with $N_T=N/4$ where $N$ is the number of pyrochlore-lattice sites. $h^z_{\bm{q}}$ has the eigenvalues [@reimers:91], $$\varepsilon^z_{\bm{q}}=-2J_{\mathrm{n.n.}},
2J_{\mathrm{n.n.}}\left(1\mp\sqrt{1+g_{\bm{q}}}\right),
\label{eq:epsilon^d}$$ where $g_{\bm{q}}=f_{2q_x}f_{2q_y}+f_{2q_y}f_{2q_z}+f_{2q_z}f_{2q_x}$. For $J_{\mathrm{n.n.}}>0$, the lowest energy of the mean-field solution, which is obtained as $-2J_{\mathrm{n.n.}}$ per tetrahedron for any wavevector $\bm{q}$ all over the Brillouin zone reflecting the macroscopic degeneracy, coincides with the exact ground-state energy of the nearest-neighbor spin ice model.
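The eigenvalue structure quoted above is easy to confirm numerically; the sketch below (assuming $a=1$ and $J_{\mathrm{n.n.}}=1$ for illustration) builds $h^z_{\bm{q}}$ at random wavevectors and checks that its spectrum consists of the doubly degenerate flat band at $-2J_{\mathrm{n.n.}}$ and the two dispersive branches.

```python
import numpy as np

J, a = 1.0, 1.0                      # J_nn and the lattice constant, set to 1 for illustration
f = lambda k: np.cos(k * a / 4)

def h_z(qx, qy, qz):
    """The 4x4 Ising mean-field matrix written above."""
    return 2 * J * np.array([
        [0,          f(qy + qz), f(qz + qx), f(qx + qy)],
        [f(qy + qz), 0,          f(qx - qy), f(qz - qx)],
        [f(qz + qx), f(qx - qy), 0,          f(qy - qz)],
        [f(qx + qy), f(qz - qx), f(qy - qz), 0         ]])

rng = np.random.default_rng(1)
for _ in range(100):
    qx, qy, qz = rng.uniform(-4 * np.pi, 4 * np.pi, 3)
    g = f(2 * qx) * f(2 * qy) + f(2 * qy) * f(2 * qz) + f(2 * qz) * f(2 * qx)
    expected = np.sort([-2 * J, -2 * J,
                        2 * J * (1 - np.sqrt(1 + g)), 2 * J * (1 + np.sqrt(1 + g))])
    assert np.allclose(np.sort(np.linalg.eigvalsh(h_z(qx, qy, qz))), expected)
print("doubly degenerate flat band at -2 J_nn confirmed")
```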
Planar states $\langle\sigma^\pm_{\bm{r}}\rangle\ne0$ {#sec:MFT:quadrupole}
-----------------------------------------------------
Introducing another vector, $${}^t\hat{Q}_{\bm{R}}= (\hat{\vec{\sigma}}_{\bm{R}+\bm{a}_0},\hat{\vec{\sigma}}_{\bm{R}+\bm{a}_1},\hat{\vec{\sigma}}_{\bm{R}+\bm{a}_2},\hat{\vec{\sigma}}_{\bm{R}+\bm{a}_3}),
\label{eq:Q}$$ with $\hat{\vec{\sigma}}_{\bm{r}}=(\hat{\sigma}^x_{\bm{r}},\hat{\sigma}^y_{\bm{r}})$, the mean-field Hamiltonian reads $${\cal H}^{\mathrm{P}}_{\mathrm{MF}}
=\sum_{\bm{q}}\langle\hat{Q}_{\bm{q}}^\dagger\rangle h^{\mathrm{P}}_{\bm{q}}\langle\hat{Q}_{\bm{q}}\rangle,
\label{eq:H^q_MF}$$ where
$$h^{\mathrm{P}}_{\bm{q}}=2J_{\mathrm{n.n.}}\left(
\begin{array}{cccc}
0 & (\delta\hat{\tau}_0+q\frac{-\sqrt{3}\hat{\tau}_x-\hat{\tau}_z}{2})f_{q_y+q_z} & (\delta\hat{\tau}_0+q\frac{\sqrt{3}\hat{\tau}_x-\hat{\tau}_z}{2})f_{q_z+q_x} & (\delta\hat{\tau}_0+q\hat{\tau}_z)f_{q_x+q_y}\\
(\delta\hat{\tau}_0+q\frac{-\sqrt{3}\hat{\tau}_x-\hat{\tau}_z}{2})f_{q_y+q_z} & 0 & (\delta\hat{\tau}_0+q\hat{\tau}_z)f_{q_x-q_y} & (\delta\hat{\tau}_0+q\frac{\sqrt{3}\hat{\tau}_x-\hat{\tau}_z}{2})f_{q_z-q_x}\\
(\delta\hat{\tau}_0+q\frac{\sqrt{3}\hat{\tau}_x-\hat{\tau}_z}{2})f_{q_z+q_x} & (\delta\hat{\tau}_0+q\hat{\tau}_z)f_{q_x-q_y} & 0 & (\delta\hat{\tau}_0+q\frac{-\sqrt{3}\hat{\tau}_x-\hat{\tau}_z}{2})f_{q_y-q_z}\\
(\delta\hat{\tau}_0+q\hat{\tau}_z)f_{q_x+q_y} & (\delta\hat{\tau}_0+q\frac{\sqrt{3}\hat{\tau}_x-\hat{\tau}_z}{2})f_{q_z-q_x} & (\delta\hat{\tau}_0+q\frac{-\sqrt{3}\hat{\tau}_x-\hat{\tau}_z}{2})f_{q_y-q_z} & 0
\end{array}\right).\ \ \ \ \$$
We have introduced the Fourier component $\hat{Q}_{\bm{q}}$ of $\hat{Q}_{\bm{R}}$ in analogy with Eq. .
For $\delta>(|q|+1)/2$, the lowest eigenvalue of $h^{\mathrm{P}}_{\bm{q}}$ is given by $$\varepsilon^{\mathrm{PAF}}_{\bm{q}}=-2J_{\mathrm{n.n.}}(\delta+2|q|)
\label{eq:epsilon^PAF_q}$$ for the planar antiferro-pseudospin (PAF) states at the rods $\bm{q}=\frac{2\pi}{a}h(1,\pm1,\pm1)$ with $h$ being an arbitrary real number. It has the 120$^\circ$ planar pseudospin structure within a tetrahedron, which is expressed as $${}^t\langle\hat{Q}_{\bm{q}}\rangle=\left\{\begin{array}{l}
\left(\vec{0},\vec{n}'_{\bm{a}_0,\bm{a}_1},\vec{n}'_{\bm{a}_0,\bm{a}_2},\vec{n}'_{\bm{a}_0,\bm{a}_3}\right)
\ \mbox{for $\bm{q}=q_x(1,1,1)$}
\\
\left(\vec{n}'_{\bm{a}_0,\bm{a}_1},\vec{0},\vec{n}'_{\bm{a}_1,\bm{a}_2},\vec{n}'_{\bm{a}_1,\bm{a}_3}\right)
\ \mbox{for $\bm{q}=q_x(-1,1,1)$}
\\
\left(\vec{n}'_{\bm{a}_0,\bm{a}_2},\vec{n}'_{\bm{a}_1,\bm{a}_2},\vec{0},\vec{n}'_{\bm{a}_2,\bm{a}_3}\right)
\ \mbox{for $\bm{q}=q_x(1,-1,1)$}
\\
\left(\vec{n}'_{\bm{a}_0,\bm{a}_3},\vec{n}'_{\bm{a}_1,\bm{a}_3},\vec{n}'_{\bm{a}_2,\bm{a}_3},\vec{0}\right)
\ \mbox{for $\bm{q}=q_x(1,1,-1)$}
\end{array}\right.
\label{eq:structure^PAF_q}$$ for $q>0$. When $q<0$, $\vec{n}'$ in Eq. are replaced by $\vec{n}$.
In the other case of $\delta<(|q|+1)/2$, the lowest eigenvalue of $h^{\mathrm{P}}_{\bm{q}}$ is given by $$\varepsilon^{\mathrm{PF}}_{\bm{q}}=6J_{\mathrm{n.n.}}\delta
\label{eq:epsilon^PF_q}$$ for the planar ferro-pseudospin (PF) state at $\bm{q}=\bm{0}$ and symmetry-related $\bm{q}$ vectors connected by $\frac{4\pi}{a}(n_1,n_2,n_3)$ and/or $\frac{2\pi}{a}(1,1,1)$ with integers $n_1$, $n_2$, and $n_3$. This state has eigenvectors showing a collinear ferroic alignment of the planar components of the pseudospins.
Because of the saturation of each ordered moment, except at one site per tetrahedron in the case of the PAF phase, these states can be stabilized as the ground state if their energy is lower than that of the dipolar state.
Mean-field phase diagram {#sub:MFT:phasediagram}
------------------------
![(Color online) Classical mean-field phase diagram of the model given by Eq. . The $\bm{Q}=\bm{0}$ planar ferro-pseudospin (PF) phase physically represents the antiferroquadrupole (AFQ) phase. The planar antiferro-pseudospin (PAF) phase is characterized by Bragg rods $\bm{Q}\parallel[111]$ and physically represents a 2D AFQ phase for $q>0$ or a 2D noncoplanar ferroquadrupole (FQ) phase for $q<0$, where quadrupole moments are aligned only within the plane perpendicular to Bragg rod vectors $\bm{Q}\parallel[111]$ at the mean-field level. In the region around the N.N. Heisenberg antiferromagnet, the present mean-field theory becomes less accurate. The point $X=(\delta,q)=(0.51,0.89)$ for Pr$^{3+}$ is also shown.[]{data-label="fig:MFT:diagram"}](fig7){width="\columnwidth"}
Comparing the energies of the dipolar state and the two quadrupolar states given above, we obtain the classical mean-field phase diagram shown in Fig. \[fig:MFT:diagram\]; (i) the macroscopically degenerate dipolar states associated with the nearest-neighbor spin ice for $-1/3<\delta<1-2|q|$, (ii) the planar antiferro-pseudospin (PAF) states showing the 120$^\circ$ structure of $\langle\hat{\vec{\sigma}}_{\bm{r}}\rangle$ in each plane perpendicular to the rods $\bm{q}=\frac{2\pi}{a}h(1,\pm1,\pm1)$ for $\delta>1-2|q|$, and (iii) a planar ferro-pseudospin (PF) state characterized by $\langle\hat{\bm{\sigma}}_{\bm{r}}\rangle=(\cos\Theta,\sin\Theta,0)$ for $\delta<-1/3$ and $\delta<1-|q|/2$ with an arbitrary angle $\Theta$. Note that the degeneracy along the rods $\bm{q}=q(1,\pm1,\pm1)/\sqrt{3}$ in the PAF phase could be lifted by an order-by-disorder mechanism which favors the $\bm{q}=0$ order because of the higher degeneracy, or by a longer-range interaction which is not taken into account in the present paper. The nearest-neighbor spin ice $(\delta,q)=(0,0)$ and Heisenberg antiferromagnet $(\delta,q)=(1,0)$, both of which are marked in Fig. \[fig:MFT:diagram\], show no LRO [@moessner:98; @isakov:04] down to $T=0$ but dipolar spin correlations [@isakov:04]. We stress again that the present mean-field treatment becomes inaccurate around the nearest-neighbor Heisenberg antiferromagnet. Note that the recently studied $120^\circ$ antiferromagnetic planar model [@chern:10] corresponds to the limit of $\delta=|q|\to\infty$.
For Pr$^{3+}$ ions, the planar components $\hat{\vec{\sigma}}_{\bm{r}}$ represent atomic quadrupole moments, as explained in Sec. \[sec:model:moment\]. Note that they are defined in local coordinate frames through $\hat{\vec{\sigma}}_{\bm{r}}=((\hat{\bm{\sigma}}_{\bm{r}}\cdot\bm{x}_{\bm{r}}), (\hat{\bm{\sigma}}_{\bm{r}}\cdot\bm{y}_{\bm{r}}))$. Then, the phases (ii) PAF and (iii) PF are characterized by the following quadrupole order. Since the local frames satisfy Eq. , the collinear PF state has a noncoplanar antiferroquadrupole LRO without any translation-symmetry breaking. On the other hand, the 2D PAF state has a noncollinear alignment of atomic quadrupole moments in each $[111]$ layer. It exhibits a noncoplanar ferroquadrupole (FQ) order having a finite uniform quadrupole moment pointing to the direction of $\bm{q}$ for $q<0$ or a coplanar 120$^\circ$ antiferroquadrupole order for $q>0$. These can be directly shown by using Eqs. and with or without the replacement of $\vec{n}'$ by $\vec{n}$ for $q<0$ or $q>0$, respectively.
Single-tetrahedron analysis {#sec:single}
===========================
It is instructive to investigate the quantum interplay of $\hat{\sigma}^z_{\bm{r}}$ and $\hat{\sigma}^{x,y}_{\bm{r}}$ in the model given by Eq. (\[eq:H\_eff\]) within a single tetrahedron. A similar analysis on a model for Tb$_2$Ti$_2$O$_7$ [@gardner:99] has been employed [@molavian:07].
In the classical case of $\beta=\gamma=0$ and thus $\delta=q=0$, there appear three energy levels in a single tetrahedron:
- Sixfold degenerate “2-in, 2-out” configurations $|\pm X\rangle$, $|\pm Y\rangle$, and $|\pm Z\rangle$ have the energy $-2J_{\mathrm{n.n.}}$ and are characterized by the direction of the net Ising moment on the tetrahedron $T$, $$\hat{\bm{{\cal M}}}_T=\sum_{\bm{r}\in T}\hat{\bm{m}}_{\bm{r}}=M_0\sum_{\bm{r}\in T}\hat{\sigma}_{\bm{r}}^z\bm{z}_{\bm{r}},
\label{eq:M_T}$$ which points to $\pm \bm{X}$, $\pm \bm{Y}$, and $\pm \bm{Z}$ directions in the global frame, respectively. Here, $M_0$ is the local moment amplitude introduced in Eq. .
- Eightfold degenerate “3-in, 1-out” and “1-in, 3-out” configurations have the energy $0$.
- “4-in” and “4-out” configurations $|4+\rangle$ and $|4-\rangle$ have the energy $6J_{\mathrm{n.n.}}$.
With nonzero $\beta$ and $\gamma$ and thus nonzero $\delta$ and $q$, the Hamiltonian Eq. can be diagonalized on a single tetrahedron to yield the following set of eigenvalues and eigenstates for a singlet, three doublets, and three triplets;
1. an $A_{1g}$ singlet which is a superposition of the six “2-in, 2-out” configurations [@onoda:09; @molavian:07];
$$\begin{aligned}
E_{A_{1g}}&=& -2J_{\mathrm{n.n.}}(1-4\delta),
\label{eq:E_A1g}\\
|\Psi_{A_{1g}}\rangle&=&\frac{1}{\sqrt{6}}\sum_{\tau=\pm}\left(|\tau X\rangle+|\tau Y\rangle+|\tau Z\rangle\right).
\label{eq:A1g}\end{aligned}$$
2. two $E_g$ doublets which are superpositions of both “2-in, 2-out” and “4-in”/“4-out” configurations [@onoda:09];
$$\begin{aligned}
E_{E_g}&=& -2 J_{\mathrm{n.n.}}\left(\sqrt{(2+\delta)^2+6 q^2}-1+\delta\right),
\label{eq:E_Eg}\\
|\Psi^\chi_{E_g}\rangle&=&\frac{c}{\sqrt{6}}\sum_{\tau=\pm}\left(e^{i\frac{2\pi}{3}\chi}|\tau X\rangle+e^{-i\frac{2\pi}{3}\chi}|\tau Y\rangle+|\tau Z\rangle\right)
\nonumber\\
&&{}+c'|4\chi\rangle,
\label{eq:Eg}\end{aligned}$$
and
$$\begin{aligned}
&&E_{E_g}'= 2 J_{\mathrm{n.n.}}\left(\sqrt{(2+\delta)^2+6 q^2}+1-\delta\right),
\label{eq:E_Eg'}\\
&&\frac{c'}{\sqrt{6}}\sum_{\tau=\pm}\left(e^{i\frac{2\pi}{3}\chi}|\tau X\rangle+e^{-i\frac{2\pi}{3}\chi}|\tau Y\rangle+|\tau Z\rangle\right)
-c|4\chi\rangle,
\nonumber\\
\label{eq:Eg'}\end{aligned}$$
with a sign $\chi=\pm$ and dimensionless functions $c'=\sqrt{6}q\left\{2\left[(2+\delta)^2+6q^2+(2+\delta)\sqrt{(2+\delta)^2+6q^2}\right]\right\}^{-1/2}$ and $c=\sqrt{1-c'^2}$,
3. a $T_{1u}$ triplet described with the antisymmetric superposition of “2-in, 2-out” configurations
$$\begin{aligned}
&&E_{T_{1u}}= -2J_{\mathrm{n.n.}},
\label{eq:E_T1u}\\
&&\frac{1}{\sqrt{2}}\sum_{\tau=\pm}\tau\left(|\tau X\rangle, |\tau Y\rangle, |\tau Z\rangle \right),
\label{eq:T1u}\end{aligned}$$
and
4. two triplets and a doublet purely comprised of “3-in, 1-out” and “1-in, 3-out” configurations, whose energy levels are given by $-2J_{\mathrm{n.n.}}(\delta\pm2q)$ and $6J_{\mathrm{n.n.}}\delta$, respectively.
In our case of $J_{\mathrm{n.n.}}>0$, the ground state is given by either a singlet $|\Psi_{A_{1g}}\rangle$ (Eq. ) or a doublet $|\Psi^\chi_{E_g}\rangle$ (Eq. ), depending on whether $\delta$ is less or greater than $$\delta_B(q)=-(\sqrt{1+q^2}-1)/2,
\label{eq:delta_B}$$ respectively, as shown in Fig. \[fig:single\].
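These single-tetrahedron levels can be checked by brute-force diagonalization in the $2^4$-dimensional Hilbert space. In the sketch below (illustrative only), the bond phases are assigned so that each of $0$ and $\pm2\pi/3$ is shared by a pair of opposite bonds, which is our reading of Fig. \[fig:pyrochlore\]; any such assignment is related to the others by a local gauge transformation and yields the same spectrum.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
sp, sm = (sx + 1j * sy) / 2, (sx - 1j * sy) / 2

def on_site(op, i, n=4):
    """Embed a single-site operator at site i of the 4-site tetrahedron."""
    mats = [np.eye(2, dtype=complex)] * n
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# bond phases: each of 0 and +-2pi/3 shared by a pair of opposite bonds (assumed assignment)
phases = {(0, 3): 0.0, (1, 2): 0.0,
          (0, 1): -2 * np.pi / 3, (2, 3): -2 * np.pi / 3,
          (0, 2):  2 * np.pi / 3, (1, 3):  2 * np.pi / 3}

def h_tetra(J, delta, q):
    """The effective pseudospin Hamiltonian restricted to a single tetrahedron."""
    H = np.zeros((16, 16), complex)
    for (i, j), phi in phases.items():
        H += J * on_site(sz, i) @ on_site(sz, j)
        H += 2 * J * delta * (on_site(sp, i) @ on_site(sm, j) + on_site(sm, i) @ on_site(sp, j))
        Hq = np.exp(2j * phi) * on_site(sp, i) @ on_site(sp, j)
        H += 2 * J * q * (Hq + Hq.conj().T)
    return H

J, delta, q = 1.0, 0.51, 0.89                  # parameters estimated for Pr3+
E = np.linalg.eigvalsh(h_tetra(J, delta, q))
E_A1g = -2 * J * (1 - 4 * delta)
E_Eg = -2 * J * (np.sqrt((2 + delta) ** 2 + 6 * q ** 2) - 1 + delta)
delta_B = -(np.sqrt(1 + q ** 2) - 1) / 2
print("lowest levels:", E[:3])                 # the first two coincide (E_g doublet)
print("E_Eg analytic:", E_Eg, " E_A1g analytic:", E_A1g)
print("A1g singlet present:", np.isclose(E, E_A1g).any(), " delta_B(q) =", delta_B)
```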
![(Color online) The symmetry of the ground states of the effective Hamiltonian ${\cal H}_{\mathrm{eff}}$ in the space of $\delta$ and $q$ in a single-tetrahedron analysis. []{data-label="fig:single"}](fig8){width="\columnwidth"}
![(Color online) (a) Outward normal vectors (green arrows) of the surfaces of the tetrahedron, used to define the chirality $\kappa_T$. (b) Solid angle subtended by four pseudospins $\bm{\sigma}_{\bm{r}_i}$. (c) Distribution of the tetrahedral magnetic moment $\bm{{\cal M}}_T$ in a cooperative ferroquadrupolar state with $\langle Q_T^{zz}\rangle>0$. The arrows represent the lattice deformation linearly coupled to $Q_T^{zz}$.[]{data-label="fig:chiral-quadrupole"}](fig9rev){width="\columnwidth"}
In the above chiral representation of the doubly degenerate $E_g$ states, $|\Psi_{E_g}^\chi\rangle$, the index $\chi=\pm$ represents the sign of the net pseudospin chirality of the tetrahedron, $$\hat{\kappa}_T=\frac{1}{2}\sum_{\bm{r}_1,\bm{r}_2,\bm{r}_3}^T\
\hat{\bm{\sigma}}_{\bm{r}_1}\cdot\hat{\bm{\sigma}}_{\bm{r}_2}\times\hat{\bm{\sigma}}_{\bm{r}_3},
\label{eq:kappa}$$ through the relation $$\langle\Psi^\chi_{E_g}|\hat{\kappa}_T|\Psi^{\chi'}_{E_g}\rangle=\frac{\sqrt{3}}{2}c^2\chi\delta_{\chi,\chi'}.
\label{eq:kappa_ave}$$ Here, the summation over the sites $\bm{r}_1,\bm{r}_2,\bm{r}_3$ on the tetrahedron $T$ is taken so that they appear counterclockwise about the outward normal to the plane spanned by the three sites for each triangle \[Fig. \[fig:chiral-quadrupole\] (a)\]. $\langle\hat{\kappa}_T\rangle$ gives the solid angle subtended by the four pseudospins \[Fig. \[fig:chiral-quadrupole\] (b)\].
At this level of approximation, the “2-in, 2-out” and “4-in”/“4-out” configurations are totally decoupled from the “3-in, 1-out”/“1-in, 3-out” ones, because no single pseudospin can be flipped by the Hamiltonian Eq. within a single tetrahedron. This decoupling is an artifact and a drawback of the single-tetrahedron analysis, which is resolved by larger system-size calculations in the next section.
Numerics on the 16-site cube {#sec:ED}
============================
![(Color online) The symmetry properties of the ground states obtained with the exact diagonalization of the cube in the periodic boundary condition. The dashed curve is the boundary between the “2-in, 2-out” singlet and the “2-in, 2-out”+”4-in/4-out” doublet for the ground state of the model on a single tetrahedron, as shown in Fig. \[fig:single\]. The point $X=(\delta,q)=(0.51,0.89)$ for Pr$^{3+}$ is also shown.[]{data-label="fig:diagram"}](fig10){width="\columnwidth"}
Next, we perform exact-diagonalization calculations of the model given by Eq. (\[eq:H\_eff\]) on the 16-site cube \[Fig. \[fig:pyrochlore\]\] under periodic boundary conditions. Because the total pseudospin is not conserved, this system size already gives a large Hilbert space, though we can exploit the following symmetry operations;
1. the even-odd parity of the total pseudospin, $\hat{\Sigma}=\prod_{\bm{r}}\hat{\sigma}^z_{\bm{r}}$,
2. the translations $\hat{T}(\bm{R})$ by fcc lattice vectors $\bm{R}=\sum_{i=1,2,3}n_i\bm{R}_i$ with integers $n_i$,
3. the spatial inversion $\hat{I}$ about a site,
4. the threefold rotation $\hat{R}$ about a $(111)$ axis.
Among these symmetry operations, $\hat{\Sigma}$, $\hat{I}$ and $\hat{R}$ commute with each other. In general, $\hat{T}(\bm{R})$ commutes with $\hat{I}_\sigma$ and $\hat{\Sigma}$ but not with $\hat{I}$ and $\hat{R}$. In our case of the 16-site cube with periodic boundary conditions, however, there exist only two nonequivalent translations $\hat{T}(\bm{R}_i)$ with the fcc primitive lattice vectors $\bm{R}_i$ ($i=1,2$), since $\hat{T}(\bm{R}_1)\hat{T}(\bm{R}_2)=\hat{T}(\bm{R}_2)\hat{T}(\bm{R}_1)=\hat{T}(\bm{R}_3)$ in this case, and these two translations then commute with $\hat{I}$. Therefore, we can adopt the following set of commuting operators, $\hat{H}_{\mathrm{eff}}$, $\hat{\Sigma}$, $\hat{T}(\bm{R}_1)$, $\hat{T}(\bm{R}_2)$, and $\hat{I}$. $\hat{R}$ can also be used only in the translationally invariant manifold where both $\hat{T}(\bm{R}_1)$ and $\hat{T}(\bm{R}_2)$ have the eigenvalue $1$.
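The statement that the two inequivalent translations compose to the third on the periodic 16-site cube can be illustrated with a toy bookkeeping of the four fcc cells (a sketch with hypothetical cell labels $0,\dots,3$ and $a=1$):

```python
import numpy as np

a = 1.0
R1, R2, R3 = np.array([0, a/2, a/2]), np.array([a/2, 0, a/2]), np.array([a/2, a/2, 0])
cells = [np.zeros(3), R1, R2, R3]        # the four fcc cells of the 16-site cube

def translate(Rvec):
    """Translation by an fcc vector as a permutation of the four cells (periodic cube of side a)."""
    perm = []
    for c in cells:
        shifted = np.mod(c + Rvec, a)    # wrap back into the cubic cell
        perm.append(next(i for i, d in enumerate(cells) if np.allclose(d, shifted)))
    return tuple(perm)

T1, T2, T3 = translate(R1), translate(R2), translate(R3)
compose = lambda p, q: tuple(p[q[i]] for i in range(4))
print(T1, T2, T3)                                    # (1, 0, 3, 2) (2, 3, 0, 1) (3, 2, 1, 0)
print(compose(T1, T2) == T3, compose(T2, T1) == T3)  # True True: T(R1)T(R2) = T(R2)T(R1) = T(R3)
```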
In Fig. \[fig:diagram\], we show the symmetry properties of the ground state in the parameter space of $\delta$ and $|q|$. Special points corresponding to the nearest-neighbor spin ice and the nearest-neighbor Heisenberg antiferromagnet are denoted as a triangle and a star, respectively, where dipolar spin correlations appear without any LRO [@moessner:98; @isakov:04]. At finite $\delta$ and/or $q$, there appear four regions in the parameter range $\delta, |q|\le1$. The boundary $\delta=\delta_B(q)$ \[Eq. \] between the $A_{1g}$ singlet and the $E_g$ doublet ground states in the single-tetrahedron level, which is shown as the dashed curve, almost gives one of the boundaries. On the left-hand side of the curve, i.e., $\delta\lesssim\delta_B(q)$, the ferroic pseudospin exchange ($\delta<0$) stabilizes a rotationally and translationally invariant singlet ground state with the even parity $I=\Sigma=+1$ for both the spatial inversion and pseudospin parity (filled blue squares). The mean-field result of the collinear ferroic LRO of the planar components of pseudospins should be realized when $\delta$ is negatively large. Therefore, it is plausible to assign most of this region to the PF (AFQ) phase. The ground state having the same symmetry appears in the case of antiferroic pseudospin exchange coupling $\delta>0$ when $|q|$ is much less than $\delta$. Noting that the U(1) spin liquid is stable against a weak antiferroic pseudospin exchange interaction [@hermele:04] and the mean-field result also gives a macroscopically degenerate spin-ice state, this could be assigned to a U(1) spin liquid [@hermele:04] or a quantum spin ice without magnetic dipole LRO. It remains open whether it is also stable against a weak ferroic pseudospin exchange coupling, namely, whether the U(1) spin liquid might appear even in the case of $\delta\lesssim\delta_B(q)$.
On the other hand, increasing $|q|$ out of the above two regions changes the ground state from the singlet to a sextet. The sixfold degeneracy of the ground states is described by a product of (i) the double degeneracy characterized by the eigenvalues $+1$/$-1$ for the spatial inversion $\hat{I}$ and (ii) the threefold degeneracy characterized by the three sets of eigenvalues, $(1,-1,-1)$, $(-1,1,-1)$, and $(-1,-1,1)$, for the translation $(\hat{T}(\bm{R}_1),\hat{T}(\bm{R}_2),\hat{T}(\bm{R}_3))$, or equivalently, the wavevectors $\bm{k}_X=\frac{2\pi}{a}(1,0,0)$, $\bm{k}_Y=\frac{2\pi}{a}(0,1,0)$, and $\bm{k}_Z=\frac{2\pi}{a}(0,0,1)$. In fact, the singlet-sextet transition occurs in two steps. The singlet ground state is first replaced by the submanifold of the above sixfold degenerate states that has an even pseudospin-parity $\Sigma=+1$ (filled red circles). With further increasing $|q|$, it is replaced by the other submanifold that has an odd pseudospin-parity $\Sigma=-1$ (open red circles). Though these states might have an antiferroic LRO of planar components of pseudospins as obtained in the mean-field approximation in Sec. \[sec:MFT\], the determination of a possible LRO in these regions is nontrivial within calculations on a small system size. The particular case of $\delta=0.51$ and $q=0.89$ which we have found for Pr$^{3+}$ is also located in the region of the sixfold degenerate ground state, as shown in Fig. \[fig:diagram\], with the energy $\sim -8.825J_{\mathrm{n.n.}}$ per tetrahedron. In the rest of this paper, we will investigate magnetic dipole, quadrupole, and chiral correlations in this particular case.
Magnetic dipole correlation {#sec:ED:dipole}
---------------------------
First, we calculate the magnetic dipole correlation, $$S(\bm{q})=\frac{M_0^2}{N}\sum_{\bm{r},\bm{r}'}\sum_{i,j}(\delta_{ij}-\frac{q_iq_j}{|\bm{q}|^2})z_{\bm{r}}^i z_{\bm{r}'}^j\langle\hat{\sigma}^z_{\bm{r}}\hat{\sigma}^z_{\bm{r}'}\rangle_{\mathrm{ave}}e^{i\bm{q}\cdot(\bm{r}-\bm{r}')},$$ averaged over the degenerate ground states. For non-Kramers ions such as Pr$^{3+}$, this quantity is relevant to the neutron-scattering intensity integrated over the low-energy region below the crystal-field excitations from the atomic ground-state doublet Eq. (\[eq:local\]), while for Kramers ions, the transverse components $\hat{\sigma}_{\bm{r}}^{x,y}$ must also be taken into account. In Fig. \[fig:Neutron\], we show the profiles of (a) $S(\bm{q})/M_0^2$ and (b) $S(\bm{q})/M_0^2\cdot F_{\mathrm{Pr}^{3+}}(|\bm{q}|)^2$ for $\bm{q}=\frac{2\pi}{a}(hhl)$ with $F_{\mathrm{Pr}^{3+}}(q)$ being the form factor for Pr$^{3+}$. It exhibits maxima at $(001)$ and $(003)$ as well as at $(\frac{3}{4}\frac{3}{4}0)$, and a minimum at $(000)$, as observed in the dipolar spin ice [@bramwell:01]. Note however that the calculated profiles are constructed from the on-site, nearest-neighbor, and second-neighbor correlations, which gives a good approximation when the magnetic dipole correlations remain short-range. Obviously, when the spin correlation length is longer, we need calculations on a larger system size, in particular, around the wavevectors such as (111) and (002), where the pinch-point singularity [@isakov:04; @henley:05] appears as in the case of classical spin ice [@fennell:07]. Nevertheless, the failure of the strict “2-in, 2-out” ice rule can broaden the singularity.
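Schematically, evaluating $S(\bm{q})$ amounts to a double lattice sum weighted by the transverse projector and the local easy axes. The sketch below spells this out; the positions, local axes, and the correlation matrix are placeholders that would be supplied by the exact-diagonalization ground state.

```python
import numpy as np

def structure_factor(qvec, positions, zaxes, corr, M0=1.0):
    """Magnetic structure factor for Ising moments along local <111> axes.

    positions : (N, 3) site coordinates r
    zaxes     : (N, 3) local easy axes z_r
    corr      : (N, N) pseudospin correlations <sigma^z_r sigma^z_r'>
    """
    N = len(positions)
    q = np.asarray(qvec, float)
    # transverse projector delta_ij - q_i q_j / |q|^2 (kept as the identity at q = 0)
    proj = np.eye(3) - (np.outer(q, q) / (q @ q) if q @ q > 0 else 0.0)
    S = 0.0
    for r in range(N):
        for rp in range(N):
            geom = zaxes[r] @ proj @ zaxes[rp]
            phase = np.exp(1j * q @ (positions[r] - positions[rp]))
            S = S + geom * corr[r, rp] * phase
    return (M0 ** 2 / N) * S.real

# toy usage: two antiparallel <111> moments half a lattice constant apart (a = 1)
pos = np.array([[0.0, 0.0, 0.0], [0.5, 0.5, 0.0]])
zax = np.array([[1, 1, 1], [1, -1, -1]]) / np.sqrt(3)
corr = np.array([[1.0, -0.3], [-0.3, 1.0]])          # placeholder correlations
print(structure_factor([2 * np.pi, 0, 0], pos, zax, corr))
```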
At the moment, there are not many experimental results on the magnetic dipole correlations in Pr$_2TM_2$O$_7$. The only currently available one is a powder neutron-scattering experiment on Pr$_2$Sn$_2$O$_7$ [@zhou:08]. It reveals the absence of magnetic Bragg peaks and the enhanced low-energy short-ranged intensity around $|\bm{q}|\sim\frac{2\pi}{a}\sim0.5$ Å$^{-1}$ and $\sim\frac{6\pi}{a}\sim1.5$ Å$^{-1}$. These features can be explained in terms of the peak positions in our calculated profile, which are similar to those of the dipolar spin ice, as shown in Fig. \[fig:PowNeut\]. This experiment also shows a quasielastic peak width of $\sim0.1$ meV saturated at 0.2 K [@zhou:08]. Such a large dynamical spin relaxation rate $\sim J_{\mathrm{n.n.}}$ can be attributed to the appreciable quantum nature of the Hamiltonian, i.e., large $\delta$ and $q$.
![(Color online) Calculated neutron scattering profile (a) $S(\bm{q})/M_0^2$ and (b) $S(\bm{q})F_{\mathrm{Pr}^{3+}}(\bm{q})^2/M_0^2$ for $\bm{q}=\frac{2\pi}{a}(hhl)$, with the form factor $F_{\mathrm{Pr}^{3+}}(\bm{q})$. []{data-label="fig:Neutron"}](Neutron3){width="\columnwidth"}
![(Color online) Calculated neutron scattering profile (a) $S(\bm{q})/M_0^2$ and (b) $S(\bm{q})F_{\mathrm{Pr}^{3+}}(\bm{q})^2/M_0^2$ for $\bm{q}=\frac{2\pi}{a}(hhl)$, with the form factor $F_{\mathrm{Pr}^{3+}}(\bm{q})$. []{data-label="fig:Neutron"}](Neutron3_Fq){width="\columnwidth"}
![(Color online) Powder neutron-scattering intensity. Theoretical curves with/without the form factor (blue/magenta curve) and the experimental results on Pr$_2$Sn$_2$O$_7$ from Ref. .[]{data-label="fig:PowNeut"}](PowNeut){width="\columnwidth"}
Magnetization curve {#sec:ED:MH}
-------------------
![(Color online) The magnetization (left) and energy (right) per site calculated for the $I$-odd(-) ground state and the $I$-even(+) state by adding the Zeeman term $-\bm{H}\cdot\bm{M}$ to ${\cal H}_{\mathrm{eff}}$ at the applied field $\bm{H}\parallel[111]$. The symmetry of the ground-state manifold does not change until a level crossing to the almost fully polarized state occurs at $\mu_B H/J_{\mathrm{n.n.}}\sim4.6$. Experimental data on Pr$_2$Ir$_2$O$_7$ at $T=0.5$ K and 0.06 K from Ref. are also shown for comparison, though they include additional contributions from Ir conduction electrons.[]{data-label="fig:MH"}](fig13rev){width="\columnwidth"}
Next, we show the magnetization curve. The applied magnetic field $\bm{H}\parallel[111]$ partially lifts the ground-state degeneracy associated with the inversion symmetry: it splits the energies of the $I$-odd(-) and $I$-even(+) ground-state manifolds. Then, the ground state has the $I$-odd(-) property. In both submanifolds, the magnetic susceptibility is finite, as seen from the slope of the magnetization curve shown in Fig. \[fig:MH\]. This is consistent with a finite magnetic susceptibility and thus the negative $T_{CW}$ in Pr$_2$Zr$_2$O$_7$ [@matsuhira:09] and Pr$_2$Ir$_2$O$_7$ [@machida:09], and with the absence of an internal magnetic field in Pr$_2$Ir$_2$O$_7$ [@maclaughlin:08]. The ground-state $I$-odd(-) magnetization curve shows a small step or dip around $\mu_BH/J_{\mathrm{n.n.}}\sim1.8$, in comparison with that of the $I$-even(+) excited state. This indicates that this structure develops upon cooling. These results qualitatively agree with the experimental results indicating the absence and the emergence of the metamagnetic transition at $H_c\sim2.3$ T for $T=0.5$ K and 0.06 K, respectively, on Pr$_2$Ir$_2$O$_7$ [@machida:09] \[Fig. \[fig:MH\]\]. Requiring that $\mu_B H_c/J_{\mathrm{n.n.}}\sim1.8$, we can estimate the effective ferromagnetic Ising coupling as $J_{\mathrm{eff}}\sim J_{\mathrm{n.n.}}\sim0.84$ K. However, the magnitude of the magnetization is overestimated by about 25 %, probably because the experimental results on Pr$_2$Ir$_2$O$_7$ include contributions from Ir conduction electrons. Experiments on single crystals of the insulating compound Pr$_2$Zr$_2$O$_7$ are not available at the moment but could directly test our theoretical model at the quantitative level.
Multipole correlations
----------------------
The sixfold degenerate ground state can be written as a linear combination, $$|GS\rangle=\sum_{i=X,Y,Z}\sum_{I=\pm}c_{i,I}|\Psi_{i,I}\rangle,
\label{eq:GS}$$ where the $c_{i,I}$ are complex constants satisfying the normalization condition $\sum_{i,I}|c_{i,I}|^2=1$. Here, we have introduced a ground state $|\Psi_{i,I}\rangle$ associated with $\bm{k}_i$ for both $I=+1$ and $I=-1$, which shows a finite cooperative quadrupole moment defined on each tetrahedron, $\langle\Psi_{i,I}|\hat{{\cal Q}}^{jj}_T|\Psi_{i,I}\rangle=0.0387M_0^2\delta_{ij}$, where $$\hat{{\cal Q}}^{ij}_T=3\hat{{\cal M}}^i_T\hat{{\cal M}}^j_T-\hat{\bm{{\cal M}}}^2_T\delta_{ij},
\mbox{ $i,j=X, Y, Z$ (global axes)}.$$ Namely, for instance, in the ground state sector associated with $\bm{q}=\bm{k}_Z$, the net magnetic moment $\bm{{\cal M}}_T$ in each tetrahedron $T$ is inclined to point to the $\pm Z$ directions with a higher probability than to the $\pm X$ and $\pm Y$, as shown in Fig. \[fig:chiral-quadrupole\] (c). This reflects a $C_3$ symmetry breaking in the choice of the ground-state sector $\bm{q}=\bm{k}_i$. Thus, if we impose the $C_3$ symmetry on the ground state given by Eq. , it is of course possible to cancel the cooperative quadrupole moment, $\langle GS|\hat{{\cal Q}}^{ij}_T|GS\rangle=0$. However, it is natural to expect that the discrete $C_3$ symmetry is eventually broken in the thermodynamic limit. Therefore, we study properties of the ground state having a particular wavevector $\bm{k}_Z$ and thus a direction for the finite cooperative quadrupole moment $\langle GS|\hat{{\cal Q}}^{ZZ}_T|GS\rangle\ne0$. Note that this state shows not only axial alignments of magnetic dipoles but also a broken translational symmetry, and can then be classified as a magnetic analog of a smectic (or crystalline) phase of liquid crystals [@degenne].
The cooperative quadrupole moment $\langle\hat{{\cal Q}}_T^{ii}\rangle$ linearly couples to a lattice distortion, which can be verified experimentally: the four ferromagnetic bonds and the two antiferromagnetic bonds should be shortened and expanded, respectively, leading to a crystal symmetry lowering from cubic to tetragonal accompanied by a compression in the direction of the ferroquadrupole moment \[Fig. \[fig:chiral-quadrupole\] (c)\]. This also indicates that a uniaxial pressure along $[100]$ directions can align possible domains of quadrupole moments. Experimental clarification of magnetic dipole and quadrupole correlations in Pr$_2TM_2$O$_7$ by NMR would be intriguing.
![ (Color online) Upper/lower panels: dominant/subdominant forms of quadrupole-quadrupole correlations $\langle\hat{{\cal Q}}^{ii}_T\hat{{\cal Q}}^{jj}_{T'}\rangle$ between the tetrahedrons $T$ and $T'$ displaced by $\bm{R}=\bm{R}_1$ (a), $\bm{R}_2$ (b), and $\bm{R}_3$ (c) in the cooperative ferroquadrupolar state with the wavevector $\bm{k}_Z$ and $\langle {\cal Q}_T^{zz}\rangle\ne0$. In particular, $\bm{\lambda}_{\bm{R}_1,1}=(-0.803,0.274,0.529)$, $\bm{\lambda}_{\bm{R}_2,1}= (0.274,-0.803,0.529)$, and $\bm{\lambda}_{\bm{R}_3,1}= (1,-1,0)/\sqrt{2}$. Red and blue regions represent the positive and negative values of ${\cal Q}_{\bm{R},1}$.[]{data-label="fig:q"}](fig14){width="\columnwidth"}
Next, we perform numerical calculations of equal-time spatial correlations of cooperative quadrupole moments, $\langle {\cal Q}^{ii}_T{\cal Q}^{jj}_{T'}\rangle$, between the tetrahedrons $T$ and $T'$ displaced by $\bm{R}$ which take matrix form in $i$ and $j$. To characterize the real-space correlations, we diagonalize this matrix to obtain the two correlation amplitudes, $$F^{\cal Q}_{\bm{R},\mu}= \langle\hat{{\cal Q}}_{T,\bm{R},\mu}\hat{{\cal Q}}_{T',\bm{R},\mu}\rangle,
\label{eq:F^Q_R}$$ having orthogonal forms of quadrupoles, $$\hat{{\cal Q}}_{T,\bm{R},\mu}=\sum_{i=X,Y,Z}\lambda_{\bm{R},\mu}^i\hat{Q}_T^{ii},
\label{eq:Q_T}$$ where $\bm{\lambda}_{\bm{R},\mu}=(\lambda_{\bm{R},\mu}^X,\lambda_{\bm{R},\mu}^Y,\lambda_{\bm{R},\mu}^Z)$ with $\mu=1,2$ form a set of orthonormal vectors satisfying $\sum_{i=X,Y,Z}\lambda_{\bm{R},\mu}^i\lambda_{\bm{R},\nu}^i=\delta_{\mu\nu}$ and $\sum_{i=X,Y,Z}\lambda_{\bm{R},\mu}^i=0$. Figures \[fig:q\] (a), (b), and (c) show the contour plots of $\lambda_{\bm{R},\mu}^X X^2+\lambda_{\bm{R},\mu}^Y Y^2+\lambda_{\bm{R},\mu}^Z Z^2$, which represent the diagonalized shapes for the quadrupole-quadrupole correlations $\langle \hat{{\cal Q}}^{ii}_T\hat{{\cal Q}}^{jj}_{T'}\rangle$ between the tetrahedrons $T$ and $T'$ displaced by $\bm{R}_1=(0,a/2,a/2)$, $\bm{R}_2=(a/2,0,a/2)$, and $\bm{R}_3=(a/2,a/2,0)$, respectively. Here, red and blue colors represent the positive and negative values. The upper and lower panels correspond to the forms ${\cal Q}_{\bm{R},1}$ and ${\cal Q}_{\bm{R},2}$ showing the larger and smaller correlation amplitudes, respectively. There exist dominant ferroquadrupolar correlations, shown in the upper panels of (a) and (b), both of which favor ferroquadrupole moments along the $z$ direction. They prevail over the antiferroquadrupole correlations shown in (c), and are responsible for forming the ferroquadrupole order $\langle\hat{{\cal Q}}_T^{zz}\rangle\ne0$.
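The diagonalization described here can be written down in a few lines. In the sketch below (ours; the matrix `C` is a made-up stand-in for the computed $\langle\hat{{\cal Q}}^{ii}_T\hat{{\cal Q}}^{jj}_{T'}\rangle$, and ordering the forms by the magnitude of their amplitude is our choice), the $3\times3$ correlation matrix is restricted to the two-dimensional subspace of vectors with $\sum_i\lambda^i=0$, where the traceless quadrupole forms live, and then diagonalized to give the amplitudes $F^{\cal Q}_{\bm{R},\mu}$ and the orthonormal forms $\bm{\lambda}_{\bm{R},\mu}$:

```python
import numpy as np

def quadrupole_correlation_forms(C):
    """Diagonalize a symmetric 3x3 matrix C_ij = <Q^{ii}_T Q^{jj}_{T'}> within the
    subspace sum_i lambda^i = 0, returning the amplitudes and the lambda vectors."""
    e1 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2.0)   # orthonormal basis of the plane
    e2 = np.array([1.0, 1.0, -2.0]) / np.sqrt(6.0)   # orthogonal to (1, 1, 1)
    B = np.vstack([e1, e2])                # 2x3 restriction to the traceless subspace
    w, v = np.linalg.eigh(B @ C @ B.T)     # diagonalize the restricted 2x2 matrix
    order = np.argsort(np.abs(w))[::-1]    # mu = 1: larger amplitude, mu = 2: smaller
    amplitudes = w[order]
    forms = v[:, order].T @ B              # lambda_{R,mu} back in the X, Y, Z basis
    return amplitudes, forms

# Made-up symmetric example matrix whose rows and columns sum to zero
C = np.array([[ 2.0, -1.0, -1.0],
              [-1.0,  2.0, -1.0],
              [-1.0, -1.0,  2.0]])
F, lam = quadrupole_correlation_forms(C)
print(F)                     # the two correlation amplitudes F^Q_{R,mu}
print(lam, lam.sum(axis=1))  # rows: lambda_{R,mu}; each row sums to 0
```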
To gain insight into a “chiral spin state” observed in Pr$_2$Ir$_2$O$_7$, we have also performed numerical calculations of the chirality-chirality correlation $\langle\hat{\kappa}_T\hat{\kappa}_{T'}\rangle$ between the tetrahedrons at $T$ and $T'$. Note that the chirality $\hat{\kappa}_T$ is a pseudospin chirality defined through Eq. , and is not a simple one defined only with the Ising dipole moments $\hat{\sigma}^z_{\bm{r}}\bm{z}_{\bm{r}}$. It turned out that this pseudospin chirality correlation $\langle\hat{\kappa}_T\hat{\kappa}_{T'}\rangle$ is weakly ferrochiral between the tetrahedrons displaced by $\bm{R}_1$ and $\bm{R}_2$, which corresponds to Figs. \[fig:q\] (a) and (b). On the other hand, it is strongly antiferrochiral between those displaced by $\bm{R}_3$, which corresponds to Fig. \[fig:q\] (c). Namely, the pseudospin chirality, which is a scalar quantity defined on the tetrahedrons forming a diamond lattice, dominantly shows an antiferrochiral correlation on the nearest-neighbor pairs of the same fcc sublattice of the diamond lattice. This points to a strong geometrical frustration for a chirality ordering. The fate of this pseudospin chirality correlation should be examined by further investigations, which may open an intriguing possibility of a chiral spin liquid [@wen:89].
Discussions and summary {#sec:summary}
=======================
The effective quantum pseudospin-$1/2$ model is quite generically applicable to other pyrochlore magnets associated with rare-earth magnetic moments, though the values of the three coupling constants for non-Kramers ions and of the four for Kramers ions may depend strongly on the material. In this paper, we have concentrated on novel quantum effects in the case of non-Kramers ions, in particular, Pr$^{3+}$ ions, where we expect the most pronounced quantum effects among the rare-earth magnetic ions available for magnetic pyrochlores [@subramanian:83; @gardner:10]. The quantum effects may result in two different scenarios, depending on the values of the coupling constants: (i) a quantum spin ice where the quantum-mechanical mixing of “3-in, 1-out” and “1-in, 3-out” configurations could be integrated out to bear quantum effects in magnetic monopoles, and (ii) a ferroquadrupolar state that replaces the spin ice because of a quantum melting. We have obtained a ferroquadrupolar state for the case of Pr$^{3+}$, whose magnetic properties explain currently available experimental observations in Pr$_2$Sn$_2$O$_7$ and Pr$_2$Ir$_2$O$_7$. Note that long-distance properties are still beyond the scope of our present calculations on finite-size systems. Further extensive studies from both theoretical and experimental viewpoints are required for the full understanding of nontrivial quantum effects in these systems, in particular, Pr$_2TM_2$O$_7$ and Tb$_2TM_2$O$_7$. Also from a purely theoretical viewpoint, it will be an intriguing and urgent issue to clarify the fate of deconfined magnetic monopoles in the presence of the large quantum-mechanical interactions we derived.
Effects of coupling of localized $f$-electrons to conduction electrons on the transport properties are left for a future study. A coupling of Pr moments to the atomic and/or delocalized orbital degrees of freedom of conduction electrons allows a flip of the pseudospin-$1/2$ for the magnetic doublet of Pr ion. This could be an origin of the resistivity minimum observed in Pr$_2$Ir$_2$O$_7$ [@nakatsuji:06].
The authors thank L. Balents, M. P. Gingras, S. Nakatsuji, C. Broholm, Y. Machida, K. Matsuhira, and D. MacLaughlin for useful discussions. The work was partially supported by Grants-in-Aid for Scientific Research under No. 19052006 from the Ministry of Education, Culture, Sports, Science, and Technology (MEXT) of Japan and No. 21740275 from the Japan Society for the Promotion of Science, and by the NAREGI Nanoscience Project from the MEXT. S.O. is also grateful for the hospitality during the stay at Johns Hopkins University, where a part of the work was performed.
Crystalline Electric Field {#app:CEF}
==========================
Here, we give the formal expression for the crystalline electric field (CEF) acting on a Pr site. In this Appendix, the Pr site is taken as the origin for convenience. The coordinate frame is chosen as that defined in Fig. \[fig:crystal\] (b).
We consider the CEF created by nearby ions: two oxygen ions at the O1 sites, six $TM$ ions, and six oxygen ions at the O2 sites.
CEF from O1 sites {#app:CEF:O1}
-----------------
Two oxygen ions at the O1 sites are located at $\pm\frac{\sqrt{3}a}{8}\bm{z}$ and produce the Coulomb potential $$\begin{aligned}
\lefteqn{
U_{O1}(\bm{r}) = q_{O1}\sum_{\tau=\pm}\frac{1}{|\bm{r}-\tau\frac{\sqrt{3}a}{8}\bm{z}|}
}
\nonumber\\
&=&\frac{16q_{O1}}{\sqrt{3}a}\sum_{\ell=0}^\infty\left(\frac{8r}{\sqrt{3}a}\right)^{2\ell}\sqrt{\frac{4\pi}{4\ell+1}}Y_{2\ell}^0(\Omega_{\bm{r}}),
\label{eq:CEF:O1}\end{aligned}$$ at a position $\bm{r}$ from the Pr site, where $r=|\bm{r}|$ is assumed to be smaller than $\sqrt{3}{a}/8$, $\Omega_{\bm{r}}$ represents the spherical coordinates of $\bm{r}$, and $q_{O1}\sim-2|e|$ is an effective charge of the oxygen ions at O1 sites.
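As a numerical sanity check on this expansion (ours, with arbitrary illustrative values of $a$ and $q_{O1}$), note that $\sqrt{4\pi/(4\ell+1)}\,Y_{2\ell}^0(\Omega_{\bm{r}})=P_{2\ell}(\cos\theta)$, so the series is an even-order Legendre expansion that can be compared directly with the Coulomb sum over the two O1 charges:

```python
import numpy as np
from scipy.special import eval_legendre

# Illustrative values (lattice constant a = 1 and effective charge q_O1 = 1 are assumptions)
a, q = 1.0, 1.0
d = np.sqrt(3.0) * a / 8.0            # distance of the O1 ions from the Pr site

def U_direct(r, theta):
    """Direct Coulomb sum over the two O1 charges at +/- d z."""
    z, rho = r * np.cos(theta), r * np.sin(theta)
    return q * sum(1.0 / np.hypot(rho, z - tau * d) for tau in (+1.0, -1.0))

def U_series(r, theta, lmax=20):
    """Even-order Legendre series, using sqrt(4 pi/(4 l + 1)) Y_{2l}^0 = P_{2l}(cos theta)."""
    x = np.cos(theta)
    return (2.0 * q / d) * sum((r / d) ** (2 * l) * eval_legendre(2 * l, x)
                               for l in range(lmax + 1))

r, theta = 0.3 * d, 0.7               # any point with r < d
print(U_direct(r, theta), U_series(r, theta))   # the two values agree
```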
CEF from $TM$ sites {#app:CEF:TM}
-------------------
Six $TM$ ions are located at $\frac{a}{2}\bm{y}$ and its symmetry related points obtained by successively applying the sixfold rotation $R_6$ about $z$, and produce the Coulomb potential $$\begin{aligned}
\lefteqn{
U_{TM}(\bm{r}) = q_{TM}\sum_{n=0}^5\frac{1}{|\bm{r}-\frac{a}{2}R_6^n\bm{y}|}
}
\nonumber\\
&=&\frac{12q_{TM}}{a}\sum_{\ell=0}^\infty\left(\frac{2r}{a}\right)^{2\ell}\frac{4\pi}{4\ell+1}\sum_{6|m|\le2\ell}Y_{2\ell}^{6m*}(\Omega_{\bm{r}})Y_{2\ell}^{6m}(\Omega_{TM})
\nonumber\\
&=&\frac{12q_{TM}}{a}\sum_{\ell=0}^3\left(\frac{2r}{a}\right)^{2\ell}\frac{4\pi}{4\ell+1}Y_{2\ell}^0(\Omega_{\bm{r}})Y_{2\ell}^0(\Omega_{TM})
\nonumber\\
&&+\frac{12q_{TM}}{a}\left(\frac{2r}{a}\right)^6\frac{4\pi}{13}\sum_{m=\pm1}Y_6^{6m*}(\Omega_{\bm{r}})Y_6^{6m}(\Omega_{TM})
\nonumber\\
&&+\cdots,
\label{eq:CEF:TM}\end{aligned}$$ at a position $\bm{r}$ from the Pr site, where $r=|\bm{r}|$ is assumed to be smaller than $a/2$, $\Omega_{TM}=(\frac{\pi}{2},\frac{\pi}{2})$, and $q_{TM}\sim+4|e|$ is an effective charge of the $TM$ ions.
CEF from O2 sites {#app:CEF:O2}
-----------------
Six oxygen ions at the O2 sites are located at $\pm(\sqrt{2}(\frac{1}{8}-\eta)\bm{x}+\eta\bm{z})$ and at their symmetry-related points obtained by successively applying the threefold rotation $R_3$ about $\bm{z}$, and produce the Coulomb potential $$\begin{aligned}
\lefteqn{
U_{O2}(\bm{r}) = q_{O2}\sum_{\tau=\pm}\sum_{n=0}^2\frac{1}{|\bm{r}-\tau(\sqrt{2}(\frac{1}{8}-\eta)R_3^n\bm{x}+\eta\bm{z})|}
}
\nonumber\\
&=&\frac{6q_{O2}}{b_{O2}}\sum_{\ell=0}^\infty\left(\frac{r}{b_{O2}}\right)^{2\ell}\frac{4\pi}{4\ell+1}\sum_{3|m|\le2\ell}Y_{2\ell}^{3m*}(\Omega_{\bm{r}})Y_{2\ell}^{3m}(\Omega_{O2}),
\nonumber\\
\label{eq:CEF:O2}\end{aligned}$$ at a position $\bm{r}$ from the Pr site, where $r=|\bm{r}|$ is assumed to be smaller than the Pr-O2 bond length $b_{O2}=\sqrt{3(\frac{1}{32}-\eta/2+3\eta^2)}a$, $\Omega_{O2}=(\theta_2,0)$ with $\theta_2=\arctan(\frac{\sqrt{2}}{\eta}(\frac{1}{8}-\eta))$, and $q_{O2}\sim-2|e|$ is an effective charge of the oxygen ions at O2 sites.
Matrix elements between the $f$-electron wavefunctions {#app:CEF:1}
------------------------------------------------------
To take into account the orbital dependence of the CEF for $f$ electrons ($l=3$ and $m_l=-3,\cdots,3$), it is sufficient to include the following terms in $U(\bm{r})=U_{O1}(\bm{r})+U_{TM}(\bm{r})+U_{O2}(\bm{r})$: $$\begin{aligned}
U(\bm{r})&=&
\sum_{\ell=1}^3u_{2\ell}^{0*}(r)Y_{2\ell}^0(\Omega_{\bm{r}})
\nonumber\\
&&+\sum_{\ell=2}^3\left[u_{2\ell}^{3*}(r)Y_{2\ell}^3(\Omega_{\bm{r}})+u_{2\ell}^{-3*}(r)Y_{2\ell}^{-3}(\Omega_{\bm{r}})\right]
\nonumber\\
&&+u_6^{6*}(r)Y_6^6(\Omega_{\bm{r}})+u_6^{-6*}(r)Y_6^{-6}(\Omega_{\bm{r}}),\end{aligned}$$ where
$$\begin{aligned}
u_{2\ell}^0(r)&=&\frac{16q_{O1}}{\sqrt{3}a}\left(\frac{8r}{\sqrt{3}a}\right)^{2\ell}\sqrt{\frac{4\pi}{4\ell+1}}
\nonumber\\
&&+\frac{12q_{TM}}{a}\left(\frac{2r}{a}\right)^{2\ell}\frac{4\pi}{4\ell+1}Y_{2\ell}^0(\Omega_{TM})
\nonumber\\
&&+\frac{6q_{O2}}{b_{O2}}\left(\frac{r}{b_{O2}}\right)^{2\ell}\frac{4\pi}{4\ell+1}Y_{2\ell}^0(\Omega_{O2}),
\label{eq:u_2l^0}\\
u_{2\ell}^{\pm3}(r)&=&\frac{6q_{O2}}{b_{O2}}\left(\frac{r}{b_{O2}}\right)^{2\ell}\frac{4\pi}{4\ell+1}Y_{2\ell}^{\pm3}(\Omega_{O2}),
\label{eq:u_2l^3}\\
u_6^{\pm6}(r)&=&\frac{6q_{O2}}{b_{O2}}\left(\frac{r}{b_{O2}}\right)^6\frac{4\pi}{13}Y_6^{\pm6}(\Omega_{O2})
\nonumber\\
&&+\frac{12q_{TM}}{a}\left(\frac{2r}{a}\right)^6\frac{4\pi}{13}Y_6^{\pm6}(\Omega_{TM}).\end{aligned}$$
Then, the nonvanishing matrix elements of $$V_{\mathrm{CEF}}^{m_l,m_l'}=\int\!d\Omega\,Y_3^{m_l*}(\Omega)\overline{U(\bm{r})}Y_3^{m_l'}(\Omega)
\label{eq:CEF:V_mm'}$$ are obtained as
$$\begin{aligned}
V_{\mathrm{CEF}}^{m,m}&=&\left\{\begin{array}{r}
-\frac{1}{6}\sqrt{\frac{5}{\pi}}\bar{u}_2^0+\frac{3}{22\sqrt{\pi}}\bar{u}_4^0-\frac{5}{66\sqrt{13\pi}}\bar{u}_6^0
\\
(m=\pm3)
\\
-\frac{7}{22\sqrt{\pi}}\bar{u}_4^0+\frac{5}{11\sqrt{13\pi}}\bar{u}_6^0
\\
(m=\pm2)
\\
\frac{1}{10}\sqrt{\frac{5}{\pi}}\bar{u}_2^0+\frac{1}{22\sqrt{\pi}}\bar{u}_4^0-\frac{25}{22\sqrt{13\pi}}\bar{u}_6^0
\\
(m=\pm1)
\\
\frac{2}{15}\sqrt{\frac{5}{\pi}}\bar{u}_2^0+\frac{3}{11\sqrt{\pi}}\bar{u}_4^0+\frac{50}{33\sqrt{13\pi}}\bar{u}_6^0
\\
(m=0)
\end{array}\right.,\ \ \ \ \
\label{eq:CEF:V_mm}\\
V_{\mathrm{CEF}}^{0,3}&=&\left(V_{\mathrm{CEF}}^{3,0}\right)^*=-V_{\mathrm{CEF}}^{-3,0}=-\left(V_{\mathrm{CEF}}^{0,-3}\right)^*
\nonumber\\
&=&-\frac{3}{22}\sqrt{\frac{7}{\pi}}\bar{u}_4^3+\frac{5}{11}\sqrt{\frac{7}{39\pi}}\bar{u}_6^3,
\label{eq:CEF:V_30}\end{aligned}$$
$$\begin{aligned}
V_{\mathrm{CEF}}^{-1,2}&=&\left(V_{\mathrm{CEF}}^{2,-1}\right)^*=-V_{\mathrm{CEF}}^{-2,1}=-\left(V_{\mathrm{CEF}}^{1,-2}\right)^*
\nonumber\\
&=&-\frac{1}{11}\sqrt{\frac{7}{2\pi}}\bar{u}_4^3-\frac{5}{11}\sqrt{\frac{21}{26\pi}}\bar{u}_6^3,
\label{eq:CEF:V_2-1}\\
V_{\mathrm{CEF}}^{-3,3}&=&\left(V_{\mathrm{CEF}}^{3,-3}\right)^*=-5\sqrt{\frac{7}{429\pi}}\bar{u}_6^6,
\label{eq:VEF:V_6-6}\end{aligned}$$
\[eq:CEF:V\]
where $\overline{U(\bm{r})}$ and $\bar{u}_{2\ell}^m$ represent the radial averages of $U(\bm{r})$ and $u_{2\ell}^m(r)$.
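The angular factors in the matrix elements above follow from Gaunt integrals of three spherical harmonics. As an illustration (ours; it checks only the $\bar{u}_2^0$ coefficient of the $m=\pm3$ diagonal element), SymPy's `gaunt`, which returns $\int Y_{l_1}^{m_1}Y_{l_2}^{m_2}Y_{l_3}^{m_3}\,d\Omega$, reproduces the quoted value $-\frac{1}{6}\sqrt{5/\pi}$ once combined with $Y_3^{3*}=-Y_3^{-3}$:

```python
from sympy import sqrt, pi, simplify, Rational
from sympy.physics.wigner import gaunt

# Coefficient of bar{u}_2^0 in V_CEF^{3,3}: the integral of Y_3^{3*} Y_2^0 Y_3^3,
# using Y_3^{3*} = (-1)^3 Y_3^{-3}
coeff = -gaunt(3, 2, 3, -3, 0, 3)

# Value quoted above for m = +/-3
expected = -Rational(1, 6) * sqrt(5 / pi)
print(simplify(coeff - expected))   # 0
```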
Representation of the ${}^3H_4$ manifold in terms of single $f$-electrons {#app:f}
=========================================================================
$$\begin{aligned}
|M_J=4\sigma\rangle
&=&
\frac{1}{\sqrt{55}}|5,3\sigma;1,\sigma\rangle
-\frac{3}{\sqrt{55}}|5,4\sigma;1,0\rangle
+\frac{3}{\sqrt{11}}|5,5\sigma;1,-\sigma\rangle
\nonumber\\
&=&
\sigma\left[
\sqrt{\frac{2}{165}}\hat{f}^\dagger_{3\sigma,\sigma}\hat{f}^\dagger_{0,\sigma}
+\frac{1}{\sqrt{165}}\hat{f}^\dagger_{2\sigma,\sigma}\hat{f}^\dagger_{\sigma,\sigma}
-\frac{3}{\sqrt{110}}\sum_{\sigma'=\pm}\hat{f}^\dagger_{3\sigma,\sigma'}\hat{f}^\dagger_{\sigma,-\sigma'}
+\frac{3}{\sqrt{11}}\hat{f}^\dagger_{3\sigma,-\sigma}\hat{f}^\dagger_{2\sigma,-\sigma}
\right]|0\rangle,
\label{eq:Jz4}\\
|M_J=3\sigma\rangle
&=&
\sqrt{\frac{3}{55}}|5,2\sigma;1,\sigma\rangle
-\frac{4}{\sqrt{55}}|5,3\sigma;1,0\rangle
+\frac{6}{\sqrt{55}}|5,4\sigma;1,-\sigma\rangle
\nonumber\\
&=&\frac{\sigma}{\sqrt{55}}\left[
\hat{f}^\dagger_{3\sigma,\sigma}\hat{f}^\dagger_{-\sigma,\sigma}
+\sqrt{2}\hat{f}^\dagger_{2\sigma,\sigma}\hat{f}^\dagger_{0,\sigma}
-\frac{4}{\sqrt{3}}\sum_{\sigma'=\pm}\left(\hat{f}^\dagger_{3\sigma,\sigma'}\hat{f}^\dagger_{0,-\sigma'}
+\frac{1}{\sqrt{2}}\hat{f}^\dagger_{2\sigma,\sigma'}\hat{f}^\dagger_{\sigma,-\sigma'}\right)
% \right.\nonumber\\
% &&\left.\ \ \
+6\hat{f}^\dagger_{3\sigma,-\sigma}\hat{f}^\dagger_{\sigma,-\sigma}
\right]|0\rangle,\ \ \
\label{eq:Jz3}\\
|M_J=2\sigma\rangle
&=&
\sqrt{\frac{6}{55}}|5,\sigma;1,\sigma\rangle
-\sqrt{\frac{21}{55}}|5,2\sigma;1,0\rangle
+2\sqrt{\frac{7}{55}}|5,3\sigma;1,-\sigma\rangle
\nonumber\\
&=&\frac{\sigma}{\sqrt{11}}\left[
\frac{1}{\sqrt{7}}\hat{f}^\dagger_{3\sigma,\sigma}\hat{f}^\dagger_{-2\sigma,\sigma}
+\sqrt{\frac{27}{35}}\hat{f}^\dagger_{2\sigma,\sigma}\hat{f}^\dagger_{-\sigma,\sigma}
+\sqrt{\frac{2}{7}}\hat{f}^\dagger_{\sigma,\sigma}\hat{f}^\dagger_{0,\sigma}
-\sqrt{\frac{7}{5}}\sum_{\sigma'=\pm}\left(\frac{1}{\sqrt{2}}\hat{f}^\dagger_{3\sigma,\sigma'}\hat{f}^\dagger_{-\sigma,-\sigma'}
% -\sqrt{\frac{7}{5}}\sum_{\sigma'=\pm}
+\hat{f}^\dagger_{2\sigma,\sigma'}\hat{f}^\dagger_{0,-\sigma'}\right)
\right.\nonumber\\
&&\left.\ \ \
+2\sqrt{\frac{14}{15}}\hat{f}^\dagger_{3\sigma,-\sigma}\hat{f}^\dagger_{0,-\sigma}
+2\sqrt{\frac{7}{15}}\hat{f}^\dagger_{2\sigma,-\sigma}\hat{f}^\dagger_{\sigma,-\sigma}
\right]|0\rangle,
\label{eq:Jz2}\\
|M_J=\sigma\rangle
&=&
\sqrt{\frac{2}{11}}|5,0;1,\sigma\rangle
-2\sqrt{\frac{6}{55}}|5,\sigma;1,0\rangle
+\sqrt{\frac{21}{55}}|5,2\sigma;1,-\sigma\rangle
\nonumber\\
&=&\frac{\sigma}{\sqrt{11}}\left[
\frac{1}{\sqrt{21}}\hat{f}^\dagger_{3\sigma,\sigma}\hat{f}^\dagger_{-3\sigma,\sigma}
+\frac{4}{\sqrt{21}}\hat{f}^\dagger_{2\sigma,\sigma}\hat{f}^\dagger_{-2\sigma,\sigma}
+\frac{5}{\sqrt{21}}\hat{f}^\dagger_{\sigma,\sigma}\hat{f}^\dagger_{-\sigma,\sigma}
-\sqrt{\frac{2}{7}}\sum_{\sigma'=\pm}\hat{f}^\dagger_{3\sigma,\sigma'}\hat{f}^\dagger_{-2\sigma,-\sigma'}
\right.\nonumber\\
&&\left.\ \ \
-3\sqrt{\frac{6}{35}}\sum_{\sigma'=\pm}\hat{f}^\dagger_{2\sigma,\sigma'}\hat{f}^\dagger_{-\sigma,-\sigma'}
-\frac{2}{\sqrt{7}}\sum_{\sigma'=\pm}\hat{f}^\dagger_{\sigma,\sigma'}\hat{f}^\dagger_{0,-\sigma'}
+\sqrt{\frac{7}{5}}\hat{f}^\dagger_{3\sigma,-\sigma}\hat{f}^\dagger_{-\sigma,-\sigma}
+\sqrt{\frac{14}{5}}\hat{f}^\dagger_{2\sigma,-\sigma}\hat{f}^\dagger_{0,-\sigma}
\right]|0\rangle,\ \ \ \ \
\label{eq:Jz1}\\
|M_J=0\rangle
&=&
\sqrt{\frac{3}{11}}|5,-1;1,1\rangle
-\sqrt{\frac{5}{11}}|5,0;1,0\rangle
+\sqrt{\frac{3}{11}}|5,1;1,-1\rangle
\nonumber\\
&=&\sqrt{\frac{5}{77}}\sum_{\sigma=\pm}\left[\sigma\left(
\frac{1}{\sqrt{2}}\hat{f}^\dagger_{3\sigma,-\sigma}\hat{f}^\dagger_{-2\sigma,-\sigma}
+\sqrt{\frac{27}{10}}\hat{f}^\dagger_{2\sigma,-\sigma}\hat{f}^\dagger_{-\sigma,-\sigma}
+\hat{f}^\dagger_{\sigma,-\sigma}\hat{f}^\dagger_{0,-\sigma}\right)
\right.\nonumber\\
&&\left.\ \ \ \ \ \ \ \ \
-\frac{1}{2\sqrt{3}}\hat{f}^\dagger_{3,\sigma}\hat{f}^\dagger_{-3,-\sigma}
-\frac{2}{\sqrt{3}}\hat{f}^\dagger_{2,\sigma}\hat{f}^\dagger_{-2,-\sigma}
-\frac{5}{2\sqrt{3}}\hat{f}^\dagger_{1,\sigma}\hat{f}^\dagger_{-1,-\sigma}
\right]|0\rangle.
\label{eq:Jz0}\end{aligned}$$
\[eq:Jz43210\]
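The expansions of the $|M_J\rangle$ states in terms of $|L=5,m_L;S=1,m_S\rangle$ above are fixed by Clebsch–Gordan coefficients. A short SymPy check of the first line of the $|M_J=4\sigma\rangle$ expansion for $\sigma=+1$ (our illustration; the second-quantized forms are not checked) reads:

```python
from sympy import sqrt
from sympy.physics.quantum.cg import CG

# Clebsch-Gordan coefficients <L=5, mL; S=1, mS | J=4, MJ=4> for the sigma = +1 state
cg = {mS: CG(5, 4 - mS, 1, mS, 4, 4).doit() for mS in (1, 0, -1)}
# Coefficients quoted in the first line of the |M_J = 4 sigma> expansion above
expected = {1: 1 / sqrt(55), 0: -3 / sqrt(55), -1: 3 / sqrt(11)}
print(all((cg[m] - expected[m]).simplify() == 0 for m in (1, 0, -1)))   # True
```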
[9]{}
P. W. Anderson, Phys. Rev. **102**, 1008 (1956).
P. W. Anderson, Mater. Res. Bull. **8**, 153 (1973).
X. G. Wen, F. Wilczek, and A. Zee, Phys. Rev. B **39**, 11413 (1989).
P. A. Lee, Science **321**, 1306 (2009).
J. N. Reimers, A. J. Berlinsky, and A.-C. Shi, Phys. Rev. B **43**, 865 (1991). R. Moessner and J. T. Chalker, Phys. Rev. Lett. **80**, 2929 (1998).
M. J. Harris, S. T. Bramwell, D. F. McMorrow, T. Zeiske, and K. W. Godfrey, Phys. Rev. Lett. **79**, 2554 (1997). A. P. Ramirez [*et al.*]{}, Nature (London) **399**, 333 (1999). S. T. Bramwell and M. J. P. Gingras, Science **294**, 1495 (2001). For a recent review, see J. S. Gardner, M. J. P. Gingras, and J. E. Greedan, Rev. Mod. Phys. **82**, 53 (2010).
L. Pauling, [*The Nature of the Chemical Bond*]{} (Cornell University Press, Ithaca, 1938).
M. Hermele, M. P. A. Fisher, L. Balents, Phys. Rev. B **69**, 064404 (2004).
C. Castelnovo, R. Moessner, and S. L. Sondhi, Nature **451**, 42 (2008).
S. V. Isakov, K. Gregor, R. Moessner, and S. L. Sondhi, Phys. Rev. Lett. **93**, 167204 (2004).
C. L. Henley, Phys. Rev. B **71**, 014424 (2005).
J. S. Gardner [*et al.*]{}, Phys. Rev. Lett. **82**, 1012 (1999). J. S. Gardner, B. D. Gaulin, A. J. Berlinsky, P. Waldron, S. R. Dunsiger, N. P. Raju, and J. E. Greedan, Phys. Rev. B **64**, 224416 (2001). I. Mirebeau, P. Bonville, and M. Hennion, Phys. Rev. B **76**, 184436 (2007).
H. D. Zhou, C. R. Wiebe, J. A. Janik, L. Balicas, Y. J. Yo, Y. Qiu, J. R. D. Copley, and J. S. Gardner, Phys. Rev. Lett. **101**, 227204 (2008).
N. Nagaosa, J. Sinova, S. Onoda, A. H. MacDonald, and N. P. Ong, Rev. Mod. Phys. **82**, 1539 (2010). Y. Machida, S. Nakatsuji, S. Onoda, T. Tayama, and T. Sakakibara, Nature (London) **463**, 210 (2010).
H. Cao, A. Gukasov, I. Mirebeau, P. Bonville, C. Decorse, and G. Dhalenne, Phys. Rev. Lett. **103**, 056402 (2009).
S. Onoda and Y. Tanaka, Phys. Rev. Lett. **105**, 047201 (2010).
J. D. Bernal and R. H. Fowler, J. Chem. Phys. **1**, 515 (1933).
J. Rossat-Mignod, in [*Proceedings of the Nato Advanced Study Institute on Systematics and the Properties of the Lanthanides*]{}, Chap. 7, ed. S. P. Sinha (Reidel, Dordrecht, 1983).
B. C. den Hertog and M. J. P. Gingras, Phys. Rev. Lett. **84**, 3430 (2000).
L. D. C. Jaubert and P. C. W. Holdsworth, Nature Physics **5**, 258 (2009).
G. Ehlers, A. L. Cornelius, M. Orendáč, M. Kjňaková, T. Fennell, S. T. Bramwell, and J. S. Gardner, J. Phys.: Condens. Matter **15**, L9 (2003).
J. Snyder, J. S. Slusky, R. J. Cava, and P. Schiffer, Nature **413**, 48 (2001).
J. Snyder, B. G. Ueland, J. S. Slusky, H. Karunadasa, R. J. Cava, and P. Schiffer, Phys. Rev. B **69**, 064414 (2004).
C. Castelnovo, R. Moessner, and S. L. Sondhi, Phys. Rev. Lett. **104**, 107201 (2010).
H. R. Molavian, M. J. P. Gingras, and B. Canals, Phys. Rev. Lett. **98**, 157204 (2007).
M. A. Subramanian, G. Aravamudan, and G. V. Subba Rao, Prog. Solid St. Chem. **15**, 55 (1983).
K. Matsuhira [*et al.*]{}, J. Phys. Soc. Jpn. **71**, 1576 (2002).
K. Matsuhira [*et al.*]{}, J. Phys.: Conference Series **145**, 012031 (2009).
S. Nakatsuji, Y. Machida, Y. Maeno, T. Tayama, T. Sakakibara, J. van Duijn, L. Balicas, J. N. Millican, R. T. Macaluso, and J. Y. Chan, Phys. Rev. Lett. **96**, 087204 (2006).
D. E. MacLaughlin, Y. Ohta, Y. Machida, S. Nakatsuji, G. M. Luke, K. Ishida, R. H. Heffner, L. Shu, O. O. Bernal, Physica B **404**, 667 (2009).
Effects of an RKKY interaction mediated by conduction electrons have been studied for a possible relevance to Pr$_2$Ir$_2$O$_7$ \[A. Ikeda and H. Kawamura, J. Phys. Soc. Jpn. **77**, 073707 (2008)\]. It has been argued that a magnetic dipole LRO appears.
P. G. de Gennes and J. Prost, [*The Physics of Liquid Crystals*]{}, 2nd ed. (Clarendon, Oxford, 1993).
M. R. Norman, Phys. Rev. B **52**, 1421 (1995).
Y. Machida, Ph. D thesis, Kyoto University (2006).
M. J. P. Gingras, B. C. den Hertog, M. Faucher, J. S. Gardner, S. R. Dunsiger, L. J. Chang, B. D. Gaulin, N. P. Raju, and J. E. Greedan, Phys. Rev. B **62**, 6496 (2000).
R. R. Sharma, Phys. Rev. B **19**, 2813 (1979).
The notation of the phase in the $q$ term of the Hamiltonian has been changed from $\phi_{\bm{r},\bm{r}'}$ in the previous Letter Ref. to $2\phi_{\bm{r},\bm{r}'}$ in the present paper, but the physics is totally the same.
G.-W. Chern, N. Perkins, and Z. Hao, Phys. Rev. B **81**, 125127 (2010).
We owe this point to discussions with L. Balents. The superexchange Hamiltonian between the CEF doublets for Yb$^{3+}$ has recently been obtained microscopically in \[S. Onoda, arXiv:1101.1230\] and introduced phenomenologically in \[J. D. Thompson, P. A. McClarty, H. M. Ronnow, L. P. Regnault, A. Sorge, and M. J. P. Gingras, arXiv:1010.5476\].
D. J. P. Morris, D. A. Tennant, S. A. Grigera, B. Klemke, C. Castelnovo, R. Moessner, C. Czternasty, M. Meissner, K. C. Rule, J.-U. Hoffmann, K. Kiefer, S. Gerischer, D. Slobinsky, R. S. Perry, Science **326**, 411 (2009); T. Fennell, P. P. Deen, A. R. Wildes, K. Schmalzl, D. Prabhakaran, A. T. Boothroyd, R. J. Aldus, D. F. McMorrow, S. T. Bramwell, Science **326**, 415 (2009).
|
---
abstract: 'We say that an ultrafilter on an infinite group $G$ is DTC if it determines the topological centre of the semigroup $\beta G$. We prove that DTC ultrafilters do not exist for virtually BFC groups, and do exist for the countable groups that are not virtually FC. In particular, an infinite finitely generated group is virtually abelian if and only if it does not admit a DTC ultrafilter.'
author:
- Jan Pachl
- 'Juris Steprāns[^1]'
title: DTC ultrafilters on groups
---
Introduction
============
When $G$ is an infinite group, the binary group operation on $G$ extends to the Čech–Stone compactification $\beta G$ in two natural ways. They are defined in section \[sec:prelim\] and, as in [@Dales2010bas Ch.6], denoted by ${\mathbin\text{\Pisymbol{pxsya}{"03}}}$ and ${\mathbin\text{\Pisymbol{pxsyc}{"5E}}}$. Say that $v{\mathord\in}\beta G$ is a *DTC ultrafilter for $\beta G$* if $u{\mathbin\text{\Pisymbol{pxsya}{"03}}}v \neq u{\mathbin\text{\Pisymbol{pxsyc}{"5E}}}v$ for every $u{\mathord\in}\beta G \setminus G$. DTC stands for *determining the (left) topological centre*.
Dales et al prove that the free group ${\mathbb{F}_2}$ admits a DTC ultrafilter [@Dales2010bas 12.22], and that no abelian group does. Here we address the problem of characterizing those groups that admit DTC ultrafilters, a class of groups we call DTC(1). A simple computation examining the quantifiers used in the definition of DTC(1) indicates that the intersection of DTC(1) with a suitable representation of countable infinite groups may not even fall within the projective hierarchy. However, it follows from our results that DTC(1) restricted to the class of finitely generated groups is a Borel set. In fact, we prove that an infinite finitely generated group is virtually abelian if and only if it does not belong to DTC(1). This provides a partial answer to question (13) in [@Dales2010bas Ch.13].
The algebraic property used above to define a DTC ultrafilter is equivalent to a topological one: $v{\mathord\in}\beta G$ is a DTC ultrafilter for $\beta G$ if and only if for every $u{\mathord\in}\beta G \setminus G$ the mapping $w\mapsto u{\mathbin\text{\Pisymbol{pxsya}{"03}}}w$ from $G\cup\{v\}$ to $\beta G$ is discontinuous at $v$. In this paper we do not deal with another notion of sets determining the topological centre, in which the mapping $w\mapsto u{\mathbin\text{\Pisymbol{pxsya}{"03}}}w$ from the whole $\beta G$ to $\beta G$ is required to be discontinuous at $v$. Every infinite group admits a DTC ultrafilter in that sense. Budak et al [@Budak2011mdt] discuss and compare the two notions.
Preliminaries {#sec:prelim}
=============
When $X$ is a set, $\beta X$ is the Čech–Stone compactification of $X$, the set of ultrafilters on $X$ with the usual compact topology [@Hindman1998asc §3.2]. We identify each element of $X$ with the corresponding principal ultrafilter, so that $X\subseteq\beta X$. When $Y\subseteq X$, identify each ultrafilter on $Y$ with its image on $X$, so that $\beta Y \subseteq \beta X$.
If $\{x_\xi\}_{\xi\in I}$ is a net of elements of $X$ indexed by a directed partially ordered set $I$, then ${\mathcal{F}}:= \{ \{ x_\xi \mid \xi \geq \eta \} \mid \eta {\mathord\in}I \}$ is a filter of subsets of $X$. An ultrafilter $u{\mathord\in}\beta X$ is a cluster point of the net $\{x_\xi\}_{\xi}$ in $\beta X$ if and only if ${\mathcal{F}}\subseteq u$.
In accordance with the standard set theory notation, each ordinal is the set of all smaller ordinals, and the least infinite ordinal is $\omega=\{0,1,2,\dots\}$. The cardinality of a set $X$ is ${\lvertX\rvert}$. The set of all subsets of $X$ of cardinality $\kappa$ is $[X]^\kappa$, and the set of all subsets of cardinality less than $\kappa$ is $[X]^{<\kappa}$.
When $G$ is an infinite group and $u,v{\mathord\in}\beta G$, define [@Dales2010bas Ch.6] $$\begin{aligned}
u {\mathbin\text{\Pisymbol{pxsya}{"03}}}v & := \{ A \subseteq G \mid \{ x{\mathord\in}G \mid x^{-1} A {\mathord\in}v \} {\mathord\in}u \} \\
u {\mathbin\text{\Pisymbol{pxsyc}{"5E}}}v & := \{ A \subseteq G \mid \{ x{\mathord\in}G \mid A x^{-1} {\mathord\in}u \} {\mathord\in}v \}\end{aligned}$$ The operations ${\mathbin\text{\Pisymbol{pxsya}{"03}}}$ and ${\mathbin\text{\Pisymbol{pxsyc}{"5E}}}$ are associative, and $u {\mathbin\text{\Pisymbol{pxsya}{"03}}}v, u {\mathbin\text{\Pisymbol{pxsyc}{"5E}}}v {\mathord\in}\beta G$ for $u,v{\mathord\in}\beta G$. Thus $(\beta G,{\mathbin\text{\Pisymbol{pxsya}{"03}}})$ and $(\beta G,{\mathbin\text{\Pisymbol{pxsyc}{"5E}}})$ are semigroups. When $u{\mathord\in}\beta G$, define $u^{{\mathbin\text{\Pisymbol{pxsya}{"03}}}1} := u$ and $u^{{\mathbin\text{\Pisymbol{pxsya}{"03}}}(n+1)} := u {\mathbin\text{\Pisymbol{pxsya}{"03}}}u^{{\mathbin\text{\Pisymbol{pxsya}{"03}}}n}$ for $n{\mathord\in}\omega$, $n>0$.
Say that $D\subseteq\beta G$ is a *(left) DTC set for $\beta G$ (in the sense of Dales–Lau–Strauss [@Dales2010bas])* if $$\forall u{\mathord\in}\beta G \setminus G \quad
\exists v{\mathord\in}D \quad u{\mathbin\text{\Pisymbol{pxsya}{"03}}}v \neq u{\mathbin\text{\Pisymbol{pxsyc}{"5E}}}v .$$ Thus $v{\mathord\in}\beta G$ is a DTC ultrafilter for $\beta G$ (defined in the introduction) if and only if the singleton $\{v\}$ is a DTC set.
When $G$ is a group, denote by $e_G$ its identity element. For any group $G$ and $y,z{\mathord\in}G$ define $$\begin{aligned}
G_{yz} &:= \{ x {\mathord\in}G \mid x^{-1}yx=z \} \\
[y]_G &:= \{ x^{-1}yx \mid x{\mathord\in}G\} \\
{\mathsf{FC}}(G) &:= \{ y{\mathord\in}G \mid [y]_G \text{ is finite} \}\end{aligned}$$ Each $G_{yy}$ is a subgroup of $G$. Clearly $G_{yz}\neq\emptyset$ if and only if $[y]_G=[z]_G$, and in that case $G_{yz}$ is a right coset of $G_{yy}$: If $x^{-1}yx=z$ then $G_{yz}=G_{yy} x$. For a fixed $y{\mathord\in}G$, $\varphi(x):= x^{-1}yx$ defines a mapping $\varphi$ from $G$ onto $[y]_G$ such that $\varphi^{-1}(z)=G_{yz}$ for each $z{\mathord\in}[y]_G$. Hence the cardinality of $[y]_G$ equals the index of $G_{yy}$ in $G$.
${\mathsf{FC}}(G)$ is a normal subgroup of $G$. Say that $G$ is an *ICC group* if $G$ is infinite and ${\mathsf{FC}}(G)=\{e_G \}$. Say that $G$ is an *FC group* if ${\mathsf{FC}}(G)=G$. Say that $G$ is a *BFC group* if $\sup_{y\in G} {\lvert[y]_G\rvert} < \infty$.
When $P$ is a property of groups, a group is said to be *virtually $P$* if it has a subgroup of finite index that has property $P$.
The next lemma gathers the elementary properties of ${\mathbin\text{\Pisymbol{pxsya}{"03}}}$ and ${\mathbin\text{\Pisymbol{pxsyc}{"5E}}}$ needed in the sequel.
\[lem:prelim\] The following hold for any infinite group $G$ and $x,y {\mathord\in}G$, $u,v{\mathord\in}\beta G$:
1. \[lem:prelim:i\] $x{\mathbin\text{\Pisymbol{pxsya}{"03}}}y = x {\mathbin\text{\Pisymbol{pxsyc}{"5E}}}y = xy$.
2. $x{\mathbin\text{\Pisymbol{pxsya}{"03}}}u = x{\mathbin\text{\Pisymbol{pxsyc}{"5E}}}u$.
3. \[lem:prelim:iii\] $u{\mathbin\text{\Pisymbol{pxsya}{"03}}}x = u{\mathbin\text{\Pisymbol{pxsyc}{"5E}}}x$.
4. \[lem:prelim:iv\] If $U{\mathord\in}u$ and $V{\mathord\in}v$ then $UV{\mathord\in}u{\mathbin\text{\Pisymbol{pxsya}{"03}}}v$ and $UV{\mathord\in}u{\mathbin\text{\Pisymbol{pxsyc}{"5E}}}v$.
5. If $H$ is a subgroup of $G$ then $(\beta H,{\mathbin\text{\Pisymbol{pxsya}{"03}}})$ is a subsemigroup of $(\beta G,{\mathbin\text{\Pisymbol{pxsya}{"03}}})$ and $(\beta H,{\mathbin\text{\Pisymbol{pxsyc}{"5E}}})$ is a subsemigroup of $(\beta G,{\mathbin\text{\Pisymbol{pxsyc}{"5E}}})$.
6. \[lem:prelim:viii\] If $G$ is abelian then $u{\mathbin\text{\Pisymbol{pxsya}{"03}}}v = v {\mathbin\text{\Pisymbol{pxsyc}{"5E}}}u$, and in particular $u{\mathbin\text{\Pisymbol{pxsya}{"03}}}u = u {\mathbin\text{\Pisymbol{pxsyc}{"5E}}}u$.
7. \[lem:prelim:ix\] If $v$ is a DTC ultrafilter then $v{\mathord\in}\beta G \setminus G$.
8. \[lem:prelim:x\] If $u,v{\mathord\in}\beta G \setminus G$ then $u {\mathbin\text{\Pisymbol{pxsya}{"03}}}v,u {\mathbin\text{\Pisymbol{pxsyc}{"5E}}}v{\mathord\in}\beta G \setminus G$.
Parts \[lem:prelim:i\] to \[lem:prelim:viii\] follow directly from the definition of ${\mathbin\text{\Pisymbol{pxsya}{"03}}}$ and ${\mathbin\text{\Pisymbol{pxsyc}{"5E}}}$, and \[lem:prelim:ix\] follows from \[lem:prelim:iii\]. Part \[lem:prelim:x\] is a special case of Corollary 4.29 in [@Hindman1998asc].
Say that a group is DTC(0) if it is finite. When $G$ is an infinite group, say $G$ is DTC($\kappa$) if $\kappa$ is the least cardinality of a DTC set for $\beta G$. By the next theorem every infinite group is either DTC(1) or DTC(2).
\[th:twopt\] Let $G$ be an infinite group. Then there is a two-point DTC subset of $\beta G$.
This is a special case of Corollary 12.5 in [@Dales2010bas] and of Theorem 1.2 in [@Budak2011mdt]. Here we give a direct proof using the following lemma.
\[lem:separation\] Let $G$ be an infinite group, and let $A$ be an index set with ${\lvertA\rvert}={\lvertG\rvert}$. For each $\alpha{\mathord\in}A$ let $F_\alpha$ be a finite subset of $G$. Then there are $z_\alpha$ for $\alpha{\mathord\in}A$ such that $$\label{eq:sep}
F_\alpha z_\alpha z_\gamma^{-1} \cap F_\beta z_\beta z_\delta^{-1} = \emptyset
\quad\text{ for }\quad \{\alpha,\beta,\gamma,\delta\}{\mathord\in}[A]^4.$$
Without loss of generality, assume $A$ is the cardinal $\kappa={\lvertG\rvert}$. Define $z_0 = z_1 = z_2 = e_G$. Then proceed by transfinite recursion: For $\beta{\mathord\in}\kappa\setminus \{0,1,2\}$, when $z_\alpha$ have been defined for all $\alpha{\mathord\in}\beta$, take any $$z_\beta {\mathord\in}G \setminus
\bigcup_{\{\alpha,\gamma,\delta\}\in[\beta]^3}
\left(
F_\beta^{-1} F_\alpha z_\alpha z_\gamma^{-1} z_\delta
\cup z_\gamma z_\alpha^{-1} F_\alpha^{-1} F_\delta z_\delta
\right)\;.
\qedhere$$
Put $A:=\{0,1\}\times[G]^{<\aleph_0}$ and $F_{iK}:=K$ for every $(i,K){\mathord\in}A$. By Lemma \[lem:separation\] there are $z_{iK}$ for $(i,K){\mathord\in}A$ such that (\[eq:sep\]) holds. For $i=0,1$ the elements $z_{iK}$ form a net indexed by the directed poset $([G]^{<\aleph_0},\subseteq)$; let $v_i {\mathord\in}\beta G$ be a cluster point of the net $\{ z_{iK} \}_K$. We shall prove that $\{v_0 , v_1 \}$ is a DTC set.
For $i=0,1$ define $$\begin{aligned}
W_i := & \bigcup \{K z_{iK} \mid K {\mathord\in}[G]^{<\aleph_0} \} \\
S_i := & \bigcup \{K z_{iK} z_{iL}^{-1} \mid K,L {\mathord\in}[G]^{<\aleph_0}, K \neq L \}\end{aligned}$$
Since $x^{-1} W_i {\mathord\in}v_i$ for every $x{\mathord\in}G$, it follows that $W_i{\mathord\in}u {\mathbin\text{\Pisymbol{pxsya}{"03}}}v_i$ for every $u{\mathord\in}\beta G$.
Now take any $u{\mathord\in}\beta G \setminus G$. From (\[eq:sep\]) we have $S_0 \cap S_1 = \emptyset$, so there is $i{\mathord\in}\{0,1\}$ such that $S_i \not\in u$. For every $K {\mathord\in}[G]^{<\aleph_0}$ we have $W_i z_{iK}^{-1} \subseteq K \cup S_i $, hence $W_i z_{iK}^{-1} \not\in u$. But $\{ z_{iK} \mid K {\mathord\in}[G]^{<\aleph_0} \} {\mathord\in}v_i$, and from the definition of ${\mathbin\text{\Pisymbol{pxsyc}{"5E}}}$ we get $W_i \not\in u{\mathbin\text{\Pisymbol{pxsyc}{"5E}}}v_i$. We have proved that $u {\mathbin\text{\Pisymbol{pxsya}{"03}}}v_i \neq u{\mathbin\text{\Pisymbol{pxsyc}{"5E}}}v_i$.
Sufficient condition
====================
In this section we prove a sufficient condition for a group to be DTC(2). We start by showing that the DTC(1) and DTC(2) properties are inherited by subgroups of finite index.
\[th:finindex\] Let $G$ be a group and $H$ its subgroup of finite index. Then $G$ is DTC(1) if and only if $H$ is.
As $G$ is the finite union of the left cosets of $H$, for every ultrafilter $u{\mathord\in}\beta G$ there is $y{\mathord\in}G$ such that $yH{\mathord\in}u$, and then $y^{-1}{\mathbin\text{\Pisymbol{pxsya}{"03}}}u = y^{-1}{\mathbin\text{\Pisymbol{pxsyc}{"5E}}}u {\mathord\in}\beta H$.
When $G$ is DTC(1), let $v{\mathord\in}\beta G$ be a DTC ultrafilter for $\beta G$, and let $y{\mathord\in}G$ be such that $y^{-1}{\mathbin\text{\Pisymbol{pxsya}{"03}}}v = y^{-1}{\mathbin\text{\Pisymbol{pxsyc}{"5E}}}v {\mathord\in}\beta H$. Take any $u{\mathord\in}\beta H \setminus H$. Then $u {\mathbin\text{\Pisymbol{pxsya}{"03}}}y^{-1} = u {\mathbin\text{\Pisymbol{pxsyc}{"5E}}}y^{-1} {\mathord\in}\beta G \setminus G$ and $$u {\mathbin\text{\Pisymbol{pxsya}{"03}}}(y^{-1}{\mathbin\text{\Pisymbol{pxsya}{"03}}}v)
= (u {\mathbin\text{\Pisymbol{pxsya}{"03}}}y^{-1}) {\mathbin\text{\Pisymbol{pxsya}{"03}}}v
\neq (u {\mathbin\text{\Pisymbol{pxsyc}{"5E}}}y^{-1}) {\mathbin\text{\Pisymbol{pxsyc}{"5E}}}v
= u {\mathbin\text{\Pisymbol{pxsyc}{"5E}}}(y^{-1}{\mathbin\text{\Pisymbol{pxsyc}{"5E}}}v) .$$ Thus $y^{-1}{\mathbin\text{\Pisymbol{pxsya}{"03}}}v$ is a DTC ultrafilter for $\beta H$, and $H$ is DTC(1).
When $H$ is DTC(1), let $v{\mathord\in}\beta H \subseteq \beta G$ be a DTC ultrafilter for $\beta H$. Take any $u{\mathord\in}\beta G \setminus G$, and let $y{\mathord\in}G$ be such that $y^{-1}{\mathbin\text{\Pisymbol{pxsya}{"03}}}u = y^{-1}{\mathbin\text{\Pisymbol{pxsyc}{"5E}}}u {\mathord\in}\beta H \setminus H$. Then $$y^{-1}{\mathbin\text{\Pisymbol{pxsya}{"03}}}(u{\mathbin\text{\Pisymbol{pxsya}{"03}}}v)
= (y^{-1}{\mathbin\text{\Pisymbol{pxsya}{"03}}}u){\mathbin\text{\Pisymbol{pxsya}{"03}}}v
\neq (y^{-1}{\mathbin\text{\Pisymbol{pxsyc}{"5E}}}u){\mathbin\text{\Pisymbol{pxsyc}{"5E}}}v
= y^{-1}{\mathbin\text{\Pisymbol{pxsyc}{"5E}}}(u{\mathbin\text{\Pisymbol{pxsyc}{"5E}}}v)
= y^{-1}{\mathbin\text{\Pisymbol{pxsya}{"03}}}(u{\mathbin\text{\Pisymbol{pxsyc}{"5E}}}v),$$ hence $u{\mathbin\text{\Pisymbol{pxsya}{"03}}}v \neq u{\mathbin\text{\Pisymbol{pxsyc}{"5E}}}v$. Thus $v$ is a DTC ultrafilter for $\beta G$, and $G$ is DTC(1).
By \[lem:prelim\]\[lem:prelim:viii\] and \[th:finindex\], every infinite virtually abelian group is DTC(2). Next we shall prove that even every infinite virtually BFC group is DTC(2).
\[lem:finhom\] Let $G$ be an infinite group, $H$ a finite group, and $\alpha\colon G \to H$ a surjective homomorphism. Then $\alpha$ extends to a homomorphism $\overline{\alpha}\colon (\beta G,{\mathbin\text{\Pisymbol{pxsya}{"03}}})\to H$ such that for every $u{\mathord\in}\beta G$ and every $h{\mathord\in}H$ we have $\alpha^{-1}(h){\mathord\in}u$ if and only if $\overline{\alpha}(u)=h$.
In fact $\overline{\alpha}$ is the unique continuous extension of $\alpha$ to $\beta G$. This observation is not needed in the sequel.
Write $\overline{\alpha}(u):=h$ when $u{\mathord\in}\beta G$, $h{\mathord\in}H$ and $\alpha^{-1}(h){\mathord\in}u$. That defines a mapping from $\beta G$ onto $H$, because the sets $\alpha^{-1}(h)$, $h{\mathord\in}H$, form a finite partition of $G$, and therefore for every $u{\mathord\in}\beta G$ there is a unique $h{\mathord\in}H$ such that $\alpha^{-1}(h){\mathord\in}u$.
Obviously $\overline{\alpha}(x)=\alpha(x)$ for $x{\mathord\in}G$. To prove $\overline{\alpha}$ is a homomorphism, take any $u,v{\mathord\in}\beta G$ and let $f:=\overline{\alpha}(u)$, $h:=\overline{\alpha}(v)$. Then $\alpha^{-1}(fh) = \alpha^{-1}(f) \alpha^{-1}(h) {\mathord\in}u {\mathbin\text{\Pisymbol{pxsya}{"03}}}v$ by \[lem:prelim\]\[lem:prelim:iv\], hence $\overline{\alpha}(u {\mathbin\text{\Pisymbol{pxsya}{"03}}}v)=fh$.
\[lem:fincommute\] Let $G$ be an infinite group and $y{\mathord\in}G$ such that $[y]_G$ is finite. Let $n\geq 1$ be an integral multiple of ${\lvert[y]_G\rvert}\,!$. Then $$y^{-1}A {\mathord\in}v^{{\mathbin\text{\Pisymbol{pxsya}{"03}}}n} \Leftrightarrow Ay^{-1} {\mathord\in}v^{{\mathbin\text{\Pisymbol{pxsya}{"03}}}n}$$ for all $v{\mathord\in}\beta G$, $A\subseteq G$.
Denote by ${\mathsf{Sym}}([y]_G)$ the group of all permutations of the set $[y]_G$. Define the homomorphism $\alpha\colon G \to {\mathsf{Sym}}([y]_G)$ by $$\alpha(x)(z):=x^{-1}zx, \quad x{\mathord\in}G, z{\mathord\in}[y]_G,$$ and $H:=\alpha(G)\subseteq {\mathsf{Sym}}([y]_G)$. Write $E:=\alpha^{-1}(e_H)$. Then $xy=yx$ for $x{\mathord\in}E$, hence $$E\cap y^{-1}A = E\cap Ay^{-1}$$ for every $A\subseteq G$.
Let $\overline{\alpha}\colon (\beta G,{\mathbin\text{\Pisymbol{pxsya}{"03}}})\to H$ be as in Lemma \[lem:finhom\]. As the order of every element in $H$ divides ${\lvert{\mathsf{Sym}}([y]_G)\rvert}$, it also divides $n$. Hence $\overline{\alpha}(v^{{\mathbin\text{\Pisymbol{pxsya}{"03}}}n}) = \overline{\alpha}(v)^n = e_H$, and thus $E {\mathord\in}v^{{\mathbin\text{\Pisymbol{pxsya}{"03}}}n}$.
It follows that $y^{-1}A {\mathord\in}v^{{\mathbin\text{\Pisymbol{pxsya}{"03}}}n}$ if and only if $E\cap Ay^{-1} = E \cap y^{-1}A {\mathord\in}v^{{\mathbin\text{\Pisymbol{pxsya}{"03}}}n}$ if and only if $Ay^{-1} {\mathord\in}v^{{\mathbin\text{\Pisymbol{pxsya}{"03}}}n}$.
\[th:virtBFC\] Every infinite virtually BFC group is DTC(2).
In view of Theorem \[th:finindex\] it is enough to prove that every infinite BFC group is DTC(2). Take any such $G$ and any $v{\mathord\in}\beta G \setminus G$.
Let $n$ be the factorial of $\max_{y\in G} {\lvert[y]_G\rvert}$. Set $u:=v^{{\mathbin\text{\Pisymbol{pxsya}{"03}}}n}$. Then $u{\mathord\in}\beta G \setminus G$ by \[lem:prelim\]\[lem:prelim:x\]. By Lemma \[lem:fincommute\] we have $y^{-1}A {\mathord\in}u \Leftrightarrow Ay^{-1} {\mathord\in}u$ for all $y{\mathord\in}G$ and $A\subseteq G$, so $w{\mathbin\text{\Pisymbol{pxsya}{"03}}}u = u {\mathbin\text{\Pisymbol{pxsyc}{"5E}}}w$ for all $w{\mathord\in}\beta G$. Therefore $$u{\mathbin\text{\Pisymbol{pxsya}{"03}}}v = v^{{\mathbin\text{\Pisymbol{pxsya}{"03}}}(n+1)} = v {\mathbin\text{\Pisymbol{pxsya}{"03}}}u = u {\mathbin\text{\Pisymbol{pxsyc}{"5E}}}v .
\qedhere$$
Countable groups {#sec:countable}
================
Let $G$ be a countable infinite group, $G= \bigcup_{n\in\omega} F_n$ where $F_0\subseteq F_1\subseteq F_2 \subseteq \dots$ are finite sets. Let $v{\mathord\in}\beta G$ be a cluster point of a sequence $\{x_n\}_{n\in\omega}$ in $G$, and set $W:= \bigcup_n F_n x_n$. From the definition of ${\mathbin\text{\Pisymbol{pxsya}{"03}}}$ we get $W {\mathord\in}u {\mathbin\text{\Pisymbol{pxsya}{"03}}}v$ for every $u{\mathord\in}\beta G$. The next theorem describes a condition that allows a choice of $x_n$ for which $W \not\in u {\mathbin\text{\Pisymbol{pxsyc}{"5E}}}v$ for every $u{\mathord\in}\beta G \setminus G$, so that $v$ is a DTC ultrafilter.
\[th:countable\] Consider four conditions for a countable infinite group $G$:
1. \[th:countable:conda\] There is $V\subseteq G$ such that for every finite $F\subseteq G$ there is $x{\mathord\in}G$ for which $$x\not\in F V \cup F x (F \setminus V) .$$
2. \[th:countable:condb\] There are finite $F_n\subseteq G$, $n{\mathord\in}\omega$, such that $e_G {\mathord\in}F_n = F^{-1}_n$ and $F_n F_n \subseteq F_{n+1}$ for all $n$, and $G= \bigcup_{n\in\omega} F_n$. There is a sequence $\{x_n\}_{n\in\omega}$ in $G$ such that $$\begin{aligned}
\label{eq:a1}
F_n x_n x^{-1}_i \cap F_k x_k x^{-1}_j & = \emptyset \quad\text{for}\;\; i,j< k < n \\
\label{eq:a2}
F_n x_n x^{-1}_i \cap F_n x_n x^{-1}_j & = \emptyset \quad\text{for}\;\; i < j < n\end{aligned}$$
3. \[th:countable:condc\] There are $v{\mathord\in}\beta G \setminus G$ and $W\subseteq G$ such that $W{\mathord\in}u{\mathbin\text{\Pisymbol{pxsya}{"03}}}v$ for all $u{\mathord\in}\beta G$ and $W\not\in u{\mathbin\text{\Pisymbol{pxsyc}{"5E}}}v$ for all $u{\mathord\in}\beta G \setminus G$.
4. \[th:countable:condd\] $G$ is DTC(1).
Then \[th:countable:conda\]$\Rightarrow$\[th:countable:condb\]$\Rightarrow$\[th:countable:condc\]$\Rightarrow$\[th:countable:condd\].
Write $G= \bigcup_{n\in\omega} F_n$ with finite sets $F_n\subseteq G$ such that $e_G {\mathord\in}F_n = F^{-1}_n$ and $F_n F_n \subseteq F_{n+1}$ for all $n$. Assuming \[th:countable:conda\], $e_G {\mathord\in}V$ because otherwise we would have $x{\mathord\in}\{e_G\} x (\{e_G\} \setminus V)$ for all $x{\mathord\in}G$. A recursive construction yields a sequence of $x_n$ such that $$\begin{aligned}
\label{eq:b1}
x_n & \not\in x_i V \cup F_{n+1} x_k x^{-1}_i x_j \quad\text{for}\;\; i,j < k < n \\
\label{eq:b2}
x_n & \not\in F_{n+1} x_n x^{-1}_i x_j \quad\quad\quad\;\;\text{for}\;\; i < j < n\end{aligned}$$ Since (\[eq:a1\]) follows from (\[eq:b1\]) and (\[eq:a2\]) follows from (\[eq:b2\]), this proves \[th:countable:conda\]$\Rightarrow$\[th:countable:condb\].
Now assume \[th:countable:condb\]. From (\[eq:a2\]) we get $x_i \neq x_j$ for $i\neq j$. Let $v{\mathord\in}\beta G \setminus G$ be a cluster point of the sequence $\{x_n\}_n$, and put $W:= \bigcup_n F_n x_n$. Then $W {\mathord\in}u {\mathbin\text{\Pisymbol{pxsya}{"03}}}v$ for every $u{\mathord\in}\beta G$.
Take any $u{\mathord\in}\beta G \setminus G$. From (\[eq:a1\]) and (\[eq:a2\]), for $i<j$ we get $$W x^{-1}_i \cap W x^{-1}_j
= \left( \bigcup_{n\in\omega} F_n x_n x^{-1}_i \right)
\cap \left( \bigcup_{k\in\omega} F_k x_k x^{-1}_j \right)
\subseteq \bigcup_{k=0}^j F_k x_k x^{-1}_j$$ Thus the intersection $W x^{-1}_i \cap W x^{-1}_j$ is finite for $i\neq j$. Hence there is at most one $i{\mathord\in}\omega$ for which $W x^{-1}_i {\mathord\in}u$. Since $v{\mathord\in}\beta G \setminus G$ and $\{x_i \mid i{\mathord\in}\omega\} {\mathord\in}v$, it follows that $\{ x \mid W x^{-1} {\mathord\in}u \} \not\in v$, and $W\not\in u {\mathbin\text{\Pisymbol{pxsyc}{"5E}}}v$ from the definition of ${\mathbin\text{\Pisymbol{pxsyc}{"5E}}}$. That proves \[th:countable:condb\]$\Rightarrow$\[th:countable:condc\].
Obviously \[th:countable:condc\]$\Rightarrow$\[th:countable:condd\].
We do not know if condition \[th:countable:condd\] in Theorem \[th:countable\] is inherited from quotients, but condition \[th:countable:conda\] is:
\[prop:quotient\] Let condition \[th:countable\]\[th:countable:conda\] hold for a group $G$. Let $H$ be a group with a surjective homomorphism $\pi\colon H \to G$. Then condition \[th:countable\]\[th:countable:conda\] holds also for $H$ in place of $G$ and $\pi^{-1}(V)$ in place of $V$.
Write $U:=\pi^{-1}(V)$. Then $\pi(F\setminus U) = \pi(F) \setminus V$ for every $F\subseteq H$.
Take any finite $F\subseteq H$. By the assumption there is $x{\mathord\in}G$ such that $x\not\in \pi(F) V \cup \pi(F) x (\pi(F) \setminus V)$. Let $y{\mathord\in}H$ be such that $\pi(y)=x$. Then $$\pi(y) \not\in \pi(F) \pi(U) \cup \pi(F) \pi(y) \pi(F \setminus U),$$ hence $y\not\in F U \cup F y (F \setminus U)$.
For what follows we need the following known results.
\[lem:threelem\]
1. \[lem:neumann\] (B.H. Neumann’s theorem) Let $G$ be a group such that $G=\bigcup_{i=0}^n x_i G_i$ where $x_i {\mathord\in}G$ and $G_i$ is a subgroup of $G$ for $i=0,1,\dots,n$. Then at least one of the groups $G_i$ has finite index in $G$.
2. \[lem:schreier\] (Schreier’s subgroup lemma) Every subgroup of finite index in a finitely generated group is finitely generated.
3. \[lem:fingenvab\] Every finitely generated FC group is virtually abelian.
\[lem:neumann\], \[lem:schreier\] and \[lem:fingenvab\] are respectively Lemma 4.17, Theorem 1.41 and a corollary of Theorem 4.32 in [@Robinson1972fcg].
\[th:FCinf\] Let $G$ be a countable group such that ${\lvertG/{\mathsf{FC}}(G)\rvert}=\aleph_0$. Then $G$ satisfies condition \[th:countable\]\[th:countable:conda\], and hence is DTC(1).
We shall prove that $G$ satisfies condition \[th:countable\]\[th:countable:conda\] with $V={\mathsf{FC}}(G)$. Take any finite set $F\subseteq G$. If $y{\mathord\in}V$ and $z{\mathord\in}F\setminus V$ then $G_{yz}=\emptyset$, and for $y{\mathord\in}F^{-1}\setminus V$ the index of $G_{yy}$ is infinite. Therefore by \[lem:threelem\]\[lem:neumann\] there is $$x \not\in FV \cup \bigcup_{y\in F^{-1}}\;\;
\bigcup_{z\in F\setminus V} G_{yz}$$ which means that $x\not\in FV \cup F x (F\setminus V)$.
\[cor:icc\] Every countable ICC group satisfies condition \[th:countable\]\[th:countable:conda\], and hence is DTC(1).
\[cor:iccquot\] Every countable group that has an ICC quotient is DTC(1).
Apply Proposition \[prop:quotient\] and Corollary \[cor:icc\].
\[cor:vnilp\] An infinite finitely generated group is DTC(2) if and only if it is virtually abelian.
Let $G$ be an infinite finitely generated group. If $G$ is virtually abelian then it is DTC(2) by Theorem \[th:finindex\]. If $G$ is DTC(2) then ${\lvertG/{\mathsf{FC}}(G)\rvert}<\aleph_0$ by Theorem \[th:FCinf\]. In that case ${\mathsf{FC}}(G)$ is finitely generated by \[lem:threelem\]\[lem:schreier\], hence $G$ is virtually abelian by \[lem:threelem\]\[lem:fingenvab\].
Example \[ex:redprod\] shows that the assumption that the group is finitely generated cannot be omitted in Corollary \[cor:vnilp\].
Examples
========
Dales et al [@Dales2010bas 12.22] prove that the free group ${\mathbb{F}_2}$ is DTC(1). This follows from Corollary \[cor:icc\], since non-commutative free groups are ICC [@Ceccherini2010cag Ex.8.3].
The comment after [@Dales2010bas 12.22] asks whether there is an amenable semigroup $S$ and an ultrafilter in $\beta S$ that determines the topological centre of $\mathsf{M}(\beta S)$. In this section we exhibit several examples of a slightly weaker property: An amenable group $G$ and an ultrafilter in $\beta G$ that determines the topological centre of $\beta G$; that is, a DTC ultrafilter. The first such example is the group ${{\mathsf{Sym}}_{<\aleph_0}(\omega)}$ of finite permutations of $\omega$. This group is ICC [@Ceccherini2010cag Ex.8.3], hence again DTC(1) by \[cor:icc\]. More generally we obtain other subgroups of ${{\mathsf{Sym}}_{<\aleph_0}(\omega)}$ that are DTC(1):
\[ex:finsym\] *Subgroups of ${{\mathsf{Sym}}_{<\aleph_0}(\omega)}$ that act transitively on $\omega$.*
Let $G$ be a subgroup of ${{\mathsf{Sym}}_{<\aleph_0}(\omega)}$ that acts transitively on $\omega$. We shall prove that $G$ is ICC and therefore DTC(1).
For $x{\mathord\in}{{\mathsf{Sym}}_{<\aleph_0}(\omega)}$ denote by ${\mathsf{supp}}(x)$ the support of $x$.
Take any $y{\mathord\in}G\setminus\{e_G\}$ and finite $F\subseteq G$ for which $y{\mathord\in}F$. There are $a,b{\mathord\in}\omega$ such that $y(a)=b\neq a$. By transitivity there is $x{\mathord\in}G$ such that $x(a)\not\in\bigcup_{z\in F} {\mathsf{supp}}(z)$. Then $xyx^{-1}(x(a))=x(b)\neq x(a)$, hence $x(a){\mathord\in}{\mathsf{supp}}(xyx^{-1})$, hence $xyx^{-1}\not\in F$. Thus $[y]_G$ is infinite.
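The conjugation step of this argument can be illustrated by a toy computation (ours; finite permutations are encoded as Python dicts on their supports, and the particular $y$ and $F$ below are arbitrary choices):

```python
# A finite-support permutation is represented as a dict {i: sigma(i)} for i in its support.
def conj(x, y):
    """The conjugate x y x^{-1}, acting as i -> x(y(x^{-1}(i)))."""
    xinv = {v: k for k, v in x.items()}
    out = {}
    for i in set(x) | set(y) | set(xinv):
        j = xinv.get(i, i)          # apply x^{-1}
        j = y.get(j, j)             # then y
        j = x.get(j, j)             # then x
        if j != i:
            out[i] = j
    return out

y = {0: 1, 1: 0}                    # the transposition (0 1), so y(0) = 1 != 0
F = [y, {1: 2, 2: 1}]               # some finite subset of the group containing y
supp = set().union(*F)              # union of the supports of the elements of F
x = {0: max(supp) + 1, max(supp) + 1: 0}   # x moves 0 out of every support in F
print(conj(x, y), conj(x, y) in F)  # the conjugate moves a fresh point, so it is not in F
```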
\[ex:metab\] *A finitely generated metabelian group of exponential growth and generalizations.*
Let $R$ be a countable infinite integral domain, and $P$ an infinite multiplicative subgroup of $R$. Let $G$ be the set $P\times R$ with multiplication defined by $$(x,r)(y,s) := (xy, r + s x )
\quad\text{for}\quad x,y {\mathord\in}P, r,s {\mathord\in}R .$$ The mapping $$(x,r) \mapsto
\begin{pmatrix}
x& r \\
0& 1
\end{pmatrix}$$ is an isomorphism between $G$ and a group of $2\times 2$ matrices with the usual matrix multiplication. A particular instance, in which $R$ is the ring of dyadic rationals and $P$ is the multiplicative group of integer powers of 2, is a metabelian group with two generators and exponential growth [@Ceccherini2010cag 6.7.1].
Write $0:=0_R$ and $1:=1_R$ and note that $e_G=(1,0)$ and $(x,r)^{-1}=(x^{-1},-rx^{-1})$.
We shall prove that $G$ is ICC and therefore DTC(1). Take any $(y,s){\mathord\in}G\setminus\{e_G\}$ and finite $F\subseteq G$. Write $S:=\{t{\mathord\in}R \mid (y,t) {\mathord\in}F \}$. By cancellability in $R$ we get:
- If $s\neq 0$ then there exists $x{\mathord\in}P$ such that $sx\not\in S$. In that case let $r:=0$.
- If $s=0$ then $y\neq 1$, and there exists $r{\mathord\in}R$ such that $r(1-y)\not\in S$. In that case let $x:=1$.
Thus in both cases there exists $(x,r){\mathord\in}G$ such that $r+sx-ry \not\in S$, hence $$(x,r)(y,s)(x,r)^{-1} = (y,r + sx -ry) \not\in F .$$ That proves $[(y,s)]$ is infinite.
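The group law, the matrix representation and the conjugation identity used above are easy to verify symbolically. The following sketch (ours, purely illustrative and not needed for the proof) does so with SymPy:

```python
from sympy import symbols, simplify, Matrix

x, r, y, s = symbols('x r y s')

def mul(g, h):
    """Group law (x, r)(y, s) = (x y, r + s x)."""
    return (g[0] * h[0], g[1] + h[1] * g[0])

def inv(g):
    """(x, r)^{-1} = (x^{-1}, -r x^{-1})."""
    return (1 / g[0], -g[1] / g[0])

# Conjugation identity used in the ICC argument: (x,r)(y,s)(x,r)^{-1} = (y, r + s x - r y)
conj = mul(mul((x, r), (y, s)), inv((x, r)))
print(simplify(conj[0] - y), simplify(conj[1] - (r + s * x - r * y)))        # 0 0

# The matrix map (x, r) -> [[x, r], [0, 1]] turns the group law into matrix multiplication
M = lambda g: Matrix([[g[0], g[1]], [0, 1]])
print((M((x, r)) * M((y, s)) - M(mul((x, r), (y, s)))).applyfunc(simplify))  # zero matrix
```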
\[ex:heisenberg\] *Discrete Heisenberg group and generalizations.*
Let $R$ be a countable infinite integral domain. Let $G$ be $R\times R\times R$ with the multiplication $$(a,b,c)(p,q,r) := (a+p,b+q,c+r+aq)
\quad\text{for}\quad a,b,c,p,q,r {\mathord\in}R.$$ The mapping $$(a,b,c) \mapsto
\begin{pmatrix}
1& a& c \\
0& 1& b \\
0& 0& 1
\end{pmatrix}$$ is an isomorphism between $G$ and a group of $3\times 3$ matrices with the usual matrix multiplication. For the special case $R={\mathbb{Z}}$, the ring of integers, this is the *discrete Heisenberg group*. In that case $G$ is finitely generated and nilpotent, hence has no ICC quotients by the Duguid–McLain theorem [@Frisch2018nvn]. Nevertheless Theorem \[th:FCinf\] applies to $G$, as will now be shown, so that $G$ is DTC(1).
Write $0:=0_R$ and $1:=1_R$ and note that $e_G=(0,0,0)$ and $(a,b,c)^{-1} = (-a,-b,ab-c)$.
Put $V:=\{(0,0,c) \mid c{\mathord\in}R \}$. We shall prove that ${\mathsf{FC}}(G)=V$.
Clearly $[(0,0,c)]=\{(0,0,c)\}$ for every $c{\mathord\in}R$, hence $V\subseteq {\mathsf{FC}}(G)$. Take any $(p,q,r)\not\in V$ and finite $F\subseteq G$. Write $S:=\{t{\mathord\in}R \mid (p,q,t) {\mathord\in}F \}$. By cancellability in $R$ we get:
- If $p\neq 0$ then there exists $b{\mathord\in}R$ such that $r-bp\not\in S$. In that case let $a:=0$.
- If $p=0$ then $q\neq 0$, and there exists $a{\mathord\in}R$ such that $r+aq\not\in S$. In that case let $b:=0$.
Thus in both cases there exists $(a,b,0){\mathord\in}G$ such that $r+aq-bp\not\in S$, hence $$(a,b,0)(p,q,r)(a,b,0)^{-1}
= (p,q,r+aq-bp) \not\in F .$$ That proves $[(p,q,r)]$ is infinite. Thus ${\mathsf{FC}}(G)=V$, and so ${\lvertG/{\mathsf{FC}}(G)\rvert}=\aleph_0$.
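As a quick check of the conjugation formula used above (an illustrative SymPy computation of ours, via the $3\times3$ matrix form; it is not needed for the argument), one can verify both the conjugate of a general element and the fact that the elements $(0,0,c)$ are central:

```python
from sympy import symbols, simplify, Matrix

a, b, p, q, r, c, u, v, w = symbols('a b p q r c u v w')

def M(x, y, z):
    """Matrix form of (x, y, z), as in the displayed isomorphism."""
    return Matrix([[1, x, z], [0, 1, y], [0, 0, 1]])

# Conjugation used above: (a,b,0)(p,q,r)(a,b,0)^{-1} = (p, q, r + a q - b p)
diff = M(a, b, 0) * M(p, q, r) * M(a, b, 0).inv() - M(p, q, r + a * q - b * p)
print(diff.applyfunc(simplify))     # zero matrix

# Elements (0, 0, c) commute with everything, so [(0,0,c)] = {(0,0,c)}
diff0 = M(u, v, w) * M(0, 0, c) * M(u, v, w).inv() - M(0, 0, c)
print(diff0.applyfunc(simplify))    # zero matrix
```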
\[ex:redprod\] *Reduced power of a finite group.*
Let $H$ be a finite group, and $I$ an infinite index set. Let $H^I$ be the product group, and $G \subseteq H^I$ the *reduced product*, i.e. the subgroup of those $h=(h_i)_{i\in I}{\mathord\in}H^I$ for which $h_i \neq e_H$ for only finitely many coordinates $i$. Then $G$ is BFC, hence DTC(2) by Theorem \[th:virtBFC\]. If $H$ is not abelian then $G$ is not virtually abelian.
Open problems
=============
In view of Theorem \[th:virtBFC\] it is natural to ask
Is it true that every infinite (or at least every countable infinite) FC group is DTC(2)?
A positive answer would yield an improvement of Corollary \[cor:vnilp\]: It would then follow that a countable infinite group is DTC(2) if and only if it is virtually FC. However, as mentioned in the introduction, we do not even know if the countable DTC(1) groups form a projective set. If the answer to the following question is positive, then it would show that DTC(1) is at least analytic.
Does Condition \[th:countable:condd\] of Theorem \[th:countable\] imply Condition \[th:countable:conda\]?
The results in section \[sec:countable\] are specific to countable groups. That raises
Which results in section \[sec:countable\] generalize to uncountable groups?
[1]{}
Budak, T., I[ş]{}[i]{}k, N., and Pym, J. S. *Minimal determinants of topological centres for some algebras associated with locally compact groups*. Bull. London Math. Soc. **43** (2011), 495–506.
Ceccherini-Silberstein, T., and Coornaert, M. *Cellular automata and groups*. Springer Monographs in Mathematics. Springer-Verlag, Berlin, 2010.
Dales, H. G., Lau, A. T.-M., and Strauss, D. *Banach algebras on semigroups and on their compactifications*. Mem. Amer. Math. Soc. **205**, 966 (2010).
Frisch, J., and Ferdowsi, P. V. *Non-virtually nilpotent groups have infinite conjugacy class quotients*. arXiv:1803.05064v1.
Hindman, N., and Strauss, D. *Algebra in the [S]{}tone-Čech compactification*, de Gruyter Expositions in Mathematics, Vol. 27. Walter de Gruyter & Co., Berlin, 1998.
Robinson, D. J. S. *Finiteness conditions and generalized soluble groups. [P]{}art 1*. Ergebnisse der Mathematik und ihrer Grenzgebiete, Band 62. Springer-Verlag, New York-Berlin, 1972.
Jan Pachl\
Toronto, Ontario\
Canada\
Juris Steprāns\
Department of Mathematics and Statistics\
York University\
Toronto, Ontario\
Canada
[^1]: Research supported by NSERC.
|
---
address: |
$^{*}$ Low Temperature Laboratory, Aalto University, School of Science and Technology, P.O. Box 15100, FI-00076 AALTO, Finland\
$^+$ Landau Institute for Theoretical Physics RAS, Kosygina 2, 119334 Moscow, Russia
author:
- 'Tero T. Heikkilä $^{*}$[^1] and G.E. Volovik $^{*+}$ [^2]'
title: 'Dimensional crossover in topological matter: Evolution of the multiple Dirac point in the layered system to the flat band on the surface'
---
Introduction
============
Topological matter is characterized by nontrivial topology of the Green’s function in momentum space [@Volovik2003; @Horava2005]. The topological objects in momentum space (zeroes in the spectrum of fermionic quasiparticles) in many respects are similar to the topological defects in real space, and are also described by different homotopy groups including the relative homotopy groups. In particular, the Fermi surface is the momentum-space analog of the vortex loop in superfluids/superconductors; the Fermi point (or Dirac point) corresponds to the real-space point defects, such as hedgehog (monopole) in ferromagnets; the fully gapped topological matter is characterized by skyrmions in momentum space, which are analogs of non-singular objects – textures; etc.
Here we discuss the transformations of the topologically protected zeroes, which occur during the dimensional crossover from a 2-dimensional to a 3-dimensional system. We consider the dimensional crossover which involves such topological objects as a nodal line in a 3D system; the flat band, which is an analog of the vortex sheet; and the topologically protected Dirac points with multiple topological charge $|N|>1$ in quasi 2-dimensional substance, which are analogous to the multiply quantized vortex.
Fermi bands, where the energy vanishes in a finite region of momentum space so that the zeroes in the fermionic spectrum have co-dimension 0, have been discussed in different systems. The flat band appears in the so-called fermionic condensate [@Khodel1990; @NewClass; @Volovik2007; @Shaginyan2010]. A topologically protected flat band exists in the spectrum of fermion zero modes localized in the core of some vortices [@KopninSalomaa1991; @Volovik1994; @MisirpashaevVolovik1995]. In particle physics, the Fermi band (called the Fermi ball) appears in a 2+1 dimensional nonrelativistic quantum field theory which is dual to a gravitational theory in the anti-de Sitter background with a charged black hole [@Sung-SikLee2009]. The flat band has also been discussed on the surface of multi-layered graphene [@Guinea2006] and on the surface of superconductors without inversion symmetry [@SchnyderRyu2010].
The topologically protected 2-dimensional and 3-dimensional Dirac points with multiple topological charge $N$ were considered both in condensed matter [@VolovikKonyshev1988; @Volovik2003; @Volovik2007; @Manes2007; @Dietl-Piechon-Montambaux2008; @Chong2008; @Banerjee2009; @Sun2010; @Fu2010; @HeikkilaVolovik2010] and for relativistic quantum vacua [@Volovik2001; @Volovik2003; @KlinkhamerVolovik2005; @Volovik2007]. In the vicinity of the multiple Dirac point with topological charge $N$ the spectrum may have the form $E^2 \propto p^{2N}$. We consider the special model of the multilayered system discussed in [@HeikkilaVolovik2010], where the topological charge $N$ of the Dirac point coincides with the number of layers. In this model, when $N\rightarrow \infty$, the multiple Dirac point transforms to the flat band in the finite region of the two-dimensional momentum on the surface of the sample. The interior layers in the limit $N\rightarrow \infty$ transform to the bulk state, which represents a semi-metal in which the nodal line (line of zeroes) forms a spiral. The projection of this spiral onto the edge layer produces the boundary of the flat band. The latter is similar to what occurs in superconductors without inversion symmetry, where the region of the flat band on the surface is also determined by the projection of the topological nodal line in the bulk on the corresponding surface [@SchnyderRyu2010].
Flat band and spiral nodal line
===============================
Let us first consider the model in Ref. [@HeikkilaVolovik2010], specified in Sec. \[flatbandformation\] below, in the limit $N\rightarrow \infty$. The effective Hamiltonian in the 3-dimensional bulk system which emerges in the continuous limit $N\rightarrow \infty$ is the $2\times 2$ matrix $$H=\begin{pmatrix}
0 & f\\
f^*& 0
\end{pmatrix}
~~,~~f=p_x-ip_y - t_+ e^{-ia p_z} - t_-^* e^{ia p_z} \,.
\label{ContinuousH}$$ Here $t_+=|t_+|e^{i\phi_+}$ and $t_-=|t_- | e^{i\phi_-}$ are the hopping matrix elements between the layers and $a$ is the interlayer distance. The hopping matrix element proportional to $\sigma^{+(-)}$ is $t_{+(-)}$. The energy spectrum of the bulk system $$\begin{split}
&E^2= [p_x - |t_+| \cos (ap_z-\phi_+)-|t_-|\cos(a p_z-\phi_-)]^2
\\
&+ [p_y +| t_+| \sin (ap_z-\phi_+)-|t_-|\sin(a p_z -\phi_-)]^2 \,,
\end{split}
\label{ContinuousE}$$ has zeroes on the line (see Fig. \[fig:spiral\]): $$\begin{split}
& p_x =| t_+| \cos (ap_z-\phi_+)+|t_- |\cos(a p_z-\phi_-)\,,
\\
& p_y = |t_-| \sin(a p_z-\phi_-)-| t_+| \sin (ap_z-\phi_+) \,.
\label{NodalLine}
\end{split}$$ These zeroes are topologically protected by the topological invariant [@Volovik2007] $$N_1=- {1\over 4\pi i} ~{\rm tr} ~\oint dl ~ \sigma_z H^{-1}\nabla_l H \,,
\label{InvariantForLine}$$ where the integral is along the loop around the nodal line in momentum space. The winding number around an element of the nodal line is $N_1=1$. For the interacting system the Hamiltonian matrix must be replaced by the inverse Green’s function at zero frequency, $H\rightarrow G^{-1}(\omega=0, {\bf p})$, which plays the role of effective Hamiltonian, see also [@Gurarie2010].
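As a quick numerical illustration (ours, not part of the original derivation; the parameter values are arbitrary), one can parametrize the nodal line of Eq. (\[NodalLine\]) by $p_z$ and verify that the spectrum of Eq. (\[ContinuousE\]) vanishes on it; for $\phi_+=\phi_-=0$ the projection of the line onto the $p_z=0$ plane is an ellipse with semi-axes $|t_+|+|t_-|$ and $\big||t_+|-|t_-|\big|$.

    import numpy as np

    # Arbitrary illustration values for |t_+|, |t_-|, the phases and the interlayer distance
    t_p, t_m, phi_p, phi_m, a = 1.0, 0.3, 0.0, 0.0, 1.0

    def energy_sq(px, py, pz):
        """Bulk spectrum E^2 of Eq. (ContinuousE)."""
        ex = px - t_p*np.cos(a*pz - phi_p) - t_m*np.cos(a*pz - phi_m)
        ey = py + t_p*np.sin(a*pz - phi_p) - t_m*np.sin(a*pz - phi_m)
        return ex**2 + ey**2

    # Nodal line of Eq. (NodalLine), parametrized by p_z
    pz = np.linspace(-np.pi/a, np.pi/a, 400)
    px = t_p*np.cos(a*pz - phi_p) + t_m*np.cos(a*pz - phi_m)
    py = t_m*np.sin(a*pz - phi_m) - t_p*np.sin(a*pz - phi_p)

    print(energy_sq(px, py, pz).max())   # ~0: the spectrum vanishes on the nodal line
    print(px.max(), py.max())            # ~1.3 and ~0.7: semi-axes |t_+|+|t_-| and |t_+|-|t_-|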
![Fig. 1. Fermi line for the case $t=|t_+|=10 |t_-|$ (red with circles) and $t=|t_-|=10 |t_+|$ (blue) along with their projection to the $p_z=0$ plane (dashed). This projection represents the boundary of the dispersionless flat band on the surface. In both cases $\phi_+=\phi_-=0$. Note that the helicity of the two lines is opposite, which gives the opposite signs of the invariant $N_1({\bf p}_\perp)=\pm 1$ and the opposite chiralities of the flat band.[]{data-label="fig:spiral"}](spiral){width="8cm"}
The same invariant can be written if the contour of integration is chosen parallel to $p_z$, i.e. at fixed ${\bf p}_\perp$. Due to periodic boundary conditions, the points $p_z=\pm \pi/a$ are equivalent and the contour of integration forms a closed loop: $$N_1({\bf p}_\perp)=- {1\over 4\pi i} ~{\rm tr} ~\int_{-\pi/a}^{+\pi/a} dp_z ~ \sigma_z H^{-1}\nabla_{p_z} H\,,
\label{InvariantForLine2}$$ For interacting systems, this invariant can be represented in terms of the Green’s function expressed via the 3D vector ${\bf g}(p_z,\omega)$ [@Volovik2007]: $$G^{-1}(\omega,p_z)=ig_z(\omega,p_z) - g_x(\omega,p_z) \sigma_x+ g_y(\omega,p_z)\sigma_y\,.
\label{IsingFermions2}$$ In our model the components ${\bf g}(p_z,\omega)$ are: $$\begin{split}
& g_x(p_z,\omega)=p_x -| t_+| \cos (ap_z-\phi_+)-|t_-| \cos(a p_z-\phi_-)
\\
& g_y(p_z,\omega)=p_y +| t_+| \sin(a p_z-\phi_+)-|t_-| \sin (ap_z-\phi_-)\,,
\\
& g_z(p_z,\omega)=\omega \,.
\end{split}
\label{Components}$$ Then the invariant becomes [@Volovik2007] $$N_1({\bf p}_\perp)={1\over 4\pi}\int_{-\pi/a}^{\pi/a} dp_z\int_{-\infty}^{\infty}
d\omega~\hat{\bf g}\cdot
\left({\partial \hat{\bf g}\over\partial {p_z}} \times {\partial \hat{\bf
g}\over\partial {\omega}}\right)\,,
\label{2DInvariantIsing}$$ where $\hat{\bf g}= {\bf g}/|{\bf g}|$. It describes the topological properties of the fully gapped 1D system, with $p_x$ and $p_y$ being the parameters of the system.
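Since $g_z=\omega$, the invariant of Eq. (\[2DInvariantIsing\]) reduces, up to an overall sign convention, to the winding number of the planar vector $(g_x,g_y)$ around zero as $p_z$ runs once through the Brillouin zone at $\omega=0$. A minimal numerical sketch (our illustration, with arbitrary parameters and the phases set to zero) evaluates this winding directly:

    import numpy as np

    t_p, t_m, a = 1.0, 0.3, 1.0   # arbitrary illustration values, phi_+ = phi_- = 0

    def winding(px, py, n=4001):
        """Winding of g_x + i*g_y around zero as p_z runs through the Brillouin zone."""
        pz = np.linspace(-np.pi/a, np.pi/a, n)
        g = (px - (t_p + t_m)*np.cos(a*pz)) + 1j*(py + (t_p - t_m)*np.sin(a*pz))
        dphi = np.angle(g[1:]/g[:-1])   # phase increments along the closed loop
        return int(round(dphi.sum()/(2*np.pi)))

    print(winding(0.0, 0.0))   # momentum inside the projected nodal line: winding +-1
    print(winding(2.0, 0.0))   # momentum outside the projection: winding 0

Momenta inside the projection of the nodal spiral give winding $\pm 1$, momenta outside give $0$, in agreement with the discussion below.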
![Fig. 2. The Fermi line (nodal line) in the bulk at the topological quantum phase transition, which occurs at $t=|t_+|=|t_-|$, for two values of the phase shift $\phi_- -\phi_+$. In this case the projection of the nodal line to the $p_z=0$ plane shrinks to the line segment with zero area because the Fermi line is flat: it is within the corresponding plane drawn on the figure. As a result the dispersionless flat band on the surface is absent at the transition, and has opposite chiralities on two sides of the transition.[]{data-label="fig:spiraltpequalstm"}](spiraltpequalstm){width="8cm"}
Let us first consider the case $t_-=0$. For $t_-=0$, the nodal line in Eq. (\[NodalLine\]) forms a spiral, the projection of this spiral on the plane $p_z={\rm const}$ being the circle ${\bf p}_\perp^2\equiv p_x^2+p_y^2= |t_+|^2$ (the spiral survives for $t_- \neq 0$, but the circle transforms into an ellipse, see Fig. \[fig:spiral\] for the case $|t_-| < |t_+|$). The topological charge in Eq. (\[2DInvariantIsing\]) is $N_1({\bf p}_\perp)=1$ for momenta $|{\bf p}_\perp|<| t_+|$. If the momentum ${\bf p}_\perp$ is considered as a parameter of the 1D system, then for $|{\bf p}_\perp|<| t_+|$ the system represents a 1D topological insulator. For $|{\bf p}_\perp|> | t_+|$ one has $N_1({\bf p}_\perp)=0$ and thus a non-topological 1D insulator. The line $|{\bf p}_\perp|=| t_+|$ marks the topological quantum phase transition between the topological and non-topological 1D insulators.
The topological invariant $N_1({\bf p}_\perp)$ in Eq. (\[InvariantForLine2\]) also determines the properties of the surface bound states of the 1D system: the topological insulator must have surface states with exactly zero energy. These states exist for any parameter within the circle $|{\bf p}_\perp|=| t_+|$. This means that there is a flat band of states with exactly zero energy, $E(|{\bf p}_\perp|<| t_+|)=0$, which is protected by topology. The bound states on the surface of the system can be obtained directly from the Hamiltonian: $$\begin{split}
&\hat H=\sigma_x (p_x - |t_+| \cos (a\hat p_z)) + \sigma_y (p_y + |t_+| \sin (a\hat p_z))
\\
&\hat p_z=-i\partial_z~~,~~ z<0\,.
\end{split}
\label{ContinuousHamilton}$$ We assumed that the system occupies the half-space $z<0$ with the boundary at $z=0$, and made a rotation in the $(p_x,p_y)$ plane to remove the phase $\phi_+$ of the hopping element $t_+$. This Hamiltonian has the bound state with exactly zero energy, $E({\bf p}_\perp)=0$, for any $|{\bf p}_\perp|<|t_+|$, with the eigenfunction concentrated near the surface: $$\Psi \propto \left( \begin{array}{cc}
0\\
1
\end{array} \right)
(p_x-ip_y) \exp{\frac{z \ln(t_+/(p_x+i p_y))}{a}}~~,~~|{\bf p}_\perp|< |t_+| \,.
\label{SurfaceWaveFunction}$$ The normalizable wave functions with zero energy exist only for ${\bf p}_\perp$ within the circle $|{\bf p}_\perp|\leq |t_+|$, i.e. the surface flat band is bounded by the projection of the nodal spiral onto the surface. Such correspondence between the flat band on the surface and lines of zeroes in the bulk has also been found in Ref. [@SchnyderRyu2010].
Restoring the non-zero hopping element $t_-$, we find that for $|t_-|<|t_+|$ there is still a region of the momentum ${\bf p}_\perp$ for which the topological charge is $N_1({\bf p}_\perp)=1$. However, the area of the projection of the nodal line on the surface (and thus the area of the flat band) is reduced. Finally, at $|t_-|=|t_+|$ the nodal line becomes flat, its projection on the surface shrinks to the line segment $p_y/p_x=\tan\left[(\phi_+-\phi_-)/2\right]$, and the flat band disappears (Fig. \[fig:spiraltpequalstm\]). For $|t_-|>|t_+|$, the spiral appears again, but the helicity of the spiral changes sign together with the topological charge, which becomes $N_1({\bf p}_\perp)=-1$. Thus the point $|t_-|=|t_+|$ marks a topological quantum phase transition, at which the flat band changes its orientation (or actually its chirality). At the transition line $|t_-|=|t_+|$ the flat band on the surface does not exist. The non-zero helicity of the nodal line at $|t_-|\neq |t_+|$ reflects the broken inversion symmetry of the system at $|t_-|\neq |t_+|$. Note that in Ref. [@SchnyderRyu2010] the flat surface bands also appeared in systems without inversion symmetry.
Formation of the flat band in multilayered system {#flatbandformation}
=================================================
We consider the discrete model with a finite number $N$ of layers. It is described by the $2N\times 2N$ Hamiltonian with nearest-neighbor interaction between the layers in the form: $$\begin{split}
&H_{ij}({\bf p}_\perp)=
\\
&\boldsymbol{\sigma}\cdot{\bf p}_\perp \delta_{ij} - (t_+ \sigma^+ + t_- \sigma^- ) \delta_{i,j+1} -(t_+^*\sigma^- + t_-^*\sigma^+)\delta_{i,j-1}
\\
& 1\leq i\leq N~~,~~{\bf p}_\perp=(p_x,p_y)\,.
\label{DiscreteHamiltonian}
\end{split}$$ In the continuous limit of an infinite number of layers, the Hamiltonian in Eq. (\[DiscreteHamiltonian\]) transforms into Eq. (\[ContinuousH\]). For $t_-=0$ and $t_+\equiv t$, Eq. (\[DiscreteHamiltonian\]) represents the particular case of the model [@HeikkilaVolovik2010], which exhibits the Dirac point with multiple topological charge equal to $N$.
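The formation of the flat band can also be checked by brute-force diagonalization of Eq. (\[DiscreteHamiltonian\]). The following numpy sketch (our illustration; the parameter values are arbitrary) builds the $2N\times 2N$ matrix for $t_-=0$, $t_+\equiv t=1$ and prints the smallest $|E|$ at a fixed small ${\bf p}_\perp$; this low-energy branch decreases as $(|{\bf p}_\perp|/|t|)^N$ with growing $N$, as expected for the multiple Dirac point discussed below, and for large $N$ it becomes the exponentially flat surface band.

    import numpy as np

    def h_multilayer(px, py, N, t_plus, t_minus=0.0):
        """2N x 2N Hamiltonian of Eq. (DiscreteHamiltonian) for real t_+ and t_-."""
        sx = np.array([[0, 1], [1, 0]], dtype=complex)
        sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
        sp = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma^+
        sm = np.array([[0, 0], [1, 0]], dtype=complex)   # sigma^-
        H = np.zeros((2*N, 2*N), dtype=complex)
        hop = -(t_plus*sp + t_minus*sm)                  # delta_{i,j+1} hopping block
        for i in range(N):
            H[2*i:2*i+2, 2*i:2*i+2] = px*sx + py*sy      # sigma . p_perp on each layer
        for i in range(1, N):
            H[2*i:2*i+2, 2*i-2:2*i] = hop
            H[2*i-2:2*i, 2*i:2*i+2] = hop.conj().T       # Hermitian conjugate block
        return H

    # Lowest |E| at p_perp = (0.2, 0) with t_+ = 1, t_- = 0: roughly 0.2**N
    for N in (2, 4, 6, 8):
        E = np.linalg.eigvalsh(h_multilayer(0.2, 0.0, N, t_plus=1.0))
        print(N, np.abs(E).min(), 0.2**N)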
![Fig. 3. Formation of the surface flat band for parameters $t\equiv t_+ \in \Bbb{R}$, $t_-=0$. When the number $N$ of layers increases, the dispersionless band evolves from the gapless branch of the spectrum, which has the form $E=\pm |{\bf p}_\perp|^N$ in the vicinity of multiple Dirac point. The spectrum is shown as a function of $p_x$ for $p_y=0$ and finite value of $p_z$. The curves for $N=100$ and $N=200$ are almost on top of each other. Asymptotically the spectrum $E=\pm |{\bf p}_\perp|^N$ transforms to the dispersionless band within the projection of the nodal line to the surface. []{data-label="spectrum_multiple_Dirac"}](lowenspectrumtm0){width="8cm"}
Let us consider how this multiple Dirac point transforms into the flat band in the dimensional crossover, i.e. in the limit $N\rightarrow\infty$ (see Fig. \[spectrum\_multiple\_Dirac\]). The Hamiltonian $H$ in Eq. (\[DiscreteHamiltonian\]) with $t_-=0$ and $t_+\equiv t$ has two low-energy eigenstates whose dispersion in the vicinity of the multiple Dirac point (at $|{\bf p}_\perp| \ll |t|$) is $$\epsilon/|t| \approx \pm (|{\bf p}_\perp|/|t|)^N.
\label{Nthorderdispersion}$$ Let us look for the eigenfunctions corresponding to the dispersion in Eq. (\[Nthorderdispersion\]). At ${\bf p}_\perp=0$, the eigenfunctions are finite only in the first and $N$’th layers, $$\psi_{0\pm}=\psi({\bf p}_\perp=0)=\frac{1}{\sqrt{2}}[|\downarrow\rangle_1 \pm |\uparrow\rangle_N].$$ That is: we get $H\psi_{0\pm}=0$. Now for $|{\bf p}_\perp| \ll |t|$ it suffices to find a function $\delta \psi$ satisfying, to leading order, $H\delta \psi = \pm\left(|{\bf p}_\perp|^N/|t|^{N-1}\right)\psi_{0\pm}$. It turns out that the small parameter in this expansion is dependent on the layer index, and the smallest correction we have to include is $\eta \sim |{\bf p}_\perp|^N$. In this order, we can expand the eigenvalue equation in $\eta$: $$\begin{split}
&H\psi_{p}=H(\psi_0+\eta \delta \psi_1 + \eta^2 \delta \psi_2 +
\cdots)\\&=\eta (H \delta \psi_1 + \eta H \delta \psi_2 + \cdots)\\
&=\alpha \eta \psi_0 + \eta^2 H \delta \psi_2 = |{\bf p}_\perp|^N/|t|^{N-1} \psi_p
\\
&=\eta (\psi_0/|t|^{N-1} + \eta \delta \psi_1/|t|^{N-1} + \cdots),
\end{split}$$ where $\eta \delta \psi_1 = \delta \psi$. To lowest order in $\eta$ it is thus sufficient to find this $\delta \psi$. Moreover, to preserve the normalization, we seek $\delta \psi$ such that it is orthogonal to $\psi_0$. In this case, to the lowest order in $\eta$ the normalization remains unaltered.
Let us hence write $\delta \psi$ in the form $$\delta \psi = \sum_{n=1}^N (\alpha_{n\uparrow} |\uparrow\rangle_n +
\alpha_{n\downarrow}|\downarrow\rangle_n)$$ so that $\alpha_{1\downarrow}=\alpha_{N\uparrow}=0$. Now acting with the Hamiltonian yields $$\begin{split}
H\delta \psi =& \sum_{n=1}^N \bigg[(p_x-i
p_y)\alpha_{n\uparrow}|\downarrow\rangle_n + (p_x+i p_y)
\alpha_{n\downarrow}|\uparrow \rangle_n \\&- t
\alpha_{n+1,\downarrow}(1-\delta_{nN})|\uparrow\rangle_n - t^*
\alpha_{n-1,\uparrow} (1-\delta_{n1})|\downarrow\rangle_n \bigg]\\=& \pm
\frac{(p_x^2+p_y^2)^{N/2}}{\sqrt{2}|t|^{N-1}}(|\downarrow\rangle_1
\pm
|\uparrow \rangle_N).
\end{split}$$
![Fig. 4. Formation of the nodal line from the evolution of the gapped branch of the spectrum of the multilayered system, when the number $N$ of layers increases. The spectrum is shown as function of $p_x$ for $p_y=0$ and finite value of $p_z$ for $t=t_+=2 t_- \in \Bbb{R}$. The curves for $N=100$ and $N=200$ lie almost on top of each other, indicating the bulk limit. Asymptotically the nodal line $p_x = \pm (t_++t_-) \cos (ap_z)$, $ p_y = \pm (t_+-t_-) \sin (ap_z)$ is formed (two points on this line are shown, which correspond to $p_y=0$).[]{data-label="spectrum1"}](gappedspectrum){width="8cm"}
Considering this equation separately for each component results in
$$\begin{aligned}
\alpha_{n\uparrow} &= \pm \frac{(-1)^{N-1}(p_x+ip_y)^{N/2}
(p_x-ip_y)^{N/2-n}}{\sqrt{2}t^{(N-1)/2} (t^*)^{(N+1)/2-n}}\\
\alpha_{n\downarrow} &=
\frac{(-1)^{N-1}(p_x+ip_y)^{n-N/2}(p_x-ip_y)^{N/2}}{\sqrt{2}t^{n-(N-1)/2}(t^*)^{(N-1)/2}}.\end{aligned}$$
The expressions are somewhat simpler for $p_y=0$:
$$\begin{aligned}
\alpha_{n\uparrow} &= \pm \frac{(-1)^{N-1}}{\sqrt{2}}\left(\frac{p_x}{t^*}\right)^{N-n} \left(\frac{t^*}{t}\right)^{(N-1)/2}\\
\alpha_{n\downarrow} &=
\frac{(-1)^{N-1}}{\sqrt{2}} \left(\frac{p_x}{t}\right)^n \left(\frac{t}{t^*}\right)^{(N-1)/2}.\end{aligned}$$
We hence find that the eigensolutions behave as $\sim |{\bf p}_\perp|^n$ for $n$ layers away from the surfaces, in agreement with the wave function for the surface flat band obtained in the continuous limit $$\begin{split}
& \Psi \propto \left( \begin{array}{cc}
0\\
1
\end{array} \right)(p_x-ip_y)\exp{\frac{z \ln(t/(p_x+ip_y))}{a}}
\\& + \left( \begin{array}{cc}
1\\
0
\end{array} \right)(p_x+ip_y)\exp{\frac{-(L+z) \ln(t^*/(p_x-ip_y))}{a}}
\\
& \overset{p_y = 0}{\approx }
\left( \begin{array}{cc}
0\\
1
\end{array} \right) \left(\frac{p_x}{t}\right)^{n}
+ \left( \begin{array}{cc}
1\\
0
\end{array} \right) \left(\frac{p_x}{t^*}\right)^{N-n}
~~,~~|{\bf p}_\perp|<|t|\,.
\label{SurfaceWaveFunctionDiscrete}
\end{split}$$
![Fig. 5. Gapped spectrum as a function of $p_x$ and $p_y$ for $N=200$ layers corresponding to the bulk limit. The nodal line forms at the projection of the Fermi line $p_x = \pm (t_++t_-) \cos (ap_z)$, $ p_y = \pm (t_+-t_-) \sin (ap_z)$ to the $p_z=0$ plane.[]{data-label="spectrummesh"}](gappedspectrummesh){width="8cm"}
![Fig. 6. Evolution of the spectrum at $t=t_+=2 t_- \in \Bbb{R}$. The curves for $N=100$ and $N=200$ are almost on top of each other, indicating the bulk limit. Asymptotically the flat band is formed for $|p_x|<|t_+|+|t_-|$ and $|p_y|<|t_+|-|t_-|$. []{data-label="spectrum2"}](lowenspectrum){width="8cm"}
![Fig. 7. Energy of the surface states for different $p_x$ and $p_y$ for $N=200$ layers, close to the bulk limit, calculated with the same parameters as in Fig. \[spectrum2\]. The flat band forms in the ellipse-shaped region bounded by the curve $p_x = (t_++t_-)\cos\theta$, $p_y = (t_+-t_-)\sin\theta$, $\theta \in [0,2\pi]$. []{data-label="lowenspectrummesh"}](lowenspectrummeshforN200){width="8cm"}
We have considered the evolution of the multiple Dirac point with spectrum $E\sim \pm |{\bf p}_\perp|^N$ into the surface flat band when $N\rightarrow \infty$. However, the surface flat band survives when the coupling $t_-$ is added, which splits the multiple Dirac point. The reason for the robustness of the flat band is the nodal line which is developed in the bulk (see Figs. \[spectrum1\] and \[spectrummesh\]). As discussed in the previous section, the nodal line supports the topological stability of the flat band due to the bulk-surface correspondence. Formation of the dispersionless flat band for finite $|t_-|<|t_+|$ is shown in Figure \[spectrum2\]. The flat band occupies an ellipse-shaped region (Fig. \[lowenspectrummesh\]) whose boundary is the projection of the bulk nodal line onto the surface.
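The ellipse-shaped flat-band region can be cross-checked numerically as well, reusing the `h_multilayer` helper from the sketch in Sec. \[flatbandformation\] (again our illustration with arbitrary parameters): for ${\bf p}_\perp$ inside the projection of the bulk nodal line the lowest surface-state energy is exponentially small in $N$, while outside the projection no near-zero state appears.

    import numpy as np

    t_plus, t_minus, N = 1.0, 0.5, 40   # arbitrary illustration values

    def lowest_energy(px, py):
        E = np.linalg.eigvalsh(h_multilayer(px, py, N, t_plus, t_minus))
        return np.abs(E).min()

    # p_perp inside the projection of the bulk nodal line:
    print(lowest_energy(0.5, 0.0))   # exponentially small in N (numerically ~0)
    # p_perp outside the projection:
    print(lowest_energy(2.0, 0.0))   # not small: no zero-energy surface state here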
Conclusion
==========
We have considered the dimensional crossover in which the multiple Dirac (Fermi) point in a quasi-2+1-dimensional system evolves into a flat band on the surface of the 3+1-dimensional system when the number of atomic layers increases. The formation of the surface flat band is a generic phenomenon which accompanies the formation of nodal lines in the bulk in the form of a spiral. We have also demonstrated a new type of topological quantum phase transition, at which the flat band shrinks and changes its chirality. This transition is accompanied by a change of the helicity of the nodal line. The considered crossover is one of the numerous examples of the evolution of topologically non-trivial quantum vacua, which are represented by momentum-space topological objects. This example displays the ambivalent role of symmetry for these objects: the symmetry may support the topological charge of the object in momentum space, or may destroy the object. In our case the time reversal symmetry supports the existence of the nodal line in the bulk, and thus the flat band on the surface, while the space inversion symmetry kills the flat band.
Different scenarios of the dimensional crossover can be realized with cold atoms in optical lattices [@StanescuGalitskiSarma2010]. For our scenario we need the special stacking of graphene-like layers to have a spiraling nodal line. The spiral formed by zeroes in the energy spectrum has been discussed in Ref. [@McClure1969] for rhombohedral graphite. The nodal line there is modified to the chain of the connected electron and hole Fermi pockets, so that “the Fermi surfaces resemble very long ‘link sausages’ wound in a loose spiral” [@McClure1969].
The dispersionless flat band also exists on the surface of the polar phase of triplet superfluid/superconductor. This superconductor obeys the time reversal and space inversion symmetry, and it has a line of zeroes in the form of a ring [@Volovik2003]. This ring gives rise to two surface flat bands with opposite chirality corresponding to two directions of spin. However, spin-orbit interaction may lead to the mutual annihilation of the flat bands.
More examples of dimensional crossover and exotic quantum phase transitions emerge when one considers the topology in the phase space, i.e. in the combined momentum-real $({\bf p},{\bf r})$ space [@GrinevichVolovik1988; @Volovik2003]. This is appropriate in particular for the fermion zero modes localized on topological defects, which may also have a dispersionless flat band [@KopninSalomaa1991; @Volovik2003] and a bulk-vortex correspondence [@TeoKane2010; @SilaevVolovik2010]. The flat band inside the vortex core in 3-dimensional superfluids with Fermi (Dirac) points, which emerges due to the bulk-vortex correspondence, is discussed in [@Volovik2010].
Systems with topologically protected Fermi lines or Fermi points belong to the broad class of topological matter. As distinct from topological insulators and superconductors/superfluids of the $^3$He-B type [@HasanKane2010; @QiZhang2010], which belong to fully gapped topological matter, these represent gapless topological matter. However, they have features which were earlier ascribed only to topological insulators, i.e. protected gapless states on the surface or inside the vortex core. If one or two components of the momentum ${\bf p}$ are fixed, such as the projection $p_z$ of the momentum on the direction of the vortex axis (see the accompanying paper [@Volovik2010]) or $|p_\perp| < |t|$ in our case, the system effectively behaves as a one-dimensional or two-dimensional topological insulator, respectively. This is because for these parameters the system is fully gapped, while the effective 1D or 2D Hamiltonian has a non-trivial topology. Since a topological insulator cannot be adiabatically transformed into a trivial insulator, this gives rise to zero-energy edge states in those intervals of the parameters $p_z$ or $|p_\perp|$ for which the topology is nontrivial. As a result, in both cases one has a dispersionless spectrum with zero energy – the flat band – in the vortex core and on the surface of the system, respectively.
This work is supported in part by the Academy of Finland, Centers of excellence program 2006–2011 and the European Research Council (Grant No. 240362-Heattronics). It is our pleasure to thank N.B. Kopnin for helpful discussions.
[99]{}
G.E. Volovik, [*The Universe in a Helium Droplet*]{}, Clarendon Press, Oxford (2003).
P. Hořava, Stability of Fermi surfaces and $K$-theory, Phys. Rev. Lett. **95**, 016405 (2005).
V.A. Khodel and V.R. Shaginyan, Superfluidity in system with fermion condensate, JETP Lett. **51**, 553 (1990).
G.E. Volovik, A new class of normal Fermi liquids, JETP Lett. **53**, 222 (1991).
G.E. Volovik, Quantum phase transitions from topology in momentum space, in: “Quantum Analogues: From Phase Transitions to Black Holes and Cosmology”, eds. W.G. Unruh and R. Schützhold, Springer Lecture Notes in Physics [**718**]{} (2007), pp. 31–73; cond-mat/0601372.
V.R. Shaginyan, M.Ya. Amusia, A.Z. Msezane, K.G. Popov, Scaling behavior of heavy fermion metals, Physics Reports [**492**]{}, 31–109 (2010).
N.B. Kopnin and M.M. Salomaa, Mutual friction in superfluid $^3$He: Effects of bound states in the vortex core, Phys. Rev. B [**44**]{}, 9667–9677 (1991).
G.E. Volovik, On Fermi condensate: near the saddle point and within the vortex core, JETP Lett. **59**, 830 (1994).
T.Sh. Misirpashaev and G.E. Volovik, Fermion zero modes in symmetric vortices in superfluid $^3$He, Physica, [**B 210**]{}, 338–346 (1995).
Sung-Sik Lee, Non-Fermi liquid from a charged black hole: A critical Fermi ball, Phys. Rev. D [**79**]{}, 086006 (2009).
F. Guinea, A.H. Castro Neto and N.M.R. Peres, Electronic states and Landau levels in graphene stacks, Phys. Rev. B [**73**]{}, 245426 (2006).
A.P. Schnyder and Shinsei Ryu, Topological phases and flat surface bands in superconductors without inversion symmetry, arXiv:1011.1438.
G.E. Volovik and V.A. Konyshev, Properties of the superfluid systems with multiple zeros in fermion spectrum, JETP Lett. [**47**]{}, 250–254 (1988).
J.L. Manes, F. Guinea and M.A.H. Vozmediano, Existence and topological stability of Fermi points in multilayered graphene, Phys. Rev. B [**75**]{}, 155424 (2007).
P. Dietl, F. Piechon and G. Montambaux, New magnetic field dependence of Landau levels in a graphenelike structure, Phys. Rev. Lett. 100, 236405 (2008); G. Montambaux, F. Piechon, J.-N. Fuchs, and M.O. Goerbig, A universal Hamiltonian for motion and merging of Dirac points in a two-dimensional crystal, Eur. Phys. J. B [**72**]{}, 509–520 (2009); arXiv:0907.0500.
Y.D. Chong, X.G. Wen and M. Soljacic, Effective theory of quadratic degeneracies, Phys. Rev. B 77, 235125 (2008).
S. Banerjee, R. R. Singh, V. Pardo and W. E. Pickett, Tight-binding modeling and low-energy behavior of the semi-Dirac point, Phys. Rev. Lett. [**103**]{}, 016402 (2009).
K. Sun, H. Yao, E. Fradkin and S.A. Kivelson, Topological insulators and nematic phases from spontaneous symmetry breaking in 2D Fermi systems with a quadratic band crossing, Phys. Rev. Lett. [**103**]{}, 046811 (2009).
L. Fu, Topological crystalline insulators, arXiv:1010.1802.
T.T. Heikkilä and G.E. Volovik, Fermions with cubic and quartic spectrum, Pis’ma ZhETF [**92**]{}, (2010); arXiv:1010.0393.
G.E. Volovik, Reentrant violation of special relativity in the low-energy corner, JETP Lett. [**73**]{}, 162–165 (2001); hep-ph/0101286.
F.R. Klinkhamer and G.E. Volovik, Emergent CPT violation from the splitting of Fermi points, Int. J. Mod. Phys. A [**20**]{}, 2795–2812 (2005); hep-th/0403037.
V. Gurarie, Single particle Green’s functions and interacting topological insulators, arXiv:1011.2273.
T.D. Stanescu, V. Galitski and S. Das Sarma, Topological states in two-dimensional optical lattices, Phys. Rev. A [**82**]{}, 013608 (2010).
J.W. McClure, Electron energy band structure and electronic properties of rhombohedral graphite, Carbon [**7**]{}, 425–432 (1969).
P.G. Grinevich and G.E. Volovik, Topology of gap nodes in superfluid $^3$He: $\pi_4$ homotopy group for $^3$He-B disclination, J. Low Temp. Phys. [**72**]{}, 371–380 (1988).
J.C.Y. Teo and C.L. Kane, Topological defects and gapless modes in insulators and superconductors, Phys. Rev. B [**82**]{}, 115120 (2010).
M.A. Silaev and G.E. Volovik, Topological superfluid $^3$He-B: fermion zero modes on interfaces and in the vortex core, J. Low Temp. Phys, [**161**]{}, 460–473 (2010); arXiv:1005.4672.
G.E. Volovik, Flat band in the core of topological defects: bulk-vortex correspondence in topological superfluids with Fermi points, arXiv:1011.4665.
M.Z. Hasan and C.L. Kane, Topological Insulators, arXiv:1002.3895.
Xiao-Liang Qi and Shou-Cheng Zhang, Topological insulators and superconductors, arXiv:1008.2026.
[^1]: e-mail: tero.heikkila@tkk.fi
[^2]: e-mail: volovik@boojum.hut.fi
---
abstract: |
ACL2(r) is a variant of ACL2 that supports the irrational real and complex numbers. Its logical foundation is based on internal set theory (IST), an axiomatic formalization of non-standard analysis (NSA). Familiar ideas from analysis, such as continuity, differentiability, and integrability, are defined quite differently in NSA—some would argue the NSA definitions are more intuitive. In previous work, we have adopted the NSA definitions in ACL2(r), and simply taken for granted that these are equivalent to the traditional analysis notions, e.g., to the familiar $\epsilon$-$\delta$ definitions. However, we argue in this paper that there are circumstances when the more traditional definitions are advantageous in the setting of ACL2(r), precisely because the traditional notions are classical, so they are unencumbered by IST limitations on inference rules such as induction or the use of pseudo-lambda terms in functional instantiation. To address this concern, we describe a formal proof in ACL2(r) of the equivalence of the traditional and non-standard definitions of these notions.
[Keywords:]{} ACL2(r), non-standard analysis, real analysis.
author:
- John Cowles
- Ruben Gamboa
bibliography:
- 'rag.bib'
title: 'Equivalence of the Traditional and Non-Standard Definitions of Concepts from Real Analysis'
---
Introduction {#intro}
============
ACL2(r) is a variant of ACL2 that has support for reasoning about the irrational numbers. The logical basis for ACL2(r) is *non-standard analysis* (NSA), and in particular, the axiomatic treatment of NSA developed as *internal set theory* (IST) [@Nel:nsa]. Traditional notions from analysis, such as limits, continuity, and derivatives, have counterparts in NSA.
Previous formalizations of NSA typically prove that these definitions are equivalent early on. We resisted this in the development of ACL2(r), preferring simply to state that the NSA notions were the “official” notions in ACL2(r), and that the equivalence to the usual notions was a “well-known fact” outside the purview of ACL2(r). In this paper, we retract that statement for three reasons.
First, the traditional notions from real analysis require the use of quantifiers. For instance, we say that a function $f$ has limit $L$ as $x$ approaches $a$ iff $$\forall\epsilon>0, \exists \delta>0 \text{ such that }
|x-a|<\delta \Rightarrow |f(x)-L|<\epsilon.$$ While ACL2(r) has only limited support for quantifiers, this support is, in fact, sufficient to carry out the equivalence proofs. However, it should be noted that the support depends on recent enhancements to ACL2 that allow the introduction of Skolem functions with non-classical bodies. So, in fact, it is ACL2’s improved but still modest support for quantifiers that is sufficient. That story is interesting in and of itself.
Second, the benefit of formalization in general applies to this case, as the following anecdote illustrates. While trying to update the proof of the Fundamental Theorem of Calculus, we were struggling to formalize the notion of *continuously differentiable*, i.e., that $f$ is differentiable and $f'$ is continuous. To talk about the class of differentiable functions in ACL2(r), we use an `encapsulate` event to introduce an arbitrary differentiable function. It would be very convenient to use the existing `encapsulate` for differentiable functions, and prove as a theorem that the derivative was continuous. That is to say, it would be very convenient if all derivative functions were continuous. Note: we mean “derivative” functions, not “differentiable” functions; that all differentiable functions are continuous had previously been proved in ACL2(r).
Encouraged by Theorem 5.6 of [@Nel:nsa], one of us set out to prove that, indeed, all derivatives of functions are continuous.
Let $f:I\rightarrow\mathbb{R}$ where $I$ is an interval. If $f$ is differentiable on $I$, then $f'$ is continuous on $I$.
Nelson’s proof of this theorem begins with the following statement:
> We know that $$\label{eqn-deriv}
> \forall^\text{st}x \forall x_1 \forall x_2 \left\{ x_1 \approx x
> \wedge x_2 \approx x \wedge x_1 \ne x_2 \Rightarrow
> \frac{f(x_2)-f(x_1)}{x_2-x_1} \approx f'(x)\right\}.$$
This is, in fact, plausible from the definition of the derivative, which is similar but with $x$ taking the place of $x_2$. The remainder of the proof was “trivially” (using the mathematician’s sense of the word) carried out in ACL2(r), so only the proof of this known fact remained. The hand proof for this fact was tortuous, but eminently plausible. Unfortunately, the last step in the proof failed, because it required that $y\cdot y_1 \approx y\cdot y_2$ whenever $y_1
\approx y_2$—but this is true only when $y$ is known to be limited.
The other of us was not fooled by Theorem 5.6: What about the function $x^2 \sin(1/x)$? The discrepancy was soon resolved. Nelson’s definition of derivative in [@Nel:nsa] is precisely Equation \[eqn-deriv\]. No wonder this was a known fact! And the problem is that Equation \[eqn-deriv\] is equivalent to the notion of continuously differentiable, and *not equivalent* to the usual notion of differentiability. But in that case, how are we to know if theorems in ACL2(r) correspond to the “usual” theorems in analysis? I.e., what if we had chosen Equation \[eqn-deriv\] as the definition of derivative in ACL2(r)? Preventing this situation from recurring is the second motivator for proving the equivalence of the definitions in ACL2(r) once and for all.
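For concreteness, the standard counterexample behind that question can be spelled out. Let $f(0)=0$ and $f(x)=x^2\sin(1/x)$ for $x\ne 0$. Then $f$ is differentiable everywhere, with $$f'(x) = 2x\sin(1/x) - \cos(1/x) \quad (x \ne 0), \qquad
f'(0)=\lim_{h\to 0}\frac{h^2\sin(1/h)}{h}=0,$$ but $f'$ has no limit as $x\to 0$ because of the oscillating $\cos(1/x)$ term, so $f'$ is not continuous at $0$. Hence a function can satisfy the usual definition of differentiability on an interval containing $0$ without satisfying Equation \[eqn-deriv\], which forces the derivative to be continuous.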
Third, the NSA definitions are non-classical; i.e., they use notions such as “infinitely close” and “standard.” Indeed, it is these non-classical properties that make NSA such a good fit for the equational reasoning of ACL2(r). However, non-classical functions are severely limited in ACL2(r): Induction can be used to prove theorems using non-classical functions only up to standard values of the free variables, and function symbols may not map to pseudo-lambda expressions in a functional instantiation [@GC:acl2r-theory]. As a practical consequence of these restrictions, it is impossible to prove that $\frac{d(x^n)}{dx} = n \cdot x^{n-1}$ by using the product rule and induction in ACL2(r). In [@Gam:dissertation], for example, this is shown only for standard values of $n$. However, using the traditional notion of differentiability, the result does follow from induction. This, too, would have been reason enough to undertake this work.
It should be emphasized that the main contribution of this paper is the formalization in ACL2(r) of the results described here. The actual mathematical results are already well-known in the non-standard analysis community. Moreover, some of these equivalence results were formalized mechanically as early as [@BaBl:nsa]. The novelty here is the formalization in ACL2(r), which complicates things somewhat because of the poor support for (even first-order) set theory.
The rest of this paper is organized as follows. In Section \[series\], we discuss equivalent definitions regarding convergence of series[^1]. Section \[limits\] considers the limit of a function at a point. The results in this section are used in Section \[continuity\] to show that the notions of continuity at a point are also equivalent. This leads into the discussion of differentiability in Section \[differentiability\]. Finally, Section \[integrability\] deals with the equivalent definitions of Riemann integration.
Convergence of Series {#series}
=====================
In this section, we show that several definitions of convergence are in fact equivalent. In particular, we will consider the traditional definitions, e.g., as found in [@Rudin:analysis], and the corresponding concepts using non-standard analysis, e.g., as found in [@Robinson:nsa].
We start with the constrained function `Ser1`, which represents an arbitrary sequence; i.e., it is a fixed but arbitrary function that maps the natural numbers to the reals. Moreover, `Ser1` is assumed to be a classical function—otherwise, some of the equivalences do not hold. Similarly, the function `sumSer1-upto-n` defines the partial sum of `Ser1`, i.e., the sum of the values of `Ser1` from $0$ to `n`.
The first definition of convergence is the traditional one due to Weierstrass: $$(\exists L) (\forall \epsilon) (\exists M) (\forall n)
(n > M \Rightarrow | \sum_{i=0}^{n}{a_i} - L| < \epsilon).$$ In ACL2, we can write the innermost quantified subformula of this definition as follows:
    (defun-sk All-n-abs-sumSer1-upto-n-L<eps (L eps M)
      (forall n (implies (and (standardp n)
                              (integerp n)
                              (> n M))
                         (< (abs (- (sumSer1-upto-n n) L))
                            eps))))
This version of the definition restricts `n` to be a standard integer, which makes it a non-classical formula. A different version omits this requirement, and it is a more direct translation of Weierstrass’s criterion.
    (defun-sk Classical-All-n-abs-sumSer1-upto-n-L<eps (L eps M)
      (forall n (implies (and (integerp n)
                              (> n M))
                         (< (abs (- (sumSer1-upto-n n) L))
                            eps))))
ACL2 can verify that these two conditions are equal to each other, but only when the parameters `L`, `eps`, and `M` are standard. This follows because `defchoose` is guaranteed to choose a standard witness for classical formulas and standard parameters. More precisely, the witness function of a classical formula is also classical, and all classical functions return standard values for standard inputs [@GC:acl2r-theory]. Once this basic equivalence is proved, it follows that both the classical and non-classical versions of Weierstrass’s criterion are equivalent. It is only necessary to add each of the remaining quantifiers one by one.
We note in passing that the two versions of Weierstrass’s criterion are *not* equivalent for non-standard values of the parameters `L`, `eps`, and `M`. Consider, for example, the case when `eps` is infinitesimally small. It is straightforward to define the sequence $\{a_n\}$ such that the partial sums are given by $\sum_{i=1}^{n}{a_i} = 1/n$, clearly converging to $0$. Indeed, for any $\epsilon>0$ there is an $N$ such that for all $m > N$, $\sum_{i=1}^{m}{a_i} = 1/m < 1/N < \epsilon$. However, for infinitesimally small $\epsilon$, the resulting $N$ is infinitely large. This is fine using the second (classical) version of Weierstrass’s criterion, but not according to the first, since for all standard $N$, $1/N > \epsilon$, so no standard $N$ can satisfy the criterion. However, the two criteria are equivalent when written as sentences, i.e., when they have no free variables.
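Concretely, one sequence with the telescoping partial sums used in the example above is $$a_1 = 1, \qquad a_n = \frac{1}{n}-\frac{1}{n-1} = -\frac{1}{n(n-1)} \quad (n \ge 2),$$ for which $\sum_{i=1}^{n}{a_i} = 1/n$ for every $n \ge 1$.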
Note that the only difference between the two versions of Weierstrass’s criterion is that one of them features only standard variables, whereas the other features arbitrary values for all quantified variables. Using the shorthand $\forall^\text{st}$ and $\exists^\text{st}$ to introduce quantifiers for standard variables, the two versions of the criteria can be written as follows:
- $(\exists^\text{st} L) (\forall^\text{st} \epsilon) (\exists^\text{st} M) (\forall^\text{st} n)
(n > M \Rightarrow | \sum_{i=0}^{n}{a_i} - L| < \epsilon)$
- $(\exists L) (\forall \epsilon) (\exists M) (\forall n)
(n > M \Rightarrow | \sum_{i=0}^{n}{a_i} - L| < \epsilon)$
It is obvious that these two statements are extreme variants, and that there are other possibilities mixing the two types of quantifiers. Indeed, we verified with ACL2 that the following versions are also equivalent to the above:
- $(\exists^\text{st} L) (\forall^\text{st} \epsilon) (\exists^\text{st} M) (\forall n)
(n > M \Rightarrow | \sum_{i=0}^{n}{a_i} - L| < \epsilon)$
- $(\exists^\text{st} L) (\forall^\text{st} \epsilon) (\exists M) (\forall n)
(n > M \Rightarrow | \sum_{i=0}^{n}{a_i} - L| < \epsilon)$
The last two versions of Weierstrass’s criterion are useful, because they are easier to show equivalent to the typical non-standard criterion for convergence: $(\exists L) (\forall n) (large(n) \Rightarrow \sum_{i=0}^{n}{a_i} \approx L)$, i.e., for large values of $n$, $\sum_{i=0}^{n}{a_i}$ is infinitely close to $L$. This is the convergence criterion used in [@Gam:dissertation], for example, where power series are used to introduce functions such as $e^x$.
There is another statement of the non-standard convergence criterion that appears weaker: $$(\exists L) (\exists M) (large(M) \wedge (\forall n) (n > M
\Rightarrow \sum_{i=0}^{n}{a_i} \approx L)).$$ This version does not require that $\sum_{i=0}^{n}{a_i} $ is close to $L$ for all large $n$, only that this is true for $n$ larger than some large $M$. We have shown in ACL2 that these statements are in fact equivalent to Weierstrass’s criterion for convergence. In fact, since $\{a_n\}$ is a classical sequence, the value of $L$ is guaranteed to be standard, so we can replace $(\exists L)$ with $(\exists^\text{st} L)$ in both of the non-classical convergence criteria given above and still retain equivalence.
When the sequence is composed of non-negative numbers, we can make even stronger guarantees. Let $\{b_n\}$ be such a sequence, which we introduce into ACL2 as the constrained function `Ser1a`. All the previous results about `Ser1`—i.e., about $\{a_n\}$—apply to `Ser1a`, and we can carry over these proofs in ACL2 by using functional instantiation.
Using the non-standard criterion for convergence, we can easily see that if $\sum_{i=0}^{\infty}b_n$ converges, then $\sum_{i=0}^{N}{b_i}$ is not infinitely large, where $N$ is a fixed but arbitrary large integer[^2]. This simply follows from the facts that $\sum_{i=0}^{N}{b_i}\approx L$ and $L$ is standard.
The converse of this fact is also true: if $\sum_{i=0}^{N}{b_i}$ is not infinitely large, then $\sum_{i=0}^{\infty}b_n$ converges. This is harder to prove formally. The key idea is as follows. Since $\sum_{i=0}^{N}{b_i}$ is not infinitely large, then $\sum_{i=0}^{N}{b_i}$ must be close to a unique standard real number, i.e., $\sum_{i=0}^{N}{b_i} \approx L$ for some standard $L$. $\sum b_i$ is monotonic, so for any standard $n$, $\sum_{i=0}^{n}{b_i} \le \sum_{i=0}^{N}{b_i}$. And since $L$ is the unique real number that is close to $\sum_{i=0}^{N}{b_i}$, we can conclude that $\sum_{i=0}^{n}{b_i} \le L$ for all standard $n$. Using the non-standard transfer principle, this is sufficient to conclude that $\sum_{i=0}^{n}{b_i} \le L$ for all $n$, not just the standard ones. Using monotonicity once more, it follows that whenever $n>N$, $\sum_{i=0}^{n}{b_i} \approx L$, which is precisely the (weak) non-standard convergence criterion above. Thus, the series $\sum_{i=0}^{\infty}b_n$ converges, according to any of the criteria above.
Similar results hold for divergence to positive infinity. Let $\{c_n\}$ be an arbitrary sequence. Weierstrass’s criterion is given by $(\forall^\text{st} B) (\exists^\text{st} M) (\forall^\text{st} n) (n > M \Rightarrow \sum_{i=0}^{n}{c_i} > B)$. As before, for classical $\{c_n\}$ this is equivalent to a criterion with quantifiers over all reals, not just the standard ones: $(\forall B) (\exists M) (\forall n) (n > M \Rightarrow \sum_{i=0}^{n}{c_i} > B)$. And just as before, other variants (with $B$ and $M$ standard or just $B$ standard) are also equivalent. Moreover, these are equivalent to the non-standard criterion for divergence to positive infinity, namely that $(\forall n) (large(n) \Rightarrow
large(\sum_{i=0}^{n}{c_i}))$. A seemingly weaker version of this criterion is also equivalent, where it is only necessary that $\sum_{i=0}^{n}{c_i}$ is large for all $n$ beyond a given large integer: $(\exists M) (large(M) \wedge (\forall n) (n > M \Rightarrow large(\sum_{i=0}^{n}{c_i})))$. Finally, if the sequence $\{c_n\}$ consists of non-negative reals, then it is even easier to show divergence. It is only necessary to test whether $large(\sum_{i=0}^{N}{c_i})$ where $N$ is an arbitrary large integer, and as before we choose the ACL2 constant `i-large-integer` for this purpose.
Limits of Functions {#limits}
===================
In this section, we consider the notion of limits. In particular, we show that the following three notions are equivalent (for standard functions and parameters):
- The non-standard definition (for standard parameters $a$ and $L$): $$\lim_{x \rightarrow a} f(x) = L \Leftrightarrow
\left((\forall x) (x \approx a \wedge x \ne a \Rightarrow f(x) \approx L\right)).$$
- The traditional definition over the classical reals: $$\lim_{x \rightarrow a} f(x) = L \Leftrightarrow
\left((\forall^\text{st} \epsilon > 0) (\exists^\text{st} \delta>0) (0<|x-a|<\delta
\Rightarrow | f(x) - L| < \epsilon)\right).$$
- The traditional definition over the hyperreals: $$\lim_{x \rightarrow a} f(x) = L \Leftrightarrow
\left((\forall \epsilon > 0) (\exists \delta>0) (0<|x-a|<\delta
\Rightarrow | f(x) - L| < \epsilon)\right).$$
We begin by assuming the non-standard definition, which can be introduced in ACL2(r) by encapsulating the function $f$, its domain, and the limit function $L$, so that $\lim_{x \rightarrow a} f(x) =
L(a)$. The first step is to observe that $a\approx b$ is a shorthand notation for the condition that $|a-b|$ is infinitesimally small. Moreover, if $\epsilon>0$ is standard, then it must be (by definition) larger than any infinitesimally small number. Thus, we can prove that $$(\forall^\text{st} \epsilon > 0) \left((\forall x) (x \approx a \wedge x \ne a
\Rightarrow |f(x) - L(a)| < \epsilon\right)).$$ Similarly, if $\delta>0$ is infinitesimally small, then $|x - a| <
\delta$ implies that $x \approx a$. It follows then that $$(\forall^\text{st} \epsilon > 0)
(\forall \delta>0)
\left(small(\delta) \Rightarrow(\forall x) \left(0< |x - a| < \delta \wedge x \ne a
\Rightarrow |f(x) - L(a)| < \epsilon\right)\right).$$ It is an axiom of ACL2(r) that there exists a positive infinitesimal, namely `(/ (i-large-integer))`. Consequently, we can specialize the previous theorem with the constant $\delta_0$ (i.e., `(/ (i-large-integer))`). $$(\forall^\text{st} \epsilon > 0)
\left(0<\delta_0 \wedge small(\delta_0) \wedge (\forall x) \left(0< |x - a| < \delta_0 \wedge x \ne a
\Rightarrow |f(x) - L(a)| < \epsilon\right)\right).$$ Using ACL2 terminology, the specific number $\delta_0$ can be generalized to yield the following theorem: $$(\forall^\text{st} \epsilon > 0)
(\exists \delta>0)
\left(
(\forall x) \left(0< |x - a| < \delta \wedge x \ne a
\Rightarrow |f(x) - L(a)| < \epsilon\right)\right).$$ Note that the statement inside the $\forall^\text{st}$ is classical; i.e., it does not use any of the notions from NSA, such as standard, infinitesimally close, infinitesimally small, etc. Consequently, we can use the transfer principle so that the quantifier ranges over all reals instead of just the standard reals. This results in the traditional definition of limits over the hyperreals: $$(\forall \epsilon > 0)
(\exists \delta>0)
\left(
(\forall x) \left(0< |x - a| < \delta \wedge x \ne a
\Rightarrow |f(x) - L(a)| < \epsilon\right)\right).$$ The transfer can also be used in the other direction. The introduction of the existential quantifier is done via `defun-sk`, and ACL2(r) introduces such quantifiers by creating a Skolem choice function $\delta(a,\epsilon)$ using `defchoose`. Since the criteria used to define this Skolem function are classical, `defchoose` introduces the Skolem function itself as classical. That means that when $a$ and $\epsilon$ are standard, so is $\delta(a, \epsilon)$. This observation is sufficient to show that $\lim_{x\rightarrow a} f(x) = L(a)$, using the traditional definition over the classical reals: $$(\forall^\text{st} \epsilon > 0)
(\exists^\text{st} \delta>0)
\left(
(\forall x) \left(0< |x - a| < \delta \wedge x \ne a
\Rightarrow |f(x) - L(a)| < \epsilon\right)\right).$$
It is worth noting that this last theorem is not obviously weaker or stronger than the previous one, where the quantifiers range over all reals, not just the standard ones. The reason is that the $\forall$ quantifier ranges over more values than $\forall^\text{st}$, so it would appear that using $\forall$ instead of $\forall^\text{st}$ yields a stronger result. However, this advantage is lost when one considers the $\exists$ quantifier, since $\exists^\text{st}$ gives an apparently stronger guarantee. In actual fact, the two statements are equivalent, since the transfer principle can be used to guarantee that the value guaranteed by $\exists$ can be safely assumed to be standard.
To complete the proof, we need to show that if $\lim_{x\rightarrow a}
f(x) = L(a)$, using the traditional definition over the standard reals, then $\lim_{x\rightarrow a} f(x) = L(a)$ using the non-standard definition. To do this, we introduce a new `encapsulate` where $f$ is constrained to have a limit using the traditional definition over the standard reals. We then proceed as follows. First, fix $\epsilon$ so that it is positive and standard. From the (standard real) definition of limit, it follows that $$(\exists^\text{st} \delta > 0) (\forall x) \left(0<|x-a|<\delta
\Rightarrow | f(x) - L(a)| < \epsilon\right).$$ Now suppose that $\delta_0$ is a positive, infinitesimally small number. It follows that $\delta_0 < \delta$ for any positive, standard $\delta$. In particular, this means that $$0 < \delta_0 \wedge (\forall x) \left(0<|x-a|<\delta_0
\Rightarrow | f(x) - L(a)| < \epsilon\right).$$ Since $\delta_0$ is an arbitrary positive infinitesimal, we can generalize it as follows: $$(\forall \delta > 0) \left(small(\delta) \Rightarrow (\forall x) \left(0<|x-a|<\delta
\Rightarrow | f(x) - L(a)| < \epsilon\right)\right).$$ Next, we remove the universal quantifier on $x$. This step does not have a dramatic impact on the mathematical statement, but it is more dramatic in ACL2(r), since it opens up a function introduced with `defun-sk`: $$(\forall \delta > 0) \left(small(\delta) \Rightarrow \left(0<|x-a|<\delta
\Rightarrow | f(x) - L(a)| < \epsilon\right)\right).$$ Recall that $x \approx a$ is a shorthand for $|x-a|$ is infinitesimally small. Thus, the theorem implies the following $$(\forall \delta > 0) \left(small(\delta) \Rightarrow \left(x \approx
a \wedge x \ne a \Rightarrow | f(x) - L(a)| <
\epsilon\right)\right).$$ At this point, the variable $\delta$ is unnecessary, so we are left with the following: $$x \approx a \wedge x \ne a \Rightarrow | f(x) - L(a)| <
\epsilon.$$ Now, recall that we fixed $\epsilon$ to be an arbitrary, positive, standard real. This means that what we have shown is actually the following: $$(\forall^\text{st} \epsilon) \left(x \approx a \wedge x \ne a \Rightarrow |f(x) - L(a)| <
\epsilon\right).$$ To complete the proof, it is only necessary to observe that if $|x-y|<\epsilon$ for all standard $\epsilon$, then $x \approx y$. We prove this in ACL2(r) by finding an explicit standard $\epsilon_0$ such that if $x \not\approx y$, then $|x-y| > \epsilon_0$. The details of that proof are tedious and not very illuminating, so we omit them from this discussion[^3]. Once that lemma is proved, however, it follows that $\lim_{x\rightarrow a} f(x) = L(a)$ using the non-standard definition: $$x \approx a \wedge x \ne a \Rightarrow f(x) \approx
L(a).$$
These results show that the three definitions of limit are indeed equivalent, at least when $f$ and $L$ are classical, and $a$ is standard.
Continuity of Functions {#continuity}
=======================
Now we consider the notion of continuity. The function $f$ is said to be continuous at $a$ if $\lim_{x \rightarrow a} f(x) = f(a)$. Since this uses the notion of limit, it is no surprise that there are three different characterizations which are equivalent (for standard functions and parameters):
- The non-standard definition (for standard parameter $a$): $$f \text{ is continuous at } a \Leftrightarrow
\left((\forall x) x \approx a \wedge x \ne a \Rightarrow f(x) \approx
f(a)\right).$$
- The traditional definition over the classical reals: $$f \text{ is continuous at } a \Leftrightarrow
\left((\forall^\text{st} \epsilon > 0) (\exists^\text{st} \delta>0) (0<|x-a|<\delta
\Rightarrow | f(x) - f(a)| < \epsilon)\right).$$
- The traditional definition over the hyperreals: $$f \text{ is continuous at } a \Leftrightarrow
\left((\forall \epsilon > 0) (\exists \delta>0) (0<|x-a|<\delta
\Rightarrow | f(x) - f(a)| < \epsilon)\right).$$
What this means is that the notion of continuity can be completely reduced to the notion of limits. In particular, the results from Section \[limits\] can be functionally instantiated to derive the results for continuity. It is only necessary to instantiate both functions $f(x)$ and $L(x)$ to the same function $f(x)$.
Differentiability of Functions {#differentiability}
==============================
Next, we consider differentiability. At first sight, it appears that we can also define differentiability in terms of limits. After all, $f'$ is the derivative of $f$ iff $$\lim_{\epsilon \rightarrow 0} \frac{f(x+\epsilon) - f(x)}{\epsilon} = f'(x).$$ The problem, however, is that the difference quotient on the left of the equation is a function of both $x$ and $\epsilon$, and having free variables complicates functional instantiation when non-classical functions are under consideration. So we chose to prove this result essentially from scratch, although the pattern is very similar to the equivalence of limits.
Before proceeding, however, it is worth noting one other equivalence of interest. The non-standard definition of differentiability is as follows: $$\begin{gathered}
standard(a) \wedge x_1 \approx a \wedge x_1 \ne a \wedge x_2 \approx a \wedge x_2 \ne a \Rightarrow \\
\qquad\left(\neg large\left(\frac{f(x_1) - f(a)}{x_1 - a}\right) \wedge
\frac{f(x_1) - f(a)}{x_1 - a} \approx \frac{f(x_2) - f(a)}{x_2 - a}\right).\end{gathered}$$ The form of this definition was chosen because it does not have a dependency on $f'$, so it can be applied to functions even when their derivative is unknown. However, when $f'$ is known, a simpler definition can be used: $$standard(a) \wedge x \approx a \wedge x \ne a \Rightarrow \left(\frac{f(x) - f(a)}{x - a} \approx f'(a)\right).$$ In fact, this latter form is the definition of differentiability that was used in [@ReGa:automatic-differentiator]. In that context, ACL2(r) was able to automatically define $f'$ from the definition of $f$, so $f'$ was always known and the simpler definition was appropriate.
So the first result we show is to relate the definitions of differentiable and derivative. To do so, we can begin with a differentiable function $f$ and define $f'$ (for standard $a$) as follows: $$f'(a) \equiv standard\text{ }part\left(\frac{f(a+\epsilon) - f(a)}{\epsilon}\right)$$ where $\epsilon$ is a fixed but arbitrary, positive, small real, e.g., `(/ (i-large-integer))`. By assumption, the difference quotient at $a$ is not large for $x_1 = a +
\epsilon$. Since $f'(a)$ is defined as the standard part of the difference quotient, it follows that it really is close to the difference quotient, so $f'$ really is the derivative of $f$.
Conversely, suppose $f'$ is the derivative of $f$. Since $f'$ is classical and $a$ is standard, it follows that $f'(a)$ is standard, and in particular it is not large. Therefore, for any $x_1$ such that $x_1 \approx a$ and $x_1 \ne a$, the difference quotient at $x_1$ must be close to $f'(a)$ (by definition of derivative). It follows then that the difference quotient at $x_1$ is not large, since it’s close to something that is not large. Moreover, since $\approx$ is transitive, if $x_2$ is also such that $x_2 \approx a$ and $x_2 \ne a$, then the difference quotients at $x_1$ and $x_2$ are both close to $f'(a)$, so they must also be close to each other. Thus, $f$ is differentiable according to the non-standard criterion. This simple argument is sufficient to combine the results of differentiability in ACL2(r) with the automatic differentiator described in [@ReGa:automatic-differentiator], making the automatic differentiator much more useful, since the notion of differentiability it uses is now consistent with the main definition in ACL2(r).
Next, we show that the non-standard definition of derivative is equivalent to the traditional definition (both for the hyperreals and for the standard reals). The proof is nearly identical to the corresponding proof about limits, so we omit it here.
Discussion {#discussion .unnumbered}
----------
There is a possible misconception that needs to be corrected. We have shown that the three different notions of differentiability are equivalent in principle. However, this is far from sufficient in practice.
To understand the problem, consider a function such as $x^n$, which may be represented in ACL2(r) as `(expt x n)`. In a real application of analysis, we may want to show that $f(x) = x-x^{2n}$ achieves its maximum value at $x=1/\sqrt[2n-1]{2n}$. ACL2(r) has the basic lemmas that are needed to do this:
- $\frac{d(x^n)}{dx} = n \cdot x^{n-1}$ (at least for standard $n$)
- Chain rule
- Extreme value theorem (EVT)
- Mean value theorem (MVT)
But these lemmas cannot be used directly. Consider the chain rule, for example. Its conclusion is about the differentiability of $f
\circ g$, and the notion of differentiability is the non-standard definition. What this means is that the functions $f$ and $g$ cannot be instantiated with pseudo-lambda expressions, so $f$ and $g$ must be unary, and that rules out $x^n$ which is formally a binary function, even if we think of it as unary because $n$ is fixed.
Moreover, suppose that we have a stronger theorem, namely that $$\frac{d(x^n)}{dx} = n \cdot x^{n-1}$$ for all $n$, not just the standard ones. It’s possible to prove this using induction and the hyperreal definition of differentiability (since it’s a purely classical definition, so induction can be used over all the naturals, not just the standard ones). Suppose we want to invoke the MVT on $x^n$ over some interval $[a,b]$. It is not possible to use the equivalence of the hyperreal and non-standard definitions. The reason, again, is that the non-standard definition is non-classical, so we cannot use pseudo-lambdas in functional instantiations. Even though the two definitions of differentiability are equivalent for arbitrary (unary) $f(x)$, they are not equivalent for the function $x^n$ (which is binary).
It may seem that this is an unnecessary limitation on the part of ACL2(r). But actually, it’s just part of the definition. The non-standard definition says that the difference quotient of $f$ is close to $f'$ at standard points $x$. It says nothing about non-standard points. But when a binary function is considered, e.g., $x^n$, what should happen when $x$ is standard but $n$ is not? In general, the difference quotient need *not* be close to the derivative.
This fact can be seen quite vividly by fixing $x=2$ and $N$ an arbitrary (for now), large natural number. Is the derivative with respect to $x$ of $x^n$ close to the difference quotient when $x=2$ and $n=N$? The answer can be no, as the following derivation shows: $$\begin{aligned}
\frac{(2+\epsilon)^N - 2^N}{\epsilon} &= \frac{(2^N + N \epsilon
2^{N-1} + {N \choose 2} \epsilon^2 2^{N-2} + \cdots + \epsilon^N) - 2^N}{\epsilon}\\
& = \frac{N \epsilon 2^{N-1} + {N \choose 2} \epsilon^2 2^{N-2} + \cdots + \epsilon^N}{\epsilon}\\
& = \frac{\epsilon(N 2^{N-1} + {N \choose 2} \epsilon 2^{N-2} + \cdots + \epsilon^{N-1})}{\epsilon}\\
& = N 2^{N-1} + {N \choose 2} \epsilon 2^{N-2} + \cdots + \epsilon^{N-1}\\\end{aligned}$$ All terms except the first have a factor of $\epsilon$, so if $N$ were limited, those terms would be infinitesimally small, and thus the derivative would be close to the difference quotient. But if $N$ is large, ${N \choose 2} = \frac{N(N-1)}{2}$ is also large. And if $N =
{\lceil 1/\epsilon \rceil}$, then ${N\choose2}\epsilon$ is roughly $N/2$, which is large. So the difference between the difference quotient and the derivative is arbitrarily large!
This shows that it is not reasonable to expect that we can convert from the traditional to the non-standard definition of derivative in all cases. Therefore, we cannot use previously proved results, such as the MVT directly.
A little subterfuge resolves the practical problem. What must be done is to prove a new version of the MVT (and other useful theorems about differentiability) for functions that are differentiable according to the $\epsilon$-$\delta$ criterion for reals or hyperreals, as desired. Of course, the proofs follow directly from the earlier proofs. For instance, suppose that $f(x)$ is differentiable according to the hyperreal criterion. Then, we can use the equivalence theorems to show that $f(x)$ is differentiable according to the non-standard criterion. In turn, this means that we can prove the MVT for $f(x)$ using functional instantiation. Now, the MVT is a classical statement, so we instantiate it functionally with pseudo-lambda expressions. E.g., we can now use the MVT on $f(x) \rightarrow
(\lambda (x) x^n)$. So even though we cannot say that $x^n$ satisfies the non-standard criterion for differentiability, we can still use the practical results of differentiability, but only after proving analogues of these theorems (e.g., IVT, MVT, etc.) for the classical versions of differentiability. The proof of these theorems is a straightforward functional instantiation of the original theorems. We have done this for the key lemmas about differentiation (e.g., MVT, EVT, Rolle’s Theorem, derivative composition rules, chain rule, derivative of inverse functions). We have also done this for some of the other equivalences, e.g., the Intermediate Value Theorem for continuous functions.
Integrability of Functions {#integrability}
==========================
The theory of integration in ACL2(r) was first developed in [@Kau:ftc], which describes a proof of a version of the Fundamental Theorem of Calculus (FTC). The version of the FTC presented there is sometimes called the First Fundamental Theorem of Calculus, and it states that if $f$ is integrable, then a function $g$ can be defined as $g(x) = \int_{0}^{x}{f(t) dt}$, and that $g'(x) =
f(x)$. As part of this proof effort, we redid the proof in [@Kau:ftc], and generalized the result to what is sometimes called the Second Fundamental Theorem of Calculus. This more familiar form says that if $f'(x)$ is continuous on $[a,b]$, then $\int_{a}^{b}{f'(x) dx} = f(b) - f(a)$.
The integral formalized in [@Kau:ftc] is the Riemann integral, and the non-standard version of integrability is as follows: $$\int_{a}^{b}{f(x) dx} = L \Leftrightarrow (\forall P)
\left(P \text{ is a partition of } [a, b] \wedge small(||P||) \Rightarrow \Sigma_{x_i \in P}
\left(f(x_i) (x_{i} - x_{i-1})\right) \approx L \right)$$ $P$ is a monotonically increasing partition of $[a,b]$ if $P$ is given by a list $P = [ x_1, x_2, \dots, x_n]$ with $x_1=a$, $x_n=b$, and $x_i < x_{i+1}$ for each $i$. The term $||P||$ denotes the maximum value of $x_{i+1} - x_{i}$ in the partition $P$.
The traditional definition uses limits instead of the notion of infinitesimally close. It can be written as follows: $$\int_{a}^{b}{f(x) dx} = L \Leftrightarrow
\lim_{||P|| \rightarrow 0}
\left(\Sigma_{x_i \in P}
\left(f(x_i) (x_{i} - x_{i-1})\right) \right) = L.$$ The notion of limit is strange here, because what approaches 0 is $||P||$. Many partitions can have the same value of $||P||$, so this limit ranges over all such partitions at the same time.
Opening up the definition of limits, integrals can be expressed as follows: $$\begin{gathered}
\int_{a}^{b}{f(x) dx} = L \Leftrightarrow \\
\qquad (\forall \epsilon>0)(\exists
\delta>0)(\forall P) \\
\qquad\qquad
\left(P \text{ is a partition of } [a, b] \wedge ||P|| < \delta \Rightarrow \left|\Sigma_{x_i \in P}
\left(f(x_i) (x_{i} - x_{i-1})\right) - L\right| < \epsilon\right).\end{gathered}$$ Once integrals are viewed in this way, the remainder of the proof is clear. Specifically, it follows the same line of reasoning as in Section \[limits\]. First, the $\delta$ that exists depends on $a$, $b$, and $\epsilon$, so it is standard when those are standard. Second, since there is a standard $\delta$ that is sufficient, any infinitesimal can take the place of $\delta$, and then the condition $||P||<\delta$ can be recast as $small(||P||)$. Finally, since the Riemann sum is within $\epsilon$ of $L$, for an arbitrary, positive, standard $\epsilon$, it must be that the Riemann sum is infinitesimally close to $L$. So the two definitions are, in fact, equivalent.
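As a purely numerical illustration of the Riemann-sum criterion above (not part of the ACL2(r) development), the following sketch evaluates $\Sigma_{x_i \in P} f(x_i)(x_i - x_{i-1})$ for $f(x)=x^2$ on $[0,1]$ over uniform partitions of shrinking mesh; the function and the interval are arbitrary choices.

```python
import numpy as np

def riemann_sum(f, partition):
    """Right-endpoint Riemann sum  sum_i f(x_i) * (x_i - x_{i-1})."""
    x = np.asarray(partition, dtype=float)
    return float(np.sum(f(x[1:]) * np.diff(x)))

# For f(x) = x^2 on [0, 1] the sums approach L = 1/3 as the mesh ||P|| shrinks.
for n in (10, 100, 1000, 10000):
    P = np.linspace(0.0, 1.0, n + 1)
    print(f"||P|| = {1.0 / n:7.0e}  ->  sum = {riemann_sum(lambda t: t**2, P):.6f}")
```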
Conclusions
===========
In this paper, we showed how the non-standard definitions of traditional concepts from analysis are in fact equivalent to the traditional $\epsilon$-$\delta$ definitions. The results are especially important in ACL2(r) because the non-standard definitions feature non-classical notions, such as “infinitely close” and “infinitely small.” Consequently, they are limited in the use of induction and functional instantiation. However, the traditional notions are (by definition) classical, so they are unencumbered by such limitations.
This presents an interesting dilemma. In our experience, analysis style proofs are much easier to do and automate using non-standard analysis. However, *using* those results in subsequent proof attempts is much easier to do with the traditional (i.e., classical) statements. The distinction we’re making is between *proving* the correctness of Taylor’s Theorem, say, and actually *using* Taylor’s Theorem in a larger verification effort. For example, the formalization of Taylor’s Theorem in [@SaGa:sqrt] took extreme care to push free variables (including what were really summation indexes for the series) all the way into the original `encapsulate` introducing the function to be approximated. However, now that the equivalences are proved, a more elegant approach can be followed: First, prove a “clean” version of Taylor’s Theorem using NSA, then use that result to show that Taylor’s Theorem also holds using the traditional definition of derivative. The “traditional” version of Taylor’s Theorem would then be used with no restrictions during functional instantiation, so free variables would no longer present a problem. We plan to pursue this idea for Taylor’s Theorem in the near future, as part of a comprehensive verification effort into the implementation of hardware algorithms for square root and various trigonometric and exponential functions.
[^1]: Readers who attended the ACL2 Workshop in 2013 will recognize many of the results in this section, because they were presented in a Rump Session there.
[^2]: The ACL2 constant `(i-large-integer)` is often used to denote an otherwise unspecified large integer, and that is what we use in this case.
[^3]: The interested reader can consult the definition of `standard-lower-bound-of-diff` which produces the constant $\epsilon_0$ mentioned above, and the lemmas `standards-are-in-order-2`, `standards-are-in-order`, `rlfn-classic-has-limits-step-3`, and the more trivial lemmas leading up to the main theorem `rlfn-classical-has-a-limit-using-nonstandard-criterion`.
|
---
abstract: 'We have carried out systematic Macroscopic Quantum Tunneling (MQT) experiments on Nb/Al-AlO$_x$/Nb Josephson junctions (JJs) of different areas. Employing on-chip lumped element inductors, we have decoupled the JJs from their environmental line impedances at the frequencies relevant for MQT. This allowed us to study the crossover from the thermal to the quantum regime in the low damping limit. A clear reduction of the crossover temperature with increasing JJ size is observed and found to be in excellent agreement with theory. All junctions were realized on the same chip and were thoroughly characterized before the quantum measurements.'
author:
- Christoph Kaiser
- Roland Schäfer
- Michael Siegel
bibliography:
- 'MQT\_PRB.bib'
title: Dependence of the Macroscopic Quantum Tunneling Rate on Josephson Junction Area
---
Introduction
============
Since the gauge-invariant phase over a Josephson junction (JJ) $\varphi$ is a macroscopic variable, circuits containing JJs have been used as model systems for the investigation of quantum dynamics on a macroscopic scale. This research has recently led to the development of different types of superconducting quantum bits [@Nakamura_first_charge_qubit; @Friedman_first_qubit; @Martinis_first_current_biased_qubit; @Vion_hybrid_qubit; @Chiorescu_first_flux_qubit], which are promising candidates for the implementation of quantum computers. The starting point of this field was the observation of Macroscopic Quantum Tunneling (MQT) in Josephson junctions in the 1980s [@MQT_Voss_Webb; @MQT_Martinis_1987]. In such experiments, the macroscopic variable $\varphi$ is trapped in the local minimum of a tilted washboard potential, before it tunnels through the potential barrier and starts rolling down the sloped potential. Since this running state is equivalent to the occurrence of a voltage drop over the junction, such tunneling events can be experimentally detected. MQT—often referred to as secondary quantum effect—is the manifestation of the quantum mechanical behavior of a single macroscopic degree of freedom in a complex quantum system. Furthermore, it is the main effect on which all quantum devices operated in the phase regime (such as phase qubits and flux qubits) are based. Consequently, the detailed understanding of MQT is not only interesting by itself, but also important for current research on superconducting qubits operated in the phase regime. In this article, we report on a systematic experimental study of the dependence of the macroscopic quantum tunneling rate on the Josephson junction area, which to our knowledge has never been performed before. As usual[@MQT_Voss_Webb; @MQT_Martinis_1987; @Wallraff2003], we measure the rate at which the escape of $\varphi$ out of the local minimum of the washboard potential occurs as a function of temperature. At high temperatures, the escape is driven by thermal fluctuation over the barrier while it is dominated by tunneling at low temperatures. This leads to a characteristic saturation of the temperature dependent tunneling rate below a crossover temperature $T_\mathrm{cr}$, which is the hallmark of MQT. The rates above and below crossover are affected by the dissipative coupling to the environment of the JJ, which is commonly accounted for by a quality factor $Q$ in theoretical descriptions. A major goal of the presented study was to keep the influence of $Q$ on the rate constant while varying the junction area, so that a change in the observed escape rates could be clearly assigned to the changed JJ size. For this purpose, we work in the underdamped regime of large $Q$, which is only possible if the JJ is to some extent decoupled from its low-impedance environment (i.e. the transmission line leading to the JJ). We achieve this by employing on-chip lumped element inductors.
This article is organized as follows: First, the physical model of MQT is discussed and the theoretical expectations for varying junction size are given. Second, the procedure and setup of measurement are described. Afterwards, the investigated Josephson junctions are characterized carefully, and finally, the results of the MQT measurements are presented and discussed.
Model and Macroscopic Quantum Tunneling
=======================================
General Model
-------------
The dynamics of a JJ is usually described by the RCSJ (resistively and capacitively shunted junction) model [@RCSJ_Stewart; @RCSJ_McCumber]. The current flowing into the connecting leads comprises in addition to the Josephson current $I_J=I_c\sin\varphi$ ($I_c$ denotes the critical current of the junction) a displacement current due to a shunting capacitance $C$ and a dissipative component due to a frequency dependent shunting resistance $R$. For a complete description, the electromagnetic environment given by the measurement setup can be included in the model parameters. In our case, $R$ will be influenced by the environmental impedance while $C$ can be regarded as solely determined by the plate capacitor geometry of the JJ itself. In any case, the bias current $I$ is composed of $$\label{RCSJ}
I = I_c\sin \varphi + \frac{1}{R}\frac{\Phi_0}{2\pi}\dot\varphi +
C\frac{\Phi_0}{2\pi}\ddot\varphi\, ,
$$ where $\varphi$ is the gauge-invariant phase difference across the junction and $\Phi_0=h/2e$ is the magnetic flux quantum. The dynamics of $\varphi$ as expressed by (\[RCSJ\]) is equally described by the well-studied Langevin equation $$\label{Langevin}
M\ddot{\varphi}+\eta{}M\dot{\varphi}+\frac{\partial{}U}{\partial{}\varphi}=
\xi(t)\, ,
$$ which describes a particle of mass $M=C(\Phi_0/2\pi)^2$ in a tilted washboard potential $$\label{potential}
U\left(\varphi\right) = E_J \left(1
- \cos\varphi - \gamma\varphi\right)\, ,
$$ exposed to damping $\eta=1/RC$ and under the influence of a fluctuating force $\xi(t)$. The strength of $\xi(t)$ is linked to temperature and damping by the fluctuation-dissipation theorem. Furthermore, $\gamma=I/I_c$ denotes the normalized bias current while $E_J=\Phi_0 I_c/2\pi$ is called the Josephson coupling energy. For $\gamma<1$, if thermal and quantum fluctuations are ignored, the particle is trapped behind a potential barrier $$\label{DeltaU}
\Delta U = 2E_J \left(\sqrt{1-\gamma^2} - \gamma\arccos\gamma \right)\, ,$$ and the JJ stays in the zero-voltage state. In the potential well, the phase oscillates with the bias current dependent plasma frequency $$\label{omegap}
\omega_p = \omega_{p0}\left(1-\gamma^2\right)^{1/4} = \sqrt{\frac{2\pi
I_c}{\Phi_0 C}} \left(1-\gamma^2\right)^{1/4}\, .
$$ To complete the list of important system parameters given in this section, we introduce the quality factor $$\label{equ_Qdamp}
Q=\omega_p/\eta=\omega_pRC\, ,
$$ which is conventionally used to quantify the damping in the JJ.
At finite temperatures, the thermal energy $k_BT$ ($k_B$ being Boltzmann’s constant) described by $\xi(t)$ in (\[Langevin\]) can lift the phase particle over the potential barrier before the critical current $\gamma=1$ is reached, so that the particle will start rolling down the potential. This is called premature switching and the observed maximal supercurrent $I_\mathrm{sw}<I_c$ is called the switching current. When the phase particle is rolling, the JJ is in the voltage state, since a voltage drop according to $\dot{\varphi}=(2\pi/\Phi_0)V$ is observed. The thermal escape from the potential well occurs with a rate [@Kramers1940; @Haenggi1990] $$\label{Gamma_th}
\Gamma_\mathrm{th}=a_t\frac{\omega_p}{2\pi}\exp\left(-\frac{\Delta
U}{k_BT}\right)\, ,
$$ where $a_t$ is a temperature and damping dependent prefactor, which will be discussed in more detail in Sec. \[sec\_damping\].
For $T\rightarrow 0$, where $\Gamma_\mathrm{th}\rightarrow 0$, premature switching will still be present due to quantum tunneling through the potential barrier. As the phase difference over the JJ is a macroscopic variable, this phenomenon is often referred to as “Macroscopic Quantum Tunneling” (MQT). This means that by measuring the switching events of a JJ for decreasing temperature, one will see a temperature dependent behavior (dominated by the Arrhenius factor $\exp(-\Delta{}U/k_BT)$ in (\[Gamma\_th\])) until a crossover to the quantum regime is observed. The crossover temperature $T_\mathrm{cr}$ is approximately given by [@Affleck_Tcr; @MQT_Martinis_1987] $$\label{Tcr}
T_\mathrm{cr}=\frac{\hbar\omega_p}{2\pi k_B}=
\frac{\hbar\omega_{p0}}{2\pi{}k_B}\cdot(1-\gamma^2)^{1/4}\, ,
$$ where $\hbar$ is the reduced Planck constant. We can write the quantum tunneling rate for temperatures well below crossover as [@CaldeiraLeggett; @MQT_Martinis_1987; @Grabert87; @Freidkin88] $$\label{Gamma_qu}
\Gamma_q=a_q\frac{\omega_p}{2\pi}\exp(-B)\, ,
$$ where $a_q=\sqrt{864\pi\Delta{}U/\hbar{}\omega_p}\exp(1.430/Q)$ and $B=(36\Delta U/5\hbar\omega_p)(1+0.87/Q).$ In the limit of large $Q$, the escape rate is expected to approach the temperature independent expression (\[Gamma\_qu\]) quickly[@Grabert87; @Freidkin86; @*Freidkin87] once the temperature falls below $T_\mathrm{cr}.$ The rates in (\[Gamma\_th\]) and (\[Gamma\_qu\]) are functions of the normalized bias current $\gamma$ via (\[DeltaU\]) and (\[omegap\]). The crossover to the quantum regime can be nicely visualized by measuring the bias current dependence of the escape rate $\Gamma(I)$ for a sequence of falling temperatures. The data are then described over the whole temperature range by the thermal rate (\[Gamma\_th\]) with the temperature as a fitting parameter. In this way, one obtains a virtual “escape temperature” $T_\mathrm{esc}$, which can be compared to the actual bath temperature $T$. In the thermal regime, one should obtain $T_\mathrm{esc}=T$ while in the quantum regime, one should get $T_\mathrm{esc}=T_\mathrm{cr}=\mathrm{const}$.
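The following minimal sketch (our illustration, not the analysis code used for the experiments) evaluates Eqs. (\[DeltaU\]), (\[omegap\]), (\[Gamma\_th\]), (\[Tcr\]) and (\[Gamma\_qu\]); the junction parameters $I_c$, $C$, $Q$ and the bias point are placeholders chosen for illustration only.

```python
import numpy as np

Phi0 = 2.067833848e-15          # magnetic flux quantum (Wb)
kB   = 1.380649e-23             # Boltzmann constant (J/K)
hbar = 1.054571817e-34          # reduced Planck constant (J s)

# Assumed junction parameters, for illustration only
Ic, C, Q = 30e-6, 0.3e-12, 100  # critical current (A), capacitance (F), quality factor

EJ       = Phi0 * Ic / (2 * np.pi)               # Josephson coupling energy
omega_p0 = np.sqrt(2 * np.pi * Ic / (Phi0 * C))  # zero-bias plasma frequency

def barrier(g):
    """Barrier height Delta U of the tilted washboard potential, Eq. (DeltaU)."""
    return 2 * EJ * (np.sqrt(1 - g**2) - g * np.arccos(g))

def omega_p(g):
    """Bias-dependent plasma frequency, Eq. (omegap)."""
    return omega_p0 * (1 - g**2)**0.25

def rate_thermal(g, T, a_t=1.0):
    """Thermal escape rate, Eq. (Gamma_th), with prefactor a_t ~ 1 (low damping)."""
    return a_t * omega_p(g) / (2 * np.pi) * np.exp(-barrier(g) / (kB * T))

def rate_quantum(g):
    """Quantum tunneling rate well below crossover, Eq. (Gamma_qu)."""
    dU, wp = barrier(g), omega_p(g)
    a_q = np.sqrt(864 * np.pi * dU / (hbar * wp)) * np.exp(1.430 / Q)
    B   = 36 * dU / (5 * hbar * wp) * (1 + 0.87 / Q)
    return a_q * wp / (2 * np.pi) * np.exp(-B)

g = 0.97                        # normalized bias point, assumed
print(f"T_cr ~ {hbar * omega_p(g) / (2 * np.pi * kB) * 1e3:.0f} mK, "
      f"Gamma_q ~ {rate_quantum(g):.2e} Hz, "
      f"Gamma_th(300 mK) ~ {rate_thermal(g, 0.3):.2e} Hz")
```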
Influence of JJ Size on MQT {#sec_JJsize}
---------------------------
![\[PI\_rates\]Theoretically calculated switching current distributions $P(I)$ (solid curves) and quantum tunneling rates $\Gamma_q$ (dashed curves) for samples B1 (left) to B3 (right) having different diameters $d$ (for parameters see Tab. \[table\_exp\_Tcr\]). The difference in $\gamma_\mathrm{cr}$ (maximum position of $P(I)$), where quantum tunneling leads to escape from the potential well, is significant. The tunneling rates at these points (marked by arrows) are of the order of $\approx100$ kHz for all samples.](fig1.eps){width="\linewidth"}
The crucial element of a Nb-based Josephson junction as employed in this work is the Nb/Al-AlO$_x$/Nb trilayer. For a JJ, $I_c=j_c\cdot A$ and $C=c\cdot A$, where the critical current density $j_c$ and the specific capacitance $c$ are constant for a given trilayer and $A$ is the area of the junction. Hence, by reformulating (\[omegap\]), we find that $\omega_{p0}=\sqrt{2\pi j_c/\Phi_0 c}$, meaning that for JJs fabricated with the same trilayer, the plasma frequency does not depend on their size. So at first sight, the crossover temperature (\[Tcr\]) should also be independent of the JJ size. In reality, however, the problem is more subtle, as one needs to take into account at which normalized bias current $\gamma_\mathrm{cr}$ the quantum tunneling rate (\[Gamma\_qu\]) becomes significant. Since the height of the potential barrier $\Delta U\propto
E_J\propto A$ is proportional to the JJ size, a significant tunneling rate should be reached at different $\gamma_\mathrm{cr}$ values for junctions of different size. These points can be estimated by theoretically calculating (\[Gamma\_qu\]) and converting it into a switching current histogram, as it would be observed in a real experiment. The probability distributions of switching currents $P(\gamma)$ can be obtained from the quantum rate by equating [@Fulton_D] $$\label{PIfromGamma}
P(\gamma)=\Gamma_q\left(\frac{{\rm d}\gamma}{{\rm
d}t}\right)^{-1}\left(1-\int_0^\gamma P(u)\,{\rm d}u\right)\, ,
$$ where ${\rm d}\gamma/{\rm d}t=(1/I_c)\cdot({\rm d}I/{\rm d}t)$ is a constant for the linear current ramp chosen in our experiment. For the parameters of the junctions investigated in this work (see Tab. \[table\_exp\_Tcr\]), the switching current distributions $P(\gamma)$ were determined with a quality factor of $Q=100$ and a current ramp rate of 100 Hz, according to our experiments (see below). They are shown in Fig. \[PI\_rates\], where it can be seen that the $\gamma_\mathrm{cr}$ values (the positions of the maxima of the distributions) significantly and systematically increase with the junction size. Evaluation of the $\Gamma_q(\gamma_\mathrm{cr})$ values for the samples indicates that quantum tunneling will be experimentally observable at a rate of around $\Gamma_q\approx 10^5$ Hz.
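In practice, $P(\gamma)$ is obtained from a given rate by integrating (\[PIfromGamma\]) numerically, since it is a first-order equation for the cumulative switching probability. The sketch below does this for a toy rate; for the actual calculation one would plug in the quantum rate $\Gamma_q(\gamma)$ of (\[Gamma\_qu\]), and all numbers shown are assumptions made for illustration only.

```python
import numpy as np

def switching_distribution(rate, dI_dt, Ic, gammas):
    """Integrate Eq. (PIfromGamma): dW/dgamma = (rate/(dgamma/dt)) * (1 - W),
    with W the cumulative switching probability; returns P(gamma) = dW/dgamma."""
    dgamma_dt = dI_dt / Ic
    W, P = 0.0, np.zeros_like(gammas)
    for i in range(1, len(gammas)):
        P[i] = rate(gammas[i]) / dgamma_dt * (1.0 - W)
        W   += P[i] * (gammas[i] - gammas[i - 1])
    return P

# Toy, steeply rising escape rate (assumed shape, for illustration only).
toy_rate = lambda g: 1e-2 * np.exp(200.0 * (g - 0.9))      # Hz
gammas   = np.linspace(0.90, 0.999, 2000)
P = switching_distribution(toy_rate, dI_dt=3e-3, Ic=30e-6, gammas=gammas)
print("most probable switching point: gamma_cr ~", round(gammas[np.argmax(P)], 3))
```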
Subsequently, the expected crossover temperature was calculated from (\[Tcr\]) with $\gamma=\gamma_\mathrm{cr}$. The sample parameters as well as the expected $\gamma_\mathrm{cr}$ and $T_\mathrm{cr}$ values are given in Table \[table\_exp\_Tcr\]. It can be seen that due to the term in parenthesis on the right hand side of (\[Tcr\]), the crossover temperature systematically decreases for increasing junction size. The change in $T_\mathrm{cr}$ is large enough to be observed experimentally. However, such a systematic study of the size-dependence of $T_\mathrm{cr}$ has never been carried out before.
  Sample   $d$ ($\mu$m)   $\gamma_\mathrm{cr}$   $T_\mathrm{cr}$ (mK)
  -------- -------------- ---------------------- ----------------------
  B1       $1.9$          $0.965$                $371$
  B2       $2.55$         $0.977$                $323$
  B3       $3.6$          $0.988$                $291$
  B4       $3.8$          $0.988$                $277$

  : \[table\_exp\_Tcr\]Sample diameters $d$ together with the theoretically expected values of $\gamma_\mathrm{cr}$ and $T_\mathrm{cr}$.
Influence of Damping on MQT {#sec_damping}
---------------------------
The quality parameter $Q$ is frequently employed to describe the strength of the hysteresis in the current-voltage characteristics of a JJ. In this case, one often takes $Q=\omega_{p0}R_\mathrm{sg}C$ with $R_\mathrm{sg}$ being the subgap resistance of the junction. Here, $Q$ is size-independent, as $R_\mathrm{sg}\propto 1/A$ and $C\propto A$. In the context of MQT, however, the dynamics takes place at a frequency of $\omega_p$, so that a complex impedance at that frequency $Z(\omega_p)$ has to be considered. For an MQT experiment, where the phase and not the charge is the well-defined quantum variable, the admittance $Y(\omega_p)$ will be responsible for damping [@Ingold_Nazarov], so that $R$ in (\[equ\_Qdamp\]) will be given by $R=1/{{\rm Re}}{(Y)}$.
If the junction were an isolated system, the value of $R$ in the context of MQT would be determined by the intrinsic damping in the zero-voltage state. The value which is typically taken as a measure for this is the maximal subgap resistance $R_\mathrm{sg,max}$, which is simply the maximal resistance value that can be extracted from the nonlinear subgap branch of the current-voltage characteristics [@Milliken-subgap; @Subap_Leakage_Gubrud_2001]. In most experiments, however, the electromagnetic environment of the JJ can be assumed to have an impedance that is real and amounts to $Z_0\approx 100$ $\Omega$, corresponding to typical transmission lines [@MQT_Martinis_1987]. As furthermore $Z_0\ll R_\mathrm{sg,max}$ and both contributions are in parallel (see Fig. \[impedance\]a), we can simply write $Q=\omega_pZ_0 C$ in this case.
Evidently, for junctions having a small capacitance (as in our experiment), the quality factor $Q=\omega_pZ_0 C$ will be limited to $Q \lesssim 10$ and additionally depend on the JJ size like $C\propto A$. As we want to investigate the pure influence of the JJ size on MQT, we would like to obtain very low damping as well as similar damping for all investigated junctions. In the implementation of phase qubits, current biased Josephson junctions have been inductively decoupled from their environment by the use of circuits containing lumped element inductors and an additional filter junction [@Martinis_first_current_biased_qubit]. In order to keep our circuits simple, we attempted to reach a similar decoupling by only using on-chip lumped element inductors right in front of the JJs (see Fig. \[impedance\]b). This setup leads to an admittance $$\label{YwithL}
Y=1/R_\mathrm{sg,max}+1/(Z_0+i\omega L)\, .
$$
Since, according to (\[YwithL\]), ${{\rm Re}}{(Y)}\rightarrow 1/R_\mathrm{sg,max}$ in the limit $\omega L\rightarrow \infty$, sufficiently large lumped element inductances should decouple the JJ from the $Z_0$ environment and result in a high intrinsic quality factor $Q=\omega_pR_\mathrm{sg,max}C$ even for switching experiments. Although it might be difficult to reach this limit in a real experiment, decoupling inductors should definitely help to increase the quality factor and move towards a JJ-size independent damping.
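A rough numerical illustration of this decoupling can be obtained directly from (\[YwithL\]) and (\[equ\_Qdamp\]); the values of $\omega_p$, $R_\mathrm{sg,max}$ and $C$ below are assumed, purely illustrative numbers, not those of our samples.

```python
import numpy as np

# Effective damping resistance R = 1/Re(Y) and quality factor Q = omega_p*R*C
# for the circuit of Eq. (YwithL); all numbers are assumed, for illustration only.
omega_p = 2 * np.pi * 40e9      # plasma frequency (rad/s)
Rsg_max = 50e3                  # maximal subgap resistance (Ohm)
Z0      = 100.0                 # line impedance (Ohm)
C       = 0.3e-12               # junction capacitance (F)

for L in (0.0, 1e-9, 3.3e-9, 10e-9):
    Y = 1.0 / Rsg_max + 1.0 / (Z0 + 1j * omega_p * L)
    R = 1.0 / Y.real
    print(f"L = {L * 1e9:4.1f} nH -> R = {R:8.0f} Ohm, Q = {omega_p * R * C:7.1f}")
```

For $L=0$ the junction simply sees $Z_0$, while increasing $L$ pushes $R$ towards $R_\mathrm{sg,max}$, which is the decoupling effect exploited in this work.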
![\[impedance\]a) Typical impedance environment for switching experiments in a JJ. As $Z_0\ll R_\mathrm{sg}$, the junction sees the impedance $Z_0$ at the plasma frequency. b) Lumped element inductors $L$ can be used to decouple the JJ from the line impedance $Z_0$, as discussed in the text. c) SEM micrograph of the electrode design used for the investigated junctions.](fig2.eps){width="0.8\linewidth"}
The damping in the JJ influences the thermal escape rate (\[Gamma\_th\]) via the prefactor $a_t<1$, which has been calculated for the first time by Kramers in 1940[@Kramers1940]. In the limiting case $Q\rightarrow0$ (moderate to high damping), he found: $$a_t=\alpha_\mathrm{KMD}=\sqrt{1+\left(\frac{1}{2Q}\right)^2}-\frac{1}{2Q}\,
,$$ while in the opposite limit $Q\rightarrow\infty$ (very low damping limit), he found: $$a_t=\alpha_\mathrm{KLD}=\frac{36\Delta{}U}{5Qk_BT}\, .$$ More recently, Büttiker, Harris and Landauer[@Buettiker] extended the very low damping limit to the regime of low to moderate damping finding the expression[^1] $$\label{a_t}
a_t=\frac{4}{(\sqrt{1+4/\alpha_\mathrm{KLD}}+1)^2}\, .
$$ Additionally, damping reduces the crossover temperature according to [@Grabert84; @Haenggi1990] $$\label{Tcr_Q}
T_\mathrm{cr,Q}=\frac{\hbar\omega_p}{2\pi k_B}\cdot\alpha_\mathrm{KMD}\, .
$$
A possible way to determine the quality factor $Q$ for such quantum measurements is to extract it from spectroscopy data [@MQT_Martinis_1987]. Unfortunately, for samples with such a high critical current density as used in our experiments described here, this turns out to be experimentally very hard. Hence, we will limit the analysis of the damping in our experiments to the MQT measurements. However, other groups have found a good agreement between the $Q$ values determined by spectroscopy and by MQT [@MQT_Martinis_1987; @Wallraff2003] and we hope to observe such a major increase in $Q$ due to the decoupling inductors that minor uncertainties in $Q$ should not play a role.
Setup and Procedure of Measurement
==================================
![\[figure\_messsystem\_IFP\]Schematic overview of the measurement system. The superconducting coil and the sample are inside a magnetic shield consisting of three nested cylindrical beakers, the middle one made from Pb, the two remaining ones from *Cryoperm*. Furthermore, the entire dilution refrigerator is placed inside a $\mu$-metal shield at room temperature. The $\pi$-symbols denote commercial $\pi$-filters.](fig3.eps){width="\linewidth"}
All samples were fabricated by a combined photolithography / electron beam lithography process based on Nb/Al-AlO$_x$/Nb trilayers. The trilayer deposition was optimized carefully in order to obtain stress-free Nb films. For the definition of the Josephson junctions, an Al hard mask is created employing electron beam lithography. This hard mask acts as an ideal etch stopper during the JJ patterning with reactive-ion-etching. Furthermore, it allows the usage of anodic oxidation even for small junctions, which would not be possible if a resist mask was used. After the anodic oxidation, the Al hard mask is removed by a wet etching process. Details of this Al hard mask technique and the entire fabrication process are discussed elsewhere [@Kaiser_fabrication].
![\[PI\_Fit\]Top: The measured switching current histograms for sample B3 for selected temperatures. For increasing $T$, the switching currents decrease and the histograms broaden. Bottom: The plot obtained by applying (\[formulafit\]) for the same sample. The fits allow to extract $T_\mathrm{esc}$ as well as $I_c$.](fig4.eps){width="\linewidth"}
Our measurement setup can be seen in Fig. \[figure\_messsystem\_IFP\]. Special care has been taken in design of the filtering stages in order to reach a low-noise measurement environment. The goal of the measurement is to determine the escape rate $\Gamma$. In order to do so, we have measured the probability distribution $P(I)$ of switching currents. This was done by ramping up the bias current with a constant rate $\dot{I}={\rm d}I/{\rm
d}t$ and measuring the time $t_\mathrm{sw}$ between $I=0$ and the switching to the voltage state with a Stanford Research 620 Counter, so that $I_\mathrm{sw}=\dot{I}\cdot t_\mathrm{sw}$ could be calculated. An Agilent 33250A waveform generator was used to create a sawtooth voltage signal with a frequency of 100 Hz, which was converted into the bias current by a resistor of $47\,{\rm k}\Omega$. In this way, for each temperature, $I_\mathrm{sw}$ could be measured repeatedly. After doing so 20,000 times, the switching current histograms $P(I)$ with a certain channel width $\Delta I$ were attained as shown in the upper part of Fig. \[PI\_Fit\]. These histograms were then used to reconstruct the escape rate out of the potential well as a function of the bias current by employing [@MQT_Martinis_1987; @Fulton_D] $$\label{FultonD}
\Gamma(I)=\frac{\dot{I}}{\Delta I} \ln\frac{\sum_{i\geq
I}{P(i)}}{\sum_{i\geq I+\Delta I}{P(i)}}\, .
$$
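In code, the transformation (\[FultonD\]) amounts to a cumulative sum over the histogram tail; the sketch below (with made-up histogram counts, for illustration only) shows one possible implementation.

```python
import numpy as np

def escape_rate_from_histogram(counts, dI, I_dot):
    """Fulton-Dunkleberger transform, Eq. (FultonD): reconstruct Gamma(I) from a
    switching histogram (counts per bin of width dI, current ramp rate I_dot)."""
    P = np.asarray(counts, dtype=float)
    tail = np.cumsum(P[::-1])[::-1]        # tail[i] = sum of counts from bin i upward
    return (I_dot / dI) * np.log(tail[:-1] / tail[1:])

# Toy histogram (made-up counts, for illustration only); dI in A, I_dot in A/s.
counts = [2, 10, 60, 300, 900, 2500, 6000, 7000, 2800, 400, 30]
print(np.round(escape_rate_from_histogram(counts, dI=0.05e-6, I_dot=3e-3), 1))
```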
With $\Gamma$ at hand, we could now determine the escape temperature $T_\mathrm{esc}$ by employing (\[Gamma\_th\]). In order to be able to rearrange this formula, we approximate the potential barrier in the limit $\gamma\rightarrow 1$ as $\Delta U=4\sqrt{2}/3\cdot
E_J\cdot(1-\gamma)^{3/2}$, so that we find $$\label{formulafit}
\left(\ln\frac{a_t(I)\omega_p(I)}{2\pi\Gamma(I)}\right)^{2/3}=
\left(\frac{4\sqrt{2}E_J}{3k_BT_\mathrm{esc}}\right)^{2/3}
\frac{I_c-I}{I_c}\,.
$$
Hence, by plotting the left side of (\[formulafit\]) over the bias current $I$, we should obtain straight lines (see bottom part of Fig. \[PI\_Fit\]). Consequently, we can extract the theoretical critical current $I_c$ in the absence of any fluctuations as well as the escape temperature $T_\mathrm{esc}$ by applying a linear fit with slope $a$ and offset $b$. We then find $$I_c=-\frac{b}{a}\qquad\mathrm{and}\qquad
T_\mathrm{esc}=-\frac{4\sqrt{2}\Phi_0}{6\pi k_Ba\sqrt{b}}\, .$$ Since $I_c$ enters (\[formulafit\]) via $E_J$ and $\omega_p$, this fitting procedure has to be iteratively repeated until the value of $I_c$ converges. So strictly speaking, this procedure involves two fitting parameters, namely $T_\mathrm{esc}$ and $I_c$. However, it turns out that $I_c$ is temperature independent within the expected experimental uncertainty (for all our measurements, the fit values of $I_c$ vary over the entire temperature range with a standard deviation of only around 0.09 %). Furthermore, the found $I_c$ values agree very well with the expected ones from the critical current density $j_c$ of the trilayer and the junction geometry. Altogether, it can be said that the results for the main fitting parameter $T_\mathrm{esc}$ should be very reliable.
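A possible implementation of this iterative fit is sketched below. It assumes a constant prefactor $a_t$ for simplicity and takes $\Gamma(I)$ as input, e.g. as reconstructed with the transform above; it is meant as an illustration of the procedure, not as the analysis code actually used for our data.

```python
import numpy as np

Phi0, kB = 2.067833848e-15, 1.380649e-23   # flux quantum (Wb), Boltzmann (J/K)

def fit_Ic_Tesc(I, Gamma, C, Ic_guess, a_t=1.0, n_iter=20):
    """Iterate the linear fit of Eq. (formulafit): plot
    [ln(a_t*omega_p/(2*pi*Gamma))]^(2/3) versus I, extract slope a and offset b,
    and update Ic = -b/a until it converges. a_t is assumed constant here."""
    I, Gamma = np.asarray(I, float), np.asarray(Gamma, float)
    Ic = Ic_guess
    for _ in range(n_iter):
        omega_p = np.sqrt(2 * np.pi * Ic / (Phi0 * C)) * (1 - (I / Ic)**2)**0.25
        y = np.log(a_t * omega_p / (2 * np.pi * Gamma))**(2.0 / 3.0)
        a, b = np.polyfit(I, y, 1)         # slope and offset of the straight line
        Ic = -b / a
    Tesc = -4 * np.sqrt(2) * Phi0 / (6 * np.pi * kB * a * np.sqrt(b))
    return Ic, Tesc
```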
Sample Characterization
=======================
![\[char-SampleB3\]Characterization of sample B3. The $IV$ curve shows the high quality regarding the $I_cR_N$ product as well as low subgap currents. The inset shows a magnification of the subgap branch achieved by a voltage bias. The blue line illustrates how the value for $R_\mathrm{sg,max}$ was determined. The rise of the current for decreasing voltage at $V\approx 0.5$ mV is caused by the junction jumping back to a supercurrent $I\neq0$ at $V=0$.](fig5.eps){width="\linewidth"}
The JJs were circular in shape and their geometries are given in Table \[table\_exp\_Tcr\]. In order to characterize the samples, $IV$ curves with current bias as well as $IV$ curves with voltage bias were recorded (an example can be seen in Fig. \[char-SampleB3\]). The quality parameters for all samples are given in Table \[table\_samples\_B\] and indicate a very high quality. In the voltage bias measurements, two major current drops at voltages $2\Delta/2$ and $2\Delta/3$ could be seen and attributed to Andreev reflections [@Arnold-MAR]. Below $2\Delta/3$, we were able to extract values of the maximal subgap resistance $R_\mathrm{sg,max}$ as illustrated by the blue line in Fig. \[char-SampleB3\].
Sample   $I_c$ ($\mu$A)   $V_\mathrm{gap}$ (mV)   $I_cR_N$ (mV)   $R_\mathrm{sg,max}$ (k$\Omega$)
-------- ----------- ----------------------- --------------- ---------------------------------
B-1 $19.1$ 2.88 $1.75$ $54.0$
B-2 $31.9$ 2.88 $1.86$ $73.0$
B-3 $68.1$ 2.92 $1.93$ $21.1$
B-4 $70.8$ 2.90 $1.91$ $31.5$
: \[table\_samples\_B\]Experimentally determined parameters for all investigated JJs. The theoretical critical currents $I_c$ were extracted from the MQT measurements. The $R_\mathrm{sg,max}$ values were obtained as shown in the inset of Fig. \[char-SampleB3\]a.
Results and Discussion
======================
Damping in the Junctions
------------------------
In order to decouple the JJs from their environmental impedance, the electrodes leading to the junctions were realized as lumped element inductors, as can be seen in Fig. \[impedance\]c. This design was based on the layout that we recently used to successfully realize lumped element inductors for $LC$ circuits in the GHz frequency range [@Kaiser_Dielectric_Losses]. Furthermore, simulations with Sonnet [^2] confirmed that the meandered electrodes indeed act as lumped element inductors at the relevant frequencies $\omega_p(\gamma_\mathrm{cr})$. The complex simulation with Sonnet gives an inductance of $L/2\approx 1.65\,$nH (for one electrode) while the much simpler analysis with FastHenry[^3] yields $L/2\approx 1.8\,$nH.
For each sample, the data were analyzed using a number of different $Q$ values in order to see if we could determine the experimentally observed damping. This was done by calculating the deviation of $T_\mathrm{esc}$ from the bath temperature $T$ in the thermal regime: $$\label{form_leastsquares}
\Delta T^2 = \sum_{T>500\,{\rm mK}}\left(T_\mathrm{esc}-T\right)^2
$$ and finding the value of $Q$ that minimizes it. The corresponding values were then used for the sample analysis. It can be seen in Fig. \[Fig\_LeastSquares\] that the experimentally observed damping could be clearly identified. The evaluated $Q$ values are given in Table \[table\_MQT\_damping\].
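Schematically, this amounts to a one-dimensional search over $Q$; the sketch below assumes a helper `Tesc_of_Q` (a hypothetical placeholder for the full escape-temperature analysis described in the previous section) and simply picks the $Q$ that minimizes (\[form\_leastsquares\]).

```python
import numpy as np

def best_Q(Q_values, bath_T, Tesc_of_Q):
    """Return the Q that minimizes Eq. (form_leastsquares) over the thermal
    regime T > 500 mK. `Tesc_of_Q` is a hypothetical placeholder: it must
    return the fitted escape temperatures for a given Q, one per bath temperature."""
    bath_T = np.asarray(bath_T, dtype=float)
    thermal = bath_T > 0.5                  # Kelvin
    cost = [np.sum((np.asarray(Tesc_of_Q(Q))[thermal] - bath_T[thermal])**2)
            for Q in Q_values]
    return Q_values[int(np.argmin(cost))]
```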
In a preliminary experiment, we investigated MQT in a junction with a diameter of $d=1.9\,\mu$m, a critical current of $I_c\approx 12\,\mu$A and low-inductance electrodes, which were simple wide lines and can be imagined as the envelope of the electrodes in Fig. \[impedance\]c. We carried out a similar analysis to determine the damping and obtained a quality factor of $Q=4$. Subsequently, we evaluated (\[equ\_Qdamp\]) and calculated an impedance of $R = 99.8$ $\Omega$, which is very close to the expected value of $Z_0\approx100$ $\Omega$ for typical transmission lines [@MQT_Martinis_1987]. This means that with this simple preliminary design, the junction was in no way decoupled from the electromagnetic environment.
![\[Fig\_LeastSquares\]Determination of the experimentally observed quality factor $Q$ for all samples. The curves are minimal when the determined $T_\mathrm{esc}$ values deviate the least from the corresponding bath temperatures $T$ in the thermal regime $T>500$ mK.](fig6.eps){width="\linewidth"}
The $Q$ values obtained by using inductive electrodes (see Tab. \[table\_MQT\_damping\]), however, show that we have drastically increased the quality factors with respect to the preliminary measurement. If we calculate the $R$ values using (\[equ\_Qdamp\]), we find that they are clearly above the typical line impedance of $Z_0\approx100$ $\Omega$ as well as the vacuum impedance of 377 $\Omega$, which shows that we were indeed able to inductively decouple the JJ from its usual impedance environment. As expected, the determined $R$ values are still clearly below the subgap resistance $R_\mathrm{sg,max}$, indicating that we have not reached the limit $\omega L\rightarrow \infty$. Instead, we are in the intermediate regime $Z_0\ll R\ll R_\mathrm{sg,max}$, leading to the fact that $Q$ still exhibits a slight dependence on the JJ size (see Table \[table\_MQT\_damping\]). However, all JJs are in the low-damping regime, so that no influence of damping on the results should be present and differences in the experimental results should indeed be due to the JJ size. This can be seen by the fact that the damping related correction in $T_\mathrm{cr}$ according to equation (\[Tcr\_Q\]) is smaller than 1 % for all experimentally observed $Q$ values. Altogether, we can state that we will be able to carry out our investigation of the size dependence of MQT with very low and nearly size-independent damping.
Sample Q $R$ ($\Omega$) $L_\mathrm{calc}$ (nH)
-------- ------- ---------------- ------------------------
B-1 $76$ $1571$ $1.33$
B-2 $98$ $1327$ $1.39$
B-3 $132$ $1015$ $1.36$
B-4 $143$ $1041$ $1.43$
: \[table\_MQT\_damping\]The experimentally determined values characterizing the damping for all samples. The $L_\mathrm{calc}$ values were determined from $Q$, the values given in Table \[table\_samples\_B\] and equation (\[YwithL\]). They are in good agreement with the design value of $L\approx 3.3\,$nH.
In addition to the rather qualitative considerations above, we performed a quantitative analysis employing equation (\[YwithL\]). If we use $\omega=\omega_p(\gamma_\mathrm{cr})$, take $R_\mathrm{sg,max}$ from Table \[table\_samples\_B\] and assume that $Z_0=100\,\Omega$, we can calculate the decoupling inductance $L_\mathrm{calc}$ for all samples. The values, given in Table \[table\_MQT\_damping\], are a factor of around $2.3-2.5$ smaller than the simulation value of $L\approx 3.3\,$nH, but of the right order of magnitude. For such a complex system, this is a surprisingly good agreement between simulation and theory on the one side and experimentally determined values on the other side. In summary, we conclude that we have successfully demonstrated that decoupling of the Josephson junction from its environment is also possible using only lumped element inductors.
Crossover to the Quantum Regime
-------------------------------
![\[Fig\_MQT\_magnet\]Calculated escape temperatures for sample B3 with and without an applied magnetic field. The inset shows the $I_c(\Phi)$ modulation of this junction; the arrows indicate where the MQT data was obtained. For $I_\mathrm{magn}=31.6\,$mA, a clear reduction of the observed crossover temperature $T_\mathrm{cr}$ is observed.](fig7.eps){width="\linewidth"}
We now turn to the investigation of the crossover point from the thermal to the quantum regime and the influence of JJ size on it. As can be seen in Table \[table\_exp\_Tcr\], we expect a clear reduction of $T_\mathrm{cr}$ with increasing JJ size. However, an experimental observation of lower crossover temperatures for smaller JJs having smaller critical currents could simply be due to current noise in our measurement setup. In order to exclude this, we artificially reduced the critical current of sample B3 by applying a magnetic field in parallel to the junction area. While unwanted noise should now lead to an increase in the observed $T_\mathrm{cr}$, the physical expectation is a significantly reduced $T_\mathrm{cr}$ due to the lower plasma frequency according to (\[Tcr\]). The result of this measurement can be seen in Fig. \[Fig\_MQT\_magnet\] and Table \[table\_MQT\_results\]. We found an agreement between calculated and observed crossover temperature down to $T_\mathrm{cr}\approx 140$ mK, which was the lowest temperature we examined. Hence, it is clear that we have a measurement setup exhibiting low noise, where the electronic temperature is indeed equal to the bath temperature. The lowest investigated temperature of 140 mK is clearly below any temperature needed for the comparison of the JJs of different sizes with each other.
![\[Fig\_MQT\_sizes\]Calculated escape temperatures for all samples. The crossover to the quantum regime is very clear in each measurement. The inset shows a magnification of the quantum regime. The reduction of the crossover temperature $T_\mathrm{cr}$ with increasing JJ size is clearly visible.](fig8.eps){width="\linewidth"}
Finally, we measured the switching histograms for the four JJs of different sizes and evaluated the escape temperature $T_\mathrm{esc}$ and the theoretical critical current $I_c$. This allowed us to determine the crossover temperature $T_\mathrm{cr}$ and the normalized crossover current $\gamma_\mathrm{cr}=I_\mathrm{sw,cr}/I_c$. We indeed found a clear dependence of the crossover temperature on the JJ size as can be seen in Fig. \[Fig\_MQT\_sizes\]. To compare the experimental $\gamma_\mathrm{cr}$ and $T_\mathrm{cr}$ values with the ones expected by theory, we now performed the theoretical calculation described above using the experimentally determined $Q$ values and equation (\[Tcr\_Q\]). All experimentally determined values are in excellent agreement with theory, as can be seen in Table \[table\_MQT\_results\].
  Sample   $I_c(B)/I_c(0)$   $\gamma_\mathrm{cr,theo}$   $\gamma_\mathrm{cr,exp}$   $T_{\mathrm{cr,}Q\mathrm{,theo}}$ (mK)   $T_\mathrm{cr,exp}$ (mK)
  -------- ----------------- --------------------------- -------------------------- ---------------------------------------- --------------------------
  B-1      $1$               $0.965$                     $0.970$                    $368$                                    $362$
  B-2      $1$               $0.977$                     $0.981$                    $322$                                    $321$
  B-3      $1$               $0.988$                     $0.990$                    $290$                                    $294$
  B-3      $0.52$            $0.984$                     $0.987$                    $223$                                    $236$
  B-3      $0.27$            $0.979$                     $0.983$                    $172$                                    $176$
  B-3      $0.13$            $0.973$                     $0.975$                    $129$                                    $147$
  B-4      $1$               $0.988$                     $0.990$                    $276$                                    $278$

  : \[table\_MQT\_results\]Theoretically expected and experimentally determined crossover parameters; for sample B-3, the rows with reduced critical current correspond to measurements in an applied magnetic field.
Conclusions
===========
We have carried out systematic Macroscopic Quantum Tunneling (MQT) experiments with varying Josephson junction area. Our samples were fabricated on the same chip. Thorough characterization before the actual quantum measurements revealed that the junctions exhibit a very high quality. We showed that we could significantly decrease the damping at frequencies relevant for MQT by using lumped element inductors, which allowed us to perform our study in the low damping limit. The crossover from the thermal to the quantum regime was found to have a clear and systematic dependence on junction size, which is in perfect agreement with theory.
This work was partly supported by the DFG Center for Functional Nanostructures, project number B1.5. We would like to thank A. V. Ustinov for useful discussions.
[^1]: Equation (\[a\_t\]) does not describe the turnover from low damping to high damping. This turnover problem has been addressed by several authors (see e.g. @Haenggi1990 and references therein). In general, more precise expressions agree with (\[a\_t\]) in the parameter regime of our samples to within experimental resolution.
[^2]: Sonnet Software Inc., 1020, Seventh North Street, Suite 210, Liverpool, NY 13088, USA
[^3]: Fast Field Solvers, http://www.fastfieldsolvers.com
|
---
address: |
$^1$CNR-INFM-S$^{3}$ National Research Center on nanoStructures and bioSystems at Surfaces\
$^2$Dipartimento di Fisica, Università di Modena e Reggio Emilia, via Campi 213/a, 41125 Modena, Italy;\
author:
- 'F. Troiani$^1$, V. Bellini$^1$, A. Candini$^1$, G. Lorusso$^{1,2}$ and M. Affronte$^{1,2}$.'
title: ' Spin Entanglement in supramolecular structures.'
---
Molecular spin clusters are mesoscopic systems whose structural and physical features can be tailored at the synthetic level. Besides, their quantum behavior is directly accessible in laboratory and their magnetic properties can be rationalized in terms of microscopic spin models. Thus they represent an ideal playground within solid state systems to test concepts in quantum mechanics. One intriguing challenge is to control entanglement between molecular spins. Here we show how this goal can be pursued by discussing specific examples and referring to recent achievements.
Introduction
============
Entanglement is a peculiarity of quantum systems and it represents one of the most fascinating aspects of quantum mechanics. It essentially consists in the impossibility of describing a quantum object without some knowledge of the rest of the system. More formally, it expresses the impossibility of factorizing the wavefunction of a composite system into the product of the wavefunctions of the components. For photons or cold atoms, as well as for a few solid state systems, entanglement has been widely investigated, both theoretically and experimentally [@Horodecki; @fazio]. These achievements underpin and stimulate the exploitation of this property for new applications like quantum cryptography, teleportation and computation. Besides, the controlled generation of entanglement between nanoscaled objects makes it possible to explore the boundary between quantum and classical behaviour. Molecular spin clusters represent a very interesting test bed in this context. In fact, they are complex but finite systems whose structural and physical features can be tailored at the synthetic level and whose collective properties can be predicted by microscopic, albeit demanding, models. Recent achievements in supramolecular chemistry, experiments and modeling appear extremely encouraging in this field.\
Here, we briefly review suitable molecules and linkers and illustrate methods used for the experimental determination and rationalization of supramolecular systems. With the help of specific examples, we discuss different issues including the possibility of quantifying and probing entanglement in supramolecular systems; besides, we provide hints to understand and control the inter-molecular coupling.
![Supramolecular structures based on Cr$_7$Ni rings. $a)$ two *purple* Cr$_7$Ni rings linked by bipyridine [@AngewChemGlued]; $b)$ two *green* Cr$_7$Ni rings linked by a metallorganic group containing a metal ion [@NanoNature]; $c)$ a tetramer formed by *purple* Cr$_7$Ni rings [@AngewChemGlued]; $d)$ a chain alternating Cr$_7$Ni rings with Cu ($s$=1/2) ions.[]{data-label="structures"}](fig1){width="15cm"}
Molecular spin clusters
-----------------------
Molecular spin clusters are molecules consisting of a magnetic core and an external non-magnetic shell. Typically, the inner part is made of transition metal (hydro-) oxides bridged and chelated by organic ligands (typically chemical groups comprising light elements like carbon, oxygen, hydrogen, nitrogen, etc.). Once synthesized, magnetic molecules are generally stable and they can be dissolved in solutions. From these, bulk crystals, comprising a macroscopic number of identical units aligned along specific crystallographic directions, can be obtained. In general, molecules do not interact with each other and the behavior of a bulk crystal turns out to be that of a collection of non-interacting, identical molecules. This makes it possible to use conventional solid state experimental techniques to investigate molecular features, which is certainly one of the keys to the success of these molecular objects. In recent years, part of the interest in the field has turned to developing protocols to graft and study arrays of molecules on suitable substrates, aiming at addressing a few or - eventually - single units.\
Within each molecule, uncompensated electron spins are well localized on transition metals with quenched orbital moments (Fe, Mn, Cr, Ni, Cu...) and interact with each other by (super-)exchange coupling. This ferro- or antiferromagnetic coupling dominates the intramolecular interactions and determines the pattern of magnetic eigenstates. Typically, the molecular spectra are well resolved at liquid-helium temperatures, while multiple level crossings can be observed at magnetic fields of a few teslas, which are easily achieved in the laboratory. Anisotropy and antisymmetric terms in the spin Hamiltonian of the single molecule may arise from reduced local symmetries.\
In the last years, most of the interest has been devoted to molecules like the prototypical Mn$_{12}ac$ or Fe$_8$, with high-spin ground state and high anisotropy barrier, that exhibit a characteristic hysteresis loop of the magnetization, justifying the name of single molecule magnets (SMM) [@mmagnets]. Intermolecular interactions can be reduced by diluting molecules in solid crystals [@Ga6; @Fe18] or in frozen liquid solution. Intermolecular dipolar interaction is limited in the case of antiferromagnetic molecular clusters, characterized by low-spin ground states. Among these, molecules with S=1/2 ground state, like V$_{15}$ [@V15a; @V15b] or the heterometallic Cr$_7$Ni rings [@QC1] represent prototypical examples of mesoscopic effective two-level systems.\
A relevant aspect is the coherence of the molecular spin dynamics. Generally speaking, SMM represent an ideal playground to observe quantum phenomena at the mesoscopic scale [@mmagnets]. The spectral definition of the SMM ground multiplet made it possible to perform electron spin resonance experiments in Fe$_8$[@pulsedFe8], Ni$_4$ [@Ni4] and Fe$_4$ [@pulsedFe4]; these capabilities inspired schemes for performing quantum algorithms in Mn$_{12}ac$ or Fe$_8$ [@grover], based on the massive exploitation of linear superpositions and quantum interference. A special case of coherent spin dynamics is that observed in single rare earth ions diluted in a crystalline matrix [@pulsedEr], which, however, do not represent a mesoscopic system. More recently, time resolved experiments have shown that molecular electron spins can be coherently manipulated. In the case of antiferromagnetic clusters, Rabi oscillations on the 10$^{-1}\,\mu$s time scale have been observed in V$_{15}$, while decoherence times $\tau_d$ as long as 3$\,\mu$s at 2K have been directly measured in molecular Cr$_7$Ni rings [@AA]. Since the gate time $\tau_g$ to manipulate the effective S=1/2 in real experimental conditions is of the order of 10$\, ns$, it turns out that the figure of merit $Q=\tau_d/\tau_g$ exceeds 100 at 2K for Cr$_7$Ni. For an isolated molecule the main source of decoherence remains the interaction with the nuclear spins both at the metal sites (specific isotopes) and in the organic environment (protons, fluoride, etc.). Molecules typically comprise a few hundred atoms in well defined positions, so the interactions between the electron and the nuclear spins can be rationalized for each molecule [@deco].
chemical routes for linking molecules
-------------------------------------
Entangling spins in supramolecular structures, such as nanomagnet dimers or oligomers, requires at least two separate steps: 1) the identification of molecular building blocks with well defined features; 2) the establishment of inter-molecular magnetic coupling. Concerning the first step, the synthesis and the characterization of separate molecular units should be considered as a prerequisite. Ideally, each of the molecular units should be individually addressable; this implies that they should be either spatially or spectrally resolvable.\
Different kinds of magnetic coupling between the units are compatible with the controlled generation of entangled states. Dipolar interaction is long-range and might be desirable if one seeks entanglement of a large collection of objects [@Ghosh], but it is detrimental for controlling entanglement between a few molecular units within an oligomer, for it tends to couple molecules belonging to different oligomers. Therefore, local types of magnetic interaction, such as exchange, are preferable. In practice, when organic linkers are used to exchange couple magnetic molecules there are two main risks: 1) forming polymeric networks that tend to undergo long range magnetic order; 2) heavily perturbing the magnetic states of the single moiety through the chemical link. Recently, different aromatic groups have been successfully used to selectively link molecular spin clusters. G. Timco and R.E.P. Winpenny in Manchester are currently using pyridine and pyrazole groups [@AngewChemGlued] while the group of G. Aromi is using $\beta$-diketonate ligands [@aromiMn4; @aromiCuNi].
Probably the first case of a molecular dimer reported in the literature is the \[Mn$_4$\]$_2$ [@Mn4WW; @Mn4SH]. The individual moiety, \[Mn$_4$O$_3$Cl$_4$(O$_2$CEt)$_3$(py)$_3$\][@Mn4], comprises three Mn$^{+3}$ and one Mn$^{+4}$ coupled together to give a S=9/2 ground molecular state and a uniaxial anisotropy. Two Mn$_4$ are linked through hydrogen bonds to form a Mn$_4$ dimer in which the magnetic states of each moiety are antiferromagnetically coupled to each other. The problem of entanglement, however, was not considered there.\
Another important case is that of heterometallic Cr$_7$Ni rings. Two species of Cr$_7$Ni rings have been synthesized: *green* [@Cr7M] and *purple*[@AngewChemGlued] Cr$_7$Ni, after their respective colour. The first attempt at linking two *green* Cr$_7$Ni rings was through the internal amine and different metallorganic groups [@AngewChemCr7Ni]. From the chemical point of view this was successful, since two rings have been selectively linked. Yet, the magnetic coupling turned out to be vanishingly small except in the case where a Ru$_2$ dimer was introduced in the linker [@Ru2]. That was interesting since this Ru$_2$ dimer has redox properties and in principle its magnetic features can be switched by an external electrical stimulus; however, the effectiveness of such a scheme still needs to be proved. Important progress has recently been made by exploiting the fact that the chemical reactivity of the extra Ni is much faster than that of the rest of the Cr ions in the rings. Firstly, a chemical group was attached to the carboxylate at the Ni site in the *green* Cr$_7$Ni [@NanoNature]; more recently, nitrogen of heterocyclic aromatic groups was directly linked to the Ni in the *purple* Cr$_7$Ni [@AngewChemGlued]. Starting from these, the choice of the linker is virtually infinite [@Timco]. In a first series of linked *green* Cr$_7$Ni rings, transition metal ions (M) or dimers were inserted in the linker, thus forming Cr$_7$Ni-M$_{x}$-Cr$_7$Ni with x=1,2 [@NanoNature]. By using *purple* Cr$_7$Ni, a family of \[Cr$_7$Ni\]$_2$ with shorter or longer linkers was obtained, making it possible to tune the strength of the intramolecular coupling [@AngewChemGlued]. This strategy can also be used to synthesize molecular trimers, tetramers (with or without central metal ions) or chains alternating Cr$_7$Ni and metal ions or dimers (see Fig. \[structures\]) [@AngewChemGlued; @Timco].
measuring and quantifying the magnetic coupling
-----------------------------------------------
The magnetic effectiveness of the intramolecular link can be experimentally evaluated. According to what was previously discussed, we first check the integrity of each molecular sub-unit and then we quantify the strength of the coupling. This may require the use of complementary experimental techniques and, possibly, the systematic comparison within a series of derivatives, from the individual molecule to complex aggregates. Magnetic susceptibility and magnetization loops are primarily used to clarify the nature of the ground state of the system, while specific heat measurements directly evaluate the energy splitting of the lowest multiplets. Both need to be extended to very low temperatures (typically T$\, < \,$1K) where the magnetic coupling becomes observable. Electron paramagnetic resonance (EPR) spectra reveal transitions that are permitted only when the magnetic coupling is effective, and they are sensitive to the anisotropy of the $g$-factor.\
As an example, Figure \[magnetization\] shows the magnetization loop $M(T,B)$ for a \[*purple*-Cr$_7$Ni\]$_2$ dimer with a trans-1,2-dipyridylethene ligand between two rings [@ent09]. The $M(T,B)$ curves presented in the upper panels show the butterfly behaviour, typical of the phonon-bottleneck regime, which becomes clearer as the sweeping rate dB/dt increases. Zooming in on the magnetization curves $M(B)$ (lower panel), we can observe the presence of feeble knees, which become clearly evident by taking the derivative of the magnetization $dM/dB$, as shown in the insets. These features are not present in the single purple-Cr$_7$Ni ring and they are clearly due to the intra-molecular coupling.
![ Experimental magnetization curves M(T, B) taken for \[Cr$_7$NiF$_3$(Etglu)(O$_2$CtBu)$_{15}$\]$^{2-}$ (dipyet) (Etglu=N-ethyl-d-glucamine and dipyet= trans-1,2-dipyridylethene). a) Data are taken at T=40 mK and different sweeping rates of the magnetic field [@ent09]. b) Magnification of a). (Inset) dM/dB vs B curve taken for dB/dt=0.28 T/s.[]{data-label="magnetization"}](fig2){width="10cm"}
In Fig. \[HC\] we consider another typical case, the Cr$_7$Ni-Cu-Cr$_7$Ni molecular trimer, for which the specific heat C(T) provided direct evidence and quantification of the supramolecular coupling [@NanoNature]. This system comprises two Cr$_7$Ni rings with an S=1/2 ground state doublet and an S=3/2 first excited multiplet, and one Cu ion with S=1/2. The bumps in the C(T) curve are the Schottky anomalies related to the energy splitting of specific multiplets. In 5T the main anomaly is essentially related to the splitting between the S=1/2 and S=3/2 multiplets, typical of the individual Cr$_7$Ni. The overlap between the specific heat of Cr$_7$Ni-Cu-Cr$_7$Ni (circles in Fig. \[HC\]) and that of two times the C(T) of individual Cr$_7$Ni rings (dotted lines in Fig. \[HC\]) is a direct evidence of the integrity of the molecular rings. In zero field, a Schottky anomaly clearly appears below 1K for Cr$_7$Ni-Cu-Cr$_7$Ni but it is not present for individual rings for which the ground state is a Kramer doublet. This low temperature anomaly is a consequence of the coupling between the three effective spins S=1/2 in Cr$_7$Ni-Cu-Cr$_7$Ni.
![Low temperature specific heat of the Cr$_7$Ni-Cu-Cr$_7$Ni molecular trimer (circles). The (experimental) specific heat of two individual rings per unit cell is plotted as dotted lines. Continuous lines are calculated from the spin Hamiltonian (see text) and they perfectly reproduce the experimental data.[]{data-label="HC"}](fig3){width="10cm"}
It’s worth stressing the sophisticated level of description of these mesoscopic systems provided by microscopic spin Hamiltonians. Briefly, the spin Hamiltonian of a single Cr$_7$Ni ring reads: $$\begin{aligned}
{\cal H} & = &
J \sum_{i=1}^8 {\bf s}_{i} \cdot {\bf s}_{i+1} +
\sum_{i=1}^8 d_i\, [s_{z,i}^2-s_i (s_i +1)/3] \nonumber \\
& + &
\sum_{i<j=1}^8 {D}_{ij} [2 s_{z,i} s_{z,j}-s_{x,i} s_{x,j}-s_{y,i} s_{y,j}]
+ \mu_B \sum_{i=1}^8 {\bf B}\cdot {\bf g}_i \cdot {\bf s}_{i} ,
\label{eq1}\end{aligned}$$ where the $z$ axis coincides with the ring axis, site 8 corresponds to the Ni$^{2+}$ ($s=1$) ion, sites 1-7 are occupied by Cr$^{3+}$ ($s=3/2$) ions, and $ {\bf s}_9 \equiv {\bf s}_1 $. The first term accounts for the isotropic exchange interaction, while the second and third ones are the dominant axial contributions to the crystal field and the intracluster dipole-dipole interactions, respectively. The last term represents the Zeeman coupling to an external magnetic field. The parameters entering the above Hamiltonian are determined by fitting the experiments performed with (ensembles of) single rings (see Fig.\[HC\], for instance). Intra-ring interactions are also responsible for the anomalies above a few K in the supramolecular structures; the analysis of these features shows that the parameters are not affected by the intermolecular coupling introduced in the ring dimers and oligomers. The low-temperature anomalies are then described at a microscopic level by considering the interaction of the Cu spin centre with the Ni and Cr spins of each ring [@NanoNature]. Considering also the projection of the rings' dipolar and crystal-field terms, the effective interaction can be written as: $$\label{effham}
\mathcal{H}= J^* {\bf S}^{Cr7Ni} \cdot {\bf S}^{Cu}+D_{ex}^* [2S^{Cr7Ni}_z{S}^{Cu}_z -S^{Cr7Ni}_x{S}^{Cu}_x
- S^{Cr7Ni}_y{S}^{Cu}_y]$$ for each Cr$_7$Ni - Cu pair. The $J^*$ and $D_{ex}^*$ parameters are evaluated by simultaneously fitting complementary experimental results [@NanoNature].
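As a concrete illustration of how such level splittings show up in the thermodynamics, the minimal sketch below diagonalizes the effective pair Hamiltonian of Eq. \[effham\] for a single Cr$_7$Ni - Cu pair of effective spins 1/2 and evaluates the resulting Schottky contribution to the specific heat. The values of $J^*$ and $D_{ex}^*$ used here are placeholders, not the fitted parameters of Ref. [@NanoNature], and the full trimer would require including both rings.

```python
# Minimal sketch: diagonalize the effective Cr7Ni-Cu pair Hamiltonian (Eq. effham)
# for two effective S=1/2 spins and compute the Schottky specific heat C(T).
# J_eff and D_eff below are illustrative placeholders, not fitted values.
import numpy as np

kB = 1.0                     # work in units where k_B = 1 (energies in kelvin)
J_eff, D_eff = 1.0, 0.2      # placeholder exchange and axial anisotropy (K)

sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

# H = J* S_ring . S_Cu + D*_ex [2 Sz Sz - Sx Sx - Sy Sy]
H = (J_eff * (np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz))
     + D_eff * (2 * np.kron(sz, sz) - np.kron(sx, sx) - np.kron(sy, sy)))

E = np.linalg.eigvalsh(H)

def specific_heat(T):
    """Schottky specific heat per pair (in units of k_B) from the level scheme E."""
    w = np.exp(-(E - E.min()) / (kB * T))
    Z = w.sum()
    e_mean = (E * w).sum() / Z
    e2_mean = (E**2 * w).sum() / Z
    return (e2_mean - e_mean**2) / (kB * T**2)

for T in (0.2, 0.5, 1.0, 2.0):
    print(f"T = {T:.1f} K   C/kB = {specific_heat(T):.3f}")
```

Peaks in $C(T)$ occur at temperatures comparable to the level splittings, which is how the zero-field anomaly below 1 K discussed above is traced back to the coupling between the molecular sub-units.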
understanding the magnetic coupling
------------------------------------
How the organic linkers actually transmit spin information is an interesting issue that may help in designing organic linkers and new experiments. The series of \[Cr$_7$Ni\]$_2$ dimers discussed in a previous section is quite instructive from this point of view. The linkers in those cases belong to the heteroaromatic organic groups (C-based benzene-like rings containing one or more nitrogen atoms) that were studied at length and used intensively in the ’80s and ’90s in order to carry electronic and magnetic interactions between active molecular units over long (nm) distances, as compared to standard organic bridges such as single O or F atoms, hydroxides or carboxylates that, conversely, work at the atomic scale. Here the figure of merit, which discriminates between “good” and “bad” linkers, is the level of conjugation/delocalization of the electrons that carry the information. $p$ electrons are distributed over two types of orbitals: the ones that bind the linker atoms together ($sp^2$ hybrids, with label $\sigma$) and the $\pi$ electrons that occupy resonant and delocalized bonds. Magnetic interaction is optimal when a large overlap (both in space and in energy) is found between the spin-polarized orbitals of the magnetic centers and the orbitals of the linker atoms anchored to the magnetic centers; symmetry matching is also important. In principle, both $\sigma$ and $\pi$ electrons can carry magnetic interactions, although only for $\pi$ electrons is the delocalization strong enough to drive these interactions over long distances. Experimental observations of such interactions have been supported by numerical calculations, mostly performed by (extended) Hückel molecular orbital methods. Some general rules have been suggested in the literature: an interesting observation is that the spin polarization of $\pi$ electrons is found to proceed with an oscillating character, moving from one atom to the next through aromatic groups. This results in a ferromagnetic or antiferromagnetic interaction between the magnetic centers at the edges, depending on where they anchor [@cargillthompson1996; @mccleverty1998]. The strength of the interaction also obeys such an alternation rule, as discussed by Richardson and Taube [@richardson1983]; this has also been interpreted as arising from quantum interference over magnetic paths with different lengths [@marvaud1993]. As a matter of fact, the alternation in spin and charge polarization through an aromatic linker can be theoretically explained by the superexchange mechanism discussed by McConnell [@mcconnell1963] or, alternatively, by resonance theories, as found by Longuet-Higgins [@longuet-higgins1950a]. Another fact that has to be taken into account is that both occupied and unoccupied orbitals can play a role, as in charge transport [@browne2006]. Either occupied or unoccupied $\pi$ linker orbitals can be close in energy to the magnetic frontier orbitals, depending on whether the heteroaromatic linker is $\pi$-rich or $\pi$-poor, so that either HOMO- or LUMO-driven magnetic superexchange interaction can be promoted.\
In order to illustrate this mechanism we present *ab-initio* DFT calculations on model bicyclic organic linkers, namely on bipyridine, in both the 4,4$^{\prime}$ and 4,3$^{\prime}$ configurations, and on bipyrazole. Calculations have been performed with the NWChem quantum chemistry package [@nwchem]; an Ahlrichs valence double zeta (VDZ) contracted Gaussian basis set has been used in conjunction with the hybrid B3LYP exchange and correlation functional. Let us focus on the 4,4$^{\prime}$ bipyridine bridge depicted in Fig. \[spindensity\], and assume that the anchoring site is the N atom, which is more electronegative than C. Let us also suppose that the overlap between the magnetic frontier orbitals of the metal atoms anchored to the N sites and the N orbitals is such that a small spin polarization of $\pm 0.1\mu_B$ is induced on the N atoms (the signs refer to a ferro- or antiferromagnetic coupling between these two moments). We impose such a spin moment by means of the constrained DFT method, as discussed in Ref. [@wu2005]. We obtain spin-polarized states on N with both $\sigma$ and $\pi$ character, so that both symmetries can contribute to the magnetic interaction, although most of the interaction is reasonably associated with the conjugated $\pi$ electron system. In Fig. \[spindensity\] we plot the spin-polarized electron density isosurfaces for isovalues of $\pm 0.001$ electrons/a.u. in the case of antiferromagnetic and ferromagnetic coupling between the N spin moments. We can clearly see the alternation of the spin polarization when moving from one atom to the next, following the bond paths. In order to demonstrate the rules discussed above, we plot in Fig. \[spindensity\] the spin densities also for the 4,3$^{\prime}$ bipyridine and for the bipyrazole organic bridges, always assuming that the metal is anchored to N sites and that a spin moment of $\pm0.1 \mu_B$ is transferred to the N atoms. Changing from 4,4$^{\prime}$ bipyridine to 4,3$^{\prime}$ bipyridine, optimal coupling is attained when the spin polarization on the two N atoms has the same sign, i.e. the magnetic centers are more favourably coupled ferromagnetically, as compared to the antiferromagnetic coupling attained for 4,4$^{\prime}$ bipyridine, as demonstrated by the larger spin polarization of the inner C atoms at the frontier between the two pyridines. In bipyrazole, the spin densities, for both the antiferro- and ferromagnetic states, do not reach the inner region, and interference between the spin paths hinders the magnetic interaction between the two sides of the bridge. In Fig. \[spindensity\] we also report the total energy difference between the antiferromagnetic and ferromagnetic states, which clearly indicates that the size and the sign of the coupling are completely consistent with the reasoning above.
![Spin density isosurfaces for isovalues of +0.001 electrons/a.u. (blue color) and -0.001 electrons/a.u. (red color), and FM-AFM energy splitting for 4,4$^{\prime}$-bipyridine, 4,3$^{\prime}$-bipyridine and bipyrazole bridges (see text for details). []{data-label="spindensity"}](fig4){width="12cm"}
In order to predict the behaviour of real dimeric complexes, the full systems, not only the bridge, have to be simulated. We analyze three supramolecular \[Cr$_7$Ni\]$_2$ dimers, which are characterized by identical magnetic molecular centers, two *purple* Cr$_7$Ni, but three different organic linkers, i.e. bipyridine, bipyrazole and bipyridylethylene [@tobepublished]. Magnetic frontier orbitals are supplied by the Ni ions, and the anchoring sites in the linkers are always N atoms. Ni(II) ions nominally have a 3$d^8$ electronic configuration, so that only the $d$ orbitals with *e$_g$* symmetry are spin-polarized. Although in principle this should imply that only $\sigma$ orbitals of the extended molecule are responsible for the spin interaction between the two rings, we observe that polarization of both $\sigma$ and $\pi$ orbitals in the linker is present. The $\sigma$ polarization retains appreciable values only for the C atoms in the vicinity of the N atoms, while for more distant C atoms only the $\pi$ orbitals attain a (small) spin polarization. The intramolecular Heisenberg $J^*$ parameters, which quantify the interaction between the two Cr$_7$Ni molecules, can be estimated from several experimental methods (as described in the previous section) or calculated by total-energy difference methods (with total energies obtained by means of, e.g., DFT-B3LYP calculations). Here the microscopic interaction arises through the organic bridges between the two Ni spin moments, so that the relevant microscopic Hamiltonian is given by
$$H = J^{*} S_{Ni^1} \cdot S_{Ni^2} \,\, ,$$
where the labels $1$ and $2$ indicate the Ni atoms belonging to different rings, and $S_{Ni^1}=
S_{Ni^2}=1$; $J^{*}$ is then given by 1/2 of the total energy difference between the singlet and triplet states of the (Cr$_7$Ni)$_2$ dimer, and positive values correspond to a preferred antiferromagnetic coupling between the rings, i.e. a singlet spin ground state. The calculated $J^{*}$ values evidence a stronger magnetic coupling for bipyridine-bridged dimers ($J^{*}$=0.021meV), while for bipyrazole- ($J^{*}$=0.004meV) or bipyridylethylene- ($J^{*}$=0.002meV) bridged dimers the interaction is considerably smaller, in agreement with specific heat measurements, which provide an estimate of the energy gap between the singlet and the barycenter of the triplet of 0.009meV for the bipyridine-bridged dimer and weaker ones (0.005meV and 0.004meV) for the bipyrazole- and bipyridylethylene-bridged dimers, respectively. In the case of bipyrazole, as anticipated above, quantum interference between the two paths seems to be responsible for the small $J$, despite the fact that the two Ni centers are closer to each other than in bipyridine, because of the shorter length of bipyrazole. In the case of the bipyridylethylene bridge (not shown), the larger number of bonds that such an interaction has to travel through plays a role, so that only a small fraction of the spin polarization survives on the two facing C atoms in the center of the bridge [@tobepublished]. These findings pave the way for a whole series of possible experimental investigations, systematically varying the organic bridges and the magnetic frontier atoms in order to tune and choose the appropriate magnetic coupling for entanglement. The reasoning above, derived for dimeric complexes, applies as well to trimeric or tetrameric systems, such as the ones described in the previous section; in these cases, an additional difficulty might be represented by the many possible and simultaneous interaction paths, a circumstance that might prevent one from prefiguring the magnetic properties of the systems by simple general conjectures, so that a full theoretical characterization necessarily has to be carried out.
Switchable molecular links.
---------------------------
Although switchability is not mandatory for entanglement, and spin manipulation can also be obtained between permanently coupled spins [@QC2], we briefly discuss switchable organic linkers. We focus on three different switching mechanisms, namely a mechanical one, an electric-field-induced one and a photochromic one. Critical issues like the switching rate or the preservation of coherence are beyond the scope of the present discussion but, in the end, they will constitute possible bottlenecks for switchable linkers.\
Transport through aromatic bicyclic linkers has been demonstrated to depend on the structural conformation of the linker [@venkataraman2006]. In the recent work of Quek et al. [@quek2009], it has been demonstrated that the transport properties of a bipyridine-based molecular junction are modified by elongating or compressing the junction; theoretical investigations have helped in attributing this finding to modifications of the internal angles of the linker, and of the bond lengths and angles defining the pyridine-gold contact geometry. As discussed above, the magnetic properties of supramolecular systems depend similarly on the structural conformation of the linker and of the linker-molecule contacts, leading to the idea of mechanical switching of the magnetic interactions. Another approach along the same line is the use of molecular shuttles [@molmotor] as possible switches.\
Another possibility is to exploit a local electric field to rearrange the molecular orbitals and to disrupt/enhance the energy matching between the orbitals of the linker and those of the magnetic center. As discussed by Diefenbach and Kim [@diefenbach2007], one can exploit the different spatial distribution of the molecular orbitals in the linker, and more precisely their different polarizability; upon the application of a (strong enough) electric field, the energetic order of the different orbitals may be modified, since the second-order Stark response can be very different for the different orbitals. In the case of low-lying excited spin states, crossings between excited and ground states can be induced, which means that a different magnetic ground state can be fostered, that is, magnetic switching to on/off states can be achieved. Switchability of the linker is often used in other solid state systems, like, for instance, quantum dots. In this respect, an interesting case was proposed considering a molecular polyoxometalate \[PMo$_{12}$O$_{40}$(VO)$_2$\]$^{q-}$ consisting of two (VO)$^{2+}$ moieties with spin 1/2 separated by a Mo$_{12}$ cage [@POM]. The cage may have different valence states and it can therefore be charged, providing a switchable link between the two S=1/2 spins. The implementation of a square-root-of-swap gate has been proposed [@POM] and experimental work is in progress in this direction.\
The last switching method is the one exploiting photoexcitation processes. Photochromic linkers belonging to the family of diarylethenes [@matsuda2000] undergo reversible conformational changes upon irradiation in the visible or ultraviolet frequency range. They are optimal candidates because of their resistance, rapid photo-response (in the range of picoseconds), and the thermal stability of the two different isomers (up to 100 $^{\circ}$C). Bonds form or break, and conjugation is suppressed or enhanced, upon irradiation when moving from one isomer to the other; the efficiency of the magnetic interaction paths can in this way be controlled by photoirradiation. These molecular switches are excellent candidates for large-scale integration too, since photochromic complexes have been demonstrated to react both in solution and in the crystalline phase and, last but not least, to be compatible with coordination-driven self-assembly synthetic approaches.
quantifying and measuring entanglement in molecular spin clusters
--------------------------------------------------------------
The existence of a magnetic coupling between the molecular spin clusters does not guarantee per se that these are in an entangled state, but its form plays a crucial role in the controlled generation of entanglement. Therefore, the high degree of flexibility with which such a coupling can be engineered through supramolecular chemistry represents a fundamental resource. To illustrate how the main concepts apply to the molecular systems, we consider two different approaches to the generation of entanglement in coupled Cr$_7$Ni rings, the first one based on equilibrium states $ \rho $ at low temperature, the second one on coherent manipulation of the system state by electron paramagnetic resonance (EPR) pulses. Since our interest is focused on the entanglement between the total spins of the nanomagnets that compose the supramolecular structure, we shall refer to the spin Hamiltonian approach. Here, if the intermolecular interaction is small compared to the intramolecular exchange coupling $J$, it can be treated at a perturbative level and mapped onto an effective Hamiltonian $ \mathcal{H}_{eff}^{AB} $ that depends only on the total spins $ {\bf S}_\alpha $ ($ \alpha = A , B , \dots $) of the coupled molecules. Both the expression of $ \mathcal{H}_{eff}^{AB} $ and the values of the effective parameters are deduced from the underlying microscopic model.
In order for the equilibrium density matrix to be entangled, one typically needs an intermolecular coupling Hamiltonian $ \mathcal{H}_{eff}^{AB} $ with a non-factorizable ground state, and such that the energy separation from the first excited state is significantly larger than the lowest temperature at which the relevant experiments can be performed. In the case of the (Cr$_7$Ni)$_2$ dimer ($ S_A = S_B = 1/2 $), the former condition can be achieved if the dominant term in the coupling Hamiltonian is an antiferromagnetic exchange interaction. Anisotropic intramolecular interactions give rise to additional effective terms, resulting in the following Hamiltonian: $ \mathcal{H}_{eff}^{AB} = (J_{AB}-D_{AB}) {\bf S}_A \cdot {\bf S}_B + 3 D_{AB} S^A_z S^B_z $. For temperatures comparable with $J_{AB}$, the equilibrium state $ \rho $ includes contributions from all four lowest eigenstates $ | S , M \rangle $ (with $S$ and $M$ the total spin and its projection along $z$, perpendicular to the plane of the molecules), with Boltzmann probabilities $P^S_M$. The entanglement between two spins 1/2 can be quantified by the concurrence ($\mathcal{C}$), whose value ranges from 0 for a factorizable $ \rho $ to 1 for maximally entangled states [@fazio]. In the present case, the expression of $ \mathcal{C} $ corresponding to the equilibrium state reads: $$\mathcal{C} ( P^S_M ) = \left\{ \begin{array}{ll}
\max \{ | P^1_0 - P^0_0 | - 2\sqrt{P^1_1 P^1_{-1}}, 0\}
& {\rm for} \max\{ P^1_0 , P^0_0 \} > \sqrt{P^1_{-1} P^1_1} \\
0 & {\rm otherwise}
\end{array}\right.$$ In the presence of a magnetic field applied along the ring axis, the expression of the concurrence reads: $$\mathcal{C}(\rho_{eq}^{AB}) =
\frac{ 1 - e^{-\frac{J_{AB}}{k_BT}} \left( e^{\frac{D_{AB}}{k_BT}} + 2e^{-\frac{D_{AB}}{2k_BT}} \right) }
{ 1 + e^{-\frac{J_{AB}}{k_BT}} \left[ e^{\frac{D_{AB}}{k_BT}} + 2e^{-\frac{D_{AB}}{2k_BT}} \cosh \left(\frac{\bar{g}_{zz}\mu_B B}{k_BT}\right) \right] } ,$$ where $ \bar{g}_{zz} $ is the $z$ component of the effective $g$ factor in the ground state $ S = 1/2 $ doublet of the Cr$_7$Ni ring. According to this expression, which holds as long as the $ | S, M \rangle $ are the dimer eigenstates, the molecular spin clusters $A$ and $B$ are entangled if the occupation of either $ |0,0\rangle $ or $ |1,0\rangle $ is sufficiently larger than all the others. In particular, in the limit $ k_B T \ll ( J_{AB} - D_{AB} ) $, the equilibrium state tends to the singlet ground state and $ \mathcal{C} \simeq 1 $. Therefore, the larger $J_{AB}$, the wider the temperature range in which thermal entanglement persists. In the present case, the range of desirable values of $J_{AB}$ is however bounded from above by the characteristic energy of the intramolecular spin excitations. If this condition is not fulfilled, the nanomagnets within the dimer can no longer be regarded as effective two-level systems, for intramolecular excitations corresponding to higher spin multiplets enter the composition of the lowest dimer eigenstates. The concurrence decreases exponentially with the magnetic field, which we assume for simplicity to be oriented along $z$. In fact, the field energetically favours the factorizable ferromagnetic state ($M=1$) and reduces the occupation of the singlet state. At zero temperature, an abrupt transition takes place as a function of $B$, at the level crossing between $ | 1, 1 \rangle $ and $ | 0, 0 \rangle $. In general, the concurrence cannot be easily expressed in terms of observable quantities. Its evaluation requires the knowledge of the system density matrix, which is either derived experimentally from quantum state tomography or indirectly through the determination and diagonalization of the system Hamiltonian. The latter approach is in general viable in the case of a few coupled molecular spin clusters, where a detailed knowledge of the system Hamiltonian can be achieved by simulating a number of experimental techniques, including specific heat, torque magnetometry, inelastic neutron scattering and electron paramagnetic resonance, as discussed in a previous paragraph.\
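The closed-form expression above is straightforward to evaluate numerically. The short sketch below computes $\mathcal{C}$ as a function of temperature and field using the parameter values $J_{AB}=40\,$mK and $D_{AB}=10\,$mK quoted in the caption of Fig. \[entwit\]; the effective $\bar{g}_{zz}\simeq 1.8$ used here is only an assumed representative value.

```python
# Minimal sketch: evaluate the thermal concurrence of the (Cr7Ni)_2 dimer
# from the closed-form expression above. J_AB = 40 mK and D_AB = 10 mK are
# the values quoted in the caption of Fig. [entwit]; gzz = 1.8 is an assumed
# representative value for the effective g factor of the ring ground doublet.
import numpy as np

kB = 1.380649e-23        # J/K
muB = 9.2740100783e-24   # J/T

J_AB = 40e-3 * kB        # exchange, expressed as an energy (40 mK)
D_AB = 10e-3 * kB        # axial anisotropy (10 mK)
gzz = 1.8                # assumed effective g factor along z

def concurrence(T, B):
    """Thermal concurrence C(T, B); negative values of the formula mean C = 0."""
    x = np.exp(-J_AB / (kB * T))
    a = np.exp(D_AB / (kB * T))
    b = 2.0 * np.exp(-D_AB / (2.0 * kB * T))
    num = 1.0 - x * (a + b)
    den = 1.0 + x * (a + b * np.cosh(gzz * muB * B / (kB * T)))
    return max(0.0, num / den)

for T in (0.02, 0.05, 0.1):
    for B in (0.0, 0.05, 0.1):
        print(f"T = {T*1e3:4.0f} mK  B = {B:4.2f} T  C = {concurrence(T, B):.3f}")
```

The output illustrates the two trends discussed in the text: the concurrence shrinks rapidly with increasing temperature and is suppressed by the applied field.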
The demonstration of quantum entanglement, however, can also be derived directly from experiments, without requiring the knowledge of the system state. This can be done by using specific operators - the so-called [*entanglement witnesses*]{} - whose expectation value is always positive if the state $ \rho $ is factorizable. It is quite remarkable that some of these entanglement witnesses coincide with well known magnetic observables, such as the energy or the magnetic susceptibility $ \chi = dM / dB $. In particular, the magnetic susceptibility of $N$ spins $s$, averaged over three orthogonal spatial directions, is always larger than a threshold value if their equilibrium state $ \rho $ is factorizable: $ \sum_\kappa \chi_\kappa > N s / k_B T $ [@Wie]. This should not be surprising, since the magnetic susceptibility is proportional to the variance of the magnetization, and thus it may actually quantify spin-spin correlations. The advantage in the use of this criterion consists in the fact that it does not require the knowledge of the system Hamiltonian, provided that this commutes with the Zeeman terms corresponding to the three orthogonal orientations of the magnetic field $ \kappa = x,y,z $. As already mentioned, in the case of the (Cr$_7$Ni)$_2$ dimer, the effective Hamiltonian includes, besides the dominant Heisenberg interaction, smaller anisotropic terms, due to which the above commutation relations are not fulfilled. This might in principle result in small differences between the magnetic susceptibility and the entanglement witness $ \bar{\chi}_{EW} \equiv
\sum_{\kappa=x,y,z} \left[ \sum_{\alpha , \beta} \langle S_\kappa^\alpha S_\kappa^\beta \rangle -
\langle \sum_\alpha S_\kappa^\alpha \rangle^2 \right]$. Such a difference is however negligibly small if $ D_{AB} $ is small compared to $ J_{AB} $ and to the temperature (see Fig. \[fig7\]). The magnetic susceptibility $\chi$ was used as an entanglement witness in the case of Cr$_7$Ni dimers [@ent09]. In figure \[entwit2\] the product $\chi$T is plotted vs temperature and compared with the expected threshold. In fact, in this system the ratio $ J_{AB} / D_{AB} \simeq 4 $ is large enough to make the difference between the magnetic susceptibility and the entanglement witness negligible.\
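As a numerical illustration of this criterion, the sketch below evaluates the variance-based witness for the thermal state of the effective dimer Hamiltonian $\mathcal{H}_{eff}^{AB}$ and compares it with the separable-state bound. It works in units with $g\mu_B = k_B = 1$, and the values of $J_{AB}$ and $D_{AB}$ are again those quoted in the caption of Fig. \[entwit\].

```python
# Minimal sketch (units g*muB = kB = 1): variance-based witness for the thermal
# state of H = (J - D) S_A.S_B + 3 D S_A^z S_B^z, compared with the separable
# bound N*s.  J = 40 mK and D = 10 mK are the values quoted in Fig. [entwit].
import numpy as np

sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def tot(op):                       # total spin component S_A + S_B
    return np.kron(op, I2) + np.kron(I2, op)

J, D = 0.040, 0.010                # K
H = (J - D) * sum(np.kron(a, a) for a in (sx, sy, sz)) + 3 * D * np.kron(sz, sz)
evals, evecs = np.linalg.eigh(H)

def witness(T):
    """chi_EW times k_B T: sum_k [<(S_k^tot)^2> - <S_k^tot>^2] over the thermal state."""
    w = np.exp(-(evals - evals.min()) / T)
    rho = (evecs * (w / w.sum())) @ evecs.conj().T
    total = 0.0
    for op in (tot(sx), tot(sy), tot(sz)):
        total += np.trace(rho @ op @ op).real - np.trace(rho @ op).real ** 2
    return total

bound = 2 * 0.5                    # N*s for two spins 1/2
for T in (0.02, 0.05, 0.1, 0.3):
    w = witness(T)
    print(f"T = {T*1e3:5.0f} mK  witness = {w:.3f}  "
          f"{'entangled' if w < bound else 'compatible with separable'}")
```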
![(a) Concurrence of the ring dimer as a function of temperature and of the Zeeman splitting induced by the applied magnetic field. The values of the physical parameters entering the effective Hamiltonian $ \mathcal{H}^{AB}_{eff}$ are: $ J_{AB} = 40\, $mK and $ D_{AB} =
10\,$mK. (b) Difference between concurrence in the presence of anisotropy and concurrence without anisotropy ($ D_{AB} = 0$). []{data-label="entwit"}](fig5){width="15cm"}
![Magnetic susceptibility $\chi$ used as an entanglement witness in the case of Cr$_7$Ni dimers. Temperature dependence of the measured $\chi$T product (triangles). $ \chi_\perp $ (blue) is the component perpendicular to the largest surface of the crystal; this direction forms on average an angle of $16^\circ$ with the $z$-axis, perpendicular to the plane of the rings. $ \chi_\parallel$ (green) refers to the directions parallel to the crystal plane; rotation of the magnetic field within this plane does not reveal changes in the magnetic response. The average $ ( \chi_\perp + 2 \chi_\parallel ) / 3 $ (black dots) is compared with the threshold for a mole of dimers, $ N_A \mu_B^2 / 3 k_B $, in order to identify the temperature range (T$\leq$50mK) where the two rings are entangled [@ent09].[]{data-label="entwit2"}](Fig6){width="15cm"}
An alternative approach to the generation of entangled states is represented by the application of suitable EPR pulse sequences to an initially unentangled state. Broadly speaking, this requires the implementation of a conditional dynamics, where the effect produced by a given pulse sequence on a (target) nanomagnet $A$ depends non-trivially on the state of a (control) nanomagnet $B$. In the case where the dimer consists of two identical and equally oriented molecular spin clusters, limitations arise from the impossibility of individually addressing $A$ and $B$. In fact, it is easy to verify that an effective Hamiltonian such as $ \mathcal{H}_{eff} = \mathcal{H}_{eff}^{AB} + \sum_{\alpha = A,B} {\bf B} \cdot
\bf{g}_\alpha \cdot {\bf S}_\alpha $, with $ {\bf g}_A = {\bf g}_B = {\rm diag }
(g_\perp , g_\perp , g_\parallel ) $ does not allow one to generate an entangled state such as $ | S, 0 \rangle $, starting from a factorized one such as $ | 1 , \pm 1
\rangle $. These limitations can be overcome in the case of an asymmetric system, where the couplings of the two effective spins $A$ and $B$ to the magnetic field are different, due either to the different chemical composition of the two molecular spin clusters or to their different spatial orientation, combined with the anisotropy of the $g$ tensor ($ g_\parallel \neq g_\perp $). Alternatively, the asymmetry of the intermolecular coupling can be exploited, such as that between the green and the purple derivatives of Cr$_7$Ni [@Timco].
Analogous features allow the controlled generation of entangled states in tripartite systems. The (Cr$_7$Ni)-Cu-(Cr$_7$Ni) molecule, for example, behaves as a system of three effective spins 1/2 ($ S_A = S_B = S_{Cu} = 1/2 $) [@NanoNature]. Entanglement between three parties can manifest itself in fundamentally different forms. In fact, two equivalence classes have been defined, whose prototypical states are the so-called GHZ and W states, respectively. The GHZ states, whose expression in the $ | M_A, M_B, M_C \rangle $ basis reads $ | \Psi_{GHZ} \rangle = ( | 1/2, 1/2, 1/2 \rangle + |-1/2,-1/2,-1/2 \rangle ) / \sqrt{2} $, maximize the genuinely tripartite entanglement, i.e. the one that cannot be reduced to pairwise correlations. The expression of the W states reads instead $ | \Psi_{W} \rangle = ( | 1/2, 1/2,-1/2 \rangle + | 1/2,-1/2, 1/2 \rangle
+ |-1/2, 1/2, 1/2 \rangle ) / \sqrt{3} $, and coincides with that of the $ | S, M \rangle = | 3/2 , 1/2 \rangle $ state. In order for the controlled generation of both $ | \Psi_{GHZ} \rangle $ and $ | \Psi_{W} \rangle $ to be possible, by applying suitable pulse sequences to an initial ferromagnetic state $ | 3/2 , 3/2 \rangle $, the degeneracy between the two $ | \Delta M | = 1 $ transitions within the $ S = 3/2 $ quadruplet needs to be broken. This is indeed the case for the (Cr$_7$Ni)-Cu-(Cr$_7$Ni) system, thanks to the anisotropic terms in the effective Hamiltonian (see Eq. \[effham\]) and to the resulting zero-field splittings.
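A quick numerical check of the statement that $|\Psi_W\rangle$ coincides with the $|S=3/2, M=1/2\rangle$ state is straightforward; the sketch below builds the total-spin operators for three spins 1/2 and verifies the corresponding eigenvalues.

```python
# Minimal sketch: verify that the W state of three spins 1/2 is the
# |S = 3/2, M = 1/2> state, i.e. a simultaneous eigenstate of S^2 and Sz
# with eigenvalues 15/4 and 1/2.
import numpy as np

sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def embed(op, site):
    """Operator acting on one of the three spins, identity on the others."""
    ops = [I2, I2, I2]
    ops[site] = op
    return np.kron(np.kron(ops[0], ops[1]), ops[2])

Sx, Sy, Sz = (sum(embed(s, i) for i in range(3)) for s in (sx, sy, sz))
S2 = Sx @ Sx + Sy @ Sy + Sz @ Sz          # total spin squared

up = np.array([1, 0], dtype=complex)
dn = np.array([0, 1], dtype=complex)
def ket(a, b, c):
    return np.kron(np.kron(a, b), c)

w = (ket(up, up, dn) + ket(up, dn, up) + ket(dn, up, up)) / np.sqrt(3)
ghz = (ket(up, up, up) + ket(dn, dn, dn)) / np.sqrt(2)

print("W : eigenstate of S^2 with 15/4:", np.allclose(S2 @ w, 3.75 * w))
print("W : eigenstate of Sz  with 1/2 :", np.allclose(Sz @ w, 0.5 * w))
# The GHZ state, by contrast, superposes the two maximally polarized components
# |3/2, +3/2> and |3/2, -3/2> and is therefore not an Sz eigenstate.
print("GHZ: eigenstate of Sz:", np.allclose(Sz @ ghz, np.vdot(ghz, Sz @ ghz) * ghz))
```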
It is finally interesting to note that quantum correlations can also be present in the equilibrium state of the tripartite system described by the above effective Hamiltonian $ \mathcal{H}_{eff}^{AC} + \mathcal{H}_{eff}^{BC} $, for suitable values of the parameters $ J_{AC}=J_{BC} $ and $ D_{AC} = D_{BC} $. If the inter-ring interaction is dominated by the exchange term ($ J_{AC} \gg
D_{AC} $), the anisotropy can be included perturbatively to first order, and the system eigenstates coincide with the vectors $ | S_{AB}, S, M \rangle $ ($ {\bf S}_{AB} = {\bf S}_{A} + {\bf S}_{B} $). In the case of a ferromagnetic coupling ($ J_{AC} < 0 $), the density matrix in the low-temperature limit ($ k_B T \ll | J_{AC} | $) is given by a statistical mixture of the $S=3/2$ eigenstates. If $ D_{AC} < 0 $ [@NanoNature], the three pairs of subsystems are all unentangled. In the case of an antiferromagnetic coupling between the rings, the ground state coincides with the state $ | S_{AB} = 1, S=1/2, M=1/2 \rangle = (|1/2,-1/2,1/2 \rangle +
|-1/2,1/2,1/2 \rangle -2|1/2,1/2,-1/2 \rangle ) / \sqrt{6} $. If the tripartite system is cooled down to this state ($ k_BT \ll J_{AC},
g\mu_B B$), each subsystem is entangled with the other two. In fact, the reduced density matrix of any two subsystems is real and takes the form: $$\rho_{red}^{\alpha\beta} = \left(
\begin{array}{cccc}
\rho_{11} & 0 & 0 & 0 \\
0 & \rho_{22} & \rho_{23} & 0 \\
0 & \rho_{23} & \rho_{33} & 0 \\
0 & 0 & 0 & \rho_{44}
\end{array}
\right) ,$$ where we refer to the basis $ \{ | 1/2, 1/2\rangle , | 1/2,-1/2\rangle ,
|-1/2, 1/2\rangle , |-1/2,-1/2\rangle \} $ and $ \alpha, \beta = A, B, C $. For $ \alpha\beta = AB $, the above matrix elements are: $\rho_{11}=2/3$, $\rho_{22}=\rho_{33}=\rho_{23}=1/6$, and $\rho_{44}=0$. The resulting entanglement between the two rings is given by $ \mathcal{C} (\rho_{red}^{AB}) = 1/3 $. Each ring is also entangled with the Cu ion. In fact, for $\alpha\beta = AC$, the matrix elements are: $\rho_{11}=1/6$, $\rho_{23}=-1/3$, $\rho_{22}=2/3$, $\rho_{33}=1/6$, and $\rho_{44}=0$. This results in a finite concurrence, namely $ \mathcal{C} (\rho_{red}^{AC}) = 2/3 $.
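These numbers can be checked directly with the Wootters formula; a minimal sketch is given below, where the reduced matrices are the ones written above.

```python
# Minimal sketch: Wootters concurrence for the two reduced density matrices
# quoted above (rings A-B and ring-Cu A-C), in the basis
# {|1/2,1/2>, |1/2,-1/2>, |-1/2,1/2>, |-1/2,-1/2>}.
import numpy as np

def concurrence(rho):
    """Wootters concurrence C = max(0, l1 - l2 - l3 - l4)."""
    sy = np.array([[0, -1j], [1j, 0]])
    flip = np.kron(sy, sy)
    rho_tilde = flip @ rho.conj() @ flip
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

rho_AB = np.array([[2/3, 0,   0,   0],
                   [0,   1/6, 1/6, 0],
                   [0,   1/6, 1/6, 0],
                   [0,   0,   0,   0]], dtype=complex)

rho_AC = np.array([[1/6, 0,    0,   0],
                   [0,   2/3, -1/3, 0],
                   [0,  -1/3,  1/6, 0],
                   [0,   0,    0,   0]], dtype=complex)

print("C(rho_AB) =", concurrence(rho_AB))   # -> 1/3
print("C(rho_AC) =", concurrence(rho_AC))   # -> 2/3
```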
![Difference between the entanglement witness $ \chi_{EW} $ and the magnetic susceptibility for the ring dimer, derived from the effective Hamiltonian $ \mathcal{H}_{eff}^{AB} $ in the limit $ B \rightarrow 0 $.[]{data-label="fig7"}](fig7){width="15cm"}
conclusions and perspectives.
-----------------------------
A quick look at the list of works cited below tells us that entanglement in supramolecular systems has only just appeared as a possible emerging topic, but the earliest results show great potential. It is clear that advances in this field can come only from the combined effort of chemists, experimentalists and theoreticians.\
From the synthetic point of view, the list of suitable molecular building blocks and of organic ligands working as efficient linkers is - if not infinite - certainly very long. We discussed the reasons why molecular Cr$_7$Ni rings on one side and heteroaromatic ligands on the other represent a very good starting point to build weakly interacting molecular complexes. The combination of the two (i.e. molecule + linker) is limited only by the rules of coordination chemistry, which may well lead to several interesting cases.\
Experiments to characterize these systems are certainly not routine but are quite accessible. The range of molecular energies indeed spans from 0.01 to 10 K, which corresponds to the Zeeman energy of an electron in magnetic fields of up to 10 T and to frequencies ranging from 0.01 to 20 cm$^{-1}$, that is, microwaves with low wavenumbers. Molecular spin clusters also represent an ideal test bed to perform experiments targeted at directly probing and quantifying entanglement in spin systems. Here we have just mentioned the use of the magnetic susceptibility, independently measured along its three components, as an entanglement witness, but other quantities, like specific heat or neutron scattering, may well do this job. In the near future it will certainly be interesting to use pulsed electron spin resonance to selectively address molecular sub-ensembles. Here the possibility of spectroscopically discerning different molecules will certainly be of interest. The design of specific pulse sequences will lead to the implementation of quantum algorithms.\
From the theoretical point of view, finite arrays of molecular spins are very appealing for developing models. Here one may wonder which conditions (forms of the spin Hamiltonian, values of the spin S$\neq$1/2, number of spin centers, etc.) maximize/minimize entanglement. As mesoscopic systems, molecular spin clusters are paradigmatic cases to study the crossover between quantum and classical behavior. In particular, it will be very instructive to study the role of decoherence mechanisms in detail.
acknowledgements
----------------
We are indebted to Dr. Grigore Timco and Prof. Richard Winpenny (University of Manchester, UK) for sharing and discussing their results with us. Magnetization measurements were taken by microSQUID in collaboration with Dr. Wolfgang Wernsdorfer in Grenoble (F). We thank Dr. Alberto Ghirri and Christian Cervetti (CNR and University of Modena, I) for contributing to the low-temperature characterization, and Dr. S. Carretta, Prof. P. Santini and Prof. G. Amoretti (University of Parma, I) for stimulating discussions. This work is partially supported by the European FP7-ICT FET Open project “MolSpinQIP”, contract N.211284.
References {#references .unnumbered}
==========
[10]{}
R. Horodecki, P. Horodecki, M. Horodecki, and K. Horodecki. , 81:865, 2009.
L. Amico, R. Fazio, A. Osterloh, and V. Vedral. , 80:517, 2008.
G. A. Timco, E. J. L. McInnes, R. J. Pritchard, F. Tuna, and R. E. P. Winpenny. , 47(50):9681, 2008.
G. A. Timco, S. Carretta, F. Troiani, F. Tuna, R. J. Pritchard, C. A. Muryn, E. J. L. McInnes, A. Ghirri, A. Candini, P. Santini, G. Amoretti, M. Affronte, and R. E. P. Winpenny. , 4(3):173, 2009.
D. Gatteschi, R. Sessoli, and J. Villain. . Oxford University Press, 2007.
G. Abbati, L. C. Brunel, H. Casalta, A. Cornia, A. C. Fabretti, D. Gatteschi, A. K. Hassan, A. G. M. Jansen, A. Maniero, L. Pardi, C. Paulsen, and U. Segre. , 7:1796, 2001.
J. J. Henderson, C. M. Ramsey, E. [del Barco]{}, T. C. Stamatatos, and G. Christou. , 78:214413, 2008.
W. Wernsdorfer, A. Muller, D. Mailly, and B. Barbara. , 66:861, 2004.
S. Bertaina, S. Gambarelli, T. Mitra, B. Tsukerblat, A. Muller, and B. Barbara. , 453:203, 2008.
F. Troiani, M. Affronte, S. Carretta, P. Santini, and G. Amoretti. , 94:190501, 2005.
M. Bal, J. R. Friedman, Y. Suzuki, K. M. Mertes, E. M. Rumberger, D. N. Hendrickson, Y. Myasoedov, H. Shtrikman, N. Avraham, and E. Zeldov. , 70:100408(R), 2004.
E. [del Barco]{}, A. D. Kent, E. C. Yang, and D. N. Hendrickson. , 93:157202, 2004.
C. Schlegel, J. [van Slageren]{}, M. Manoli, E. K. Brechin, and M. Dressel. , 101:147203, 2008.
M. L. Leuenberger and D. Loss. , 410:789, 2001.
S. Bertaina, S. Gambarelli, A. Tkachuk, I. N. Kurkin, B. Malkin, A. Stepanov, and B. Barbara. , 2:39, 2007.
A. Ardavan, O. Rival, J. J. Morton, S. J. Blundell, A. M. Tyryshkin, G. A. Timco, and R. E. P. Winpenny. , 98:057201, 2007.
F. Troiani, V. Bellini, and M. Affronte. , 77:054428, 2008.
S. Ghosh, T. F. Rosenbaum, G. Aeppli, and S. N. Coppersmith. , 48:425, 2003.
E. C. Sanudo, T. Cauchy, E. Ruiz, R. H. Laye, O. Roubeau, S. J. Teat, and G. Aromi. , 46:9045, 2007.
L. A. Barrios, D. Aguil, O. Roubeau, P. Gamez, J. Ribas-Arino, S. J. Teat, and G. Aromi. , 15:11235, 2009.
W. Wernsdorfer, N. Aliaga-Alcalde, D. N. Hendrickson, and G. Christou. , 416:406, 2002.
S. Hill, R. S. Edwards, N. Aliaga-Alcalde, and G. Christou. , 302:1015, 2003.
D. N. Hendrickson, G. Christou, E. A. Schmitt, E. Libby, J. S. Bashkin, S. Wang, H. L. Tsai, J. B. Vincent, and [P. D. W. Boyd]{}. , 114:2455, 1992.
F. K. Larsen, E. J. L. McInnes, H. El Mkami, J. Overgaard, S. Piligkos, G. Rajaraman, E. Rentschler, A. A. Smith, G. M. Smith, V.Boote, M. Jennings, G. A. Timco, and R. E. P. Winpenny. , 42:101, 2003.
M. Affronte, I. Casson, M. Evangelisti, A. Candini, S. Carretta, C. A. Muryn, S. J. Teat, G. A. Timco, W. Wernsdorfer, and R. E. P. Winpenny. , 44:6496, 2005.
M. Affronte, F. Troiani, A. Ghirri, S. Carretta, P. Santini, R. Schuecker, G. Timco, and R.E.P. Winpenny. , 310:E501, 2007.
G. Timco and R.E.P. Winpenny. Private communications.
A. Candini, G. Lorusso, F. Troiani, A. Ghirri, S. Carretta, P. Santini, G. Amoretti, W. Wernsdorfer, F. Tuna, G. Timco, E. J. L. McInnes, R. E. P. Winpenny, and M. Affronte. Phys. Rev. Lett. (2010), in press.
A. M. W. [Cargill Thompson]{}, D. Gatteschi, J. A. McCleverty, J. A. Navas, E. Rentschler, and M. D. Ward. , 35:2701, 1996.
J. A. McCleverty and M. D. Ward. , 31:842, 1998.
D. E. Richardson and H. Taube. , 105:40, 1983.
V. Marvaud, J.-P. Launay, and C. Joachim. , 177:23, 1993.
H. M. McConnell. , 39:1910, 1963.
H. C. Longuet-Higgins. , 18:265, 1950.
W. R. Browne, R. Hage, and J. G. Vos. 250:1653, 2006.
E. J. Bylaska, W. A. [de Jong]{}, N. Govind, K. Kowalski, T. P. Straatsma, M. Valiev, D. Wang, E. Apra, T. L. Windus, J. Hammond, P. Nichols, S. Hirata, M. T. Hackler, Y. Zhao, P.-D. Fan, R. J. Harrison, M. Dupuis, D. M. A. Smith, J. Nieplocha, V. Tipparaju, M. Krishnan, Q. Wu, T. Van Voorhis, A. A. Auer, M. Nooijen, E. Brown, G. Cisneros, G. I. Fann, H. Fruchtl, J. Garza, K. Hirao, R. Kendall, J. A. Nichols, K. Tsemekhman, K. Wolinski, J. Anchell, D. Bernholdt, P. Borowski, T. Clark, D. Clerc, H. Dachsel, M. Deegan, K. Dyall, D. Elwood, E. Glendening, M. Gutowski, A. Hess, J. Jaffe, B. Johnson, J. Ju, R. Kobayashi, R. Kutteh, Z. Lin, R. Littlefield, X. Long, B. Meng, T. Nakajima, S. Niu, L. Pollack, M. Rosing, G. Sandrone, M. Stave, H. Taylor, G. Thomas, J. [van Lenthe]{}, A. Wong, and Z. Zhang. . Pacific Northwest National Laboratory, Richland, Washington 99352-0999, USA., 2007.
Q. Wu and T. [van Voorhis]{}. , 72:024502, 2005.
V. Bellini [et al.]{} To be submitted.
F. Troiani, A. Ghirri, M. Affronte, S. Carretta, P. Santini, G. Amoretti, S. Piligkos, G. Timco, and R. E. P. Winpenny. , 94:207208, 2005.
L. Venkataraman, J. E. Klare, C. Nuckolls, M. S. Hybertsen, and M. L. Steigerwald. , 442:904, 2006.
S. Y. Quek, M. Kamenetska, M. L. Steigerwald, H. J. Choi, S. G. Louie, M. S. Hybertsen, J. B. Neaton, and L. Venkataraman. , 4:230, 2009.
C.-F. Lee, D. A. Leigh, R. G. Pritchard, D. Schultz, S. J. Teat, G. A. Timco, and R. E. P. Winpenny. , 458:314, 2009.
M. Diefenbach and K. S. Kim. , 46:7640, 2007.
J. Lehmann, A. Gaita-Arino, E. Coronado, and D. Loss. , 2:312, 2007.
K. Matsuda and M. Irie. , 122:7195, 2000.
M. Wie[ś]{}niak, V. Vedral, and C. Brukner. , 7:258, 2005.
---
abstract: 'We review various approaches to the calculation of the QCD condensates and of the nucleon characteristics in nuclear matter. We show the importance of their self-consistent treatment. The first steps of such a treatment appear to be very instructive. It is shown that the alleged pion condensation in any case cannot take place earlier than the restoration of the chiral symmetry. We demonstrate how the finite-density QCD sum rules for nucleons work and advocate their possible role in providing an additional bridge between condensate and hadron physics.'
author:
- |
E.G. Drukarev, M.G. Ryskin and V.A. Sadovnikova\
Petersburg Nuclear Physics Institute\
Gatchina, St. Petersburg 188300, Russia
title: '**QCD CONDENSATES AND HADRON PARAMETERS IN NUCLEAR MATTER: SELF-CONSISTENT TREATMENT, SUM RULES AND ALL THAT**'
---
**Contents**
1. [Introduction]{}
2. [Condensates in nuclear matter]{}
2.1. Lowest order condensates in vacuum\
2.2. Gas approximation\
2.3. Physical meaning of the scalar condensate in a hadron\
2.4. Quark scalar condensate in the gas approximation\
2.5. Gluon condensate\
2.6. Analysis of more complicated condensates\
2.7. Quark scalar condensate beyond the gas approximation\
3. [Hadron parameters in nuclear matter]{}
3.1. Nuclear many-body theory\
3.2. Calculations in Nambu-Jona-Lasinio model\
3.3. Quark-meson models\
3.4. Skyrmion models\
3.5. Brown-Rho scaling\
3.6. QCD sum rules
4. [First step to self-consistent treatment]{}
4.1. Account of multi-nucleon effects in the quark scalar condensate\
4.2. Interpretation of the pion condensate\
4.3. Quark scalar condensate in the presence of the pion condensate\
4.4. Calculation of the scalar condensate
5. [QCD sum rules]{}
5.1. QCD sum rules in vacuum\
5.2. Proton dynamics in nuclear matter\
5.3. Charge-symmetry breaking phenomena\
5.4. EMC effect\
5.5. The difficulties
6. [Summary. A possible scenario]{}
Introduction
============
Nuclear matter, i.e. the infinite system of interacting nucleons, was introduced in order to simplify the problem of the investigation of finite nuclei. By introducing nuclear matter, the problems of the $NN$ interaction in a medium with non-zero baryon density and those of the individual features of specific nuclei were separated. However, the problem of nuclear matter is far from being solved. As we understand now, it cannot be solved in a consistent way based on the concept of $NN$ interactions only. This is because the short distances, where we cannot help considering nucleons as composite particles, are very important.
There are limited data on the in-medium values of the nucleon parameters. These are the quenching of the nucleon mass $m$ and of the axial coupling constant $g_A$ at the saturation density $\rho_0$ with respect to their vacuum values. The very existence of the saturation point $\rho_0$ is also an “experimental datum”, which is a characteristic of the matter as a whole. Present-day models succeed in reproducing these phenomena, although the quantitative results often differ.
On the other hand, knowledge of the evolution of the hadron parameters is important for understanding the evolution of the medium as a whole while the density $\rho$ of the distribution of the baryon charge number increases. (When $\rho$ is small enough, it is just the density of the distribution of nucleons.) There can be numerous phase transitions. At a certain value of the density, $\rho=\rho_a$, the Fermi momenta of the nucleons will be so large that it will be energetically favourable to increase $\rho$ by adding heavier baryons instead of new nucleons. The nuclear or, more generally, hadronic matter may accumulate excitations with the pion quantum numbers, known as pion (or even kaon) condensation. Also, the matter can transform into a mixture of hadrons and the quark-gluon phase, or totally into the quark-gluon plasma, thus converting to baryon matter. The last but not the least is the chiral phase transition. Chiral invariance is assumed to be one of the fundamental symmetries of the strong interactions.
Chiral invariance means that the Lagrangian, as well as the characteristics of the system, is not altered by the transformation $\psi\to\psi e^{i\alpha\gamma_5}$ of the fermion fields $\psi$. The model suggested by Nambu and Jona-Lasinio (NJL) [@1] provides a well-known example. The model describes massless fermions with a four-particle interaction. In the simplest version of the NJL model the Lagrangian is $$L_{NJL}=\ \bar\psi i\partial_\mu\gamma^\mu\psi+\frac G2\left[(\bar\psi
\psi)^2+(\bar\psi i\gamma_5\psi)^2\right]\ .$$ If the coupling constant $G$ is large enough, the mathematical (empty) vacuum is not the ground state of the system. Due to the strong four-fermion interaction in the Dirac sea, the minimum of the energy of the system is reached at a nonzero value of the scalar fermion density. This is the physical vacuum corresponding to the expectation value $\langle0|\bar\psi\psi|0\rangle\neq0$.
This phenomenon is called “spontaneous chiral symmetry breaking”. In the physical vacuum the fermion obtains the mass $$m\ =\ -2G\langle0|\bar\psi\psi|0\rangle$$ caused by the interaction with the condensate. On the other hand, the expectation value $\langle0|\bar\psi\psi|0\rangle$ is expressed through the integral over the Dirac sea of the fermions. Of course, we have to introduce a cutoff $\Lambda$ to prevent the ultraviolet divergence caused by the four-fermion interaction $$\langle0|\bar\psi\psi|0\rangle\ =\ -\frac m{\pi^2}\int^\Lambda_0
dp\frac{p^2}{(p^2+m^2)^{1/2}}\ .$$ Thus Eqs.(2) and (3) compose a self-consistent set of equations which determines the values of the condensate and of the fermion mass $m$ in the physical vacuum.
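For a nonzero mass, combining Eqs.(2) and (3) gives the gap equation $1 = (2G/\pi^2)\int_0^\Lambda dp\, p^2/(p^2+m^2)^{1/2}$, which is easily solved numerically. The sketch below does this for illustrative values of $G$ and $\Lambda$; they are placeholders, not fitted NJL parameters.

```python
# Minimal sketch: self-consistent solution of the NJL gap equation obtained by
# combining Eqs.(2) and (3):  1 = (2G/pi^2) * int_0^Lambda dp p^2/sqrt(p^2+m^2).
# Lambda and G below are illustrative placeholders; chiral symmetry is broken
# only for G > pi^2/Lambda^2, since the integral tends to Lambda^2/2 as m -> 0.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

Lambda = 0.65                      # cutoff in GeV (illustrative)
G = 1.3 * np.pi**2 / Lambda**2     # coupling in GeV^-2, 30% above critical

def gap(m):
    integral, _ = quad(lambda p: p**2 / np.sqrt(p**2 + m**2), 0.0, Lambda)
    return 1.0 - 2.0 * G / np.pi**2 * integral

# the nontrivial solution lies between m = 0+ and m = Lambda
m_star = brentq(gap, 1e-6, Lambda)
condensate = -m_star / (2.0 * G)   # <0| psi_bar psi |0> from Eq.(2)

print(f"constituent mass m* = {m_star*1e3:.0f} MeV")
print(f"condensate <psi_bar psi> = -({abs(condensate)**(1/3)*1e3:.0f} MeV)^3")
```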
Originally the NJL model was suggested for the description of nucleons. Nowadays it is used for quarks. The quark in the mathematical vacuum, having either a vanishing or a very small mass, is called the “current” quark. The quark which has obtained its mass following Eq.(2) is called the “constituent” quark. In the nonrelativistic quark model the nucleon consists of three constituent quarks only.
Let us return to nuclear matter. To understand which of the hadron parameters are important, note that nowadays most of the strong interaction phenomena at low and intermediate energies are believed to be described by effective low-energy pion-nucleon or pion-constituent quark Lagrangians. The $\pi N$ coupling constant is: $$\label{3.1}
\frac{g}{2m}\ =\ \frac{g_A}{2f_\pi}$$ with $f_\pi$ being the pion decay constant. This is the well-known Goldberger-Treiman (GT) relation [@103]. It means that the neutron beta decay can be viewed as a successive strong decay of the neutron into a $\pi^{-}p$ system followed by the decay of the pion. Thus, besides the nucleon mass $m^*(\rho)$, the most important parameters will be the in-medium values $g_A^*(\rho)$, $f^*_\pi(\rho)$ and $m^*_\pi(\rho)$.
On the other hand, the baryonic matter as a whole is characterised by the values of the condensates, i.e. by the expectation values of quark and gluon operators. Even at $\rho=0$ some of the condensates do not vanish, due to the complicated structure of the QCD vacuum. The nonzero value of the scalar quark condensate $\langle0|\bar\psi\psi|0\rangle$ reflects the violation of the chiral symmetry. In the exact chiral limit, when $\langle0|\bar\psi\psi|0\rangle=0$ (and the current quark masses vanish as well), the nucleon mass vanishes too. Thus, it is reasonable to think about the effective nucleon mass $m^*(\rho)$ and about the other parameters as functions of the condensates. Of course, the values of the condensates change in the medium. Also, some condensates which vanish in vacuum may acquire a nonzero value at finite density.
At the same time, while calculating the expectation value of the quark operator $\bar\psi\psi$ in the medium, one finds that the contribution of the pion cloud depends on the in-medium values of the hadron parameters. Hence, the parameters depend on the condensates and vice versa. Thus we come to the idea of a self-consistent calculation of the hadron parameters and of the values of the condensates in the medium. The idea of self-consistency is, of course, not a new one. We have just seen how the NJL model provides an example. We shall try to apply the self-consistent approach to the analysis of more complicated systems.
The paper is organized as follows. In Sec.2 we review the present knowledge of the in-medium condensates. In Sec.3 we present the ideas and results of various approaches to the calculation of the hadron parameters in the medium. We review briefly the possible saturation mechanisms provided by these models. In Sec.4 we consider the first steps towards a self-consistent calculation of the scalar condensate and the hadron parameters. The experience appeared to be very instructive. For example, the analysis led to the conclusion that in any case the chiral phase transition takes place at smaller values of the density than the pion condensation. Hence, the Goldstone pions never condense. However, the analysis of the behaviour of the solutions of the corresponding dispersion equation at larger densities appears to be useful.
Suggesting QCD sum rules at finite density as a tool for a future complete self-consistent investigation, we first show how the method works. This is done in Sec.5. In Sec.6 we present a more detailed self-consistent scenario.
We present the results for symmetric matter, with equal densities of protons and neutrons.
Everywhere throughout the paper we denote the quark field of flavour $i$ and colour $a$ as $\psi^a_i$. We shall omit the colour indices in most cases, having in mind averaging over the colours for colourless objects. As usual, $\sigma_i,\tau_j$ and $\gamma_\mu$ are the spin and isospin Pauli matrices and the four Dirac matrices, respectively. For any four-vector $A_\mu$ we denote $A_\mu\gamma^\mu=A^\mu\gamma_\mu=\widehat A$. The system of units with $\hbar=c=1$ is used.
Condensates in nuclear matter
=============================
Lowest order condensates in vacuum
----------------------------------
The quark scalar operator $\bar\psi\psi$ is the only operator, containing the minimal number of field operators $\psi$, whose expectation value in vacuum has a nonzero value. One can find in textbooks a remarkable relation, based on the partial conservation of the axial current (PCAC) and on the soft-pion theorems, $$\label{4}
m^2_{\pi b} f^2_\pi\ =\ -\frac13\langle0|\left[F^5_b(0)[\bar F^5_b(0),
H(0)]\right]|0\rangle$$ with $m_{\pi b},f_\pi$ standing for the mass and the decay constant of the pion, $H$ being the Hamiltonian density of the system, while $F^5_b$ are the charge operators corresponding to the axial currents, with $b$ the isospin index.
Presenting the (effective) Hamiltonian as $$\label{5}
H\ =\ H_0+H_b$$ with $H_0(H_b)$ conserving (explicitly breaking) the chiral symmetry, one finds that only the $H_b$ piece contributes to Eq.(4). In pure QCD $$\label{6}
H_b\ =\ H^{QCD}_b\ =\ m_u\bar uu+m_d \bar dd\ ,$$ with $m_{u,d}$ standing for the current quark masses. This leads to the well-known Gell-Mann–Oakes–Renner (GMOR) relation [@2] $$\label{GMOR}
\langle0|\bar uu+\bar dd|0\rangle\ =\ -\ \frac{2f^2_\pi
m^2_\pi}{m_u+m_d}\ .$$ Of course, assuming SU(2) symmetry, which holds with high accuracy, one finds $\langle0|\bar uu|0\rangle = \langle0|\bar
dd|0\rangle$. The numerical value $\langle0|\bar
uu|0\rangle=(-240\rm\,MeV)^3$ can be obtained from Eq.(\[GMOR\]).
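As a quick numerical check of Eq.(\[GMOR\]), one can insert the standard value $f_\pi\simeq92.4\,$MeV (in the convention used here), $m_\pi\simeq138\,$MeV, and the current quark masses $m_u\approx4\,$MeV, $m_d\approx7\,$MeV quoted below in the discussion of the NJL model; a minimal sketch:

```python
# Minimal numerical check of the GMOR relation, Eq. (GMOR).
# f_pi = 92.4 MeV (standard value in this convention) and the current quark
# masses m_u ~ 4 MeV, m_d ~ 7 MeV quoted below are the assumed inputs.
f_pi = 92.4            # MeV
m_pi = 138.0           # MeV
m_u, m_d = 4.0, 7.0    # MeV

uu_plus_dd = -2.0 * f_pi**2 * m_pi**2 / (m_u + m_d)   # <0| uu + dd |0>, MeV^3
per_flavour = uu_plus_dd / 2.0                        # <0| uu |0> in the SU(2) limit

print(f"<0|uu+dd|0> = {uu_plus_dd:.3e} MeV^3")
print(f"<0|uu|0>    = -({(-per_flavour)**(1/3):.0f} MeV)^3")   # close to (-240 MeV)^3
```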
The quark masses can be obtained from hadron spectroscopy relations and from QCD sum rules — see the review by Gasser and Leutwyler [@gl]. Thus the value of the quark condensate was calculated by using Eq.(\[GMOR\]). The value of the lowest order gluon condensate ($a$ is the colour index, $\alpha_s$ is the QCD coupling constant) $$\langle0|\ \frac{\alpha_s}\pi\ G^a_{\mu\nu}G^{\mu\nu}_a\ |0\rangle\
\approx\ (0.33\ \rm GeV)^4$$ was extracted by Vainshtein et al. [@3] from the analysis of the leptonic decays of the $\rho$ and $\varphi$ mesons and from the QCD sum-rule analysis of the charmonium spectrum [@4].
Gas approximation
-----------------
In this approximation the nuclear matter is treated as an ideal Fermi gas of nucleons. For spin-dependent operators $A_s$ the expectation value in the matter is $\langle M|A_s|M\rangle=0$, although for separate polarized nucleons $\langle N_\uparrow|A_s|N_\uparrow\rangle$ may have a nonzero value. For operators $A$ which do not depend on spin, the deviation of the expectation value $\langle M|A|M\rangle$ from $\langle0|A|0\rangle$ is determined by the incoherent sum of the contributions of the nucleons. Thus for any SU(2) symmetric spin-independent operator $A$ $$\label{9}
\langle M|A|M\rangle\ =\ \langle0|A|0\rangle +\rho\langle N|A|N\rangle$$ with $\rho$ standing for the density of nuclear matter and $$\label{10}
\langle N|A|N\rangle\ =\ \int d^3x\bigg(\langle N|A(x)|N\rangle-
\langle0|A(x)|0\rangle\bigg)\ .$$ Since $\langle0|A(x)|0\rangle$ does not depend on $x$, Eq.(\[10\]) can be presented as $$\label{11}
\langle N|A|N\rangle\ =\ \int d^3x\,\langle N|A(x)|N\rangle
-\langle0|A|0\rangle\cdot V_N$$ with $V_N$ being the volume of the nucleon.
The quark condensates of the same dimension $d=3$ can be built by averaging the expression $\bar\psi B\psi$, with $B$ an arbitrary $4\times4$ matrix, over the ground state of the matter. However, any such matrix can be presented as a linear combination of the 5 basic matrices $\Gamma_A$: $$\label{12}
\Gamma_1=I, \quad \Gamma_2=\gamma_\mu, \quad \Gamma_3=\gamma_5, \quad
\Gamma_4=\gamma_\mu\gamma_5, \quad
\Gamma_5=\sigma_{\mu\nu}=\frac12(\gamma_\mu\gamma_\nu
-\gamma_\nu\gamma_\mu)$$ with $I$ being the unit matrix. One can see that the expectation value of $\bar\psi\Gamma_5\psi$ vanishes in any uniform system, while those of $\bar\psi\Gamma_{3,4}\psi$ vanish due to the conservation of parity.
The expectation value $$\label{13}
\sum_i\langle M|\bar\psi_i\gamma_\mu\psi_i|M\rangle\ =\ v_\mu(\rho)$$ takes the form $v_\mu(\rho)=v(\rho)\delta_{\mu0}$ in the rest frame of the matter. It can be presented as $$\label{14}
v(\rho)\ =\ \sum_i\frac{n^p_i+n^n_i}2\cdot\rho\ =\ \sum_i v_i$$ with $n^{p(n)}_i$ standing for the number of the valence quarks of the flavour $"i"$ in the proton (neutron). Due to conservation of the vector current Eq.(14) presents exact dependence of this condensate on $\rho$. For the same reason the linear dependence on $\rho$ is true in more general case of the baryon matter $$\label{15}
v_i(\rho)\ =\ \frac32\cdot\rho, \quad v(\rho)=3\cdot\rho.$$
As to the expectation value $\langle M|\bar\psi\psi|M\rangle$, it is quite obvious that Eq.(\[9\]) is true for the operator $A=\bar\psi_i
\psi_i$ if the nucleon density is small enough. The same refers to the condensates of higher dimension. The question is: when will the terms nonlinear in $\rho$ become important?
Before discussing the problem we consider the lowest dimension condensates in the gas approximation.
Physical meaning of the scalar condensate in a hadron
-----------------------------------------------------
It has been suggested by Weinberg [@5] that the matrix element of the operator $\bar\psi_i\psi_i$ in a hadron is proportional to the total number of quarks and antiquarks of flavour $"i"$ in that hadron. The quantitative interpretation is, however, not straightforward. It was noticed by Donoghue and Nappi [@6] that such an identification cannot be exact, since the operator $\bar\psi_i
\psi_i$ is not diagonal and can add a quark–antiquark pair to the hadron. It was shown by Anselmino and Forte [@7; @8] that reasonable assumptions on the quark distribution inside the hadron eliminate the non-diagonal matrix elements. However, there are still problems with the interpretation of the diagonal matrix elements.
Let us present the quark field of any flavour as $$\label{16}
\psi(x)\ =\sum_s\int\frac{d^3p}{(2\pi)^3(2E)^{1/2}}
\left[b_s(p)u_s(p)e^{-i(px)}+d^+_s(p)v_s(p) e^{+i(px)}\right]$$ with $b_s(p)$ and $d^+_s(p)$ annihilating quarks and creating antiquarks with spin projection $s$, respectively. This leads to $$\label{17}
\langle h|\bar\psi\psi|h\rangle\ =\sum_s\int d^3p\left[ \frac{\bar
u_s(p)u_s(p)}{2E_i(p)}\,N^+_s(p)+\frac{\bar v_s(p)v_s(p)}{2E_i(p)}\,
N^-_s(p)\right].$$
Here $N^+_s$ and $N^-_s$ stand for the numbers of quarks and antiquarks. In the works [@7; @8] this formula was analysed for the nucleon in the framework of a quasi-free parton model for the quark dynamics. In this case the normalization conditions are $\bar u_s(p)u_s(p)=\bar
v_s(p)v_s(p)=2m_i$ with $m_i$ standing for the current mass. The further analysis required additional assumptions.
In the present-day picture of the nucleon, its mass $m$ is mostly composed of the masses of the three valence quarks, which are generated by the interactions inside the nucleon. In the orthodox nonrelativistic quark model, in which possible quark–antiquark pairs are ignored, we put $E_i=m_i$ and find $\langle N|\sum_i\bar \psi_i\psi_i|N\rangle=3$. In more realistic, relativistic models there is also the contribution of the quark–antiquark pairs. Note also that in some approaches, say, in the bag models [@9] or in the soliton model [@10], the motion of the valence quarks is relativistic. This reduces their contribution to the expectation value $\langle N|\sum_i\bar \psi_i\psi_i|N\rangle$ by about 30%, since $m_i/E_i<1$.
The conventional present-day picture of the nucleon is that of a system of three valence quarks with constituent masses $M_i\approx
m/3$ and a number of quark–antiquark pairs: $$\label{18}
\langle N|\sum_i\bar \psi_i\psi_i|N\rangle\ =\ 3+\sum_s\int d^3p\
\frac{a_s(p)}{2E(p)}\ N_s(p)$$ with $a_s(p)=\bar u_s(p)u_s(p)=\bar v_s(p)v_s(p)$, while $N_s(p)$ stands for the number of quark–antiquark pairs with momentum $p$. Thus, the right-hand side (rhs) of Eq.(\[18\]) can be treated as the total number of quarks and antiquarks only under certain assumptions about the dynamics of the constituents of $\bar qq$ pairs. They should remain light and their motion should be nonrelativistic, with $a_s\approx2m\approx2E$. In other models the deviation of the left-hand side (lhs) from the number 3 is a characteristic of the role of quark–antiquark pairs in the nucleon.
The value of $\langle N|\sum_i\bar \psi_i\psi_i|N\rangle$ is related to the observables. The pion–nucleon $\sigma$-term, defined by analogy with Eq.(\[4\]) [@11] $$\label{19}
\sigma\ =\ \frac13\sum_b\langle
N|\left[F^5_b(0)[F^5_b(0),H(0)]\right]|N\rangle$$ provides by using Eq.(\[6\]) $$\label{20}
\langle N|\bar qq|N\rangle\ =\ \frac{2\sigma}{m_u+m_d}$$ with $$\label{21}
\bar qq\ =\ \bar uu+\bar dd\ .$$ On the other hand [@12; @13], the $\sigma$-term is connected to the pion–nucleon elastic scattering amplitude $T$. Denote by $p,k$ ($p',k'$) the momenta of the nucleon and the pion before (after) scattering. Introducing the Mandelstam variables $s=(p+k)^2$, $t=(k'-k)^2$, we find the amplitude $T(s,t,k^2,k'^2)$ at the unphysical point to be $$\label{22}
T(m^2,0,0;0)\ =\ -\ \frac\sigma{f^2_\pi}\ .$$
The experiments provide the data on the physical amplitude $$\label{23}
T\left((m+m_\pi)^2,2m^2_\pi,m^2_\pi,m^2_\pi\right)\ =\ -\
\frac\Sigma{f^2_\pi}$$ leading to [@14; @15] $$\label{24}
\Sigma\ =\ (60\pm7)\ \rm MeV\ .$$ The method of extrapolation of the observable on-mass-shell amplitude to the unphysical point was developed by Gasser et al. [@16; @17]. They found $$\label{25}
\sigma\ =\ (45\pm7)\ \rm MeV\ .$$ Note that from the point of view of the chiral expansion the difference $\Sigma-\sigma$ is of higher order, i.e. $(\Sigma-\sigma)/\sigma\sim m_\pi$.
The value $\sigma=45$ MeV corresponds to $\langle N|\bar qq|N\rangle \approx8$. This strongly supports the presence of $\bar qq$ pairs inside the nucleon. However, direct identification of the value $\langle N|\bar qq|N\rangle$ with the total number of quarks and antiquarks is possible only under the assumptions described above.
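As a rough numerical check (taking the current quark masses $m_u\approx4\,$MeV and $m_d\approx7\,$MeV quoted in the next subsection as representative values), Eq.(\[20\]) gives $$\langle N|\bar qq|N\rangle\ \approx\ \frac{2\cdot45\ \rm MeV}{11\ \rm MeV}\ \approx\ 8\ ,$$ which is the number quoted above; the precise figure depends, of course, on the adopted values of the current quark masses.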
Quark scalar condensate in gas approximation
--------------------------------------------
The formula for the scalar condensate in the gas approximation $$\label{26}
\langle M|\bar qq|M\rangle\ =\ \langle0|\bar qq|0\rangle
+\frac{2\sigma}{m_u+m_d}\ \rho$$ or $$\label{27}
\langle M|\bar qq|M\rangle\ =\ \langle0|\bar qq|0\rangle \left(
1-\frac\sigma{f^2_\pi\, m^2_\pi}\,\rho\right)$$ was obtained by Drukarev and Levin [@18; @19]. Of course, one can just substitute the semi-experimental value of $\sigma$ given by Eq.(\[25\]). However, for the further discussion it is instructive to give a brief review of the calculations of the $\sigma$-term.
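For completeness we note that the two forms (\[26\]) and (\[27\]) are equivalent by virtue of the vacuum GMOR relation $f^2_\pi m^2_\pi=-\hat m\,\langle0|\bar qq|0\rangle$ (the $\rho=0$ limit of Eq.(\[68\]) below), with $\hat m=(m_u+m_d)/2$: substituting $\langle0|\bar qq|0\rangle=-f^2_\pi m^2_\pi/\hat m$ into Eq.(\[26\]) one finds $$\langle M|\bar qq|M\rangle\ =\ \langle0|\bar qq|0\rangle+\frac{\sigma}{\hat m}\,\rho\ =\ \langle0|\bar qq|0\rangle\left(1-\frac{\sigma}{f^2_\pi m^2_\pi}\,\rho\right),$$ i.e. exactly Eq.(\[27\]).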
Most of the early calculations of the $\sigma$-term were carried out in the framework of the NJL model — see Eq.(1). The results were reviewed by Vogl and Weise [@20]. In this approach the quarks with initially very small “current” masses $m_u\approx4\,$MeV, $m_d\approx7\,$MeV obtain relatively large “constituent” masses $M_i\sim300-400\,$MeV through the four-fermion interaction of Eq.(1). If the nucleon is treated as a weakly bound system of three constituent quarks, the $\sigma$-term can be calculated as the sum of those of the three constituent quarks. The early calculations provided the value $\sigma\approx34\,$MeV, somewhat smaller than the one determined by Eq.(\[25\]). The latter can be reproduced by assuming a rather large content of strange quarks in the nucleon [@6] or by inclusion of a possible coupling of the quarks to diquarks [@20; @21].
In the effective Lagrangian approach the Hamiltonian of the system is presented by Eq.(\[5\]) with $H_b$ determined by Eq.(\[6\]), while $H_0$ is written in terms of nucleon (or constituent quark) and meson degrees of freedom. It was found by Gasser [@22] that $$\label{28}
\sigma\ =\ \hat m\ \frac{dm}{d\hat m}$$ with $$\label{29}
\hat m\ =\ \frac{m_u+m_d}2\ .$$ The derivation of Eq.(\[28\]) is based on the Feynman–Hellmann theorem [@23]. The nontrivial point of Eq.(\[28\]) is that the derivatives of the state vectors in the equation $$\label{30}
m\ =\ \langle N|H|N\rangle$$ cancel.
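Schematically, the argument is the following (in the isospin-symmetric limit $m_u=m_d=\hat m$ and in the normalization implied by Eq.(\[30\])): the light-quark mass term of the QCD Hamiltonian is $\hat m\,(\bar uu+\bar dd)$, integrated over space, so the Feynman–Hellmann theorem gives $$\frac{dm}{d\hat m}\ =\ \Big\langle N\Big|\frac{\partial H}{\partial\hat m}\Big|N\Big\rangle\ =\ \langle N|\bar qq|N\rangle\ ,$$ and hence $\sigma=\hat m\,dm/d\hat m=\hat m\,\langle N|\bar qq|N\rangle$, in agreement with the definition used in Eq.(\[20\]).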
Recently Becher and Leutwyler [@24] reviewed investigations based on the nonlinear pion–nucleon Lagrangian. In this approach the contribution of the $\bar qq$ pairs is $$\label{31}
\sigma_{\bar qq}\ =\ \hat m\ \frac{\partial m}{\partial m^2_\pi}\
\frac{\partial m^2_\pi}{\partial\hat m}$$ with the last factor in rhs $$\label{32}
\frac{\partial m^2_\pi}{\partial\hat m}\ =\ \frac{m^2_\pi}{\hat m}$$ as follows from Eq.(\[GMOR\]). The calculations in this approach reproduce the value $\sigma\approx45\,$MeV.
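Combining Eqs.(\[31\]) and (\[32\]), the pair contribution can be written compactly as $$\sigma_{\bar qq}\ =\ m^2_\pi\ \frac{\partial m}{\partial m^2_\pi}\ ,$$ i.e. it is controlled by the dependence of the nucleon mass on the pion mass.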
Similar calculations [@25] were carried out in the framework of the perturbative chiral quark model of Gutsche and Robson [@26], which is based on an effective chiral Lagrangian describing quarks as relativistic fermions moving in an effective self-consistent field. The $\bar qq$ pairs are contained in the pions. The value of the $\sigma$-term obtained in this model is also $\sigma\approx45\,$MeV.
The Skyrme-type models provide somewhat larger values $\sigma=50\,$MeV [@27] and $\sigma=59.6\,$MeV [@28]. The chiral soliton model calculation gave $\sigma=54.3\,$MeV [@29].
The results obtained in other approaches are more controversial. The two latest lattice QCD calculations gave $\sigma=(18\pm5)\,$MeV [@30] and $\sigma=(50\pm5)\,$MeV [@31]. The attempts to extract the value of the $\sigma$-term directly from QCD sum rules underestimate it, providing $\sigma=(25\pm15)\,$MeV [@32] and $\sigma=(36\pm5)\,$MeV [@33].
We shall analyse the scalar condensate beyond the gas approximation of Eq.(\[26\]) in Subsection 2.7.
Gluon condensate
----------------
Following Subsec.2.2 we write in the gas approximation $$\label{33}
\langle M|\frac{\alpha_s}\pi\,G^2|M\rangle\ =\
\langle0|\,\frac{\alpha_s}\pi\,G^2|0\rangle+\rho\langle
N|\,\frac{\alpha_s}\pi\,G^2|N\rangle$$ with the notation $G^2=G^a_{\mu\nu}G^{\mu\nu}_a$. Fortunately, the expectation value $\langle N|\frac{\alpha_s}\pi G^2|N\rangle$ can be calculated. This was done [@34] by averaging the trace of the QCD energy-momentum tensor, including the anomaly, over the nucleon state. The trace is $$\label{34}
\theta^\mu_\mu\ =\ \sum_i m_i\bar \psi_i\psi_i-\frac{b\alpha_s}{8\pi}\
G^2$$ with $b=11-\frac23n$, while $n$ stands for the total number of flavours. However, $\langle N|\theta^\mu_\mu|N\rangle$ does not depend on $n$ due to a remarkable cancellation obtained by Shifman et al. [@34] $$\label{35}
\langle N|\sum_h m_h\bar \psi_h\psi_h|N\rangle\,-\frac23\,n_h\ \langle
N|\frac{\alpha_s}{8\pi}\, G^a_{\mu\nu}G^{\mu\nu}_a|N\rangle\ =\ 0\ .$$ Here $"h"$ denotes “heavy” quarks, i.e. the quarks whose masses $m_h$ are much larger than the inverse confinement radius $\mu$. The accuracy of Eq.(\[35\]) is $(\mu/m_h)^2$. Thus, to a reasonable approximation, we only have to consider the light flavours $u,d,s$, since already $m_c\approx1.5\,$GeV$\,\approx7.6\,\rm Fm^{-1}\gg\mu$. With only the three light flavours retained, $b=11-\frac23\cdot3=9$, and this leads to $$\label{36}
\langle N|\theta^\mu_\mu|N\rangle\ =\ -\frac98\ \langle
N|\frac{\alpha_s}\pi\, G^2|N\rangle+\sum_i m_i\langle N|\bar
\psi_i\psi_i |N\rangle$$ with $i$ standing for $u,d$ and $s$. Since, on the other hand, $\langle N|\theta^\mu_\mu|N\rangle=m$, one comes to $$\label{37}
\langle N|\frac{\alpha_s}\pi\ G^2|N\rangle\ =\ -\ \frac89\left(m-\sum_i
m_i \langle N|\bar \psi_i\psi_i|N\rangle\right).$$
For the condensate $$\label{38}
g(\rho)\ =\ \langle M|\ \frac{\alpha_s}\pi\ G^2|M\rangle$$ Drukarev and Levin [@18; @19] obtained in the gas approximation $$\label{39}
g(\rho)\ =\ g(0)-\frac89\ \rho\left(m-\sum_i m_i\langle N|\bar \psi_i
\psi_i|N\rangle\right).$$ In the chiral limit $m_u=m_d=0$ and $$\label{39a}
g(\rho)\ =\ g(0)-\frac89\ \rho\bigg(m-m_s\langle N|\bar
ss|N\rangle\bigg).$$
The expectation value $\langle N|\bar ss|N\rangle$ is not known reliably. Donoghue and Nappi [@6] obtained $\langle N|\bar ss|N\rangle\approx1$, assuming that the hyperon mass splitting in the SU(3) octet is described by the lowest order perturbation theory in $m_s$. Approximately the same result, $\langle N|\bar ss|N\rangle\approx0.8$, was obtained in various versions of chiral perturbation theory with nonlinear Lagrangians [@17]. The lattice calculations provide larger values, e.g. $\langle N|\bar ss|N\rangle\approx1.6$ [@35]. On the contrary, the Skyrme model [@10] and the perturbative chiral quark model [@25] lead to smaller values, $\langle N|\bar ss|N\rangle\approx0.3$. Since $m\gg m_s$, it is reasonable to treat the second term in the brackets in the rhs of Eq.(\[39a\]) as a small correction. Thus we can put $$\label{40}
g(\rho)\ =\
g(0)-\frac89\ \rho m\ ,$$ which is exact in the chiral SU(3) limit within the gas approximation.
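For orientation, assuming the standard value of the saturation density $\rho_0\approx0.17\,\rm Fm^{-3}\approx1.3\cdot10^{-3}\,GeV^3$ (which corresponds to the Fermi momentum $p_{F0}\approx268\,$MeV quoted below, after Eq.(\[64\])) and $m\approx940\,$MeV, the leading shift of the gluon condensate at saturation is $$\frac89\ \rho_0\, m\ \approx\ 1.1\cdot10^{-3}\ \rm GeV^4\ .$$ This is small compared to the commonly quoted vacuum value of the gluon condensate, in qualitative agreement with the estimate given at the end of this subsection.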
One can estimate the magnitude of nonlinear contributions to the condensate $g(\rho)$. Averaging $\theta^\mu_\mu$ over the ground state of the matter one finds $$\label{41}
g(\rho)\ =\ g(0)-\frac89\,(m-m_s\langle N|\bar ss|N\rangle)\rho-
\frac89\,\varepsilon(\rho)\rho+\frac89\,m_s S_q(\rho)$$ with $\varepsilon(\rho)$ standing for the binding energy of the nucleon in medium, while $S_q(\rho)$ denotes the nonlinear part of the condensate $\langle M|\bar ss|M\rangle$. One can expect the last term to be small (otherwise we should accept that strange meson exchange plays a large role in the $N-N$ interaction). Hence we can assume $$\label{42}
g(\rho)\ =\ g(0)-\frac89\,(m+\varepsilon(\rho))\rho+\frac89\,
m_s\langle N|\bar ss|N\rangle\,\rho$$ with nonlinear terms caused by the binding energy $\varepsilon(\rho)$.
Thus, at least at densities close to the saturation value, the corrections to the gas approximation are small. At $\rho\approx\rho_0$ the value of the condensate $g(\rho)$ differs from the vacuum value by about 6%.
Analysis of more complicated condensates
----------------------------------------
The condensates of higher dimension come from averaging products of a larger number of operators of the quark and (or) gluon fields. Such condensates appear also from the expansion of bilocal operators of lower dimension. Say, the simplest bilocal condensate $C(x)=\langle0|\bar \psi(0)\psi(x) |0\rangle$ is gauge-dependent (recall that the quarks interact with the vacuum gluon fields). To obtain a gauge-invariant expression one can substitute $$\label{43}
\psi(x)\ =\ \psi(0)+x_\mu D_\mu\psi(0)+\frac12\,x_\mu x_\nu D_\mu
D_\nu\psi(0)+\ \cdots$$ with $D_\mu$ being the covariant derivatives, which replace the usual partial derivatives $\partial_\mu$ [@In]. Due to Lorentz invariance the expectation value $C(x)$ depends on $x^2$ only. Hence, only the terms with even powers of $x$ survive, providing in the chiral limit $m_q=0$ $$\label{44}
C(x)\ =\ C(0)+x^2\cdot\frac1{16}\ \langle0|\bar
\psi\,\frac{\alpha_s}\pi\,\frac{\lambda^a}2\,G^{\mu\nu}_a\sigma_{\mu\nu}
\psi|0\rangle\ +\ \ldots\ ,$$ where $\lambda^a$ are the Gell-Mann SU(3) matrices. The second term in the rhs of Eq.(\[44\]) can be obtained by noticing that $\langle0|\bar \psi D_\mu D_\nu\psi|0\rangle=\frac14g_{\mu\nu}\langle0|
\bar \psi D^2\psi|0\rangle$ and by applying the QCD equation of motion in the form $$\label{45}
\left(D^2-\frac12\ \frac{\alpha_s}\pi\ G^{\mu\nu}_a\sigma_{\mu\nu}
\cdot\frac{\lambda^a}2-m^2_q\right)\psi\ =\ 0\ .$$ The condensate $\langle0|\bar \psi\frac{\alpha_s}\pi G^{\mu\nu}_a
\sigma_{\mu\nu}\frac{\lambda^a}2\psi|0\rangle$ is usually presented “in units” of $\langle0|\bar \psi\psi|0\rangle$, i.e. $$\label{46}
\langle0|\bar \psi\,\frac{\alpha_s}\pi\,G^{\mu\nu}_a\sigma_{\mu\nu}
\frac{\lambda^a}2\,\psi|0\rangle\ =\ m^2_0\langle0|\bar \psi\psi|0
\rangle$$ with $m_0$ having the dimension of a mass. The QCD sum rules analysis of Belyaev and Ioffe [@36] gives $m^2_0\approx0.8\,\rm GeV^2$ for $u$ and $d$ quarks. However, the instanton liquid model estimate made by Shuryak [@37] provides a value about three times larger.
The situation with the expectation values averaged over the nucleon is more complicated. There is an infinite number of condensates of each dimension. This happens because the nonlocal condensates depend on two variables, $x^2$ and $(Px)$, with $P$ being the four-dimensional momentum of the nucleon. Thus, even the lowest order term of the expansion in powers of $x^2$ (i.e. the term at $x^2=0$) contains an infinite number of condensates. Say, $$\label{47}
\langle N|\bar \psi(0)\gamma_\mu\psi(x)|N\rangle\ =\ \frac{P_\mu}m
\tilde\varphi_a((Px),x^2)+ix_\mu m\tilde\varphi_b((Px),x^2)$$ with $\tilde\varphi_{a,b}$ defined by the expansion presented in Eq.(\[43\]). The function $\tilde\varphi_a(0,0)$ is the number of valence quarks of the given flavour in the nucleon. Presenting $$\label{48}
\tilde\varphi_{a,b}((Px),0)\ =\varphi_{a,b}((Px)); \quad
\varphi_{a,b}((Px))\ =\ \int\limits^1_0 d\alpha e^{-i\alpha(Px)}
\phi_{a,b}(\alpha)$$ we find the function $\phi_a(\alpha)$ to be the asymptotics of the nucleon structure function [@38], while the expansion of $\varphi_a$ in powers of $(Px)$ is expressed through the moments of the structure function. The next-to-leading order of the expansion of $\tilde\varphi_a$ in powers of $x^2$ leads to the condensate $$\label{49}
\langle N|\bar\psi(0)\widetilde G_{\mu\nu}\gamma_\nu\gamma_5\psi(0)
|N\rangle\ =\ 2P_\mu m\cdot\xi_a$$ with $\widetilde G_{\mu\nu}=\frac12\varepsilon_{\mu\nu\alpha\beta}
G^a_{\alpha\beta}\cdot\frac12\lambda^a$ and $$\label{50}
\xi_{a(b)}\ =\ \int\limits^1_0 d\alpha\theta_{a(b)}(\alpha,0); \quad
\theta_{a(b)}(\alpha,x^2)\ =\ \frac{\partial\tilde\phi_{a(b)}(\alpha,x^2)}{
\partial x^2}\ .$$ The QCD sum rules analysis of Braun and Kolesnichenko [@39] gave the value $\xi_a=-0.33\,\rm GeV^2$.
Using QCD equations of motion we obtain relations between the moments of the functions $\phi_a$ and $\phi_b$. Denoting $\langle
F\rangle=\int^1_0 d\alpha F(\alpha)$ for any function $F$ we find, following Drukarev and Ryskin [@40] $$\label{51}
\langle \phi_b\rangle =\frac14\langle \phi_a\alpha\rangle\,; \quad
\langle \phi_b\alpha\rangle =\frac15\left(\langle \phi_a\alpha^2
\rangle -\frac14\langle \theta_a\rangle \right); \quad
\langle \theta_b\rangle =\frac16\langle \theta_a\alpha\rangle\ .$$
The situation with the nonlocal scalar condensate is somewhat simpler, since all the matrix elements of the odd-order derivatives are proportional to the current masses of the quarks. This can be shown by presenting $D_\mu=\frac12(\gamma_\mu \widehat D +\widehat D
\gamma_\mu)$ followed by using the QCD equations of motion. Hence, in the chiral limit such condensates vanish for $u$ and $d$ quarks. The condensate containing one derivative can be expressed through the vector condensate and thus can be obtained beyond the gas approximation $$\label{52}
\langle M|\bar\psi_iD_\mu\psi_i|M\rangle\ =\ m_iv_\mu(\rho)\ .$$ In the chiral limit $m_u=m_d=0$ this condensate vanishes for $u$ and $d$ quarks. The even-order derivatives contain the matrix elements corresponding to the expansion in powers of $x^2$, which do not contain masses. In the lowest order there is the expectation value $\langle
N|\bar\psi\frac{\alpha_s}\pi\frac{\lambda^a}2G^{\mu\nu}_a
\sigma_{\mu\nu}\psi|N\rangle$ — compare Eq.(\[46\]). It was estimated by Jin et al. [@41] in the framework of the bag model $$\label{53}
\langle N|\bar\psi\frac{\alpha_s}\pi\frac{\lambda^a}2
G^{\mu\nu}_a\sigma_{\mu\nu}\psi|N\rangle\ \approx\ 0.6\,\rm GeV^2$$ together with another condensate of the mass dimension 5 $$\label{54}
\langle N|\bar\psi\frac{\alpha_s}\pi\frac{\lambda^a}2 \gamma_0
G^{\mu\nu}_a\sigma_{\mu\nu}\psi|N\rangle\ \approx\ 0.66\,\rm GeV^2\ .$$
Considering the four-quark condensates, we limit ourselves to those with colourless diquarks with fixed flavours. The general formula for such expectation values is $$\label{55}
Q^{AB}_{ij}\ =\ \langle M|\bar\psi_i\Gamma_A\psi_i\bar\psi_j
\Gamma_B\psi_j|M
\rangle$$ with $A,B=1\ldots5$; the matrices $\Gamma_{A,B}$ are introduced in Eq.(\[12\]). For the two lightest flavours there are thus $5\cdot5\cdot4=100$ condensates. Due to SU(2) symmetry $Q^{AB}_{uu}=Q^{AB}_{dd}=Q^{AB}$. Due to parity conservation only the diagonal condensates $Q^{AA}_{ij}$, and also $Q^{12}_{ij}=Q^{21}_{ij}$ and $Q^{34}_{ij}=Q^{43}_{ij}$, have nonzero values in uniform matter. Since the matter is an eigenstate of the operator $\bar\psi_i\Gamma_2\psi_i$, we immediately find $$\label{56}
Q^{12}_{ij}\ =\ \rho_i\ \langle M|\bar\psi_j \psi_j|M\rangle$$ with $\rho_i$ standing for the density of the quarks of the $i$-th flavour. In the case when the matter is composed of nucleons distributed with the density $\rho$, we put $\rho_i=n_i\rho$ with $n_i$ being the number of quarks of flavour $i$ per nucleon (e.g. $n_u=2$, $n_d=1$ for matter composed of protons).
For the four-quark scalar condensate $Q^{11}$ we can try the gas approximation as the first step — see Eq.(\[9\]). Using Eq.(\[11\]) we find for each flavour $$\begin{aligned}
&& \langle N|\bar\psi\psi\bar\psi\psi|N\rangle \ =\ \int d^3x
\langle N|[\bar\psi(x)\psi(x) -\langle 0|\bar\psi\psi|0
\rangle]^2|N\rangle\ + \nonumber\\
+ && 2\langle 0|\bar\psi\psi|0\rangle\, \langle
N|\bar\psi\psi|N\rangle\ + \ V_N\left((\langle 0|\bar\psi\psi|0
\rangle )^2-\langle0|\bar\psi\psi\bar\psi\psi|0\rangle \right).
\label{57}\end{aligned}$$
One can immediately estimate the second term to be about $-0.09\,\rm
GeV^3$. This makes the problem of the exact vacuum expectation value very important. Indeed, one of the usual assumptions is that [@4] $$\label{58}
\langle 0|\bar\psi\psi\bar\psi\psi|0\rangle \ \simeq\ (\langle
0|\bar\psi\psi|0\rangle )^2\ .$$ This means that we assume the vacuum state $|0\rangle\langle0|$ to give the leading contribution to the sum $$\label{59}
\langle 0|\bar\psi\psi\bar\psi\psi|0\rangle\ =\ \sum_n\langle
0|\bar\psi\psi|n\rangle\ \langle n|\bar\psi\psi|0\rangle$$ over the complete set of the states $|n\rangle$ with the quantum numbers of the vacuum. Novikov et al. [@42] showed that Eq.(\[58\]) becomes exact in the limit of a large number of colours $N_c\to\infty$. However, the contribution of excited states, e.g. of the $\sigma$-meson $|\sigma\rangle \langle \sigma|$, can increase the rhs of Eq.(\[59\]). Assuming the nucleon radius to be of the order of 1 Fm, we find the second and the third terms of the rhs of Eq.(\[57\]) to be of comparable magnitude. This becomes increasingly important in view of the only calculation of the four-quark condensate in the nucleon, carried out by Celenza et al. [@44]. In this paper the calculations in the framework of the NJL model show that about 75% of the contribution of the second term of the rhs of Eq.(\[57\]) is cancelled by the other ones.
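For orientation, the estimate of the second term quoted above can be reproduced as follows. With the standard vacuum value $\langle0|\bar\psi\psi|0\rangle\approx-(0.22{-}0.25\,{\rm GeV})^3$ per light flavour (a number not fixed by the present discussion) and $\langle N|\bar\psi\psi|N\rangle\approx4$ per flavour (half of the value $\approx8$ obtained above for $\langle N|\bar uu+\bar dd|N\rangle$), one finds $$2\,\langle0|\bar\psi\psi|0\rangle\,\langle N|\bar\psi\psi|N\rangle\ \approx\ -\,0.1\ {\rm GeV^3}\ ,$$ consistent with the value $-0.09\,\rm GeV^3$ quoted above.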
Quark scalar condensate beyond the gas approximation
----------------------------------------------------
Now we denote $$\label{60}
\langle M|\bar qq|M\rangle\ =\ \kappa(\rho)$$ and try to find the last term in the rhs of the equation $$\label{61}
\kappa(\rho)\ =\ \kappa(0)+\frac{2\sigma}{m_u+m_d}\cdot\rho+S(\rho)\ .$$ The first attempt was made by Drukarev and Levin [@18; @19] in the framework of the meson-exchange model of nucleon–nucleon (NN) interactions. In the chiral limit $m^2_\pi\to0$ (neglecting also the finite size of the nucleons) one obtains the function $S(\rho)$ as a power series in the Fermi momentum $p_F$. The lowest order term comes from the Fock one-pion exchange diagram (Fig.1). The result beyond the chiral limit was presented in [@40].
In spite of the fact that the contribution of such a mechanism to the interaction energy is a minor one, its contribution to the scalar condensate is quite important, since it is enhanced by the large factor (about 12) in the expectation value $$\label{62}
\langle\pi|\bar qq|\pi\rangle\ =\ \frac{2m^2_\pi}{m_u+m_d}\ \approx\
2m_\pi\cdot12$$ obtained by averaging the QCD Hamiltonian over the pion state. Using the lowest order $\pi N$ coupling terms of the $\pi N$ Lagrangian, we obtain in the chiral limit $$\label{63}
S(\rho)\ =\ -3.2\ \frac{p_F}{p_{F0}}\ \rho$$ with $p_F$ being the Fermi momentum of the nucleons, related to the density as $$\label{64}
\rho\ =\ \frac2{3\pi^2}\ p^3_F\ ;$$ $p_{F0}\approx268$ MeV is the Fermi momentum at the saturation point. Of course, the chiral limit makes sense only for $p_F^2\gg m_\pi^2$. This sets a lower limit on the densities for which Eq.(\[63\]) is true. The value provided by one-pion exchange depends on the values of the $\pi N$ coupling $g=g_A/2f_\pi$ and of the nucleon mass in medium. If we assume that these parameters can be expanded as power series in $\rho$ (but not in $p_F$) at low densities, the contribution of the order $\rho^{5/3}$ comes from two-pion exchange with two nucleons in the two-baryon intermediate state — Fig.2.
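As a consistency check, Eq.(\[64\]) with $p_{F0}\approx268\,{\rm MeV}\approx1.36\,\rm Fm^{-1}$ gives $$\rho_0\ =\ \frac2{3\pi^2}\ p^3_{F0}\ \approx\ 0.17\ \rm Fm^{-3}\ ,$$ which is the standard value of the nuclear saturation density.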
In our paper [@45] we found for $p^2_F\gg\ m^2_\pi$ $$\label{65}
S(\rho)\ =\ -3.2\ \frac{p_F}{p_{F0}}\ \rho -3.1\left(\frac{p_F}{p_{F0}}
\right)^2\ \rho\ +O(\rho^2) .$$ Although at the saturation point $m^2_\pi/p^2_{F0}\approx1/4$, the discrepancy between the one-pion exchange term calculated in the chiral limit and that calculated with account of the finite value of $m^2_\pi$ is rather large [@40]. However, working in the chiral limit one should rather use the value of $\Sigma$, defined by Eq.(\[24\]), for the sigma-term, since the difference between the $\Sigma$ and $\sigma$ terms contains additional powers of $m_\pi$ [@16]. This strongly diminishes the difference between the two results. Additional arguments in support of the use of the chiral limit at $\rho$ close to $\rho_0$ were given recently by Bulgac et al. [@46].
The higher order terms of the expansion, coming from the $NN$, $N\Delta$ and $\Delta\Delta$ intermediate states, compensate the terms presented in the rhs of Eq.(\[65\]) to a large extent. However, these contributions are much more model-dependent. The finite size of the nucleons should be taken into account to regularize the logarithmic divergence. Some of the convergent terms are saturated by pion momenta of the order $k\sim(m(m_\Delta-m))^{1/2}\sim530\,$MeV, corresponding to distances of the order of 0.4 Fm, where the finite size of the nucleons should be included as well. Also, the results are sensitive to the density dependence of the effective nucleon mass $m^*$. This suggests that a more rigorous analysis with a proper treatment of multi-nucleon configurations and of short-distance correlations is needed. We shall return to the problem in Sec.4.
Anyway, the results of the calculation of the scalar condensate with account of the pion cloud produced by one- and two-pion exchanges look as follows. At very small values of the density, $\rho\la\rho_0/8$, i.e. $p^2_F\la m_\pi^2$, only the two-pion exchanges contribute and $$\label{66}
S(\rho)\ =\ 0.8\rho\cdot\frac\rho{\rho_0}\ .$$ Hence, $S$ is positive for very small densities. However, for $\rho\ga\rho_0$ we found $S<0$. The numerical results are presented in Fig.3. One can see that the interaction effects slow down the tendency towards restoration of the chiral symmetry, which in any case requires $\kappa(\rho)\to0$. There is also a negative [@47] contribution to $\kappa(\rho)$ from the vector meson field. The sign of this term can be understood in the following way. It was noticed by Cohen et al. [@48] that the Gasser theorem [@22], expressed by Eq.(\[28\]), can be generalized to the case of finite densities as $$\label{67}
S(\rho)\ =\ \frac{d\varepsilon(\rho)}{d\hat m}\ ,$$ where $\varepsilon(\rho)$ is the binding energy. The contribution of the vector mesons to the rhs of Eq.(\[67\]) is $\frac{dV}{dm_V}\frac{dm_V}{d\hat m}$ with $m_V$ standing for the vector meson mass. Since the energy caused by the vector meson exchange, $V>0$, drops with growing $m_V$, the contribution is indeed negative.
Another approach to the calculation of the scalar condensate, based on the soft pion technique, was developed by the Lyon group. Chanfray and Ericson [@49] expressed the contribution of the pion cloud to $\kappa(\rho)$ through the pion number excess in nuclei [@50]. The calculation of Chanfray et al. [@51] was based on the assumption that the GMOR relation holds in medium, $$\label{68}
f^{*2}_\pi m^{*2}_\pi\ =\ -\hat m\ \langle M|\bar qq|M\rangle\ .$$ This is indeed true as long as the pion remains much lighter than the other bosonic states of unnatural parity. Under several assumptions on the properties of the amplitude of $\pi N$ scattering in medium, the authors found $$\label{69}
\frac{\kappa(\rho)}{\kappa(0)}\ =\ \frac1{1+\ \rho\sigma/f^2_\pi
m^2_\pi}$$ and $\kappa(\rho)$ turns to zero only at asymptotically large $\rho$. This formula was also obtained by Ericson [@52] by attributing the deviations from the linear law to a distortion factor emerging because of the coherent rescattering of pions by the nucleons.
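Note that at low densities Eq.(\[69\]) is consistent with the gas approximation: expanding to first order in $\rho$, $$\frac{\kappa(\rho)}{\kappa(0)}\ =\ 1-\frac{\sigma}{f^2_\pi m^2_\pi}\,\rho+O(\rho^2)\ ,$$ which coincides with Eq.(\[27\]); the differences show up only in the higher order terms of the $\rho$ expansion.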
However, Birse and McGovern [@53] and Birse [@54] argued that Eq.(\[69\]) is not an exact relation and results from a simplified model which accounts only for the nucleon-nucleon interaction mediated by one pion. In the framework of the linear sigma model, which accounts for the $\pi\pi$ interaction and the $\sigma$-meson exchange, the higher order terms of the $\rho$ expansion differ from those provided by Eq.(\[69\]). A further development of the calculation of the scalar condensate in the linear sigma model was made by Dmitrašinović [@55].
In several works the function $\kappa(\rho)$ was obtained in the framework of the NJL model. In the papers of Bernard et al. [@56; @57] the function $\kappa(\rho)$ was calculated for pure quark matter. The approach was improved by Jaminon et al. [@58], who combined the Dirac sea of quark–antiquark pairs with the Fermi sea of nucleons. In all these papers there is a region of small values of $\rho$ where the interaction inside the matter is negligibly small and thus $\kappa(\rho)$ changes linearly. However, the slope is smaller than the one predicted by Eq.(\[26\]). In a modified treatment, Cohen et al. [@48] fixed the parameters of the NJL model to reproduce the linear term. All the NJL approaches provide $S>0$.
Recently Lutz et al. [@LFA] suggested another hadronic model, based on a chiral effective Lagrangian. The authors calculated the nonlinear contribution to the scalar condensate provided by one-pion exchange. The value of $S(\rho_0)$ appeared to be close to that obtained in the one-pion approximation of [@40]. Hence, all the considered hadronic models provide $S<0$ except for very small values of $\rho$.
All the described results share a common feature: near the saturation point the nonlinear term $S(\rho)$ is much smaller than the linear contribution. Thus, Eq.(\[26\]) can be used for obtaining the numerical values of $\kappa(\rho)$ at $\rho$ close to $\rho_0$. Hence, the condensate $|\kappa(\rho_0)|$ drops by about 30% with respect to $|\kappa(0)|$.
Hadron parameters in nuclear matter
===================================
Nuclear many-body theory
------------------------
Until the mid-1970s the analysis of nuclear matter was based on the nonrelativistic approach. The Schrödinger phenomenology for the nucleon in nuclear matter employed the Hamiltonian $$\label{70}
H_{NR}\ =\ -\ \frac{\Delta}{2m^*_{NR}}+U(\rho)$$ and the problem was to find the realistic potential energy $U(\rho)$. The deviation of the nonrelativistic effective mass $m^*_{NR}$ from the vacuum value $m$ can be viewed as a dependence of the potential energy on the three-dimensional momentum, i.e. as “velocity dependent forces” [@59]. The results of the nonrelativistic approach were reviewed by Bethe [@60] and by Day [@61].
Since the pioneering paper of Walecka [@62] the nucleon in nuclear matter has been treated as a relativistic particle moving in a superposition of vector and scalar fields $V_\mu(\rho)$ and $\Phi(\rho)$. In the rest frame of the matter $V_\mu=\delta_{\mu0}V_0$, and the Hamiltonian of a nucleon with three-dimensional momentum $\bar p$ is $$\label{71}
H\ =\ (\bar\alpha\bar p)+\beta(m+\Phi(\rho))+V_0(\rho)\cdot I$$ with $\bar\alpha=\left({0\ \bar\sigma \atop \bar\sigma\ 0}\right)$ and $\beta=\left({1\quad 0 \atop 0\ -1}\right)$ being standard Dirac matrices.
Since the scalar “$\sigma$-meson” is rather an effective way to describe the system of two correlated pions, its mass, as well as the coupling constants of the interaction between these mesons and the nucleons, are free parameters. They can be adjusted either to fit nuclear data or to reproduce the data on nucleon–nucleon scattering in vacuum. Numerous references can be found, e.g., in [@63]. In both cases the values of $V_0$ and $\Phi$ appear to be of the order of 300–400 MeV at the density saturation point.
The large values of the fields $V_0$ and $\Phi$ require the relativistic kinematics to be applied for the description of the motion of the nucleons.
In the nonrelativistic limit the Hamiltonian (\[71\]) takes the form of Eq.(\[70\]) with $m^*_{NR}$ replaced by the Dirac effective mass $m^*$, defined as $$\label{72}
m^*\ =\ m+\Phi$$ and $$\label{73}
U\ =\ V_0+\Phi\ .$$ At the saturation point the fields $V_0$ and $\Phi$ compensate each other to a large extent, providing $U\approx-60\,$MeV. This explains the relative success of the Schrödinger phenomenology. However, as shown by Brockmann and Weise [@64], the quantitative description of the large magnitude of the spin-orbit forces in finite nuclei requires rather large values of both $\Phi$ and $V_0$.
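As a purely illustrative set of numbers (not a fit), fields of the magnitude quoted above, say $\Phi\approx-350\,$MeV and $V_0\approx+290\,$MeV, give $$U\ =\ V_0+\Phi\ \approx\ -60\ {\rm MeV}\ , \qquad m^*\ =\ m+\Phi\ \approx\ 590\ {\rm MeV}\ \approx\ 0.63\,m\ ,$$ showing how a shallow Schrödinger potential can coexist with a strongly reduced Dirac effective mass.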
In the meson exchange picture the scalar and vector fields originate from the meson exchange between the nucleons of the matter. The model is known as quantum hadrodynamics — QHD. In the simplest version (QHD-1) only scalar $\sigma$-mesons and vector $\omega$-mesons are involved. In a somewhat more complicated version, known as QHD-2 [@65], some other mesons, e.g. the pions, are included. The matching of the QHD-2 Lagrangian with the low energy effective Lagrangian was done by Furnstahl and Serot [@66].
The vector and scalar fields, generated by nucleons, depend on density in different ways. For the vector field $$\label{74}
V(\rho)\ =\ 4\int \frac{d^3p}{(2\pi)^3}\ N_V(p)g_V\theta(p_F-p)$$ with $g_V$ the coupling constant and $N_V=(\bar u_N\gamma_0u_N)/2E$, where $u_N(p)$ stands for the nucleon bispinors, while the single-particle energy of the nucleon is $$\label{75}
\varepsilon(p,\rho)\ =\ V_0(\rho)+\left(p^2+m^{*2}(\rho)\right)^{1/2}\ .$$ One finds immediately that $N_V=1$ (since $\bar u_N\gamma_0u_N=2E$), and thus $V(\rho)$ is exactly proportional to the density $\rho$. On the other hand, in the expression for the scalar field $$\label{76}
\Phi(\rho)\ =\ 4\int\frac{d^3p}{(2\pi)^3}\ N_s(p)\cdot g_s\theta(p_F-p)$$ the factor $N_s=\bar u_Nu_N/2E=m^*/E$. Thus, the scalar field is a complicated function of the density $\rho$.
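To make the last statement explicit, carrying out the integration in Eq.(\[76\]) as written (with $N_s=m^*/E$ and $E=(p^2+m^{*2})^{1/2}$) gives $$\Phi(\rho)\ =\ \frac{g_s m^*}{\pi^2}\left[p_F E_F-m^{*2}\ln\frac{p_F+E_F}{m^*}\right], \qquad E_F=\left(p^2_F+m^{*2}\right)^{1/2}\ .$$ Since $m^*=m+\Phi$ itself appears on the right-hand side, this is an implicit (self-consistency) equation, and $\Phi$ is a nonlinear function of the density, in contrast to $V(\rho)\propto\rho$.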
The saturation value of the density $\rho_0$ can be found by minimization of the energy functional $$\label{77}
{\cal E}(\rho)\ =\ \frac1\rho\int\limits^\rho_0
\varepsilon_F(\rho')d\rho'$$ with $\varepsilon_F(\rho)=\varepsilon(p_F,\rho)$ being the single-particle energy at the Fermi surface. Thus, in QHD the saturation is caused by the nonlinear dependence of the scalar field $\Phi$ on the density.
The understanding of the behaviour of the axial coupling constant in nuclear matter, $g_A(\rho)$, requires an explicit introduction of pionic degrees of freedom. The quenching of $g_A$ at finite densities was predicted by Ericson [@67] from the analysis of the dispersion relations for $\pi N$ scattering. The result was confirmed by the analysis of experimental data on the Gamow–Teller $\beta$-decay of a number of nuclei carried out by Wilkinson [@68] and by the investigation of beta decay of heavier nuclei — see, e.g., [@69]: $$\label{78}
g_A(0)\ =\ 1.25\ ; \qquad g_A(\rho_0)\ =\ 1.0\ .$$ The quenching of $g_A$ as a result of the polarization of the medium by the pions was considered by Ericson et al. [@70]. The crucial role of isobar-hole excitations in this phenomenon was described by Rho [@71].
Turning to the characteristics of the pions, one can introduce an effective pion mass $m^*_\pi$ by considering the dispersion equation for the pion in nuclear matter (see, e.g., the book of Ericson and Weise [@72]): $$\label{79}
\omega^2-k^2-\Pi_p(\omega,k)-m^{*2}_\pi\ =\ 0\ .$$ Here $\omega$ and $k$ are the pion energy and three-dimensional momentum, and $\Pi_p$ is the $p$-wave part of the pion polarization operator. Hence, $\Pi_p$ contains the factor $k^2$. The pion effective mass is $$\label{80}
m^{*2}_\pi\ =\ m^2_\pi+\Pi_s(\omega,k)$$ with $\Pi_s$ being the $s$-wave part of the polarization operator.
The polarization operator $\Pi_s$ (as well as $\Pi_p$) is strongly influenced by the nucleon interactions at distances which are much smaller than the average inter-nucleon distance $\approx m^{-1}_\pi$. Strictly speaking, here one should consider the nucleon as a composite particle. However, there is a possibility to account for such correlations within the hadron picture of strong interactions by using the Finite Fermi System Theory (FFST) introduced by Migdal [@73]. In the framework of FFST the amplitudes of short-range baryon (nucleon and isobar) interactions are replaced by certain constant parameters. Hence, the behaviour of $m^*_\pi$ can be described in terms of the QHD and FFST approaches.
Like any model based on the concept of NN interaction, QHD faces difficulties at small distances. The weak points of the approach were reviewed by Negele [@74] and by Sliv et al. [@75]. Account of the composite structure of the nucleon changes some qualitative results. For instance, based on a straightforward treatment of the Dirac Hamiltonian, Brown et al. [@76] found a significant term in the equation of state, arising from virtual $N\bar N$ pairs generated by the vector fields. The term would have been important for saturation. However, Jaroszewicz and Brodsky [@77] and also Cohen [@78] found that the composite nature of the nucleon suppresses such contributions.
Anyway, to obtain a complete description, we need a complementary approach accounting for the composite structure of hadrons. For the pions it is reasonable to try the NJL model.
Calculations in Nambu–Jona–Lasinio model
----------------------------------------
In the NJL model the pion is the Goldstone meson corresponding to the breaking of chiral symmetry. The pion can be viewed as the solution of the Bethe–Salpeter equation in the pseudoscalar quark–antiquark channel. The pion properties at finite density were investigated in the frameworks of the SU(2) and SU(3) flavour NJL models [@56; @57]. It was found that the pion mass $m^*_\pi(\rho)$ is practically constant at $\rho\la\rho_0$, increasing rapidly at larger densities, while $f^*_\pi(\rho)$ drops rapidly. These results were obtained rather for quark matter. Anyway, as we mentioned in Subsec.2.7, at small $\rho$ the condensate $\kappa(\rho)$ obtained in this approach does not satisfy the limiting law presented by Eq.(\[26\]).
However, qualitatively similar results were obtained in another NJL analysis, carried out by Lutz et al. [@79]. The slope of the function $\kappa(\rho)$ satisfied Eq.(\[26\]). The pion mass $m^*_\pi(\rho)$ increased slowly with $\rho$, while $f^*_\pi(\rho)$ dropped rapidly. The in-medium GMOR relation, expressed by Eq.(\[68\]), was satisfied as well.
Jaminon and Ripka [@80] considered a modified version of the NJL model which includes the dilaton fields. This is a way to include the gluon degrees of freedom effectively. The results appeared to depend qualitatively on the way the dilaton fields are included in the Lagrangian. The pion mass can either increase or drop with growing density. Also, the value of the slope of $\kappa(\rho)$ differs strongly in different versions of the approach. In the version which is consistent with Eq.(\[26\]) the behaviour of $f^*_\pi(\rho)$ and $m^*_\pi(\rho)$ is similar to the one obtained in the other papers mentioned in this subsection.
Note, however, that the results which predict the fast drop of $f^*_\pi(\rho)$ have, at best, a limited region of validity. This is because the pion charge radius $r_\pi$ is connected to the pion decay constant by the relation obtained by Carlitz and Creamer [@81] $$\label{81}
\langle r^2_\pi\rangle^{1/2}\ =\ \frac{\sqrt3}{2\pi f_\pi}$$ providing $\langle r^2_\pi\rangle^{1/2}\approx0.6$ Fm. Identifying the size of the pion with its charge radius, we find that when $\langle r^2_\pi(\rho)\rangle^{1/2}$ becomes of the order of the confinement radius $r_c\sim1\,$Fm, the confinement forces should be included and a straightforward use of the NJL model is no longer possible. Thus, the NJL model is certainly not valid for the densities at which the ratio $f^*_\pi(\rho)/f_\pi$ becomes too small. Since, by Eq.(\[81\]), the in-medium radius scales as $1/f^*_\pi(\rho)$, keeping it below $r_c$ requires $$\label{82}
\frac{f^*_\pi(\rho)}{f_\pi}\ \ga\ 0.6\ .$$ For the results obtained in [@79] this means that they can be true only for $\rho\le1.3\rho_0$.
Quark–meson models
------------------
This class of models, reviewed by Thomas [@9], is the result of the development of the MIT bag model, which considers the nucleon as a system of three quarks in a potential well. One of the weak points of the bag-model approach is the absence of long-ranged forces in the NN interactions. In the chiral bag model (CBM) the long-ranged tail is provided by the pions, which are introduced into the model by the requirement of chiral invariance. In the framework of the CBM the pions are degrees of freedom as fundamental as the quarks. In the cloudy bag model these pions are considered as bound states of $\bar qq$ pairs. The model succeeded in describing the static properties of free nucleons.
Another model, suggested by Guichon [@82], is a more straightforward hybrid of QHD and QCD. The nucleon is considered as a three-quark system in a bag. The quarks are coupled to the $\sigma$- and $\omega$-mesons directly. Although this quark-meson coupling model (QMC) was proposed by its author as “a caricature of nuclear matter”, it was widely used afterwards. The parameters of the $\sigma$- and $\omega$-mesons and the bag radius, which are the free parameters of the model, were adjusted to describe the saturation parameters of the matter. The fields $\Phi$ and $V$ appear to be somewhat smaller than in QHD. Thus, the values of $m^*/m$ and $g^*_A/g_A$ are quenched less than in QHD [@83]. On the other hand, the unwanted $N\bar N$ pairs are suppressed. The nonlinearity of the scalar field is the source of saturation.
The common weak points of these models are well known. For instance, there is no consistent procedure to describe the overlap of the bags. It is also unclear how to perform their Lorentz transformations.
Skyrmion models
---------------
This is a class of models with a much better theoretical foundation. They originate from the old model suggested by Skyrme [@84]. The model included only the pions, and the nucleon was a soliton. Later Wess and Zumino [@85] added a specific term to the Lagrangian, which provided a current with a non-vanishing integral of the three-dimensional divergence. That was the way the baryon charge manifested itself.
Thus, in the framework of the approach most of the nucleon characteristics are determined by the Dirac sea of quarks and by the quark–antiquark pairs which are coupled into the pions. The model can be viewed as the limiting case $R\to0$ of the chiral bag model, where the description in terms of the mesons at $r>R$ is replaced by the description in terms of the quarks at $r<R$ [@86].
In the framework of the Skyrme model Adkins et al. [@87; @88] calculated the static characteristics of isolated nucleons. A little later Jackson et al. [@89] investigated the NN interaction in this model. The model did not reproduce the attraction in the NN potential. It was included in a modified Skyrme Lagrangian by Rakhimov et al. [@90] in order to calculate the renormalization of $g_A$, $m$ and $f_\pi$ in nuclear matter. The magnitude of the renormalization appeared to be somewhat smaller than in QHD.
The approach was improved by Diakonov and Petrov — see the review paper [@91] and references therein. The authors built the chiral quark–soliton model of the nucleon. It is based on a quark-pion Lagrangian with the Wess–Zumino term and with spontaneous chiral symmetry breaking. The nucleon appears to be a system of three quarks moving in a classical self-consistent pion field. The approach succeeded in describing the static characteristics of the nucleon. It provided proper results for the parton distributions as well. However, the application of the approach to the description of the values of the nucleon parameters in medium is still ahead.
Brown-Rho scaling
-----------------
Brown and Rho [@92] assumed that all the hadron characteristics which have the dimension of mass change in medium in the same manner. The universal scale was assumed to be $$\label{83}
\chi(\rho)\ =\ (-\kappa(\rho))^{1/3}\ .$$ Thus, the scaling which we refer to as BR1 is $$\label{84}
\frac{m^*(\rho)}m\ =\ \frac{f^*_\pi(\rho)}{f_\pi}\ =\
\frac{\chi(\rho)}{\chi(0)}\ .$$ The pion mass was assumed to be an exception, scaling as $$\label{85}
\frac{m^*_\pi(\rho)}{m_\pi}\ =\
\left(\frac{\chi(\rho)}{\chi(0)}\right)^{1/2}\ .$$ Thus, BR1 is consistent with the in-medium GMOR relation. Also, in contrast to NJL, the pion mass drops with density.
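Indeed, combining Eqs.(\[83\])–(\[85\]) one finds $$\frac{f^{*2}_\pi m^{*2}_\pi}{f^2_\pi m^2_\pi}\ =\ \left(\frac{\chi(\rho)}{\chi(0)}\right)^{2}\left(\frac{\chi(\rho)}{\chi(0)}\right)\ =\ \left(\frac{\chi(\rho)}{\chi(0)}\right)^{3}\ =\ \frac{\kappa(\rho)}{\kappa(0)}\ ,$$ which is exactly the in-medium GMOR relation, Eq.(\[68\]), divided by its vacuum counterpart.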
Another point of BR1 scaling is the behaviour $$\label{86}
g^*_A(\rho)\ =\ g_A(0)\ =\ \rm const\ .$$ The consistency of Eqs.(\[78\]) and (\[86\]) can be explained in the following way. The renormalization expressed by Eq.(\[78\]) is due to the $\Delta$-hole polarization of the medium. It takes place at moderate distances of the order of $m^{-1}_\pi$, reflecting the properties of the medium rather than the intrinsic properties of the nucleon, which are discussed here.
Another version of Brown–Rho scaling [@93], which we call BR2, is based on the in-medium GMOR relation expressed by Eq.(\[68\]). It is still assumed that $$\label{87}
\frac{m^*}m\ =\ \frac{f^*_\pi}{f_\pi}\ ,$$ but the pion mass is assumed to be constant $$\label{88}
m^*_\pi\ \approx\ m_\pi\ ,$$ and thus $$\label{89}
\frac{f^*_\pi}{f_\pi}\ =\
\left(\frac{\kappa(\rho)}{\kappa(0)}\right)^{1/2}$$ instead of the 1/3 law of the BR1 version — Eq.(\[84\]). Note, however, that assuming $m^*_\pi(\rho_0)=1.05m_\pi$ [@93] we find, using the results of subsection 2.6, $$\label{90}
\frac{f^*_\pi(\rho_0)}{f_\pi}\ =\ 0.76\ .$$ This is not far from the limit determined by Eq.(\[82\]). At larger densities the size of the pion becomes of the order of the confinement radius. There the pion no longer exists as a Goldstone boson. In any case, some new physics should be included at larger densities. If Eq.(\[89\]) is assumed to be true, this happens at $\rho\approx1.6\rho_0$.
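For orientation, the value in Eq.(\[90\]) follows from the in-medium GMOR relation, which gives $f^*_\pi/f_\pi=\left(\kappa(\rho)/\kappa(0)\right)^{1/2}(m_\pi/m^*_\pi)$; with $m^*_\pi(\rho_0)=1.05\,m_\pi$ and a condensate ratio $\kappa(\rho_0)/\kappa(0)\approx0.6{-}0.7$ (cf. the $\approx30\%$ reduction found above) this yields $$\frac{f^*_\pi(\rho_0)}{f_\pi}\ \approx\ \frac{(0.6{-}0.7)^{1/2}}{1.05}\ \approx\ 0.74{-}0.80\ ,$$ bracketing the quoted value.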
QCD sum rules
-------------
In this approach we hope to establish some general relations between the in-medium values of QCD condensates and the characteristics of nucleons.
The QCD sum rules were invented by Shifman et al. [@4] and applied to the description of mesonic properties in vacuum. Later Ioffe [@94] extended the method to the description of the characteristics of nucleons in vacuum. The main idea is to build the function $G(q^2)$ which describes the propagation of the system (“current”) with the quantum numbers of the proton. (The usual notation is $\Pi(q^2)$. We use another one to avoid confusion with the pion polarization operator, expressed by Eq.(\[79\]).) The dispersion relation $$\label{91}
G(q^2)\ =\ \frac1\pi\int \frac{\mbox{Im }G(k^2)}{k^2-q^2}\ dk^2$$ is considered at $q^2\to-\infty$. The imaginary part in the rhs is expressed through the parameters of observable hadrons. Due to the asymptotic freedom of QCD, the lhs of Eq.(\[91\]) can be presented as a power series in $q^{-2}$ with the QCD vacuum condensates as coefficients of the expansion. Convergence of the series means that the condensates of lower dimension are the most important ones.
The method was used for the calculation of the characteristics of the lowest lying hadron states. This is why the “pole+continuum” model was employed for the description of Im$\,G(k^2)$ in the rhs of Eq.(\[91\]). This means that the contribution of the lowest lying hadron was treated explicitly, while all the other excitations were approximated by a continuum. In order to emphasise the contribution of the pole, the inverse Laplace (Borel) transform was applied to both sides of Eq.(\[91\]) in the papers mentioned above. The Borel transform also removes the polynomially divergent terms.
Using QCD sum rules Ioffe [@94] found that the nucleon mass vanishes if the scalar condensate turns to zero. Numerically, [@36; @94; @95] $$\label{93}
m\ =\ \left(-2(2\pi)^2\ \langle0|\bar qq|0\rangle\right)^{1/3}\ .$$
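As a numerical illustration, with the commonly quoted vacuum value $\langle0|\bar qq|0\rangle\approx-(240\,\rm MeV)^3$ per light flavour (a number not fixed by the present discussion), Eq.(\[93\]) gives $$m\ \approx\ \left(2(2\pi)^2\cdot1.4\cdot10^{-2}\ {\rm GeV^3}\right)^{1/3}\ \approx\ 1.0\ \rm GeV\ ,$$ close to the physical nucleon mass.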
Later the method was applied by Drukarev and Levin [@18; @19; @96] to the investigation of the properties of nucleons in nuclear matter. The idea was to express the change of the nucleon characteristics through the in-medium change of the values of the QCD condensates. The generalization to the case of finite densities was not straightforward. Since Lorentz invariance is lost, the function $G^m(q)$ describing the propagation of the system in medium depends on two variables, e.g. $G^m=G(q^2,q_0)$. Thus, each term of the expansion of $G^m$ in powers of $q^{-2}$ may contain an infinite number of local condensates. In the rhs of the dispersion relation it is necessary to separate the singularities connected with the nucleon from those connected with the excitations of the matter itself.
We shall return to these points in Sec.5. Here we present the main results. The method provided the result for the shift of the position of the nucleon pole. The new value is expressed as a linear combination of several condensates, with the vector condensate $v(\rho)$ and the scalar condensate $\kappa(\rho)$ being the most important ones [@18; @19; @96]: $$\label{94}
m_m\ =\ m+C_1\kappa(\rho)+C_2v(\rho)\ .$$ On the other hand, $$\label{95}
m_m-m\ =\ U\left(1+O\!\left(\frac Um\right)\right)$$ with $U$ being the single-particle potential energy of the nucleon. Hence, the scalar forces are to a large extent determined by the $\sigma$-term.
The Dirac effective mass was found to be proportional to the scalar condensate $$\label{96}
m^*(\rho)\ =\ \kappa(\rho)F(\rho)$$ with $F(\rho)$ containing the dependence on the other condensates, e.g. on vector condensate $v(\rho)$.
Using Eqs.(\[14\]) and (\[15\]) we see that $v(\rho)$ is linear in $\rho$. Thus, the main nonlinear contributions to the energy ${\cal E}(\rho)$ presented by Eq.(\[77\]) come from nonlinearities in the function $\kappa(\rho)$. For the saturation properties of the matter the sign of the contribution $S(\rho)$ becomes important. The nonlinearities of the condensate $\kappa(\rho)$ can be responsible for the saturation if $S<0$. Calculations of Drukarev and Ryskin [@40] show that the saturation can be obtained at reasonable values of the density with a reasonable value of the binding energy. Of course, this result should not be taken too seriously, since it is very sensitive to the exact value of the $\sigma$-term. It can also be altered by the account of higher order terms. (However, as noted by Birse [@54], the QHD saturation picture is also very sensitive to the values of the parameters.) A similar saturation mechanism was obtained recently in the approach developed by Lutz et al. [@LFA]. Anyway, it can be a good starting point to analyse the problem.
First step to self-consistent treatment
=======================================
As we have seen in Sec.2, in the gas approximation the scalar condensate $\kappa(\rho)$ is expressed through the observables. However, beyond the gas approximation it depends on a set of other parameters. Here we show how such a dependence manifests itself in a more rigorous treatment of the hadronic representation of nuclear matter.
Account of multi-nucleon effects in the quark scalar condensate
---------------------------------------------------------------
Now we present the main equations which describe the contribution of the pion cloud to the condensate $\kappa(\rho)$. Recall that the pions are expected to give the leading contribution to the nonlinear part $S(\rho)$ due to the large expectation value $\langle\pi|\bar
qq|\pi\rangle$ — Eq.(\[62\]).
In order to calculate the contribution we employ the quasiparticle theory, developed by Migdal for the propagation of pions in matter [@Mi1]. Using Eq.(\[67\]), we present $S(\rho)$ through the derivative of the nucleon self-energy with respect to $m^2_\pi$: $$\begin{aligned}
&& S\ =\ \sum_B S_B\ ; \nonumber\\
\label{96a}
&& S_B=\ -C_B\Upsilon\int\frac{d^3p}{(2\pi)^3}\frac{d^3kd\omega}{
(2\pi)^4\cdot i}\left(\Gamma^2_BD^2(\omega,k)g_B(p-k)-\Gamma^{02}_B
D^2_0(\omega,k)g_B^0(p-k)\right).\end{aligned}$$ Here $B$ labels the intermediate baryon states with propagators $g_B$ and $\pi NB$ vertices $\Gamma_B$. The pion propagator $D$ includes the multi-nucleon effects ($D^{-1}$ is the lhs of Eq.(\[79\])). The second term in the rhs of Eq.(\[96a\]), with the index “0” corresponding to the vacuum values, subtracts the terms which are already included in the vacuum expectation value. The coefficient $C_B$ comes from the summation over the spin and isospin variables. The integration over the nucleon momentum $p$ is limited by the condition $p\leq p_F$. The factor $\Upsilon$ stands for the expectation value of the operator $\bar qq$ in the pion, i.e. $\Upsilon=\langle\pi|\bar qq|\pi\rangle = m^2_\pi/\hat m$. Of course, Eq.(\[96a\]), illustrated by Fig.4, corresponds to a Lagrangian which includes only the lowest order $\pi N$ interactions.
The pion propagator in medium can be viewed as the solution of the Dyson equation [@72; @73] — Fig.5: $$\label{97}
D\ =\ D_0+D_0\Pi D$$ with $$\label{98}
\Pi(\omega,k)\ =\ 4\pi\int\limits^{p_F} \frac{d^3p'}{(2\pi)^3}\
A(p';\omega,k)\ ,$$ where $A(p';\omega,k)$ stands for the amplitude of forward $\pi N$ scattering (all the summation over the spin and isospin variables is assumed to be carried out) on a nucleon of the matter with three-dimensional momentum $p'$. Of course, the pion is not on the mass shell.
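The formal solution of Eq.(\[97\]) is instructive: inverting it gives $D^{-1}=D^{-1}_0-\Pi$, i.e., with the free propagator $D^{-1}_0=\omega^2-k^2-m^2_\pi$ and the polarization operator split into its $s$- and $p$-wave parts, $\Pi=\Pi_s+\Pi_p$, $$D^{-1}(\omega,k)\ =\ \omega^2-k^2-\Pi_p(\omega,k)-m^{*2}_\pi\ ,$$ which is precisely the lhs of the dispersion equation (\[79\]), as stated after Eq.(\[96a\]).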
Neglecting the interactions inside the bubbles of Fig.5 (this is denoted by the upper index $"(0)"$) we can present $$\begin{aligned}
&& A^{(0)}\ =\ \sum A^{(0)}_B\ ; \quad \Pi^{(0)}\ =\ \sum\Pi^{(0)}_B
\nonumber\\
&& A^{(0)}_B\ =\ c_B\widetilde\Gamma^2_B (k)\Lambda_B(p';\omega,k)
\label{99}\end{aligned}$$ with $c_B$ being a numerical coefficient, $$\label{100}
\Lambda_B(p';\omega,k)\ =\ g_B(\varepsilon'+\omega,\bar p'+\bar k)+g_B
(\varepsilon'-\omega,\bar p'-\bar k)\ .$$ The factors $\widetilde\Gamma^2_B(k)$ come from the vertex functions. Considering only the $p$-wave part of the polarization operator (the $s$-wave part is expressed through the pion effective mass $m^*_\pi$ — Eq.(\[80\])), we present $$\label{101}
\widetilde\Gamma^2_B(k)\ =\ \tilde
g^2_{\pi NB}k^2d^2_{NB}(k)$$ with $d^2_{NB}(k)$ accounting for the finite size of the baryons and $\tilde g_{\pi NB}$ being the coupling constant.
Starting the analysis with the contribution of the nucleon intermediate state $(B=N)$ to Eqs.(\[99\]) and (\[100\]), we see that in the nonrelativistic limit we can present the first term in rhs of Eq.(\[100\]) as
$$\label{102}
g_N(\varepsilon'+\omega,\bar p'+\bar k)\ =\ \frac{\theta(|\bar p'+\bar k|-p_F)}{\omega
+\varepsilon_{p'}-\varepsilon_{p'+k}}$$
(a similar presentation can be written for the second term) with $\varepsilon_q=q^2/2m^*+U$, while $U$ stands for the potential energy. Hence, the terms containing $U$ cancel and all the dependence on the properties of the matter enters through the effective mass $m^*$. This enables us to obtain the contribution to the polarization operator $$\label{103}
\Pi^{(0)}_N\ =\ -4\tilde g^2_{\pi NN}k^2d^2_{NN}(k)\
\frac{m^*p_F}{2\pi^2}\ \phi^{(0)}_N(\omega,k)$$ with the explicit analytical expression for $\phi^{(0)}_N(\omega,k)$ presented in [@72; @100]; in the static long-wave limit $\phi^{(0)}_N(0,0)=1$.
Such an approach does not include the particle-hole interactions in the bubble diagram of Fig.5. The short-range correlations can be described with the help of the effective FFST constants, as was mentioned above. Using the Dyson equation for the short-range amplitude of nucleon-hole scattering one finds $$\label{104}
\Pi_N\ =\ -4\tilde g^2_{\pi NN}k^2d^2_{NN}(k)\ \frac{m^*p_F}{2\pi^2}\
\phi_N(\omega,k)$$ with $$\label{105}
\phi_N(\omega,k)\ =\ \frac{\phi^{(0)}_N(\omega,k)}{1+g'_{NN}
\phi^{(0)}_N(\omega,k)}\ ,$$ if only the nucleon intermediate states are included.
The long-ranged correlations inside the bubbles were analysed by Dickhoff et al. [@97]. It was shown that the exchange of renormalized pions inside the bubbles (“bubbles in bubbles”) can be accounted for by altering the values of the FFST constants. The change in the numerical values does not appear to be large.
The usual approach also includes the $\Delta$-isobar states in the sums in Eq.(\[96a\]). As long as the particle-hole correlations are not included, the total $p$-wave operator $\Pi^{(0)}$ is just the additive sum of the nucleon and isobar terms, i.e. $\Pi^{(0)}=\Pi^{(0)}_N+\Pi^{(0)}_\Delta$. Also, one can obtain an analytical expression, similar to Eq.(\[103\]), for the contribution $\Pi^{(0)}_\Delta$ under a reasonable assumption on the propagation of the $\Delta$-isobar in medium (see below). However, the account of the short-range correlations makes the expression for the total $p$-wave polarization operator more complicated. We use the explicit form presented by Dickhoff et al. [@98] $$\Pi\ =\ \Pi_N+\Pi_\Delta$$ with $$\begin{aligned}
\label{106}
&& \Pi_N\ =\ \Pi_N^{(0)}\left(1-(\gamma_\Delta-\gamma_{\Delta\Delta})
\frac{\Pi^{(0)}_\Delta}{k^2}\right)\bigg/ E \\
\label{107}
&&\Pi_\Delta\ =\ \Pi^{(0)}_\Delta\left(1+(\gamma_\Delta-\gamma_{NN})
\frac{\Pi^{(0)}_N}{k^2}\right)\bigg/E\ .\end{aligned}$$ Denominator $E$ has the form $$\label{108}
E\ =\ 1-\gamma_{NN}\frac{\Pi^{(0)}_N}{k^2}-\gamma_{\Delta\Delta}
\frac{\Pi^{(0)}_\Delta}{k^2}+\left(\gamma_{NN}\gamma_{\Delta\Delta} -
\gamma^2_\Delta\right)\frac{\Pi^{(0)}_N \Pi^{(0)}_\Delta}{k^4}\ .$$ The effective constants $\gamma$ are related to the FFST parameters $g'$ as follows: $$\label{109}
\gamma_{NN}=C_0\frac{g'_{NN}}{\tilde g^2_{\pi NN}}\ ; \quad
\gamma_\Delta=C_0\frac{g'_{N\Delta}}{\tilde g_{\pi NN}\tilde g_{\pi
N\Delta}}\ , \quad \gamma_{\Delta\Delta}=C_0\frac{g'_{\Delta\Delta}}{
\tilde g^2_{\pi N\Delta}}\ ,$$ where $C_0$ is the normalization factor for the effective particle–hole interaction in nuclear matter. We use $C_0=\pi^2/p_Fm^*$, following [@73]. (Note, that there is some discrepancy in the notations used by different authors. Our parameters $\gamma$ coincide with those, used in [@98]. We use the original FFST parameters $g'$ of [@73], which are related to the constants $G'_0$ of [@98] as $g'=G'/2$). The short-range interactions require also renormalization of the vertices $\widetilde\Gamma^2_{\pi NB}\to
\widetilde\Gamma^2_{\pi NB} x^2_{\pi NB}$ with $$\label{110}
x_{\pi NN}=\left(1+(\gamma_\Delta-\gamma_{\Delta\Delta})
\frac{\Pi^{(0)}_\Delta}{k^2}\right)\bigg/E\ ; \qquad
x_{\pi N\Delta}=\left(1
+(\gamma_\Delta-\gamma_{NN})\frac{\Pi^{(0)}_N}{k^2}\right)\bigg/E\ .$$
In our paper [@99] we calculated the contribution $S(\rho)$, presented by Eq.(\[96a\]), using nucleons and $\Delta$-isobars as intermediate states. The integration over $\omega$ requires an investigation of the solutions of the pion dispersion equation, Eq.(\[79\]).
Interpretation of the pion condensate
-------------------------------------
The pion dispersion equation [@72; @100] is $$\label{111}
\omega^2\ =\ m^{*2}_\pi+k^2\bigg(1+\chi(\omega,k)\bigg)$$ with the function $\chi$ introduced as $\Pi_p(\omega,k)=k^2\chi(\omega,k)$, so that Eq.(\[111\]) is equivalent to Eq.(\[79\]). The equation is known to have three branches of solutions $\omega_i(k)$ (classified by the behaviour of the functions $\omega_i(k)$ at $k\to0$). If the function $\chi(\omega,k)$ includes only nucleons as intermediate states and does not include correlations, we find $k^2\chi\to0$ at $k\to0$. This is the pion branch, for which $\omega_\pi(0)=m^*_\pi$. If the correlations are included, the denominator in the rhs of Eq.(\[105\]) may turn to zero at $k\to0$, providing the sound branch with $\omega_s(0)=0$. Inclusion of the $\Delta$-isobars causes a contribution to $\chi(\omega,k)$ proportional to $[m_\Delta-m-\omega]^{-1}$. Thus, there is a solution with $\omega_\Delta(0)=m_\Delta-m$, called the isobar branch.
The trajectories of the solutions of Eq.(\[79\]) on the physical sheet of the Riemann surface were studied by Migdal [@100]. Their behaviour on the unphysical sheets was investigated recently by Sadovnikova [@101] and by Sadovnikova and Ryskin [@102]. In these papers it was shown that, besides the branches mentioned above, there is one more branch, starting from the value $\omega_c(0)=m^*_\pi$ and moving on the unphysical sheet for larger $k>0$. The branch comes to the physical sheet at a certain value of $k$ if the density exceeds a certain critical value $\rho_C$. Here $\omega_c$ is either zero or purely imaginary and thus $\omega^2_c \leq 0$. (However, this is true only if the isobar width $\Gamma_{\Delta}=0$; for finite values of $\Gamma_\Delta$ we find $\omega_c$ to be complex with ${\rm Re}\,\omega^2_c\leq 0$.) This corresponds to the instability of the system first found by Migdal [@Mi1] and called “pion condensation”. On the physical sheet $\omega_c(k)$ coincides with the solutions obtained in [@Mi1], [@100]. However, contrary to [@Mi1], [@100], $\omega_c(k)$ is not a part of the zero-sound branch.
To follow the solution $\omega_c(k)$, let us present the function $\phi^{(0)}_N$, which enters Eq.(\[103\]), as $$\label{112}
\phi^{(0)}_N(\omega,k)\ =\ \varphi^{(0)}_N(\omega,k)
+\varphi^{(0)}_N(-\omega,k)$$ with the explicit expression for $0<k<2p_F$ $$\begin{aligned}
\varphi^{(0)}_N(\omega,k) &=& \frac1{p_Fk}\bigg(\frac{-\omega
m^*+kp_F}2+\frac{(kp_F)^2-(\omega m^*-k^2/2)^2}{2k^2}\ \times
\nonumber\\
\label{113}
&\times& \left.\ln\left(\frac{\omega m^*-kp_F-k^2/2}{\omega
m^*-kp_F+k^2/2} \right)-\omega m^*\ln\left(\frac{\omega m^*}{\omega
m^*-kp_F+k^2/2}\right)\right).\end{aligned}$$ At $k>2p_F$ the expression for $\varphi^{(0)}_N(\omega,k)$ takes another form (see [@100]), but we shall not need it here.
It was shown in [@101], [@102] that, if the density $\rho$ is large enough ($\rho\geq \rho_C$), there is a branch of solutions $\omega^2_c(k)\leq 0$ which is on the physical sheet for a certain interval $k_1<k<k_2$ of the values of $k$. At smaller values $k<k_1$ the branch goes to the unphysical sheet through the cut $$\label{114}
0\ \le\ \omega\ \le\ \frac k{m^*}\left(p_F-\frac k2\right) ,$$ generated by the third term in the rhs of Eq.(\[113\]). At larger values $k>k_2$ the solution $\omega_c$ again leaves for the unphysical sheet through the same cut. The zero-sound wave goes to the unphysical sheet through another cut: $$\label{115}
\frac k{m^*}\left(p_F-\frac k2\right)\le\ \omega\ \le\ \frac
k{m^*}\left(p_F+\frac k2\right) ,$$ caused by the second term in the rhs of Eq.(\[113\]).
The value of the density $\rho_C$, for which the solution $\omega_c$ penetrates to the physical sheet, depends strongly on the model assumptions. For example, if the contribution of isobar intermediate states is ignored, the value of $\rho_C$ is shifted to unrealistically large values $\rho_C>25\rho_0$. Inclusion of both nucleon and isobar states and the use of realistic values of the FFST constants leads to $\rho_C\approx1.4\rho_0$ under the additional assumption $m^*_\pi(\rho)=m_\pi(0)$.
The vanishing of $\omega_c(k)$ at certain nonzero values of $k$ signals an instability of the ground state. New components, such as baryon-hole excitations with the pion quantum numbers, emerge in the ground state of nuclear matter. Thus, the appearance of the singularity corresponding to $\omega^2_c=0$ shows that a phase transition takes place.

Note, however, that the imaginary part of the solution $\omega_c(k)$ is negative. Thus, there is no “accumulation of pions” in symmetric nuclear matter, contrary to the naive interpretation of pion condensation.

The situation is much more complicated in the case of asymmetric nuclear matter. In neutron matter the instability of the system emerges at finite values of $\omega$, because of the conversion $n\to p+\pi^-$ [@72]. This process leads to a real accumulation of pions in the ground state. In charged matter with a non-zero difference between the neutron and proton densities there is an interplay between the reactions $n\rightleftharpoons p+\pi^-$ and the beta decays of nucleons.
Quark scalar condensate in the presence of the pion condensate
--------------------------------------------------------------
Now we turn back to the calculation of the condensate $\kappa(\rho)$. Note first that, if the isobar width is neglected, we find $\kappa(\rho)\to+\infty$ at $\rho\to\rho_C$. The reason is trivial. When $\rho\to\rho_C$, the contribution $S(\rho)$, described by Eq.(\[96a\]), becomes $$\label{116}
S\ \sim\ \int\frac{d\omega d^3k}{[\omega^2-\omega^2_c(\rho,k)]^2}\ .$$ The curve $\omega_c(\rho_C,k)$ turns to zero at a certain $k=k_c$, behaving as $\omega_c=a(k-k_c)^2$ at $|k-k_c|\ll k_c$. Thus, $$\label{117}
S\ \sim\ \int\frac{d\omega k^2_cd|k-k_c|}{[\omega^2-a^2(k-k_c)^2]^2}\
\to\ \infty\ .$$
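The divergence can be made explicit in one line. Assuming for this estimate that $\omega^2_c\approx-a^2(k-k_c)^2$ near the critical momentum (the precise power of $(k-k_c)$ does not change the conclusion), the $\omega$ integration can be performed first: $$\int\frac{d\omega}{\left[\omega^2+a^2(k-k_c)^2\right]^2}\ =\ \frac{\pi}{2a^3|k-k_c|^3}\ ,\qquad
S\ \sim\ k^2_c\int\frac{d|k-k_c|}{|k-k_c|^3}\ \to\ \infty\ .$$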
Hence, $S(\rho_C)=+\infty$ and $\kappa(\rho_C)=+\infty$. Since $\kappa(0)<0$, we find that at a certain $\rho_{ch}<\rho_C$ the scalar condensate $\kappa(\rho)$ turns to zero. This means that the chiral phase transition takes place before the pion condensation. (We shall not discuss more complicated models, for which the condition $\kappa(\rho_{ch})=0$ is not sufficient for the chiral symmetry restoration.) At larger densities the pion does not exist any more as a collective Goldstone degree of freedom. Also the baryon mass vanishes (if the very small current quark masses are neglected), and we have to stop our calculations, which are based on the selected set of Feynman diagrams (Fig.4) with the exact pion propagator.
The in-medium width of the delta isobar $\Gamma_\Delta$ (the probability of the decay) depends strongly on the kinematics of the process. We can put $\Gamma_\Delta(\omega,k)=0$ due to the limitation on the phase space of the possible decay process [@99]. Thus, following the paper of Sadovnikova and Ryskin [@102], we find that the chiral symmetry restoration takes place at densities which are smaller than those corresponding to the pion condensation: $$\label{118}
\rho_{ch}\ <\ \rho_C\ .$$ This means that the pion condensation point cannot be reached in the framework of models which do not describe the physics after the restoration of the chiral symmetry.
Calculation of the scalar condensate
------------------------------------
### Parameters of the model
Now we must specify the functional dependence and the values of the parameters which are involved in the calculations. The $\pi NN$ coupling constant is $$\label{119}
\tilde g_{\pi NN}\ =\ \frac{g_{\pi NN}}{2m}\ =\ \frac{g_A}{2f_\pi}\ ,$$ — see Eq.(\[3.1\]). The $\pi N\Delta$ coupling constant is $$\label{120}
\tilde g_{\pi N\Delta}\ =\ c_\Delta\tilde g_{\pi NN}$$ with experiments providing $c_\Delta\approx2$ [@72]. This is supported by the value $c_\Delta\approx1.7$, calculated in the framework of the Additive Quark Model (AQM).
The form factor $d_{NB}(k)$ which enters Eq.(\[101\]) is taken in a simple pole form [@72] $$\label{121}
d_{NB}\ =\ \frac{1-m^2_\pi/\Lambda^2_B}{1+k^2/\Lambda^2_B}$$ with $\Lambda_N=0.67\,$GeV, $\Lambda_\Delta=1.0$ GeV.
We use mostly the values of the FFST parameters presented in [@104]: $g'_{NN}=1.0$, $g'_{N\Delta}=0.2$, $g'_{\Delta\Delta}=0.8$ — referring to these values as set “a”. We shall also check the sensitivity of the results to the variation of these parameters.
It is known from the QHD approach that the nucleon effective mass may drop with density very rapidly. Thus, we must adjust our equations to describe the case when relativistic kinematics should be employed. We still include only the positive-energy part of the nucleon propagator, presented by Eq.(\[102\]). However, we use the relativistic expression for $$\label{122}
\varepsilon_p-\varepsilon_{p+k}\ =\ \sqrt{p^2+m^{*2}}-\sqrt{(p+k)^2+m^{*2}}\ .$$ The propagator of the $\Delta$-isobar is modified in the same way. The explicit equations for the functions $\Phi^{(0)}_{N,\Delta}$, accounting for the relativistic kinematics, are presented in [@99].
### Fixing the dependence $m^*(\rho)$
As we have seen above, the contribution of nucleon-hole excitations to $S(\rho)$ depends explicitly on the nucleon effective mass $m^*(\rho)$. Here we shall try models used in nuclear physics which determine the dependence $m^*(\rho)$ directly. One of them is the Fermi liquid model with the effective mass described by the Landau formula [@105],[@73],[@BBN] $$\label{123}
\frac{m^*(\rho)}m\ =\ 1\bigg/\left(1+\frac{2mp_F}{\pi^2}\ f_1\right).$$ In QHD approach the effective mass $m^*$ is the solution of the equation [@65] $$\label{124}
m^*\ =\ m-cm^*\left[p_F(p^2_F+m^{*2})^{1/2}-m^{*2}\ln
\frac{p_F+(p^2_F+m^{*2})^{1/2}}{m^*}\right] ,$$ corresponding to the behaviour $$\label{125}
m^*\ =\ m(1-f_2\rho)$$ in the lowest order of expansion in powers of Fermi momentum $p_F$. The coefficients $f_{1,2}$ in Eqs.(\[123\]), (\[125\]) can be determined by fixing the value $m^*(\rho_0)$.
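As an illustration, Eq.(\[124\]) can be solved numerically by a simple fixed-point iteration. The sketch below is not part of the analysis above: the coupling $c$ is a hypothetical value tuned by hand so that $m^*(\rho_0)\approx0.8\,m$, and all quantities are in GeV units.

```python
import numpy as np

m = 0.939                      # vacuum nucleon mass, GeV
hbarc = 0.1973                 # GeV*fm

def p_fermi(rho_fm3):
    """Fermi momentum (GeV) of symmetric matter for a density given in fm^-3."""
    return hbarc*(3*np.pi**2*rho_fm3/2)**(1.0/3.0)

def m_star_qhd(p_F, c=15.0, n_iter=100):
    """Fixed-point iteration of Eq.(124); c (GeV^-2) is an assumed coupling."""
    ms = m
    for _ in range(n_iter):
        e_F = np.sqrt(p_F**2 + ms**2)
        ms = m - c*ms*(p_F*e_F - ms**2*np.log((p_F + e_F)/ms))
    return ms

print(m_star_qhd(p_fermi(0.17))/m)   # ~0.8 with this choice of c
```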
Assuming that all the other parameters are not altered in the medium: $f^*_\pi=f_\pi$, $m^*_\Delta-m^*=m_\Delta-m$, $m^*_\pi=m_\pi$, $c^*_\Delta=c_\Delta$, $g^*_A=g_A(\rho_0)\approx1.0$, we find the point of chiral symmetry restoration to depend strongly on the value $m^*(\rho_0)$, while being stable enough under the variation of the FFST parameters and of the parameter $c_\Delta$ — Fig.6. Fixing $m^*(\rho_0)=0.8m$, we find $\rho_{ch}<\rho_0$, in contradiction with experimental data. Even in a simplified model with the width of the $\Delta$-isobar taken to coincide with its vacuum value $\Gamma_\Delta=115\,$MeV, we find $\rho_{ch}\approx1.15\rho_0$. The value $|\kappa(\rho_0)|\ll|\kappa(0)|$ looks unrealistic, since there are practically no strong unambiguous signals of a partial restoration of the chiral symmetry at the saturation value of the density $\rho_0$ [@54]. Hence, here we also come to a contradiction with the experimental data.
The situation is less critical for smaller values of $m^*(\rho_0)/m$. For example, for $\Gamma_\Delta=115\,$MeV we find $\rho_{ch}=1.7\rho_0$, assuming $m^*(\rho_0)/m=0.7$. However, under the realistic assumption $\Gamma_\Delta=0$ we come to $\rho_{ch}<\rho_0$.
Note that there is another reason for the point of the pion condensation to be inaccessible in our approach. The perturbative treatment of the $\pi N$ interaction becomes invalid for large pion fields. In the chiral $\pi N$ Lagrangians the $\pi N$ interaction is described by the terms of the type $$\label{126}
L_{\pi N}\ =\ \bar \psi U^+(i\gamma_\mu\partial^\mu)U\psi$$ with $$U\ =\ \exp \frac i{2f_\pi}\ \gamma_5(\tau\varphi)\ .$$
The conventional version of the pseudovector $\pi NN$ Lagrangian employed above may be treated either as the lowest term of the expansion of the matrix $U$ in powers of the ratio $\varphi/f_\pi$ (identifying the pion with the $\varphi$-field) or as the interaction with the field $\tilde\varphi=f_\pi\sin((\tau\varphi)/f_\pi)$. In any case, the whole approach is valid only when the pion field is not too strong ($\varphi\le f_\pi$ or $\tilde\varphi\le f_\pi$, correspondingly). However, a strict quantitative criterion for the region of validity of Eq.(\[96a\]) is still obscure.
The strong dependence of the results on the value of $m^*(\rho_0)/m$ forces us to turn to a self-consistent treatment of the hadron parameters and the quark condensates.
### Self-consistent treatment of nucleon mass and the condensate
Now we shall carry out the calculations in the framework of a model where the nucleon parameters depend on the values of the condensates. In other words, instead of attempting to calculate the condensate $\kappa(\rho,y_i(\rho))$ with $y_i$ standing for the hadron parameters $(y_i=m^*_N$, $m^*_\Delta$, $f^*_\pi,\ldots)$, we shall try to solve the equation $$\label{129}
\kappa(\rho)\ =\ {\cal K}\bigg(\rho,y_i(\kappa(\rho), c_j(\rho))\bigg)$$ with $c_j(\rho)$ standing for the other QCD condensates. Here $\cal K$ is the rhs of Eq.(62). Strictly speaking, we should try to obtain similar equations for the condensates $c_j(\rho)$.
We shall assume the physics of nuclear matter to be determined by the condensates of lowest dimension. In other words, we expect that only the condensates containing the minimal powers of the quark and gluon fields are important. The condensates of the lowest dimension are the vector and scalar condensates, determined by Eqs. (\[14\]) and (\[60\]), and also the gluon condensate — Eq.(\[33\]). As we saw in Subsect.2.7, the relative change of the gluon condensate in matter is much smaller than that of the quark scalar condensate. Thus we assume it to play a minor role. Hence, the in-medium values of $\kappa(\rho)$ and $v(\rho)$ will be most important for us, and we must solve the set of equations $$\begin{aligned}
\kappa(\rho) &=& {\cal K}(\rho,y_i) \nonumber\\
y_i &=& y_i(v(\rho),\kappa(\rho))\ . \label{130}\end{aligned}$$ Fortunately, the vector condensate $v(\rho)$ is expressed by simple formulas (\[14\]) and (\[15\]) due to the baryon current conservation.
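Schematically, Eq.(\[130\]) is a fixed-point problem for $\kappa(\rho)$, and it can be solved by plain iteration. The sketch below only shows the loop structure; `kappa_rhs` and `hadron_params` are hypothetical placeholders for the rhs $\cal K$ of Eq.(62) and for the chosen parametrization $y_i(\kappa)$, respectively.

```python
def solve_selfconsistent(rho, kappa_rhs, hadron_params, kappa0,
                         n_iter=50, mix=0.5):
    """Iterate kappa -> y_i(kappa) -> K(rho, y_i), as in Eq.(130).

    kappa_rhs(rho, y)    -- placeholder for the rhs of Eq.(62)
    hadron_params(kappa) -- returns y_i = (m*, m*_Delta, f*_pi, ...)
    kappa0               -- vacuum condensate, used as the starting point
    mix                  -- linear mixing, included to stabilize the iteration
    """
    kappa = kappa0
    for _ in range(n_iter):
        y = hadron_params(kappa)
        kappa = (1.0 - mix)*kappa + mix*kappa_rhs(rho, y)
    return kappa
```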
The idea of self-consistent treatment is not a new one. Indeed, Eqs. (2) and (3) provide an example of Eq.(\[130\]) for NJL model in vacuum, with the only parameter $y_i=m$.
As to the parameters $y_i$, which are $m^*,m^*_\Delta,f^*_\pi$, etc., there are several relations which are, to a large extent, model-independent. Besides the in-medium GMOR relation — Eq.(\[68\]), we can present the in-medium GT relation $$\label{131}
\tilde g^*_{\pi NN}\ =\ \frac{g^*_A}{2f^*_\pi}\ .$$ Recalling that the GT relation means that the neutron beta decay can be viewed as the strong decay of the neutron into a $\pi^-p$ system followed by the decay of the pion, we see that Eq.(\[131\]) is true under the same assumption as Eq.(\[68\]). Namely, the pion should be much lighter than any other state with unnatural parity and zero baryon charge. Also, the expectation value of the quark scalar operator averaged over the pion is $$\label{132}
\Upsilon^*\ =\ \langle\pi^*|\bar qq|\pi^*\rangle\ =\
\frac{m^{*2}_\pi}{\hat m}\ .$$
The other relations depend on additional model assumptions. Starting with the ratio $m^*(\rho)/m$, we find in the straightforward generalization of the NJL model $$\label{133}
\frac{m^*(\rho)}{m}\ =\ \frac{\kappa(\rho)}{\kappa(0)}\ .$$ This relation is referred to in the paper [@93] as Nambu scaling. The QCD sum rules prompt a more complicated dependence, presented by Eq.(\[96\]) with the function $$\label{134}
F(\rho)\ =\ \frac1{1+av(\rho)/\rho_0}$$ where $a\approx-0.2$ [@19; @40; @96] — see also Sec.5. Another assumption, expressed by the Brown–Rho scaling equation (\[83\]), predicts a slower decrease of $m^*(\rho)$. Note that Eq.(\[83\]) is based on the existence of a single length scale, while there are at least two: $p_F^{-1}$ and $\Lambda^{-1}_{QCD}$.
The experimental situation with the $\Delta$-isobar mass in nuclear matter is not quite clear at the moment. The result on the total photon–nucleus cross section indicates that the mass $m^*_\Delta$ does not decrease in the medium [@106], while the nucleon mass $m^*(\rho)$ diminishes with $\rho$. On the other hand, the experimental data for total pion–nucleus cross sections are consistent with the mass $m^*_\Delta$ decreasing in the matter [@107]. As to calculations, the description within the Skyrmion model [@90] predicts that $m^*_\Delta$ decreases in nuclear matter and $m^*_\Delta-m^*<m_\Delta-m$. Assuming the Additive Quark Model prediction for the scalar field–baryon couplings, $g_{sNN}=g_{s\Delta\Delta}$, we come to the equation $m^*_\Delta-m^*=m_\Delta-m$. The Brown–Rho scaling leads to a still smaller shift, $m^*_\Delta-m^*=[m^*(m_\Delta-m)]/m$.
Now we present the results of the self-consistent calculations of the condensate under various assumptions on the dependence $y_i(\kappa(\rho))$ — Eq.(\[130\]).
In Fig.7 we show the results with BR1 scaling of the nucleon mass — Eq.(\[84\]) — for different sets of FFST parameters. Following [@104], we try the values which were obtained at saturation density $\rho=\rho_0$ — set “a” defined in 4.4.1. We assume that they do not change with density. We also use another set of parameters, $\gamma_N=\gamma_{N\Delta}=\gamma_{\Delta\Delta}=0.7$ (set “b”), presented in [@72]. The dependence on the behaviour of $m^*_\Delta(\rho)$ appears to be more pronounced for set “a” of the FFST parameters. The calculations were carried out under the assumption $f^*_\pi=f_\pi$. Thus, the pion mass drops somewhat faster than in BR1 scaling with decreasing $f^*_\pi$, in order to preserve the in-medium GMOR relation — Eq.(\[68\]). The nucleon effective mass at saturation density appears to be quenched somewhat less than in QHD models, being closer to the value preferred by FFST approaches [@73; @104].
Assuming that the pion mass does not change in the medium, we find a strong dependence on the values of the FFST parameters. For choice “a” the self-consistent solution disappears before the density reaches the saturation value — see the dotted curve “2” in Fig.7. We explained in our paper [@99] how this happens technically.
The results obtained with Nambu scaling assumed for the nucleon mass — Eq.(\[133\]) — are shown in Fig.8. We put $m^*_\pi=m_\pi$ and thus $f^*_\pi\sim|\kappa(\rho)|^{1/2}$, following GMOR. The three curves illustrate the dependence of the results on the assumed in-medium behaviour of the isobar mass.
One of the results of this subsection is that the improved (self-consistent) approach excludes the possibility of pion condensation at relatively small densities. On the other hand, Dickhoff et al. [@97] carried out a self-consistent description of the particle-hole interactions by including the induced interactions to all orders. This shifts the point of the pion condensation to higher densities. A rigorous analysis should include both aspects of self-consistency.
### Accumulation of isobars as a possible first phase transition
As the density increases, the Fermi momentum and the energy of the nucleon at the Fermi surface increase too. At some value of $\rho$ it becomes energetically favourable to start the formation of the Fermi sea of baryons of another sort instead of adding new nucleons. This phase transition takes place at the value $\rho_a$, determined by the condition $$\label{135}
m^*_B(\rho_a)\ = \ \left(p^2_{Fa} +m^{*2}(\rho_a)\right)^{1/2}$$ with $p_{Fa}$ being the value of the Fermi momentum corresponding to $\rho_a$, while $m^*_B$ is the mass of the second lightest baryon at $\rho=\rho_a$.
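The condition (\[135\]) is easily scanned numerically once the in-medium masses are specified. In the sketch below the nucleon-mass scaling is a purely illustrative linear assumption, and the isobar shift follows the AQM-type relation $m^*_\Delta-m^*=m_\Delta-m$ mentioned above; the resulting $\rho_a$ therefore only illustrates the procedure, not the BR1 result quoted below.

```python
import numpy as np

m, m_Delta = 0.939, 1.232      # vacuum masses, GeV
rho0, hbarc = 0.17, 0.1973     # fm^-3, GeV*fm

def p_fermi(rho):
    return hbarc*(3*np.pi**2*rho/2)**(1.0/3.0)

# Illustrative assumptions for the in-medium masses:
def m_star(rho):       return m*(1.0 - 0.2*rho/rho0)          # assumed linear drop
def m_star_Delta(rho): return m_Delta - (m - m_star(rho))     # AQM-type shift

rho = rho0
while m_star_Delta(rho) > np.sqrt(p_fermi(rho)**2 + m_star(rho)**2):
    rho += 0.001
print(f"rho_a ~ {rho/rho0:.1f} rho_0 for these assumed scalings")
```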
The vacuum values of the $\Lambda$ and $\Sigma^+$ hyperon masses are, respectively, 115 MeV and 43 MeV smaller than that of the $\Delta$-isobar. However, both experimental and theoretical data confirm that the hyperons interact with the scalar fields much more weakly than the nucleons do. Thus, at least in the framework of certain assumptions on the behaviour of $m^*_\Delta(\rho)$, the hierarchy of the baryon masses changes in the medium. (The investigations of the problem are devoted mostly to the case of neutron or strongly asymmetric matter because of the astrophysical applications. See, however, the paper of Pandharipande [@108].) The delta isobar can become the second lightest baryon state. In this case the accumulation of $\Delta$-isobars in the ground state is the first phase transition in nuclear matter. Such a possibility was considered in several papers [@109]–[@112].
Under the assumption of BR1 scaling the accumulation of $\Delta$-isobar takes place at $\rho_a\approx3\rho_0$, being the first phase transition. The value of $\rho_a$ is consistent with the result of Boguta [@110].
QCD sum rules
=============
QCD sum rules in vacuum
-----------------------
Here we review briefly the main ideas of the method. There are several detailed reviews on the subject — see, e.g., [@113]. We focus on the points which will be needed for the application of the approach to the case of nuclear matter.
The main idea is to establish a correspondence between the descriptions of the function $G$, introduced in Subsec.3.6, in terms of hadronic and of quark–gluon degrees of freedom. (Recall that $G$ describes the propagation of the system with the quantum numbers of the nucleon.) The method is based on a fundamental feature of QCD known as asymptotic freedom. This means that at $q^2\to-\infty$ the function $G(q^2)$ can be presented as a power series in $q^{-2}$ and in the QCD coupling $\alpha_s$. The coefficients of the expansion are the expectation values of local operators constructed of quark and gluon fields, which are called “condensates”. Thus such a presentation, known as the operator product expansion (OPE) [@Wi], provides a perturbative expansion of the short-distance effects, while all the nonperturbative physics is contained in the condensates.
The correspondence between the hadron and quark–gluon descriptions is based on Eq.(\[91\]). The empirical data are used for the spectral function Im$\,G(k^2)$ in the rhs of Eq.(\[91\]). Namely, we know that the lowest-lying state is the bound state of three quarks, which manifests itself as a pole at the (unknown) point $k^2=m^2$. Assuming that the next singularity is the branching point $k^2=W^2_{ph}=(m+m_\pi)^2$, one can write the exact presentation $$\label{136}
\mbox{Im }G(k^2)\ =\ \tilde\lambda\,^2\delta(k^2-m^2)+f(k^2)
\theta(k^2-W^2_{ph})$$ with $\tilde\lambda^2$ being the residue at the pole. Substituting the rhs of Eq.(\[136\]) into Eq.(\[91\]) and employing the $q^{-2}$ power expansion in the lhs, i.e. putting $$\label{137}
G(q^2)\ =\ G_{OPE}(q^2)$$ one finds certain connections between quark-gluon and hadron presentations $$\label{138}
G_{OPE}(q^2)\ =\ \frac{\tilde\lambda\,^2}{m^2-q^2}+\frac1\pi
\int\limits^\infty_{W^2_{ph}} \frac{f(k^2)}{k^2-q^2}\ dk^2\ .$$ Of course, the detailed structure of the spectral density $f(k^2)$ cannot be resolved in such an approach. Further approximations can be prompted by the asymptotic behaviour $$\label{139}
f(k^2)\ =\ \frac{1}{2i}\Delta G_{OPE}(k^2)$$ at $k^2\gg|q^2|$, with $\Delta$ denoting the discontinuity. The discontinuity is caused by the logarithmic contributions of the perturbative OPE terms. The usual ansatz consists in extrapolating Eq.(\[139\]) to lower values of $k^2$, replacing also the physical threshold $W^2_{ph}$ by the unknown effective threshold $W^2$, i.e. $$\label{140}
\frac1\pi\int\limits^\infty_{W^2_{ph}}\frac{f(k^2)}{k^2-q^2}\ dk^2\ =\
\frac1{2\pi i}\int\limits^\infty_{W^2} \frac{\Delta
G_{OPE}(k^2)}{k^2-q^2}\ dk^2$$ and thus $$\label{141}
G_{OPE}(q^2)\ =\ \frac{\tilde\lambda\,^2}{m^2-q^2}+\frac1{2\pi i}
\int\limits^\infty_{W^2}\frac{\Delta G_{OPE}(k^2)}{k^2-q^2}\ dk^2\ .$$ The lhs of Eq.(\[141\]) contains the QCD condensates. The rhs of Eq.(\[141\]) contains three unknown parameters: $m,\tilde\lambda^2$ and $W^2$. Of course, Eq.(\[141\]) makes sense only if the first term of the rhs, which is treated exactly, is larger than the second term, which is treated approximately.
The approximation $G(q^2)\approx G_{OPE}(q^2)$ becomes increasingly accurate as the value $|q^2|$ increases. On the contrary, the “pole+continuum” model in the rhs of Eq.(\[141\]) becomes more accurate as $|q^2|$ decreases. The analytical dependence of the lhs and rhs of Eq.(\[141\]) on $q^2$ is quite different. The important assumption is that they are close in a certain intermediate region of the values of $q^2$, being close there also to the true function $G(q^2)$.
To improve the overlap of the QCD and phenomenological descriptions, one usually applies the Borel transform, defined as $$\begin{aligned}
\label{142}
Bf(Q^2) &=& \lim\limits_{Q^2,n\to\infty} \frac{(Q^2)^{n+1}}{n!}
\left(-\frac d{dQ^2}\right)^n f(Q^2)\ \equiv\ \tilde f(M^2) \\
&& \hspace*{3cm} Q^2=-q^2; \quad M^2=Q^2/n \nonumber\end{aligned}$$ with $M$ called the Borel mass. There are several useful features of the Borel transform.
1. It removes the divergent terms in the lhs of Eqs. (\[138\]) and (\[141\]) which are caused by the free quark loops. This happens, since the Borel transform eliminates all the polynomials in $q^2$.
2. It emphasises the contribution of the lowest-lying states in the rhs of Eq.(\[141\]), due to the relation (a short derivation is sketched after this list) $$\label{143}
B\left[\frac1{Q^2+m^2}\right]\ =\ e^{-m^2/M^2}\ .$$
3. It improves the OPE series, since $$\label{144}
B\left[(Q^2)^{-n}\right]\ =\ \frac1{(n-1)!}\ (M^2)^{1-n}\ .$$
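For completeness, the relation (\[143\]) quoted in the second item follows directly from the definition (\[142\]): $$B\left[\frac1{Q^2+m^2}\right]=\lim_{Q^2,n\to\infty}\frac{(Q^2)^{n+1}}{n!}\,\frac{n!}{(Q^2+m^2)^{n+1}}
=\lim_{n\to\infty}\left(1+\frac{m^2}{nM^2}\right)^{-(n+1)}=e^{-m^2/M^2}\ ,$$ where $Q^2=nM^2$ was used in the last step.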
Applying Borel transform to both sides of Eq.(\[141\]) one finds $$\label{145}
\widetilde G_{OPE}(M^2)\ =\ \tilde\lambda\,^2e^{-m^2/M^2} + \frac1{2\pi
i}\int\limits^\infty_{W^2} dk^2e^{-k^2/M^2}\cdot\Delta G_{OPE}(k^2)\ .$$ Such relations are known as QCD sum rules. If both the rhs and the lhs of Eq.(\[141\]) were calculated exactly, the relation would be independent of $M^2$. However, certain approximations are made on both sides. The basic assumption is that there exists a range of $M^2$ for which the two sides have a good overlap, approximating also the true function $\widetilde G(M^2)$.
The lhs of Eq.(\[145\]) can be obtained by presenting the function $G(q^2)$, often called the “correlation function” or “correlator”, as (strictly speaking, $G$ also depends on the components of the vector $q$ through the trivial term $\hat q$) $$\label{146}
G(q^2)\ =\ i\int d^4xe^{i(qx)}\langle0|T\{\eta(x)\bar\eta(0)\}|0\rangle$$ with $\eta$ being the local operator with the proton quantum numbers. It was shown in [@114] that there are three independent operators $\eta$ $$\begin{aligned}
&& \eta_1=\ \left(u^T_aC\gamma_\mu u_b\right)\gamma_5\gamma^\mu
d_c\cdot \varepsilon^{abc}, \quad \eta_2=\ \left(u^T_aC\sigma_{\mu\nu}
u_b\right)\sigma^{\mu\nu}\gamma_5d_c\varepsilon^{abc}, \nonumber\\
&& \eta_{3\mu}=\ \left[(u_a^TC\gamma_\mu u_b)\gamma_5d_c-(u_a^TC
\gamma_\mu d_b)\gamma_5u_c\right]\varepsilon^{abc} ,
\label{147}\end{aligned}$$ where $T$ denotes the transpose in Dirac space and $C$ is the charge conjugation matrix. However, the operator $\eta_2$ provides a strong admixture of states with negative parity [@114]. As to the operator $\eta_3$, it provides a large contribution of states with spin $3/2$ [@114]. Thus, the calculations with $\eta=\eta_1$ are the most convincing. We shall assume $\eta=\eta_1$ in the further analysis.
The correlation function has the form $$\label{148}
G(q)\ =\ G_q(q^2)\cdot\hat q+G_s(q^2)\cdot I$$ with $I$ standing for the unit $4\times4$ matrix. The leading OPE contribution to $G_q$ comes from the loop with three free quarks. If the quark masses $m_{u,d}$ are neglected, the leading OPE term in $G_s$ comes from the exchange of quarks between the system described by the operator $\eta$ and the vacuum. Technically this means that the contribution comes from the second term of the quark propagator in vacuum $$\label{149}
\langle0|Tq_\alpha(x)\bar q_\beta(0)|0\rangle\ =\ \frac i{2\pi^2}
\frac{\hat x_{\alpha\beta}}{x^4}-\frac14\sum_A\Gamma^A_{\alpha\beta}
\langle0|\bar q\Gamma^Aq|0\rangle+0(x^2)\ ,$$
where only the contribution with $A=1$ (see Eq.(\[12\])) survives. This is illustrated by Fig.9. The higher order terms come from the exchange of soft gluons between the vacuum and the free quarks carrying hard momenta. Next comes the four-quark condensate, which can be viewed as the expansion of the two-quark propagator, similar to Eq.(\[149\]).
Direct calculation provides for massless quarks [@95] $$\begin{aligned}
\label{150}
&& G^{OPE}_q=\ -\frac1{64\pi^4}(q^2)^2\ln(-q^2)-\frac1{32\pi^2}
\ln(-q^2)g(0)-\frac2{3q^2}\ h_0\ , \\
\label{151}
&& G^{OPE}_s =\ \left(\frac1{8\pi^2}\ q^2\ln(-q^2)-\frac1{48q^2}\
g(0)\right)\kappa(0)\end{aligned}$$ with the condensates $g(0)=\langle0|\frac{\alpha_s}\pi G^2|0\rangle$, $\kappa(0)=\langle0|\bar qq|0\rangle$ and $h_0=\langle0|\bar uu\bar
uu|0\rangle$. The terms containing polynomials in $q^2$ are omitted, since they will be eliminated by the Borel transform. This leads to the sum rules [@95] $$\begin{aligned}
\label{152}
&& M^6E_2\left(\frac{W^2}{M^2}\right)+\frac14\ bM^2E_0
\left(\frac{W^2}{M^2}\right) +\frac43\ C_0=\ \lambda^2e^{-m^2/M^2},\\
\label{153}
&& 2a\left(M^4E_1\left(\frac{W^2}{M^2}\right)-\frac b{24}\right)\ =\
m\lambda^2 e^{-m^2/M^2}\end{aligned}$$ with traditional notations $a=-2\pi^2\kappa(0)$, $b=(2\pi)^2g(0),$ $\lambda^2=32\pi^4\tilde\lambda^2$, $$E_0(x) = 1-e^{-x}\ , \quad E_1(x)=1-(1+x)e^{-x}, \quad
E_2(x)=1-\left(\frac{x^2}2+x+1\right)e^{-x}.$$ Also $C_0=(2\pi)^4h_0$. Here we omitted the anomalous dimensions, which account for the most important corrections of the order $\alpha_s$, enhanced by the “large logarithms”. The radiative corrections, as well as the higher order power corrections, were shown to provide smaller contributions [@95]. The matching of the lhs and rhs of Eqs.(\[152\]), (\[153\]) was found in [@95] for the domain $$\label{154}
0.8\mbox{ GeV}^2\ <\ M^2\ <\ 1.4\mbox{ GeV}^2\ .$$
As one can see from Eq.(\[153\]), the nucleon mass turns to zero if $\langle0|\bar qq|0\rangle=0$. Hence, the mass is determined by the exchange of quarks between our system and the vacuum.
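To make this dependence on the condensates concrete, Eqs.(\[152\]) and (\[153\]) can be divided, which eliminates $\lambda^2$ and gives the nucleon mass as a function of the Borel mass. The short sketch below does just that; the inputs $a$, $b$, $C_0$ and $W^2$ are rough illustrative placeholders (in the appropriate powers of GeV), not the fitted values of [@95], and the anomalous dimensions are omitted as above.

```python
import numpy as np

def E0(x): return 1 - np.exp(-x)
def E1(x): return 1 - (1 + x)*np.exp(-x)
def E2(x): return 1 - (x**2/2 + x + 1)*np.exp(-x)

def nucleon_mass(M2, a, b, C0, W2):
    """m(M^2) from the ratio of Eq.(153) to Eq.(152); inside the Borel
    window (154) the dependence on M^2 should be weak."""
    x = W2/M2
    num = 2*a*(M2**2*E1(x) - b/24)
    den = M2**3*E2(x) + 0.25*b*M2*E0(x) + (4.0/3.0)*C0
    return num/den

a, b, C0, W2 = 0.5, 0.5, 0.2, 2.3     # illustrative placeholders only
for M2 in (0.8, 1.0, 1.2, 1.4):
    print(M2, nucleon_mass(M2, a, b, C0, W2))
```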
The method was applied successfully to the calculation of the static characteristics of the nucleon, reproducing the values of its mass [@36; @94; @95] as well as of the magnetic moment [@95] and of the axial coupling constant [@116]. The proton structure functions were also analysed in the framework of the approach [@117].
Proton dynamics in nuclear matter
---------------------------------
Now we extend the sum-rule approach to the investigation of the characteristics of the proton in nuclear matter. The extension is not straightforward. This is mostly because the spectrum of the correlation function in the medium $$\label{155}
G^m(q)\ =\ i\int d^4xe^{i(qx)}\langle M|T\{\eta(x)\bar\eta(0)\}|M\rangle$$ is much more complicated than that of the vacuum correlator $G(q^2)$. The singularities of the correlator can be connected with the proton placed into the matter, as well as with the matter itself. One of the problems is to find the proper variables which would enable us to focus on the properties of our probe proton.
### Choice of the variables
Searching for analogies in earlier investigations, one can find two different approaches. Based on the analogy with the QCD sum rules in vacuum, one should build the dispersion relation in the variable $q^2$. The physical meaning of the shift of the position of the proton pole is expressed by Eq.(\[95\]). Another analogy is the Lehmann representation [@115], which is a dispersion relation for the nucleon propagator in the medium $g_N(q_0,|q|)$ in the time component $q_0$. Such a dispersion relation would contain all possible excited states of the matter in its rhs. Thus, we expect the dispersion relations in $q^2$ to be a more reasonable choice in our case.
It is instructive to consider the propagation of a photon with energy $\omega$ and three-dimensional momentum $k$ in a medium. The vacuum propagator is $D_\gamma\sim[\omega^2-k^2]^{-1}$. Considered as a function of $q^2=\omega^2-k^2$, it has a pole at $q^2=0$. The propagator in the medium is $D^m_\gamma\sim[\omega^2\varepsilon(\omega)-k^2]^{-1}$. The dielectric function $\varepsilon(\omega)$ depends on the structure of the matter, making $D^m_\gamma(\omega)$ a complicated function. However, the function $D^m_\gamma(q^2)$ still has a simple pole, shifted to the value $q^2_m=\omega^2(1-\varepsilon(\omega))$. A straightforward calculation of the new value $q^2_m$ is a complicated problem. The same refers to the proton in the medium. The sum rules are expected to provide the value in some indirect way.
Thus, we try to build the dispersion relations in $q^2$. Since Lorentz invariance is lost, the correlator $G^m(q)$ depends on two variables. Considering the matter as a system of $A$ nucleons with momenta $p_i$, we introduce the vector $$\label{156}
p\ =\ \frac{\Sigma p_i}A\ ,$$ which is thus $p\approx(m,{\bf0})$ in the rest frame of the matter. The correlator can be presented as $G^m(q)=G^m(q^2,\varphi(p,q))$ with an arbitrary choice of the function $\varphi(p,q)$, which is kept constant in the dispersion relations. This is a rather formal statement, and there should be physical reasons for the choice.
To make the proper choice of the function $\varphi(p,q)$, let us consider the matrix element, which enters Eq.(\[155\]) $$\begin{aligned}
&& \langle M|T\{\eta(x)\bar\eta(0)\}|M\rangle\ =\ \langle
M_A|\eta(x)|M_{A+1}\rangle\langle M_{A+1}|\bar\eta(0)|M_A\rangle
\theta(x_0)\ - \nonumber\\
\label{157}
&&- \quad \langle M_A|\bar\eta(0)|M_{A-1}\rangle\langle
M_{A-1}|\eta(x)|M_A\rangle\ \theta(-x_0)\end{aligned}$$ with $|M_A\rangle$ standing for the ground state of the matter, while $|M_{A\pm1}\rangle$ are the systems with baryon numbers $A\pm1$. The summation over these states is implied. The matrix element $\langle
M_{A+1}|\eta|M_A\rangle$ contains the term $\langle N|\eta|0\rangle$ which adds the nucleon to the Fermi surface of the state $|M_A\rangle$. If the interactions between this nucleon and the other ones are neglected, it is just the pole at $q^2=m^2$. Now we include the interactions. The amplitudes of the nucleon interactions with the nucleons of the matter are known to have singularities in variables $s_i=(p_i+q)^2$. These singularities correspond to excitation of two nucleons in the state $|M_{A+1}\rangle$. Thus, they are connected with the properties of the matter itself. To avoid these singularities we fix $$\label{158}
\varphi(p,q)\ =\ s\ =\ \max s_i\ =\ 4E^2_{0F}$$ with $E_{0F}$ being the relativistic value of nucleon energy at the Fermi surface. Neglecting the terms of the order $p^2_F/m^2$ we can assume $p_i=(m,0)$ and thus $$\label{159}
s\ =\ 4m^2\ .$$ Our choice of the value of $s$ corresponds to $|\bar q|=p_F$ (in the simplified case, expressed by Eq.(\[159\]) $|\bar q|=0$). However, varying the value of $s$ we can find the position of the nucleon poles, corresponding to other values of $|\bar q|$.
Let us look at what happens to the nucleon pole $q^2=m^2$ after we included the interactions with the matter. The self-energy insertions $\Sigma$ modify the free nucleon propagator $g^0_N$ to $g_N$ with $$\label{160}
(g_N)^{-1}\ =\ (g^0_N)^{-1}-\Sigma$$ — see Fig.10.
In the mean field approximation (Fig.10a) the function $\Sigma$ does not contain additional intermediate states. It does not cause additional singularities in the correlator $G^m(q^2,s)$. The position of the pole is just shifted by a value which does not depend on $s$. (Note that this does not mean that in the mean field approximation the condition $s=\,$const can be dropped. Some other contributions to the matrix element $\langle M_{A+1}|\bar\eta|M_A\rangle$ are singular in $s$. For example, there is the term $\langle B|\bar\eta|0\rangle$ with $B$ standing for a system containing the nucleon and mesons. If the mesons are absorbed by the state $|M_A\rangle$, we come to the box diagram (Fig.11) with the branching point starting at $s=4m^2$.)
Leaving the framework of the mean-field approximation we find Hartree self-energy diagrams (Fig.10b) depending on $s$. The latter is kept constant in our approach. Hence, no additional singularities emerge in this case as well.
The situation becomes more complicated if we take into account the Fock (exchange) diagrams (Fig.10c). The self-energy insertions depend on the variable $u=(p-q)^2$. The contribution of these terms shifts the nucleon pole and gives birth to additional singularities corresponding to real states with baryon number equal to zero. They are the poles at the points $u=m^2_x$, with $m_x$ denoting the masses of the mesons ($\pi$, $\omega$, etc.), and the cuts running to the right from the point $q^2=m^2+2m^2_\pi$. The latter value is the position of the branching point corresponding to the real two-pion state in the $u$-channel.
Thus the single-nucleon states $|B_{\pm1}\rangle$ cause the pole $q^2=m^2_m$, a set of poles corresponding to the states with baryon number $B=0$ and a set of branching points. The lowest-lying one is $q^2=m^2+2m^2_\pi$. Note that the antinucleon corresponding to $q_0=-m$ generates the pole $q^2=5m^2$ shifted far to the right from the lowest-lying one.
The lowest-lying branching point $q^2=m^2+2m^2_\pi$ is separated from the position of the pole $q^2=m^2$ by a much smaller distance than in the case of the vacuum ($q^2=m^2+2mm_\pi$ in the latter case). Note, however, that at the very threshold the discontinuity is quenched, since the vertices contain the momenta of the intermediate pions. Thus the branching points can be considered as separated from the pole $q^2=m^2$. Note also that for the same reason the residue at the pole $q^2=m^2+2m^2_\pi$ in the $u$-channel vanishes.
As was shown in [@40], all the other singularities of the correlator $G^m(q^2,s)$ in $q^2$ lie to the right of the nucleon pole, as long as the three-nucleon terms are not included. Thus, they are accounted for by the continuum and suppressed by the Borel transform. To prove the dispersion relation we must be sure of the possibility of contour integration in the complex $q^2$ plane. This cannot be done on an axiomatic level. However, a strong argument in support of this possibility is the analytical continuation from the region of large real $q^2$. At these values of $q^2$ the asymptotic freedom of QCD enables one to find an explicit expression for the integrand. The integral over the large circle gives a non-vanishing contribution. However, the latter contains only finite polynomials in $q^2$, which are eliminated by the Borel transform.
Thus we expect “pole+continuum” model to be valid for the spectrum of the correlator $G^m(q^2,s)$.
The situation becomes more complicated if we include the three-nucleon interactions [@96]. The probe proton, created by the operator $\eta$, can interact with $n$ nucleons of the matter. The corresponding amplitudes depend on the variable $s_n=(np+q)^2$. For $n\geq 2$ this causes cuts running to the left of the point $q^2=m^2$. This requires a somewhat more complicated model of the spectrum. From the point of view of the expansion in powers of $\rho$ this means that the “pole + continuum” model is legitimate as long as the terms of the order $\rho^2$ are not included.
### Operator expansion
Following our general strategy, we shall try to obtain the leading terms of expansion of the correlator $$\label{161}
G^m(q)\ =\ G^m_q(q)\hat q+G^m_p(q)\hat p+G^m_s(q)I\ ,$$ in powers of $q^{-2}$. Note, that the condition $s=\,$const, which we needed for separation of the singularities, connected with our probe proton, provides $$\label{162}
\frac{(pq)}{q^2}\ \rightarrow\ \mbox{ const }$$ at $q^2\to-\infty$. This is just the condition which ensures the operator expansion in deep-inelastic scattering (see, e.g., the book of Ioffe et al. [@118]). It is not necessary in our case. However, the physical meaning of some of the condensates, say, that of $\langle\varphi_a(\alpha)\alpha^n\rangle$ — Eq.(\[48\]) — becomes most transparent in this very kinematics.
The problem is more complicated than in vacuum, since each of the terms of the expansion in powers of $q^2$ provides, generally speaking, an infinite number of condensates. We present each of the components $G^m_i$ $(i=q,p,s)$ of the correlator $G^m$ as $$\label{163}
G^m_i\ =\ \int d^4xe^{i(qx)} T_i(x)\ .$$ The function $T_i(x)$ contains in-medium expectation values of the products of QCD operators at the space-time points $"0"$ and $"x"$, with the operator at the point $x$ defined by Eq.(\[43\]). Each in-medium expectation value containing a covariant derivative $D_\mu$ is proportional to the vector $p_\mu$. This can easily be generalized to the case of a larger number of derivatives. Thus the correlators take the form $G^m_i=\sum_n C_n(p\nabla_q)^nf_i(q^2)$. For the contributions $f_i(q^2)\sim(q^2)^{-k}$ the terms $(p\nabla_q)^nf_i(q^2)$ are of the same order. This is the “price” for the choice of the kinematics $s=\,$const. Fortunately, the leading terms of the operator expansion contain the logarithmic loops and thus can be expressed through a finite number of condensates [@96].
The leading terms of the operator expansion can be obtained by replacing the free quark propagators by those in medium $$\label{164}
\langle M|T\psi_\alpha(x)\bar\psi_\beta(0)|M\rangle\ =\ \frac
i{2\pi^2}\frac{\hat x}{x^4}-\sum_A\frac14\Gamma^A_{\alpha\beta} \langle
M|\bar\psi(0)\Gamma^A\psi(x)|M\rangle$$ with the matrices $\Gamma^A$ defined by Eq.(\[12\]). The operator $\psi(x)$ is defined by Eq.(\[43\]). While looking for the lowest order term of the operator expansion we can put $x^2=0$ in the second term of the rhs of Eq.(\[164\]). In the sum over $A$ the contributions with $A=3,4$ vanish due to parity conservation by the strong interactions, while the one with $A=5$ turns to zero in any uniform system. Thus, only the terms with $A=1,2$ survive. Looking for the lowest order density effects, we assume that the propagation of only one of the quarks of the correlator $G^m$ is influenced by the medium. Hence, the term with $A=1$ contributes to the scalar structure $G^m_s$, while that with $A=2$ contributes to the vector structures $G^m_q$ and $G^m_p$. $$\begin{aligned}
\label{165}
&& G^m_s\ =\ \frac1{2\pi^2}\ q^2\ln(-q^2)\ \kappa(\rho) \\
\label{166}
&& G^m_q\ =\ -\frac1{64\pi^4}\ (q^2)^2\ln(-q^2)+\frac1{6\pi^2}\
(s-m^2-q^2)\ln(-q^2)v(\rho)\\
\label{167}
&& G^m_p\ =\ \frac2{3\pi^2}\ q^2\ln(-q^2)\ v(\rho)\ .\end{aligned}$$ Thus the correlator $G^m_s$ can be obtained from the vacuum correlator $G_s$ simply by replacing $\kappa(0)$ with $\kappa(\rho)$. The correlator $G^m_q$ obtains an additional contribution proportional to the vector condensate $v(\rho)$. Also, the correlator $G^m_p$, which vanishes in vacuum, is proportional to $v(\rho)$. These terms are illustrated by Fig.12 a,b.
Turn now to the next OPE terms. Start with the structure $G^m_s$. In the case of vacuum there is a contribution which behaves as $\ln(-q^2)$ and is proportional to the condensate $\langle0|\bar\psi
\frac{\alpha_s}\pi G^a_{\mu\nu}\sigma^{\mu\nu}\frac{\lambda^a}2
\psi|0\rangle$. However, a similar term comes from the expansion of the expectation value $\langle0|\bar q(0)q(x)|0\rangle$ in powers of $x^2$. The two terms cancel [@36]. A similar cancellation takes place in the medium [@96]. However, there is a contribution caused by the second term of the rhs of Eq.(\[43\]). It does not vanish identically, but it can be neglected due to Eq.(\[52\]). Hence, the next OPE term in the rhs of Eq.(\[165\]) can be obtained by a simple replacement of the condensates $\kappa(0)$ and $g(0)$ in the second term of Eq.(\[151\]) by $\kappa(\rho)$ and $g(\rho)$.
The next-to-leading order corrections to the correlators $G^m_{q,p}$ come from the expansion of the expectation value $\langle
M|\bar\psi(0)\gamma_0\psi(x)|M\rangle$. In the lowest order of the $x^2$ expansion the matrix element can be presented through the moments of the deep-inelastic scattering (DIS) nucleon structure functions — Eqs. (\[47\]) and (\[48\]). Since the medium effects in DIS are known to be small, we limit ourselves to the gas approximation at this point. The main contributions to the $q^{-2}$ expansion compose the series of the terms $[(s-m^2)/q^2]^n
\langle\alpha^n\rangle$ with $\langle\alpha^n\rangle$ denoting the $n$-th moment of the structure function. Being expressed in a closed form, they change Eqs. (\[166\]) and (\[167\]) to $$\begin{aligned}
\label{168}
G^m_q &=& -\frac1{64\pi^4}(q^2)^2\ln(-q^2)+\frac1{12\pi^2}\,
\frac{s-m^2-q^2}{m}
\int\limits^1_0 d\alpha\, F(\alpha)\ln(q-p\alpha)^2\cdot\rho
\nonumber\\
&&+\ \frac{m}{3\pi^2}
\int\limits^1_0 d\alpha\, \phi_b(\alpha)\ln(q-p\alpha)^2\cdot\rho \\
\label{169}
G^m_p &=& \frac2{3\pi^2}\ q^2\int\limits^1_0 d\alpha\, F(\alpha)
\ln(q-p\alpha)^2\cdot\rho\end{aligned}$$ with $F(\alpha)$ being the structure function, normalized as $\int^1_0d\alpha F(\alpha)=3$. The function $F(\alpha)$ can also be presented as $F(\alpha)=\phi_a(\alpha)$, with $\phi_a$ defined by Eq.(\[48\]). Another leading OPE term is caused by the modification of the value of the gluon condensate. This is expressed by changing the value $g(0)$ in the second term in the rhs of Eq.(\[150\]) to $g(\rho)$ — Fig.12b.
The higher order OPE terms lead to the contributions which decrease as $q^{-2}$. One of them is caused by the lowest order power correction to the first moment of the structure function. This term is expressed through the factor $\xi$ which is determined by Eq.(51), being calculated in [@39]. The other corrections of this order come from the four-quark condensates $Q^{AB}$, defined by Eq.(\[55\]). The correlator $G^m_s$ contains the condensate $Q^{12}$. The vector part includes the condensates $Q^{11}$ and $Q^{22}$. Another contribution of this order comes from the replacement of the condensates $g(0)$ and $\kappa(0)$ in the last term in rhs of Eq.(151) by their in-medium values – Fig.12c,d.
### Building up the sum rules
To construct the rhs of the sum rules, consider the nucleon propagator $g_N=(H-E)^{-1}$ with $E$ standing for the nucleon energy, while the Hamiltonian $H$ in the mean field approximation is presented by Eq.(\[71\]). Beyond the mean field approximation the potentials $V_\mu$ and $\Phi$ should be replaced by vector and scalar self-energies $$\label{170}
V_\mu=\Sigma^V_\mu\ ; \quad \Sigma^V_\mu=p_\mu\Sigma^p+q_\mu\Sigma^q\ ;
\quad \Phi=\Sigma^s\ .$$ Thus, under condition $s=4m^2$ — Eq.(\[159\]) $$\label{171}
g_N\ =\ Z\ \frac{\hat q(1-\Sigma^q)-\hat
p\Sigma^p+m+\Sigma^s}{q^2-m^2_m}$$ with $$\label{172}
m_m=m+U\ ; \qquad U=m(\Sigma^q+\Sigma^p)+\Sigma^s\ ,$$ while $$\label{173}
Z\ =\ \frac1{(1-\Sigma^q)(1-\Sigma^q+\Sigma^p)}\ .$$ Of course, $\Sigma^q=0$ in mean field approximation.
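A short consistency check of Eq.(\[172\]) (a sketch, to linear order in the self-energies): the pole of Eq.(\[171\]) is fixed by $\left[q_\mu(1-\Sigma^q)-p_\mu\Sigma^p\right]^2=(m+\Sigma^s)^2$. Using $p^2=m^2$ and $(pq)=m^2$, which follows from $s=(p+q)^2=4m^2$, one finds to first order $$q^2\ \approx\ m^2+2m\left[m(\Sigma^q+\Sigma^p)+\Sigma^s\right]\ \approx\ (m+U)^2\ ,$$ which reproduces Eq.(\[172\]).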
The Borel-transformed sum rules for in-medium correlators in the assumed “pole+continuum” model for the spectrum are: $$\begin{aligned}
\label{174}
{\cal L}^m_q(M^2) &=& \lambda^2_m e^{-m^2_m/M^2} (1-\Sigma^q) \\
\label{175}
{\cal L}^m_p(M^2) &=& -\lambda^2_m e^{-m^2_m/M^2}\Sigma^p \\
\label{176}
{\cal L}^m_s(M^2) &=& \lambda^2_m e^{-m^2_m/M^2}(m+\Sigma^s)\end{aligned}$$ with $\lambda^2_m=32\pi^4\tilde\lambda^2_mZ$, where $\tilde\lambda^2_m$ stands for the residue at the nucleon pole (see the similar definition in vacuum — Eqs.(\[152\]) and (\[153\])). The lhs of Eqs. (\[174\])–(\[176\]) are:
&& {\cal L}^m_q(M^2)=M^6E_2\left(\frac{W^2_m}{M^2}\right)L^{-4/9}
-\frac{8\pi^2}3\left[\left((s-m^2)M^2E_0\left(\frac{W^2_m}{M^2}\right)
-M^4E_1\left(\frac{W^2_m}{M^2}\right)\right. \right) \nonumber\\
&&\times \left. \langle F\mu(\alpha)\rangle-2m^2M^2E_0\left(\frac{W^2_m}{M^2}
\right)
\langle F\mu(\alpha)\alpha\rangle
+ 4m^2M^2E_0\left(\frac{W^2_m}{M^2}\right)
\langle \phi_b\mu(\alpha)\rangle\right]\rho L^{-4/9} \nonumber\\
\label{177}
&&+ \pi^2M^2E_0\left(\frac{W^2_m}{M^2}\right) g(\rho)
+ \frac34m^2(s-m^2) \langle \theta_a \mu \rangle
+ 2m^4 \langle \theta_b \mu \rangle
+ \frac43(2\pi)^4Q^{11}_{uu}(\rho)L^{\frac49} \\
\label{178}
&& {\cal L}^m_p(M^2)\ =\ -\frac{8\pi^2}3\
M^4E_1\left(\frac{W^2_m}{M^2}\right) \langle F\mu(\alpha)\rangle\ \rho
L^{-4/9} -(2\pi)^4Q^{22}_{uu} \\
\label{179}
&& {\cal L}^m_s(M^2)\ =\ (2\pi^2)^2\left[M^4E_1 \left(\frac{W^2_m}{M^2}\right)
-\frac{(2\pi)^2}{12}g(\rho)\right] \kappa(\rho)
+ \frac{8(2\pi^2)^3}{3} Q^{12}_{ud}(\rho)\ .\end{aligned}$$
In Eqs. (\[177\])–(\[179\]) $\langle\psi\rangle=\int^1_0d\alpha\psi(\alpha)$ for any function $\psi$, $F$ is the structure function, the function $$\label{180}
\mu(\alpha)\ =\
\exp\left(\frac{-(s-m^2)\alpha+m^2\alpha^2}{M^2(1+\alpha)}\right)$$ takes into account the terms $[(s-m^2)/M^2]^n$. The factor $$\label{181}
L\ =\ \frac{\ln M^2/\Lambda^2}{\ln\nu^2/\Lambda^2}$$ accounts for the anomalous dimensions. Here $\Lambda=0.15\,$GeV is the QCD parameter while $\nu=0.5\,$GeV is the normalization point of the characteristics involved.
Recall that the “pole+continuum” model is valid as long as the terms of the order $\rho^2$ are not touched. Thus, in the sum rules for the difference between the in-medium and vacuum correlators we must limit ourselves to linear shifts of the parameters $$\begin{aligned}
\label{182}
&& \Delta {\cal L}^m_q(M^2)\ =\ \lambda^2e^{-m^2/M^2}\left(
\frac{\Delta\lambda^2}{\lambda^2}-\Sigma^q-\frac{2m\Delta
m}{M^2}\right) -\frac{W^4}{2L^{4/9}}\exp\left(-\frac{W^2}{M^2}\right)
\Delta W^2\\
\label{183}
&& {\cal L}^m_p(M^2)\ =\ - \lambda^2e^{-m^2/M^2}
\Sigma^p \\ \label{184}
&& \Delta{\cal L}^m_s(M^2)\ =\ \lambda^2e^{-m^2/M^2} \left(m\,
\frac{\Delta\lambda^2}{\lambda^2}+\Sigma^s-\frac{2m^2\Delta
m}{M^2}\right)-2aW^2\exp\left(-\frac{W^2}{M^2}\right) \Delta W^2\ .\end{aligned}$$ Here $\Delta$ denotes the difference between the in-medium and vacuum values. However, the self-energies $\Sigma^q$ and $\Sigma^s$ cannot be determined separately, since only the sum $\Sigma^q+\Sigma^s$ can be extracted.
Anyway, the shift of the position of the nucleon pole $m_m-m$ can be obtained: using Eq.(\[172\]) we find $$\label{185}
\Delta {\cal L}^m_s-m{\cal L}^m_p-m\Delta{\cal L}^m_q\ =\
\Delta m\lambda^2e^{-m^2/M^2}
+W^2e^{-m^2/M^2}\left(\frac{W^2}{2L^{4/9}}m-2a\right)\Delta W^2\ .$$ Since the value $(W^2/2L^{4/9})m-2a$ is numerically small, one can write an approximate sum rule, neglecting the second term in the rhs of Eq.(\[185\]): $$\label{186}
U\ =\ \frac{e^{m^2/M^2}}{\lambda^2}\left(\Delta {\cal L}^m_s
-m {\cal L}^m_p - m \Delta {\cal L}^m_q \right)\ ,$$ or, assuming the sum rules in vacuum to be perfect $$\label{187}
U\ =\ \frac{e^{m^2/M^2}}{\lambda^2}\ ({\cal L}_s-m{\cal L}_p-m{\cal L}_q)$$ with the vacuum part cancelling exactly.
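It is perhaps worth making explicit why this combination isolates the potential $U$. Inserting the rhs of Eqs.(\[174\])–(\[176\]) (a sketch; the continuum-shift term of Eq.(\[185\]) is dropped), $$\Delta {\cal L}^m_s-m{\cal L}^m_p-m\Delta{\cal L}^m_q\ =\
\lambda^2_m e^{-m^2_m/M^2}\Big[(m+\Sigma^s)+m\Sigma^p-m(1-\Sigma^q)\Big]\ =\
\lambda^2_m e^{-m^2_m/M^2}\,U\ \approx\ \Delta m\,\lambda^2e^{-m^2/M^2}\ ,$$ where the vacuum pole terms cancel between $\Delta{\cal L}^m_s$ and $m\Delta{\cal L}^m_q$, and in the last step $U=\Delta m$ and the prefactor is taken at its vacuum value, which is legitimate to linear order in the density.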
The two lowest order OPE terms (without the perturbative expansion in the parameter $\frac{s-m^2}{M^2}$) are presented by the first two terms of the rhs of Eqs. (\[177\]), (\[179\]) and by the first term of the rhs of Eq.(\[178\]). They are expressed through the condensates $v(\rho)$, $\kappa(\rho)$ and $g(\rho)$ and through the moments of the nucleon functions $\phi_{a,b}$ introduced in Subsec. 2.6. The values of the lowest moments of the structure function $F(\alpha)=\phi_a(\alpha)$ are well known from experimental data. By using the value of $\xi_a$ and employing the relations presented by Eq.(\[51\]), one can find the lowest moments of the function $\phi_b$. Only the first moment of the function $\phi_b$, and thus the first and second moments of the function $\phi_a$, appear to be numerically important. Thus, at least in the gas approximation, all the contributions to the lhs of the sum rules can be either calculated in a model-independent way or related to observables [@40]. The scalar condensate is the most important parameter beyond the gas approximation [@18]. Model calculations have been carried out in this case.
The next order of the OPE includes explicitly the moments of the functions $\theta_{a,b}$ defined in Subsec.2.6. It also includes the four-quark condensates $Q^{11}$, $Q^{12}$ and $Q^{22}$. Using Eqs.(\[51\]) one can find that only the first moment of the function $\theta_a$ is numerically important, while the moments of the function $\theta_b$ can be neglected. The condensate $Q^{12}$ can be obtained easily by using Eq.(\[56\]). The uncertainty in the values of the other four-quark condensates ($Q^{11}$ in particular) is the main obstacle to decisive quantitative predictions based on Eqs. (\[182\])–(\[187\]). The scalar four-quark condensate $Q^{11}$ may appear to be a challenge for the convergence of the OPE due to the large value of the second term in the rhs of Eq.(\[57\]). This may be a signal that large numbers are involved. Fortunately, the only calculation of $Q^{11}$, carried out in [@44], demonstrated that there is a large cancellation between the model-dependent first term in the rhs of Eq.(\[57\]) and the second one, which is to a large extent model-independent. However, assuming the result presented in [@44], we still find this contribution to be numerically important.
We can try (at least for illustrative reasons) to get rid of this term in two ways. One of them is to ignore its contribution. The reason is that it corresponds to the exchange of a quark system with the quantum numbers of the scalar channel between our probe proton and the matter. On the other hand, it contributes to the vector structure of the correlator $G^m_q$, and thus to the vector structure $\hat q$ of the propagator of the nucleon with momentum $q$. Such terms are not forbidden by any physical law. However, most QHD calculations are successful without such contributions. Thus the appearance of terms with such a structure and a noticeable magnitude is unlikely. (Of course, this is not a physical argument, but rather an excuse for trying this version.) The other possibility is to eliminate the contribution by calculating the derivative with respect to $M^2$. The two ways provide relatively close results.
### The structure of the potential energy
Under the conditions described above, we find that the rhs of Eq.(\[186\]) is a slowly varying function of $M^2$ in the interval defined by Eq.(\[154\]). Among the moments of the structure function, the first two appear to be numerically important. Thus we find $$\label{188}
U(\rho)\ =\ \left[66\,v(\rho)+70\,v_2(\rho)-\frac{10\Delta g(\rho)}m
-32\,\Delta\kappa(\rho)\right]\mbox{ GeV}^{-2}.$$ Here $v(\rho)=\ 3\rho$ is the vector condensate — Eq.(\[15\]), $\Delta\kappa(\rho)=\kappa(\rho)-\kappa(0)$ is the in-medium change of the scalar condensate — Eq.(62). The condensate $v_2(\rho)$, determined as $$\langle M|\bar\psi\gamma_\mu D_\nu\psi|M\rangle\ =\ \left(g_{\mu\nu}
-\frac{4p_\mu p_\nu}{p^2}\right)v_2(\rho)$$ is connected to the second moment of the nucleon structure function. Numerically $v_2(\rho)\approx0.3\rho$. Finally, $\Delta g(\rho)$ is the shift of the gluon condensate, expressed by Eq.(\[38\]).
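A short numerical sketch of Eq.(\[188\]) is given below. The formula is used exactly as written; the gas-approximation inputs for $\Delta\kappa$ and $\Delta g$ (a $\Sigma$-term of 45 MeV, an average current quark mass of 5.5 MeV, and a linear shift of the gluon condensate) are illustrative assumptions, so the printed number should only be compared in order of magnitude with the quoted $U(\rho_0)=-36\,$MeV.

```python
GEV3_PER_FM3 = 0.1973**3            # 1 fm^-3 expressed in GeV^3
rho0 = 0.17*GEV3_PER_FM3            # saturation density, GeV^3

def U_of_rho(rho, d_kappa, d_g, m=0.939):
    """Potential energy of Eq.(188); inputs in GeV units, result in MeV."""
    v, v2 = 3*rho, 0.3*rho
    return (66*v + 70*v2 - 10*d_g/m - 32*d_kappa)*1000.0

# Illustrative gas-approximation inputs (assumptions, not values from the text):
Sigma, m_hat = 0.045, 0.0055        # pi-N sigma term, average light quark mass (GeV)
d_kappa = (Sigma/m_hat)*rho0        # linear in-medium shift of kappa(rho)
d_g = -0.65*rho0                    # assumed linear shift of the gluon condensate (GeV^4)
print(U_of_rho(rho0, d_kappa, d_g)) # a few tens of MeV of attraction
```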
Thus the problem of presenting the nucleon potential energy through the in-medium condensates is solved. At the saturation value $\rho=\rho_0$ we find $U=-36\,$MeV in the gas approximation. This should be considered a satisfactory result for such a rough model, especially since there is a compensation of large positive and negative values in the rhs of Eq.(\[186\]).
Note that the simplest account of the nonlinear terms signals a possible saturation mechanism. Following the discussion of Subsection 2.7 and assuming the chiral limit, we present $\Delta\kappa(\rho)=(\Sigma/\hat m)\rho-3.2(p_F/p_{F0})\rho$. Thus we obtain the potential $$\label{189}
U(\rho)\ =\ \left[\left(198-42\,\frac\Sigma{\hat m}\right)
\frac\rho{\rho_0}+133\left(\frac{\rho}{\rho_0}\right)^{4/3}\right]
\mbox{MeV }.$$ After adding the kinetic energy, it provides the minimum of the functional ${\cal E}(\rho)$ defined by Eq.(\[77\]) at $\Sigma=62.8\,$MeV, which is consistent with experimental data — Eq.(\[24\]). The binding energy is ${\cal E}=-9\,$MeV. The incompressibility coefficient $K=9\rho_0(d^2\varepsilon/d\rho^2)$ also has a reasonable value, $K=182\,$MeV.
Of course, the results for the saturation should not be taken too seriously. As we have seen in Sect.3, the structure of the nonlinear terms of the condensate is much more complicated. Also, the result is very sensitive to the exact value of the $\Sigma$-term. For example, assuming it to be larger by 2 MeV, we find the Fermi momentum at the saturation point to be about 1/3 larger than $p_{F0}$. Thus, the value of the saturation density becomes about 2.5 times larger than $\rho_0$. Such a sharp dependence is caused by the form of the nonlinear term in the potential energy — Eq.(\[189\]) — which is due to the oversimplified treatment of the nonlinear effects. However, the result can be a sign that further development of the approach may prove fruitful.
### Relation to conventional models and new points
We obtained a simple mechanism for the formation of the potential energy. Recall that the Ioffe analysis of the QCD vacuum sum rules [@94] attributed the formation of the nucleon mass to the exchange of quarks between the probe nucleon and the quark–antiquark pairs of the vacuum. In nuclear matter the new mass is formed by the exchange with the modified distribution of the quark–antiquark pairs and with the valence quarks. The modified distribution of $\bar qq$ pairs is described by the condensate $\kappa(\rho)$. At $\rho$ close to $\rho_0$ the modification is mostly due to the difference of the densities of $\bar qq$ pairs inside the free nucleons and in the free space. A similar exchange with the valence quarks is determined by the vector condensate $v(\rho)$ and is described by the first term of the rhs of Eq.(\[188\]). The second term describes the additional interaction which takes place during such an exchange. These exchanges cause the shift of the position of the pole $m_m-m$. While the interactions of the nucleons depend on the condensates $\Delta\kappa(\rho)$ and $v(\rho)$, these condensates emerge due to the presence of the nucleons. Also, the nonlinear part of $\Delta\kappa$ is determined by the $NN$ interactions. Thus, there is a certain analogy between the QCD sum rules picture and the NJL mechanism.
As we have seen, the QCD sum rules can be viewed as a connection between the exchange of uncorrelated $\bar qq$ pairs and the exchange of strongly correlated pairs with the same quantum numbers (mesons). This results in a connection between the Lorentz structures of the correlators and of the in-medium nucleon propagators. In the leading terms of the OPE the vector (scalar) structure is determined by the vector (scalar) condensate. The large values (of about 250–300 MeV) of the first and the fourth terms in the rhs of Eq.(\[188\]) thus provide a direct analogy with the QHD picture.
Note, however, that the sum rules presented by Eqs. (\[177\])–(\[179\]) also contain terms which are unusual for the QHD approach. Indeed, the term $Q^{11}$ in Eq.(\[177\]) enters the vector structure of the correlator (and thus of the propagator of the nucleon), corresponding, however, to the exchange of the vacuum quantum numbers with the matter. On the other hand, the last term of the rhs of Eq.(\[179\]), treated in the gas approximation, corresponds to the exchange of the quantum numbers of vector mesons. However, it appears in the scalar structure. This term originates from the four-quark condensate $Q^{12}$. While the exact value of the condensate $Q^{11}$ is still obscure, the condensate $Q^{12}$ is easily calculated. This OPE term is shown in Fig. 13. It provides a noticeable contribution.
Such terms do not emerge in the mean-field approximation of QHD. They can originate from a more complicated structure of the nucleon–meson vertices. (Note that if the nucleons interacted through a four-fermion interaction, such terms would have emerged from the exchange interaction due to the Fierz transform.)
Another approach, developed by the Maryland group, was reviewed by Cohen et al. [@119]. In most of the papers (except [@120]) the Lehmann representation was the departure point. In the framework of this approach the authors investigated the Lorentz structure of the QCD sum rules [@121]. They analysed in detail the dependence of the self-energies on the in-medium value of the scalar four-quark condensate [@122]. The approach was also used for the investigation of hyperons in nuclear matter [@123].
The approach is based on the dispersion relations in the time component $q_0$ at fixed three-dimensional momentum $|\bar q|$. It is not clear if in this case the singularities connected with the probe proton are separated from those of the matter itself. A fixed value of the three-dimensional momentum is a proper characteristic for the in-medium nucleon. That is why this was the choice of variables in the Lehmann dispersion relation, with the Fermi energy as a typical scale. The sum rules, however, are dispersion relations for the correlation function with the possible states $N$, $N+\,$pions, $N^*$, etc. The energy scale is a different one, and it is not clear if this choice of variables is reasonable for the QCD sum rules.
Charge-symmetry breaking phenomena
-----------------------------------
### Nolen–Schiffer anomaly
Nuclei consisting of equal numbers of protons and neutrons, with one more proton or neutron added, are known as mirror nuclei. If the charge symmetry (known also as isospin symmetry) of the strong interactions is assumed, the binding energy difference of mirror nuclei is determined by electromagnetic interactions only, the main contribution being caused by the interaction of the odd nucleon. Nolen and Schiffer [@124] found a discrepancy between the experimental data and the theoretical results on the electromagnetic contribution to the energy difference. This discrepancy appeared to be a growing function of the atomic number $A$, reaching a value of about 0.5 MeV at $A=40$. Later the effect became known as the Nolen–Schiffer anomaly (NSA).
The NSA stimulated a more detailed analysis of the electromagnetic interactions in such systems. Auerbach et al. [@125] studied the influence of Coulomb forces on core polarization, but this did not explain the NSA. Bulgac and Shaginyan [@126] attributed the whole NSA phenomenon to the influence of the nuclear surface on the electromagnetic interactions; they thus predicted the NSA to vanish in an infinite medium.
However, most of the publications on the subject attempt to explain the NSA by charge-symmetry breaking (CSB) of the strong interactions at the hadronic level. The CSB potentials of $NN$ interactions were reviewed by Miller et al. [@127]. Some of the phenomenological potentials described the NSA, but contradicted the experimental data on CSB effects in $NN$ scattering. The meson-exchange potentials incorporate CSB effects through $\rho-\omega$ mixing. This explains a large part of the NSA, but not the whole effect.
On the quark level the CSB effects in the strong interactions are due to the nonzero value of the difference of the quark masses $$\label{190}
m_d\approx7\mbox{ MeV }; \quad m_u\approx4\mbox{ MeV }; \quad
\mu=m_d-m_u\approx3\mbox{ MeV }.$$ Several quark models have been used to investigate the NSA by calculating the neutron–proton mass difference in nuclear matter (recall that in vacuum $m_n-m_p=1.8\,$MeV, while the Coulomb energy difference is –0.5 MeV; hence, the shift caused by the strong interactions is $\delta m=2.3\,$MeV). Henley and Krein [@128] used the NJL model for the quarks with finite values of the current masses. The calculated neutron–proton mass difference appeared to be strongly density dependent, and the result overestimated the value of the NSA. The application of bag models was considered by Hatsuda et al. [@129]. The chiral bag model provided the proper sign of the effect, but underestimated its magnitude.
### QCD sum rules view
The QCD sum rules appear to be a reasonable tool for calculating the neutron–proton binding energy difference for nucleons placed into isospin-symmetric nuclear matter. Denoting $$\label{191}
\delta x\ =\ x_n-x_p$$ for the strong interaction contribution to the neutron–proton difference of any parameter $x$, we present $$\label{192}
\delta\varepsilon\ =\ \delta U+\delta T$$ with $U$ ($T$) the potential (kinetic) energy of the nucleon. To lowest order in powers of the density one finds $$\label{193}
\delta T\ =\ -\ \frac{p^2_F}{m^2}\ \delta m\ ,$$ while the value $\delta U=\delta m_m$ can be obtained from the sum rules.
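As a rough numerical orientation (a sketch added here, not part of the original analysis), Eq. (\[193\]) can be evaluated at the saturation density $\rho_0=0.17\,\rm fm^{-3}$ quoted later in this subsection, using the free nucleon mass $m\approx939$ MeV and the vacuum shift $\delta m=2.3\,$MeV; the resulting kinetic shift is only a fraction of an MeV.

```python
import math

# Sketch: evaluate Eq. (193), delta_T = -(p_F^2/m^2) * delta_m, at saturation.
# rho_0 and delta_m are taken from the text; m = 939 MeV is the standard
# nucleon mass; everything else is derived purely for illustration.
hbar_c = 197.327                # MeV fm
rho_0 = 0.17                    # fm^-3, saturation density
m = 939.0                       # MeV
delta_m = 2.3                   # MeV, strong-interaction n-p shift

# symmetric nuclear matter: rho = 2 p_F^3 / (3 pi^2)   (hbar = c = 1)
p_F = hbar_c * (1.5 * math.pi**2 * rho_0) ** (1.0 / 3.0)
delta_T = -(p_F / m) ** 2 * delta_m

print(f"p_F ~ {p_F:.0f} MeV, delta_T ~ {delta_T:.2f} MeV")
# -> p_F ~ 268 MeV and delta_T ~ -0.19 MeV, i.e. much smaller than delta_U.
```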
Attempts to apply the QCD sum rules to the NSA problem were made in several papers [@129]–[@132]. We shall follow the papers of Drukarev and Ryskin [@132], which present a direct extension of the approach discussed above. It is based on Eqs. (\[182\])–(\[184\]), with the terms ${\cal L}^m_i$ calculated taking into account the finite values of the current quark masses. Besides the quark mass difference, the CSB effects manifest themselves through the isospin-breaking condensates $$\label{194}
\gamma_0=\frac{\langle0|\bar dd-\bar uu|0\rangle}{\langle0|\bar
uu|0\rangle}\ ; \quad \gamma_m= \frac{\langle M|\bar dd-\bar
uu|M\rangle}{\langle M|\bar uu|M\rangle}\ .$$
The characteristics $\gamma_0$ and $\gamma_m$ are not independent degrees of freedom. They vanish at $\mu=0$, being certain (unknown) functions of $\mu$. The dependences $\gamma_0(\mu)$ and $\gamma_m(\mu)$ can be obtained in the framework of specific models. In any case, due to the small values of $m_{u,d}$ we expect $|\gamma_0|$, $|\gamma_m|\ll1$. Following this strategy and keeping only the leading terms, which are linear in $\mu$, we obtain the energy shift in the form $$\label{195}
\delta\varepsilon\ =\ a_1\mu+a_2\gamma_0+a_3\gamma_m$$ with the $a_i$ being functions of the density $\rho$ (the contribution $\delta T$ is included in $a_1$).
To calculate the mass-dependent terms in the lhs of Eqs. (\[182\])–(\[184\]) one should include the quark masses in the in-medium quark propagator — Eq.(\[149\]). The first term in the rhs of Eq.(\[149\]), which is just the free quark propagator, should be modified into $(i\hat x_{\alpha\beta})/(2\pi^2x^4)-m_q/(2\pi^2x^2)$. This provides a contribution to the scalar structure of the correlator in the lowest order of the OPE. The finite quark masses in the second term of Eq.(\[149\]) manifest themselves in the next-to-leading orders of the OPE. For example, in the evaluation of the second term in Subsection 5.2 we used the relations expressed by Eq.(\[51\]), which were obtained for massless quarks. Now the first of them takes the form $$\label{196} \langle\phi^i_b\rangle\ =\
\frac14\ \langle\/\phi^i_a\alpha\rangle- \frac{m_i}{4m}\ \langle
N|\bar\psi_i\psi_i|N\rangle$$ for the flavour $i=u,d$. This leads to the contribution to the vector structure of the correlator proportional to the scalar condensate. Also, the second moment of the scalar distribution is proportional to the vector condensate, contributing to the scalar structure of the correlator.
The leading contribution caused by the scalar structure of the correlator is $$\label{197}
(\delta U)_1\ =\ 0.18\ \frac\mu m\ \frac{v(\rho)}{\rho_0}\mbox{ GeV },$$ while the CSB term originating from the vector structure is $$\label{198}
(\delta U)_2\ =\ -\ 0.031\ \frac\mu m\
\frac{\kappa(\rho)-\kappa(\rho_0)}{\rho_0}\mbox{ GeV }$$ with $\rho_0=0.17\,\rm fm^{-3}$ the saturation density. The term $\Delta{\cal L}^m_s$ of Eq.(\[186\]) provides a contribution containing the CSB condensate $\gamma_m$ — Eq.(\[194\]) $$\label{199} (\delta U)_3\ =\
32\left(\gamma_m^{}(\kappa(\rho)-\kappa(0))
+(\gamma_m-\gamma_0)\kappa(0)\right)/\mbox{ GeV}^{2 }.$$
For the complete calculation one needs the isospin-breaking shifts of the vacuum parameters $\delta\lambda^2$ and $\delta W^2$, while the empirical value of $\delta m$ can be used. Thus, the analysis of CSB effects in vacuum should be carried out in the framework of the method as well. This was done by Adami et al. [@130]. The values of $\delta\lambda^2$, $\delta W^2$ and of the vacuum isospin-breaking parameter $\gamma_0$ were obtained from the quark mass difference and the empirical values of the shifts of the baryon masses. This suggests another form of Eq.(\[195\]) $$\label{200}
\delta\varepsilon\ =\ b_1\mu+b_2\gamma_m$$ with $b_1=a_1+a_2(\gamma_0/\mu)$, $b_2=a_3$. The expressions for the contributions to $\delta\varepsilon$, caused by the shifts of the vacuum values $\delta m$, $\delta\lambda^2$ and $\delta W^2$ are rather complicated. At $\rho=\rho_0$ the corresponding contribution is $$\label{201}
(\delta U)_4\ =\ -0.4\mbox{ MeV }.$$
For the sum $\sum_i(\delta U)_i$ we find, after adding the contribution $\delta T$ $$\label{202}
b_1(\rho_0)=-0.73\ , \qquad b_2(\rho_0)=-1.0\mbox{ GeV }.$$
Numerical results can be obtained once the value of $\gamma_m$ is calculated, which can be done in the framework of certain models. However, some conclusions can be drawn already now. If we expect an increasing restoration of the isospin symmetry with growing density, it is reasonable to assume that $|\gamma_m|<|\gamma_0|$. Also, all the model calculations give $\gamma_0<0$. Thus we expect $\gamma_0<\gamma_m<0$. If $\gamma_m=0$ we find $\delta\varepsilon=-2.4\,$MeV, which practically cancels the vacuum value $\delta m=2.3\,$MeV. Hence, the isospin invariance appears to be restored for both the condensates and the nucleon masses.
The present analysis also enables us to clarify the role of the CSB effects in the scalar channel. Indeed, neglecting these effects, i.e. putting $(\delta U)_2=(\delta U)_3=0$, we obtain $\delta\varepsilon>0$. This contradicts both the experimental values and general theoretical expectations. Thus the CSB effects in the scalar channel turn out to be essential.
Adami and Brown [@130] used the NJL model, combined with BR1 scaling, to calculate the parameter $\gamma_m$. They found $\gamma_m/\gamma_0=(\kappa(\rho)/\kappa(0))^{1/3}$. Substituting this value into Eq.(\[200\]) we find $$\label{203}
\delta\varepsilon\ =\ (-0.9\pm0.6)\mbox{ MeV}$$ with the errors caused mostly by the uncertainty in the value of $\gamma_0$. A more rapid decrease of the ratio $\gamma_m/\gamma_0$ would lead to larger values of $|\delta\varepsilon|$ with $\delta\varepsilon<0$. Putting $\gamma_m=\gamma_0$ gives $\delta\varepsilon=-0.3\,$MeV.
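For orientation only, Eq. (\[200\]) with the coefficients of Eq. (\[202\]) can be evaluated for the limiting cases discussed above. The sketch below is not part of the original analysis: the value of $\gamma_0$ and the condensate ratio used to mimic the Adami–Brown scaling are assumed, illustrative inputs, chosen so that the quoted limits are reproduced only approximately.

```python
# Sketch: evaluate Eq. (200), delta_eps = b1*mu + b2*gamma_m, in MeV.
# b1 and b2 are taken from Eq. (202); gamma_0 below is an assumed
# illustrative value (it is not quoted in this section), and the ratio
# kappa(rho_0)/kappa(0) ~ 0.7 is likewise an assumption used only to
# mimic the Adami-Brown scaling.
b1 = -0.73            # dimensionless (delta_T already included)
b2 = -1000.0          # MeV  (= -1.0 GeV)
mu = 3.0              # MeV, m_d - m_u
gamma_0 = -2.0e-3     # assumed illustrative vacuum CSB condensate ratio

cases = {
    "gamma_m = 0": 0.0,
    "gamma_m = gamma_0": gamma_0,
    "Adami-Brown scaling": gamma_0 * 0.7 ** (1.0 / 3.0),
}
for name, gamma_m in cases.items():
    print(f"{name:22s}: delta_eps ~ {b1 * mu + b2 * gamma_m:5.1f} MeV")
# gamma_m = 0 gives about -2.2 MeV, gamma_m = gamma_0 about -0.2 MeV, and the
# intermediate scaling lands inside the quoted (-0.9 +- 0.6) MeV interval.
```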
Of course, Eq.(\[203\]) is obtained for infinite nuclear matter and it is not clear whether it can be extrapolated to the case $A=40$. We can state that at least a qualitative explanation of the NSA is achieved.
### New knowledge
As we have stated earlier, the QCD sum rules can be viewed as a connection between the exchange of uncorrelated $\bar qq$ pairs between the probe nucleon and the matter and the exchange of strongly correlated pairs with the same quantum numbers (the mesons). In the conventional QHD picture this means that in the Dirac equation for the nucleon in nuclear matter $$\label{204}
(\hat q-\hat V)\psi\ =\ (m+\Phi)\psi$$ the vector interaction $V$ corresponds to exchange by the vector mesons with the matter while the scalar interaction $\Phi$ is caused by the scalar mesons exchange. In the mean field approximation the vector interaction $V$ is proportional to density $\rho$, while the scalar interaction is proportional to the “scalar density” $$\label{205}
\rho_s\ =\ \int\frac{d^3p}{(2\pi)^3}\ \frac{m^*}{\varepsilon(p)}\ ,$$ which is a more complicated function of the density $\rho$ — see Eqs. (\[74\]),(\[76\]). Thus $V=V(\rho)$, while $\Phi=\Phi(\rho_s)$. We have seen that the QCD sum rules provide a similar picture in the lowest orders of the OPE: the vector and scalar parts of the correlator $G^m$ depend on the vector and scalar condensates, respectively: $G^m_{q,p}=G^m_{q,p}(v(\rho))$; $G^m_s=G^m_s(\kappa(\rho))$. However, as we have seen in Subsection 5.2.6, the higher order OPE terms exhibit a somewhat more complicated dependence, e.g. $G^m_s=G^m_s(\kappa(\rho),v(\rho))$, involving both the scalar and the vector condensates. This means that the corresponding scalar interaction is $\Phi=\Phi(\rho_s,\rho)$, requiring an analysis beyond the mean-field approximation.
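For orientation, the "scalar density" of Eq. (\[205\]) can be evaluated explicitly. The sketch below restores the spin–isospin degeneracy factor 4, restricts the integral to the Fermi sphere, and uses an assumed effective mass $m^*\approx0.8\,m$ purely for illustration; it shows that $\rho_s$ stays a few per cent below $\rho$ near $\rho_0$ and that the difference grows with density.

```python
import math

# Sketch of Eq. (205) with degeneracy nu = 4 and integration over the Fermi
# sphere: rho_s = (2/pi^2) * int_0^{p_F} dp p^2 m*/sqrt(p^2 + m*^2).
# The effective mass m* = 0.8 m is an assumption used only for illustration.
hbar_c = 197.327                       # MeV fm
m_star = 0.8 * 939.0                   # MeV

def p_fermi(rho):                      # rho in fm^-3 -> p_F in MeV
    return hbar_c * (1.5 * math.pi**2 * rho) ** (1.0 / 3.0)

def rho_scalar(rho):                   # scalar density in fm^-3
    p_F = p_fermi(rho)
    e_F = math.hypot(p_F, m_star)
    integral = 0.5 * m_star * (p_F * e_F - m_star**2 * math.asinh(p_F / m_star))
    return 2.0 / math.pi**2 * integral / hbar_c**3

for rho in (0.17, 0.34):
    print(f"rho = {rho:.2f} fm^-3 : rho_s/rho = {rho_scalar(rho) / rho:.3f}")
# rho_s/rho ~ 0.96 at rho_0 and drops further with growing density.
```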
As one can see from Eqs. (\[197\]) and (\[198\]), in the case of CSB interactions such complications emerge in the sum-rules approach already in the leading orders of the OPE.
Thus the CSB nuclear forces $V$ and $\Phi$ in Eq.(\[204\]), as motivated by the QCD sum rules, are expected to depend on both the "vector" and "scalar" densities, i.e. $V=V(\rho,\rho_s)$ and $\Phi=\Phi(\rho,\rho_s)$. As we said above, such potentials can emerge due to a complicated structure of the nucleon–meson vertices. This can provide guidelines for building the CSB nucleon–nucleon potentials.
Another new point is the importance of CSB in the scalar channel. Neglecting the scalar-channel CSB interactions, we obtain the wrong sign of the effect, i.e. $\delta\varepsilon>0$. This contradicts the earlier belief that vector-channel $\omega-\rho$ mixing is the main mechanism of the effect [@127]. Our result is supported by the analysis of Hatsuda et al. [@134], who found that the $\omega-\rho$ mixing changes sign for off-shell mesons. This can also help in constructing the CSB nuclear forces.
EMC effect
----------
The experiments carried out by the EMC collaboration [@135] demonstrated that the deep inelastic structure function $F^A_2(x_B)$ of a nucleus with atomic number $A$ ($x_B$ stands for the Bjorken variable) differs from the sum of those of free nucleons. Most of the data were obtained for iron (Fe). The structure function was compared to that of the deuteron, which imitates a system of free nucleons. The deviation of the ratio $$\label{206}
R^A(x_B)\ =\ \left. \frac{F^A_2(x_B)}A\right/\frac{F^D_2(x_B)}2$$ from unity is caused by the deviation of a nucleus from a system of free nucleons. The ratio $R(x_B)$ indeed turned out to depend on $x_B$: it exceeds unity at $x_B<0.2$ and drops at larger $x_B$, reaching the minimum value $R^{\rm Fe}(x_B)\approx0.85$ at $x_B\approx0.7$. This behaviour of the ratio was called the EMC effect.
There are several mechanisms which may cause the deviation of the ratio $R(x_B)$ from unity: the contribution of quark–antiquark pairs hidden in the pions generated by the nucleon–nucleon interactions, the possible formation of multiquark clusters inside the nucleus, etc. Here we shall try to find how the difference between the quark distributions inside the in-medium and free nucleons changes the ratio $R(x_B)$.
The QCD sum rules method was applied to the investigation of the proton deep inelastic structure functions in vacuum in several papers. The second moments of the structure functions were obtained by Kolesnichenko [@136] and by Belyaev and Blok [@137]. The structure function $F_2(x_B)$ at moderate values of $x_B$ was calculated by Belyaev and Ioffe [@117]. Here we shall rely on the approach developed by Braun et al. [@138], which can be generalized to the case of finite densities in a natural way. Such a generalization is, in turn, an extension of the approach discussed in this section.
To obtain the structure function of the proton, the authors of [@138] considered the correlation function $G$, describing a system with the quantum numbers of the proton which interacts twice with a strongly virtual hard photon $$\label{207}
G(q,k)\ =\ i^2\int d^4xd^4ye^{i(qx)+i(ky)} \langle0|T[\eta(x)\bar\eta
(0)]H(y,\Delta)|0\rangle\ .$$ Here $q$ and $q+k$ are the momenta carried by the correlator in the initial and final states, and $k=k_1-k_2$ is the momentum transferred by the photon scattering. The incoming (outgoing) photon carries momentum $k_1$ ($k_2$), interacting with the correlator at the point $y-\Delta/2$ $(y+\Delta/2)$. The quark–photon interaction is represented by the function $H(y,\Delta)$. In the next step a double dispersion relation in the variables $q^2_1=q^2$ and $q^2_2=(q+k)^2$ is considered. The crucial point is the operator expansion in terms of nonlocal operators depending on the light-like $(\Delta^2=0)$ vector $\Delta$ [@138]. After the Borel transform in both $q^2_1$ and $q^2_2$ is carried out and equal Borel masses $M^2_1=M^2_2$ are chosen, the Fourier transform in $\Delta$ provides the momentum distribution of the quarks.
This approach was applied by Drukarev and Ryskin [@139] to the calculation of the quark distributions in a proton placed into nuclear matter. Two types of contributions to the correlator should be considered — Fig. 14a,b. In the diagram of Fig. 14a the photon interacts with a quark of the free loop. In the diagram of Fig. 14b it interacts with the quark exchanged with the matter. The modification of the quark distributions was expressed through the vector condensate, which vanishes in vacuum, and through the in-medium shifts of the other condensates and of the nucleon parameters $m$, $\lambda^2$ and $W^2$. The result appeared to be less sensitive to the value of the four-quark condensate than the characteristics of the nucleon considered in Subsection 5.2.
Note that the results hold only for moderate values of $x$ and cannot be extended to the region $x\ll1$, because the OPE diverges at small $x$ [@117].
Omitting the details of the calculation provided in [@139], we present the results in Fig. 15. One can see that the distributions of the $u$ and $d$ quarks in the fraction $x$ of the target momentum are modified in different ways, with no common scale. The fraction of the momentum carried by the $u$ quarks, $\langle x^u\rangle$, decreases by about 4%. The ratio $R$, determined by Eq.(\[206\]), has a typical EMC shape.
The technique used in [@139] can be extended to the calculation of the quark distributions at $1<x<2$. Thus the approach also enables one to describe the cumulative aspects of the problem.
The difficulties
----------------
In spite of the relative success described above, the approach faces a number of difficulties. Some of them are present in vacuum as well; others emerge only at finite density.
The first problem is the convergence of the OPE in the lhs of the sum rules. Fortunately, the condensates which contribute to the lowest order OPE terms can be either calculated in a model-independent way or expressed through observables. This is true for both vacuum and nuclear matter — at least for densities close to the saturation value $\rho_0$. However, the situation is not so simple for the higher order OPE contributions. The four-quark condensate is the well-known headache of all QCD sum rules practitioners. The problem becomes more complicated at finite density, since the conventional way of presenting this condensate contains strongly cancelling contributions.
In order to include the higher OPE terms one needs additional model assumptions. The same is true for attempts to go beyond the gas approximation at finite densities. Recently Kisslinger [@140] suggested a hybrid of the QCD sum rules and the cloudy bag model.
Note, however, that the QCD sum rules are not a universal tool and there are cases when the OPE does not converge. We mentioned earlier that this takes place for the nucleon structure functions at small $x$ [@117]. Some time ago Eletsky and Ioffe [@141] adduced a case in which short-distance physics plays an important role, making the OPE convergence assumption less convincing. Recently Dutt-Mazumder et al. [@DM] faced a situation in which the ratio of two successive terms of the $q^{-2}$ expansion is not suppressed.
There are also problems with the rhs of the sum rules. The “pole+continuum” model is a very simple ansatz, and it may turn out to be oversimplified even in vacuum. The spectrum of the in-medium nucleon correlator is much more complicated than in vacuum. The problem is to separate the singularities of the correlator connected with the nucleon from those of the medium itself. As we have seen, the “pole+continuum” model can be justified to the same extent as in vacuum as long as we do not include the three-nucleon interactions. We do not have a simple and convincing model of the spectrum which would include such interactions.
Anyway, the success of the vacuum sum rules [@113] and the reasonable results for the nucleons at finite densities described in this Section suggest that further development of the approach is worthwhile.
A possible scenario
===================
The shape of the density dependence of the quark scalar condensate in baryon matter, $\kappa(\rho)$, appears to be very important for hadronic physics. It is a characteristic of the matter as a whole, describing the degree of restoration of the chiral symmetry with growing density. On the other hand, the dependence $\kappa(\rho)$ is believed to determine the change of the nucleon effective mass $m^*(\rho)$. The shape of the dependence $m^*(\kappa(\rho))$ differs in different models.
The lowest order term in the density expansion of the function $\kappa(\rho)$ is model-independent. However, for a rigorous calculation of the higher order terms one needs to know the density dependence of the hadron parameters $m^*(\rho)$, $m^*_\Delta(\rho)$, $f^*_\pi(\rho)$, etc. In Sec.4 we presented the results of calculations of the condensate $\kappa(\rho)$ under certain model assumptions. A more detailed analysis requires the investigation of the dependence of these parameters on the QCD condensates.
Such a dependence can be obtained by using the approach based on the in-medium QCD sum rules. In Sec.5 we showed how the in-medium QCD sum rules for the nucleons work. Even in the somewhat skeptical review of Leinweber [@142], where the present state of the art of applications of the QCD sum rules is criticised, the method is referred to as “the best fundamentally based approach for investigations of hadrons in nuclear matter”. Of course, to proceed further one must try to overcome the difficulties discussed in Subsec.5.5.
The lowest order condensates can be either calculated or connected directly to observables. This is true for both vacuum and nuclear matter. However, neither in vacuum nor in medium can the higher order condensates be obtained without applying certain models. Thus the further steps will require a combination of the QCD sum rules with model assumptions.
We have seen that the density dependence of the delta isobar effective mass is important for the calculation of the nonlinear contribution to the scalar condensate $\kappa(\rho)$. The shape of this dependence is still obscure. Thus an extension of the QCD sum rules method to the description of the in-medium dynamics of $\Delta$-isobars is needed. Such work is going on — see, e.g., the paper of Johnson and Kisslinger [@143].
The fundamental in-medium Goldberger–Treiman and Gell-Mann–Oakes–Renner relations are expected to be the other milestones of the approach. Agreement of the results with those of conventional nuclear physics at $\rho\sim\rho_0$ would be a test of the approach.
Further development of the approach would require the inclusion of vector mesons. Vector meson physics at finite densities is widely studied nowadays. For example, various aspects of the application of QCD sum rules were considered in the recent papers [@DM; @144; @145; @146], while earlier works are cited in the reviews [@119], [@142].
We expect investigations in the framework of this scenario to clarify the features of the baryon parameters and of the condensates in nuclear matter.
We thank V. Braun, M. Ericson, B.L. Ioffe, L. Kisslinger, M. Rho, and E.E. Saperstein for fruitful discussions. We are indebted to Mrs. Galina Stepanova for the assistance in preparation of the manuscript. The work was supported in part by Deutsche Forschungsgemeinschaft (DFG) — grant 436/RUS 113/595/0-1 and by Russian Foundation for Basic Research (RFBR) — grants 0015-96610 and 0002-16853.
[99]{}
Y. Nambu and G. Jona-Lasinio, Phys.Rev. [**122**]{} (1961) 345.
M.L. Goldberger and S.B. Treiman, Phys.Rev. [**110**]{} (1958) 1478.
M. Gell-Mann, R.J. Oakes and B. Renner, Phys.Rev. [**175**]{} (1968) 2195.
J.Gasser and H. Leutwyler, Phys.Rep. [**87**]{} (1982) 77.
A.I. Vainshtein, V.I. Zakharov and M.A. Shifman, JETP Lett. [**27**]{} (1978) 55.
M.A. Shifman, A.I. Vainshtein and V.I. Zakharov, Nucl.Phys. B[**147**]{} (1979) 385, 448, 519.
S. Weinberg, in [*“A Festschrift for I.I. Rabi”*]{}, L. Motz, ed. (New York Academy of Science, New York, 1977).
J.F. Donoghue and C.R. Nappi, Phys.Lett. B[**168**]{} (1986) 105.
M. Anselmino and S. Forte, Z.Phys. C[**61**]{} (1994) 453.
S. Forte, Phys.Rev. D[**47**]{} (1993) 1842.
A.W. Thomas, Adv. Nucl. Phys. [**13**]{} (1984) 1.
M.C. Birse, Prog. Part. Nucl. Phys. [**25**]{} (1990) 1.
E. Reya, Rev.Mod.Phys. [**46**]{} (1974) 545.
S.L. Adler, R.F. Dashen, [*“Current Algebras”*]{} (Benjamin, NY, 1968).
T.P. Cheng and R. Dashen, Phys.Rev.Lett. [**26**]{} (1971) 594.
R. Koch, Z. Phys. C[**15**]{} (1982) 161.
U. Weidner et al., Phys.Rev.Lett. [**58**]{} (1987) 648.
J. Gasser, M.E. Sainio and A. Švarc, Nucl.Phys. B[**307**]{} (1988) 779.
J. Gasser, H. Leutwyler and M.E. Sainio, Phys.Lett. B[**253**]{} (1991) 252, 260.
E.G. Drukarev and E.M. Levin, JETP Lett. [**48**]{} (1988) 338; Sov.Phys. JETP [**68**]{} (1989) 680.
E.G. Drukarev and E.M. Levin, Nucl.Phys. A[**511**]{} (1990) 679.
U. Vogl and W. Weise, Prog.Part.Nucl.Phys. [**27**]{} (1991) 195.
N.-W. Cao, C.M. Shakin, and W.-D. Sun, Phys.Rev. C[**46**]{} (1992) 2535.
J. Gasser, Ann. Phys. [**136**]{} (1981) 62.
H. Hellmann, [*Einführung in die Quantenchemie*]{} (Deuticke Verlag, Leipzig, 1937);\
R.P. Feynman, Phys. Rev. [**56**]{} (1939) 340.
T. Becher and H. Leutwyler, Eur.Phys. J. C[**9**]{} (1999) 643.
V.E. Lyubovitskij, T. Gutsche, A. Faessler, and E.G. Drukarev, Phys.Rev. D[**63**]{} (2001) 054026.
T. Gutsche and D. Robson, Phys. Lett. B[**229**]{} (1989) 333.
J.-P. Blaizot, M. Rho, and. N.N. Scoccola, Phys.Lett. B[**209**]{} (1988) 27.
A. Gamal and T. Frederico, Phys. Rev. C[**57**]{} (1998) 2830.
D.I. Diakonov, V.Yu. Petrov and M. Praszałowicz, Nucl.Phys. B[**323**]{} (1989) 53.
S. Güsken [*et al.*]{}, Phys.Rev. D[**59**]{} (1999) 054504.
D.B. Leinweber, A.W. Thomas, and S.V. Wright, Phys.Lett. B[**482**]{} (2000) 109.
G.M. Dieringer [*et al.*]{}, Z. Phys. C[**39**]{} (1988) 115.
X. Jin, M. Nielsen, and J. Pasupathy, Phys.Lett. B[**314**]{} (1993) 163.
M.A. Shifman, A. I. Vainshtein, and V.I. Zakharov, Phys.Lett. B[**78**]{} (1978) 443.
S.J. Dong, J.-F. Lagaë, and K.F. Liu, Phys.Rev. D[**54**]{} (1996) 5496.
F.J. Yndurain, [*“Quantum Chromodynamics”*]{} (Springer-Verlag, NY, 1983).
V.M. Belyaev and B.L. Ioffe, Sov.Phys. JETP [**56**]{} (1982) 493.
E.V. Shuryak, Nucl. Phys. B[**328**]{} (1989) 85.
C. Itzykson and J.-B. Zuber, [*“Quantum Field Theory”*]{} (McGraw Hill, NY, 1980).
V.M. Braun and A.V. Kolesnichenko, Nucl.Phys. B[**283**]{} (1987) 723.
E.G. Drukarev and M.G. Ryskin, Nucl.Phys. A[**578**]{} (1994) 333.
X. Jin, T.D. Cohen, R.J. Furnstahl, and D.K. Griegel, Phys.Rev. C[**47**]{} (1993) 2882.
V.A. Novikov, M.A. Shifman, A.I. Vainshtein, M.B. Voloshin, and V.I. Zakharov, Nucl.Phys. B[**237**]{} (1984) 525.
L.S. Celenza, C.M. Shakin, W.D. Sun, and J. Szweda, Phys.Rev. C[**51**]{} (1995) 937.
E.G. Drukarev, M.G. Ryskin, and V.A. Sadovnikova, Z.Phys. A[**353**]{} (1996) 455.
A. Bulgac, G.A. Miller, and M. Strikman, Phys.Rev. C[**56**]{} (1997) 3307.
E.G. Drukarev, M.G. Ryskin, and V.A. Sadovnikova, Phys.Atom.Nucl. [**59**]{} (1996) 601.
T.D. Cohen, R.J. Furnstahl, and D.K. Griegel, Phys.Rev. C[**45**]{} (1992) 1881.
G. Chanfray and M. Ericson, Nucl.Phys. A[**556**]{} (1993) 427.
B.L. Friman, V.R. Pandharipande, and R.B. Wiringa, Phys.Rev.Lett. [**51**]{} (1983) 763.
G. Chanfray, M. Ericson, and M. Kirchbach, Mod.Phys.Lett. A[**9**]{} (1994) 279.
M. Ericson, Phys.Lett. B[**301**]{} (1993) 11.
M.C. Birse and J.A. McGovern, Phys.Lett. [**309B**]{} (1993) 231.
M.C. Birse, J.Phys. G[**20**]{} (1994) 1537.
V. Dmitrašinović, Phys.Rev. C[**59**]{} (1999) 2801.
V. Bernard, U.-G. Meissner, I. Zahed, Phys.Rev. D[**36**]{} (1987) 819.
V. Bernard and U.-G. Meissner, Nucl.Phys. A[**489**]{} (1988) 647.
M. Jaminon, R. Mendez-Galain, G. Ripka, and P. Stassart, Nucl.Phys. A[**537**]{} (1992) 418.
M.Lutz, B.Friman, Ch.Appel, Phys. Lett. B[**474**]{} (2000) 7.
A. Bohr and B.R. Mottelson, [*“Nuclear Structure”*]{} (Benjamin, NY, 1969).
H.A. Bethe, Ann.Rev.Nucl.Sci. [**21**]{} (1971) 93.
B.D. Day, Rev.Mod.Phys. [**50**]{} (1978) 495.
J.D. Walecka, Ann.Phys. [**83**]{} (1974) 491.
L.S. Celenza and C.M. Shakin, [*“Relativistic Nuclear Physics”*]{} (World Scientific, Philadelphia, 1986).
R. Brockmann and W. Weise, Phys.Lett. [**69B**]{} (1977) 167.
B.D. Serot and H.D. Walecka, Adv.Nucl.Phys. [**16**]{} (1985) 1.
R.J. Furnstahl and B.D. Serot, Phys.Rev. C[**47**]{} (1993) 2338.
M. Ericson, Ann. Phys. [**63**]{} (1971) 562.
D.H. Wilkinson, Phys.Rev. C[**7**]{} (1973) 930.
G.D. Alkhazov, S.A. Artamonov, V.I. Isakov, K.A. Mezilev, and Yu.V. Novikov, Phys.Lett. B[**198**]{} (1987) 37.
M. Ericson, A. Figureau, and C. Thevenet, Phys.Lett. B[**45**]{} (1973) 19.
M. Rho, Nucl.Phys. A[**231**]{} (1974) 493.
T. Ericson and W. Weise, [*“Pions and Nuclei”*]{} (Clarendon Press, Oxford, 1988).
A.B. Migdal, [*“Theory of Finite Fermi Systems and Application to Atomic Nuclei”*]{} (Willey, NY, 1967).
J.W. Negele, Comm.Nucl.Part.Phys. [**14**]{} (1985) 303.
L.A. Sliv, M.I. Strikman, and L.L. Frankfurt, Sov.Phys. – Uspekhi, [**28**]{} (1985) 281.
G.E. Brown, W. Weise, G. Baym, and J. Speth, Comm.Nucl.Part.Phys. [**17**]{} (1987) 39.
T. Jaroszewicz and S.J. Brodsky, Phys.Rev. C[**43**]{} (1991) 1946.
T.D. Cohen, Phys.Rev. C[**45**]{} (1992) 833.
M. Lutz, S. Klimt, and W. Weise, Nucl.Phys. A[**542**]{} (1992) 521.
M. Jaminon and G. Ripka, Nucl.Phys. A[**564**]{} (1993) 505.
R.D. Carlitz and D.B. Creamer, Ann.Phys. [**118**]{} (1979) 429.
P.A.M. Guichon, Phys.Lett. B[**200**]{} (1988) 235.
K. Saito and A.W. Thomas, Phys.Rev. C[**51**]{} (1995) 2757.
T.H.R. Skyrme, Nucl.Phys. [**31**]{} (1962) 556.
J. Wess and B. Zumino, Phys.Lett. B[**37**]{} (1971) 95.
M. Rho, Ann.Rev.Nucl.Sci. [**34**]{} (1984) 531.
G.S. Adkins, C.R. Nappi, and E. Witten, Nucl.Phys. B[**228**]{} (1983) 552.
G.S. Adkins and C.R. Nappi, Nucl.Phys. B[**233**]{} (1984) 109.
A. Jackson, A.D. Jackson, and V. Pasquier, Nucl.Phys. A[**432**]{} (1985) 567.
A.M. Rakhimov, F.C. Khanna, U.T. Yakhshiev, and M.M. Musakhanov, Nucl.Phys. A[**643**]{} (1998) 383.
D.I. Diakonov and V.Yu. Petrov, hep-ph/0009006.
G.E. Brown and M. Rho, Phys.Rev.Lett. [**66**]{} (1991) 2720.
G.E. Brown and M. Rho, Phys.Rep. [**269**]{} (1996) 333.
B.L. Ioffe, Nucl.Phys. B[**188**]{} (1981) 317.
B.L. Ioffe and A.V. Smilga, Nucl.Phys. B[**232**]{} (1984) 109.
E.G. Drukarev and E.M. Levin, Prog.Part.Nucl.Phys. [**27**]{} (1991) 77.
A.B. Migdal, Rev.Mod.Phys. [**50**]{} (1978) 107.
W.H. Dickhoff, A. Faessler, H. Muther, and S.-S. Wu, Nucl.Phys. A[**405**]{} (1983) 534.
W.H. Dickhoff, A. Faessler, J. Meyer-ter-Vehn and H. Muther, Phys.Rev. C[**23**]{} (1981) 1154.
E.G. Drukarev, M.G. Ryskin, and V.A. Sadovnikova, Eur.Phys.J. A[**4**]{} (1999) 171.
A.B. Migdal, Sov.Phys. JETP [**34**]{} (1972) 1184.
V.A. Sadovnikova, nucl-th/0002025.
V.A. Sadovnikova and M.G. Ryskin, Yad.Phys. [**64**]{} (2001) 3.
A.B. Migdal, E.E. Saperstein, M.A. Troitsky, and D.N. Voskresensky, Phys.Rep. [**192**]{} (1990) 179.
L.D. Landau, Sov.Phys. JETP [**3**]{} (1957) 920.
S.-O. Backman, G.E. Brown, J.A. Niskanen, Phys. Rep. [**124**]{} (1985) 1.
N. Bianchi, V. Muccifora, E. De Santis et al., Phys.Rev. C[**54**]{} (1996) 1688.
A.S. Carroll, I.-H. Chiang, C.B. Dover et al., Phys.Rev. C[**14**]{} (1976) 635.
V.R. Pandharipande, Nucl.Phys. A[**178**]{} (1971) 123.
M.A. Troitsky and N.I. Checunaev, Sov.J.Nucl.Phys. [**29**]{} (1979) 110.
J. Boguta, Phys.Lett. B[**109**]{} (1982) 251.
B.L. Birbrair and A.B. Gridnev, Preprint LNPI-1441 (1988).
D.S. Kosov, C. Fuchs, B.V. Martemyanov, and A. Faessler, Phys.Lett. B [**421**]{} (1998) 37.
M.A. Shifman, [*“Vacuum structure and QCD sum rules”*]{} (North Holland, Amsterdam, 1992).
K.G. Wilson, Phys.Rev. [**179**]{} (1969) 1499.
B.L. Ioffe, Z.Phys. C[**18**]{} (1983) 67.
H. Lehmann, Nuovo Cimento [**11**]{} (1954) 342.
V.M. Belyaev and Y.I. Kogan, Sov.Phys. JETP Lett. [**37**]{} (1983) 730.
V.M. Belyaev and B.L. Ioffe, Nucl.Phys. B[**310**]{} (1988) 548.
B.L. Ioffe, L.N. Lipatov, and V.A. Khoze, [*“Hard Processes”*]{} (Willey, NY, 1984).
T.D. Cohen, R.J. Furnstahl, D.K. Griegel, and X. Jin, Prog.Part.Nucl.Phys. [**35**]{} (1995) 221.
T.D. Cohen, R.J. Furnstahl and D.K. Griegel, Phys.Rev.Lett. [**67**]{} (1991) 961.
R.J. Furnstahl, D.K. Griegel and T.D. Cohen, Phys.Rev. C[**46**]{} (1992) 1507.
X. Jin, M. Nielsen, T.D. Cohen, R.J. Furnstahl, and D.K. Griegel, Phys.Rev. C[**49**]{} (1994) 464.
X. Jin and R.J. Furnstahl, Phys.Rev. C[**49**]{} (1994) 1190.
J.A. Nolen Jr. and J.P. Schiffer, Ann.Rev.Nucl.Phys. [**19**]{} (1969) 471.
E.H. Auerbach, S. Kahana, and J. Weneser, Phys.Rev.Lett. [**23**]{} (1969) 1253.
A. Bulgac and V.R. Shaginyan, Eur.Phys. J. A[**5**]{} (1999) 247;
V.R. Shaginyan, Sov.J.Nucl.Phys. [**40**]{} (1984) 728.
G.A. Miller, B.M.K. Nefkens and I. Slaus, Phys.Rep. [**194**]{} (1990) 1.
E.M. Henley and G. Krein, Phys.Rev.Lett. [**62**]{} (1989) 2586.
T. Hatsuda, H. Hogaasen, and M. Prakash, Phys.Rev. C[**42**]{} (1990) 2212.
C. Adami and G.E. Brown, Z.Phys. A[**340**]{} (1991) 93.
T. Schafer, V. Koch, and G.E. Brown, Nucl.Phys. A[**562**]{} (1993) 644.
E.G. Drukarev and M.G. Ryskin, Nucl.Phys. A[**572**]{} (1994) 560; A[**577**]{} (1994) 375c.
C. Adami, E.G. Drukarev, and B.L. Ioffe, Phys.Rev. D[**48**]{} (1993) 2304.
T. Hatsuda, E.M. Henley, Th. Meissner, and G. Krein, Phys.Rev. C[**49**]{} (1994) 452.
J.J. Aubert et al., Phys.Lett.B[**123**]{} (1983) 275.
A.V. Kolesnichenko, Sov.J.Nucl.Phys.[**39**]{} (1984) 968.
V.M. Belyaev and B.Yu. Blok, Z.Phys. C[**30**]{} (1986) 279.
V. Braun, P. Gornicki, and L. Mankiewicz, Phys.Rev. D[**51**]{} (1995) 6036.
E.G. Drukarev and M.G. Ryskin, Z.Phys. A[**356**]{} (1997) 457.
L.S. Kisslinger, hep-ph/9811497.
A.K. Dutt-Mazumder, R. Hofmann and M. Pospelov, Phys.Rev. C[**63**]{} (2001) 015204.
V.L. Eletsky and B.L. Ioffe, Phys.Rev.Lett. [**78**]{} (1997) 1010.
D.B. Leinweber, Ann.Phys. [**254**]{} (1997) 328.
M.B. Johnson and L.S. Kisslinger, Phys.Rev. C[**52**]{} (1995) 1022.
F. Klingl, N. Kaiser and W. Weise, Nucl.Phys. A[**624**]{} (1997) 527.

S. Leupold and U. Mosel, Phys.Rev. C[**58**]{} (1998) 2939.

F. Klingl and W. Weise, Eur.Phys.J. A[**4**]{} (1999) 225.
|
---
abstract: 'The talks presented in the string theory and supergravity session of the GR16 conference in Durban, South Africa are described below for the proceedings.'
---
**Summary of Session D1(ii), String Theory and Supergravity**
Donald Marolf
Physics Department, Syracuse University\
Syracuse, NY 13244-1130, USA
The strings and supergravity session featured a small but varied collection of talks ranging from studies of exact solutions and solitons to supersymmetry, cosmology, and talks related to the gauge-theory/gravity dualities of[@malda; @ISMY]. Given the breadth of topics and the rather liberal amount of space made available in the proceedings, it seemed best to allow each speaker to describe their talk at some length. What follows is therefore a description of each talk in the order in which they were presented. Each contribution was written by the speaker and only slightly edited by myself as session chair. For full details of the works, please refer to the references provided below.
[**Reduced $D=10$ ${\cal N}=4$ Yang-Mills Theories**]{}
Matthias Staudacher
Staudacher discussed various aspects of the dimensional reductions of the maximally supersymmetric gauge theory, namely ${\cal N}=1$ in $D=10$ to lower dimensions, and in particular the cases $D=10 \rightarrow d=0,1,2,4$. These reductions are relevant since supersymmetry survives the reduction process, allowing one to obtain a number of exact and analytic results. This is particularly important since there exist manifold, largely conjectural relationships to string theory, supermembrane quantization and supergravity models.
The first half of the talk focused on the reductions to $d=0,1,2$. All three cases have been used in different proposals for non-perturbative definitions of string theory and M-theory, going, respectively, by the names IKKT model, deWit-Hoppe-Nicolai or BFSS model, DVV model or matrix string theory. A report was given on a number of recent results concerning the d=0 reduction[@Krauth:2000bv; @Staudacher:2000gx]. This reduction is of interest in its own right and is also relevant to the bound state problem of the $d=1$ reduction. This work demonstrates that the current techniques for counting the number of ground states are not yet consistent and in fact quite incomplete. It is also relevant to the issue of computing exact partition functions in the $d=2$ reduction.
The second half of the talk discussed an ongoing investigation of Maldacena-Wilson loops in the $d=4$ gauge field theory: Using the AdS/CFT correspondence between classical supergravity and the strong coupling limit of the field theory, a number of exact results for these loops have been proposed in the literature. An important problem consists in relating these results to weakly coupled gauge theory. The first-ever two-loop perturbative calculation[@Plefka:2001bu] was discussed. The first main result is that the Maldacena-Wilson loop operator is completely two-loop finite, suggesting finiteness to all orders. The second chief result is that vertex diagrams contribute to the two-loop static potential. Previous lower order calculations in the literature found only (trivial) ladder diagrams to contribute. This indicates that, if AdS/CFT is correct, it indeed solves, at strong coupling, ’t Hooft’s longstanding planar diagram summation problem for this field theory.
[**Rotating black holes in higher dimensions**]{}
Roberto Emparan
There are several motivations for studying General Relativity in dimensions higher than 4. In addition to allowing rich dynamics and new qualitative behavior, higher dimensions are required by most unification schemes, such as string theory. Moreover, in scenarios with large extra dimensions and a low fundamental scale there is the possibility that black holes will be produced and detected in future colliders[@bhlhc]. Such black holes will generically be rotating.
Solutions describing neutral, rotating black holes in higher dimensions were found in[@mp]. For $D\geq 6$ they present new qualitative features which, perhaps surprisingly, have so far attracted little attention. This is the subject of this contribution, which is based on work in progress with Rob Myers.
In $D\geq 5$ dimensions, rotation can take place in more than one plane. However, the talk considered the case where the black hole spins in a single plane, with rotation parameter $a\propto J/M$. As is well known, the amount of rotation that a 4D Kerr black hole can support is limited: if the bound $M\geq a$ is violated, a naked singularity results. As shown in[@mp], a similar bound appears for $D=5$. However, for $D\geq 6$ the horizon is present for arbitrary values of $a$. There is no extremal limit and no bound on the spin. Hence, these ultra-spinning black holes are distinctive of higher dimensions.
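The qualitative difference can be seen directly from the horizon condition of the singly-spinning Myers–Perry solution, which in the conventions of [@mp] (mass parameter $\mu$, rotation parameter $a$) reads $r^{D-5}(r^2+a^2)=\mu$. The following sketch is a numerical illustration added here rather than material from the talk: it solves this condition by bisection and shows that for $D=5$ the root disappears once $a^2>\mu$, while for $D\geq6$ a root exists for every $a$.

```python
# Sketch (illustration only): largest root of the singly-spinning Myers-Perry
# horizon condition r^(D-5) * (r^2 + a^2) = mu, found by bisection.
# Units are arbitrary; mu = 1 throughout.
def horizon_radius(D, a, mu=1.0, r_max=100.0):
    f = lambda r: r ** (D - 5) * (r * r + a * a) - mu
    lo, hi = 1e-12, r_max
    if f(lo) > 0.0 or f(hi) < 0.0:      # no sign change: no horizon found
        return None
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(mid) > 0.0 else (mid, hi)
    return 0.5 * (lo + hi)

for D in (5, 6, 7):
    for a in (0.5, 2.0, 10.0):
        r_h = horizon_radius(D, a)
        status = f"r_h = {r_h:.3f}" if r_h is not None else "no horizon"
        print(f"D = {D}, a = {a:5.1f}: {status}")
# D = 5 loses its horizon once a^2 > mu, whereas D >= 6 always keeps one,
# with r_h shrinking as a grows: the flattened, ultra-spinning regime.
```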
What is the shape of such an ultra-spinning black hole? An analysis of the proper size of the horizon reveals that the black hole is highly flattened along the directions parallel to the plane of rotation—a ‘pancaked’ horizon. Moreover, if the rotation parameter is sent to infinity (with the mass per unit area kept finite), the geometry that results is that of a black membrane. The latter is known to be classically unstable[@gl], so it is natural to ask whether the highly pancaked, ultra-spinning black holes become unstable before reaching this limit, i.e., already at finite values of $a$.
The local stability of 4D black holes has been established in classic work[@chandra]. Global stability has been addressed using the area theorems (and their thermodynamic interpretation), which state that the sum of the areas of the future horizons can not decrease. As for black branes, both local and global arguments lead to the conclusion that they are unstable[@gl].
Both approaches should be applied to ultra-spinning black holes. While the local analysis of linear perturbations is still in progress, global thermodynamics points clearly to an instability of black holes in $D\geq 6$ for sufficiently large values of $a/M$. To see this, the possible decay modes of the black hole were identified and the areas of the final products were compared to that of the initial black hole at the same mass and spin. Several decay modes were studied, such as emission of waves (modeled by a gas of null particles, or using the radiation formulas for near-Newtonian sources), or the black hole breaking apart into several smaller black holes (a classically forbidden process). In all cases the instability sets in at $a/M\approx$ a few. Consistently, this happens only for $D\geq 6$; for $D=4,5$ this approach predicts no instability.
[**Black hole entropy calculations based on symmetries**]{}
Jacek Wiśniewski, with Olaf Dreyer and Amit Ghosh
A microscopic derivation of black hole entropy is one of the greatest challenges to candidate quantum theories of gravity. As an alternative to the existing quantized models, a set of very attractive ideas was recently suggested by Andrew Strominger[@strom] and Stephen Carlip[@carlip]. Their symmetry based approaches are very general and mostly classical. The starting point is an observation that symmetry generating vector fields of black hole space-time form a Diff($S^1$) algebra which, on the level of the algebra of conserved charges associated with these vector fields, becomes a centrally extended Virasoro algebra[@bh]. The black hole space-time turns out to be a representative state of the algebra with a fixed conformal weight. Then, solely using representation theory, one can count the degeneracy of such a state and this gives the entropy of the black hole : $S=2\pi\sqrt{cH_0/6}$. Here, $c$ is the central extension and $H_0$ is the conformal weight of the black hole or, more precisely, the eigenvalue of the Hamiltonian (same as the zero-mode of the Virasoro algebra) for the black hole space-time as the eigenstate.
These calculations, however, face some conceptual and technical problems. Strominger’s calculations are based on [*asymptotic*]{} symmetries. It is not apparent how these symmetries capture the essence of the black hole space-time. In fact, the results are equally applicable to a star having similar asymptotic behavior. Subsequently, Carlip improved on this idea by making the symmetry analysis in the near-horizon region. Conceptually this approach is much more satisfactory in that the black hole geometry is now at the forefront. Ref.[@carlip], however, also faces some technical problems, which were corrected in[@dgw]; some interesting results emerge:
a\) The Lie brackets of the vector fields form a Diff($S^1$) algebra both on and near the horizon.

b\) The vector fields do not admit a well-defined limit to the horizon (horizon penetrating coordinates were used to make this explicit).

c\) It is essential that the entire calculation of the Poisson bracket of charges (i.e., Hamiltonians of the vector fields) is performed at a distance $\epsilon$ away from the horizon and that the limit $\epsilon\to 0$ is taken at the end.

The lack of a limit of the vector fields forbids a clear interpretation of the symmetries in classical gravity. Presumably, this feature is an indication that the true origin of the central charge is quantum mechanical, with $\epsilon$ regarded as a regulator.
There is in fact a whole one-parameter family of such vector fields forming Diff($S^1$), as above. As a result both $c$ and $H_0$ are modified, but in such a way that the entropy is always reproduced (up to a multiplicative factor of $\sqrt 2$). Only for a specific choice of this parameter, motivated by some arguments in the Euclidean signature, can one shift $H_{0}$ by a ‘ground state’ value to get rid of the $\sqrt 2$. However, the meaning of this choice is unclear.
Finally, one can perform a symmetry analysis completely on the horizon within the framework of [*isolated horizons*]{} (see other talks). The framework is naturally suited to address the question of symmetries of black holes in equilibrium (very weak assumptions are made). It turns out that no central extension appears in this case. Summarizing, the current viewpoint of the authors is that probably there is some truth in the symmetry based approaches, but it is hard to avoid details of quantum theory, especially in order to understand the origin of the central charge. This could be the case because one is attempting to give an essentially classical argument for a phenomenon that is inherently quantum mechanical.
[**Exact Super Black Hole Solutions**]{}
R.B. Mann, with J. Kamnitzer and M.E. Knutt
Mann reported on a project which considered the problem of finding exact solutions to supergravity coupled to matter. Very few exact classical solutions to supergravity theories are known that have non-trivial fermionic content. A superparticle (which, if massive, is a D0-brane), a cosmological constant and a super-Liouville field were included as matter sources, each minimally coupled to $(1+1)$-dimensional supergravity.
One of the pitfalls of finding exact solutions is in ensuring that they cannot be reduced by infinitesimal local supersymmetry transformations to purely bosonic solutions. Working in superspace offers a straightforward means of avoiding this difficulty, since a superspace supergravity solution – one which satisfies the constraints – has nonzero torsion beyond that of flat superspace. The torsion is a supercovariant quantity, and as such its value remains unchanged under a gauge transformation. Hence any exact superspace solution with non-zero torsion must necessarily be non-trivial in this sense.
Mann reported on several exact solutions obtained for each of the supermatter sources mentioned above. The exact compensator superfield that describes the supergravity can be used to construct models of two-dimensional supersymmetric black holes with non-trivial curvature. To our knowledge these are the first solutions found using this technique. The superparticle and cosmological solutions had locally constant supercurvature[@RM1], but the super-Liouville solution had locally non-constant curvature[@RM2]. In the latter case the possibility that a gravitino condensate formed was considered, and the implications for the resultant spacetime structure were examined. All such condensate solutions were found to have a condensate and/or naked curvature singularity.
[**Inverse dualisation and non-local dualities between gravity and supergravity theories**]{}
Dmitri Gal’tsov, with Chiang-Mei Chen and Sergei A. Sharakin
Gal’tsov’s talk was based on the preprint hep-th/0109151 which finds classical dualities of a new type between Einstein vacuum gravity in certain dimensions and ten and eleven-dimensional supergravities. The main idea is that Kaluza-Klein two-forms arising in toroidal compactification of vacuum gravity can be dualized in dimensions $D\geq 5$ to higher rank antisymmetric forms and these forms may be identified as matter fields belonging to bosonic sectors of supergravities. While it is perhaps not surprising that the Maxwell equations and the Bianchi identities for the KK fields translate into similar equations for dual higher rank forms, a non-trivial test is whether the dilatonic exponents in the reduced actions are the same. Several cases of such dualities are described. The most interesting is the correspondence between $2+3+6$ dimensional reduction of the eleven-dimensional supergravity and eight-dimensional Einstein gravity with two commuting Killing vectors. A related duality holds between both (suitably compactified) IIA and IIB ten-dimensional supergravities and eight-dimensional Einstein gravity with three commuting Killing vectors. Another case is the correspondence between the ten-dimensional Einstein gravity and a suitably compactified IIB theory. It is worth noting that all dualities of this sort are non-local in the sense that variables of one theory are related to variables of the dual theory not algebraically, but via solving differential equations.
A remarkable fact is that the $11D$-supergravity/$8D$-gravity duality holds not only in the bosonic sector, but also extends to Killing spinor equations exhibiting unbroken supersymmetries of the $11D$ theory. Namely, the existence of Killing spinors in the supergravity framework is equivalent to the existence of covariantly constant spinors in the dual Einstein gravity. It would be interesting to check whether this correspondence found at the linearized level extends non-linearly, i.e. holds for suitably supersymmetrized $8D$ gravity. A more challenging question is whether classical dualities found here have something to do with quantum theories. Although an answer was not presented in the talk, the results concerning the ten-dimensional supergravities look promising in this direction.
[**New supersymmetry algebra on gravitational interaction of Nambu-Goldstone fermion**]{}
Motomu Tsuda, with Kazunari Shima
A supersymmetric composite unified model for spacetime and matter, the superon-graviton model (SGM), based upon the SO(10) super-Poincaré algebra, is proposed in the papers[@ks1; @ks2]. In the SGM, the fundamental entities of nature are the graviton with spin-2 and a quintet of superons with spin-1/2. The fundamental action, which is the analogue of the Einstein-Hilbert (E-H) action of general relativity (GR), describes the gravitational interaction of the spin-1/2 N-G fermions of the Volkov-Akulov (V-A) model[@va] of a nonlinear realization of supersymmetry (NL SUSY), regarded as the fundamental objects (the superon quintet) of matter.
Tsuda’s talk carried out geometrical arguments analogous to those of GR in the SGM spacetime, where the tangent Minkowski spacetime is specified by the coset space SL(2,C) coordinates (corresponding to the N-G fermion) of the NL SUSY of the V-A model[@va] in addition to the ordinary Lorentz SO(3,1) coordinates, and discussed the structure of the fundamental SGM action[@ks2; @st1; @st2]. The overall factor of the SGM action is fixed to ${-c^3 \over 16{\pi}G}$, which reproduces the E-H action of GR in the absence of superons (matter). Also, in Riemann-flat space-time, i.e. for the vierbein $e{_a}^{\mu}(x)
\rightarrow \delta{_a}^{\mu}$, it reproduces the V-A action of NL SUSY[@va] with ${{\kappa}^{-1}}_{V-A} = {c^3 \over 16{\pi}G}{\Lambda}$ in the first-order derivative terms of the superon. Therefore our model (SGM) predicts a (small) non-zero cosmological constant, provided ${\kappa}_{V-A} \sim O(1)$, and possesses two mass scales. Furthermore it fixes the coupling constant of the superon (N-G fermion) with the vacuum to $({c^3 \over 16{\pi}G}{\Lambda})^{1 \over 2}$ (from the low energy theorem viewpoint), which may be relevant to the birth of the universe (of matter and Riemann space-time). The (spacetime) symmetry of the SGM action was also demonstrated. In particular, the commutators of the new NL SUSY transformations on the gravitational interaction of N-G fermions with spin-1/2[@ks2; @st1] and -3/2[@st1; @st3] form a closed algebra, which reveals the N-G (NL SUSY) nature of the fermions and the invariance at least under generalized general coordinate and generalized local Lorentz transformations. In order to linearize the SGM action, the linearization of the $N = 2$ V-A model, which is now under investigation, is extremely important from the physical point of view, for it gives a new mechanism generating a (U(1)) gauge field of the linearized (effective) theory[@ks3].
[**Quantum Cosmology from D-Branes**]{}
P. Vargas Moniz, with A. Yu. Kamenshchik
Recent developments in string theory suggest that, in a Planck length regime, the quantum fluctuations are very large, so that the string coupling increases and consequently the string degrees of freedom would not be the relevant ones. Instead, solitonic degrees of freedom such as D-$p$-branes would become more important. Hence, what would be the effect of those new physical degrees of freedom on, say, the very early universe, in particular from a quantum mechanical point of view?
This work initiated an investigation of D-$p$-brane induced quantum cosmology. It could be pointed out that it may not be justified to quantize an effective theory (arising from a fundamental quantum theory). However, insofar as new fundamental fields and effects arise from the fundamental theory, a quantization of the effective action could capture significant and relevant novel features.
The starting point is the result (obtained by Duff, Khuri and Lu) that the natural metric that couples to a $p$-brane is the Einstein metric multiplied by the dilaton. Employing an (adequately) modified Brans-Dicke action with a $p$-dependent deformation parameter, different quantum cosmological scenarios were analyzed. In particular, several early universe scenarios were identified, similar to quantum Pre-Big-Bang and Universe-anti-Universe creation. The possible quantum mechanical transition amplitudes were also studied with a view towards determining the effect of quantum cosmological solitons in the very early Universe. Finally, it was found that the solutions of the Wheeler-DeWitt equation allowed for a sub-class with $N=2$ SUSY. Other consequences regarding “stringy” cosmology were investigated, namely possible duality transformations in the effective action and their relation to Pre-/Post-Big-Bang scenarios. Possible developments of this work include considering realistic D-brane actions and SUSY extensions.
[**Polarization of the D0 ground state in quantum mechanics and supergravity**]{}
Donald Marolf, with Pedro Silva
Marolf’s talk addressed a quantum version of the dielectric effect described by Myers in[@mye1]. In[@mye1], the application of a Ramond-Ramond background field to a D0-brane system induces a classical dielectric effect and causes the D0-branes to deform into a non-commutative D2-brane. In contrast,[@pedro] places D0-branes in the background generated by a stack of D4-branes. While no classical dielectric effect results, the four-branes modify the potential that shapes the non-abelian character of the quantum D0 bound state. As a result, the bound state is deformed, or polarized.
Two aspects of the deformation were studied and compared with the corresponding supergravity system. Fundamental to this comparison is the connection described by Polchinski[@pol1] relating the size of the matrix theory bound state to the size of the bubble of space that is well-described by classical supergravity in the near D0-brane spacetime. The near D0-brane spacetime is obtained by taking a particular limit in which open strings decouple from closed strings. The result is a ten-dimensional spacetime with small curvature and small string coupling when one is reasonably close (though not too close) to the D0-branes. However, beyond some critical distance $r_c$ the curvature reaches the string scale. As a result, the system beyond $r_c$ is not adequately described by the massless fields of classical supergravity. The goal was thus to compare deformations of the non-abelian D0-brane bound state with the deformations of this bubble of ‘normal’ space.
While the detailed effects were beyond the scope of the work presented, the deformations of the quantum mechanics ground state and the supergravity bubble were shown to have corresponding scaling properties. This supports the idea that the gravity/gauge theory duality associated with D0-branes can be extended to include couplings to nontrivial backgrounds such as those discussed in[@tvr1; @tvr2; @mye1]. A part of this was the analysis in an appendix of infrared issues associated with ’t Hooft scaling in 0+1 dimensions. This in turn strengthens the argument that Polchinski’s upper bound[@pol1] on the size of the D0-brane bound state in fact gives the full scaling with $N$. Corresponding arguments can in fact be made for all Dp/D(p+4)-systems for $p\le 2$.
[99]{} J. Maldacena, [*Adv. Theor. Math. Phys.*]{} [**2**]{} (1998) 231, hep-th/9711200.
N. Itzhaki, J. M. Maldacena, J. Sonnenschein, S. Yankielowicz, Phys.Rev. D58 (1998) 046004, [hep-th/9802042]{}.
W. Krauth and M. Staudacher, Nucl. Phys. B [**584**]{}, 641 (2000) \[hep-th/0004076\]. M. Staudacher, Phys. Lett. B [**488**]{}, 194 (2000) \[hep-th/0006234\]. J. Plefka and M. Staudacher, JHEP [**0109**]{}, 031 (2001) \[hep-th/0108182\]. P. C. Argyres, S. Dimopoulos and J. March-Russell, Phys. Lett. B [**441**]{} (1998) 96 \[hep-th/9808138\]; R. C. Myers and M. J. Perry, Annals Phys. [**172**]{} (1986) 304. R. Gregory and R. Laflamme, Phys. Rev. Lett. [**70**]{} (1993) 2837 \[hep-th/9301052\]; S. Chandrasekhar, [*The Mathematical Theory Of Black Holes*]{}, (Oxford University Press, Oxford U, 1985).
A. Strominger, JHEP 9802 (1998) 009 S. Carlip, Class. Quantum Grav. 16 (1999) 3327 J. D. Brown and M. Henneaux, Comm. Math. Phys. 104 (1986) 207 O. Dreyer, A. Ghosh and J. Wiśniewski, Class. Quantum Grav. 18 (2001) 1929
M.E. Knutt and R.B. Mann, Class.Quant.Grav. [**16**]{} (1999) 937; Phys.Lett. [**B435**]{} (1998) 25.
R.B. Mann and J. Kamnitzer, Nucl. Phys. [**B**]{} (to be published).
K. Shima, [*Z. Phys.*]{} [**C18**]{}, 25 (1983);\
K. Shima, [*European. Phys. J.*]{} [**C7**]{}, 341(1999).
K. Shima, hep-ph/0012320, [*Phys. Lett.*]{} [**B501**]{}, 237 (2001).
D.V. Volkov and V.P. Akulov, [*Phys. Lett.*]{} [**B46**]{}, 109(1973).
K. Shima and M. Tsuda, hep-th/0101178, [*Phys. Lett.*]{} [**B507**]{}, 260 (2001).
K. Shima and M. Tsuda, hep-th/0109042.
K. Shima and M. Tsuda, hep-th/0012235, [*Phys. Lett.*]{} [**B**]{} in press.
K. Shima, Plenary talk at the Fourth International Conference on Symmetry in Nonlinear Mathematical Physics, July 7-14, 2001, Kiev, Ukraine. To appear in the Proceeding.
D. Marolf and P. J. Silva, JHEP [**0108**]{}, 043 (2001) \[hep-th/0105298\]. R.C. Myers, JHEP 9912 (1999) 022, [hep-th/9910053]{}.
J. Polchinski, Prog.Theor.Phys.Suppl. 134 (1999) 158-170, [hep-th/9903165]{}.
W. Taylor, M. Van Raamsdonk, Nucl.Phys. B558 (1999) 63-95, [hep-th/9904095]{}. W. Taylor, M. Van Raamsdonk, Nucl.Phys. B573 (2000) 703-734, [hep-th/9910052]{}.
---
abstract: 'We give a construction of measures whose partial sum of Lyapunov exponents is bounded from below.'
author:
- Henry de Thélin
title: '[**Construction of measures with dilation**]{}'
---
Key words: Lyapunov exponents, volume growth.\
AMS: 28Dxx, 58F11.
[**Introduction**]{} {#introduction .unnumbered}
====================
Let $M$ be a compact $C^1$-Riemannian manifold of dimension $d$ and let $f: M
\mapsto M$ be a $C^1$-map.
For $1 {\leqslant}k {\leqslant}d$, we denote by ${\mathcal{S}}_k$ the set of $C^1$-maps $\sigma :D^k= [0,1]^k \mapsto M$. We define the $k$-volume of $\sigma \in {\mathcal{S}}_k$ with the formula:
$$V(\sigma)=\int_{D^k} | \Lambda^k T_x \sigma | d \lambda(x),$$ where $d \lambda$ is the Lebesgue measure on $D^k$ and $|\Lambda^k T_x \sigma|$ is the norm of the linear map $\Lambda^k T_x \sigma : \Lambda^k T_x D^k
\mapsto \Lambda^k T_{\sigma(x)} M$ induced by the Riemannian metric on $M$.
Some links between the volume growth of iterates of submanifolds of $M$ and the entropy of $f$ have been studied by Y. Yomdin (see [@Y] and [@Gr]), S. E. Newhouse (see [@Ne]), O.S. Kozlovski (see [@Ko]) and J. Buzzi (see [@Bu]).
In this article, we prove that the volume growth of iterates of submanifolds of $M$ allows one to construct invariant measures whose partial sum of Lyapunov exponents is bounded from below. More precisely, for $1 {\leqslant}k {\leqslant}d$ we define the $k$-dilation:
$$d_k:= \limsup_{n \rightarrow \infty} \frac{1}{n} \log \sup_{\sigma \in {\mathcal{S}}_k}
\frac{V(f^n \circ \sigma)}{V(\sigma)}.$$
We will prove the following theorem:
For every integer $k$ between $1$ and $d=\mbox{dim}(M)$ there exists an ergodic measure $\nu(k)$ for which: $$\sum_{i=1}^{k} \chi_i {\geqslant}d_k.$$
Here $\chi_1 {\geqslant}\chi_2 {\geqslant}\dots {\geqslant}\chi_d$ are the Lyapunov exponents of $\nu(k)$.
Notice that when $k=d$ and $f$ is a ramified covering in some sense, the theorem can be deduced from a result due to T.-C. Dinh and N. Sibony (see [@DS] paragraph 2.3).
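To fix ideas, here is a simple illustration that is not part of the original argument. Take $M=\mathbb{R}/\mathbb{Z}$, $d=k=1$, and the doubling map $f(x)=2x \ (\mathrm{mod}\ 1)$. For every $\sigma\in{\mathcal{S}}_1$ we have
$$V(f^n \circ \sigma)=\int_{D^1} 2^n\,|T_x\sigma|\, d\lambda(x)=2^n V(\sigma), \qquad \mbox{hence} \qquad d_1=\log 2.$$
The theorem then guarantees an ergodic measure with $\chi_1 \geq \log 2$; the Lebesgue measure, which is ergodic for $f$ and has $\chi_1=\log 2$, realizes this bound.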
[**Proof of the theorem**]{} {#proof-of-the-theorem .unnumbered}
============================
Let $k$ be a positive integer between $1$ and $d$. We have to prove that there exists an ergodic measure $\nu(k)$ for which $$\sum_{i=1}^{k} \chi_i = \lim_{m \rightarrow \infty}
\frac{1}{m} \int \log | \Lambda^k T_y f^m | d \nu(k)(y) {\geqslant}d_k.$$
For the definition of Lyapunov exponents and for the statement of the previous equality, see [@KH] and [@Ar] chapter $3$.
$ $
The proof of the theorem consists of three steps.
In the first one, we convert the dilation $d_k$ into a dilation of the norms $|\Lambda^k T_{x} f^{n}|$. More precisely, we will find points $x(n_l)$ with $\frac{1}{n_l}
\log |\Lambda^k T_{x(n_l)} f^{n_l}| {\geqslant}d_k - {\varepsilon}$.
In the second part, we will see that the dilation of $|\Lambda^k
T_{x(n_l)} f^{n_l}|$ can be spread out in time. We will give the construction of a measure $\nu_l$ such that $d_k- 2 {\varepsilon}{\leqslant}\frac{1}{m} \int \log | \Lambda^k T_y f^m | d
\nu_l(y)$.
The third step of the proof will be to take the limit in the previous inequality.
[**1) First step**]{} {#first-step .unnumbered}
---------------------
Let $n_l$ be a subsequence such that:
$$\frac{1}{n_l} \log \sup_{\sigma \in {\mathcal{S}}_k}
\frac{V(f^{n_l} \circ \sigma)}{V(\sigma)} \rightarrow d_k.$$
We can now find a sequence $\sigma_{n_l} \in {\mathcal{S}}_k$ which satisfies:
$$\frac{1}{n_l} \log \frac{V(f^{n_l} \circ
\sigma_{n_l})}{V(\sigma_{n_l})} \rightarrow d_k.$$
In the next lemma, we prove that we have dilation for $| \Lambda^k T_x
f^n|$ for some $x$:
For all $l {\geqslant}0$ there exists $x(n_l) \in M$ with:
$$\log |\Lambda^k T_{x(n_l)} f^{n_l}| {\geqslant}\log \left( \frac{ V(f^{n_l}
\circ \sigma_{n_l})}{2V(\sigma_{n_l})} \right).$$
Otherwise we would have an integer $l$ such that for all $x \in M$:
$$|\Lambda^k T_{x} f^{n_l}| {\leqslant}\frac{ V(f^{n_l}
\circ \sigma_{n_l})}{2V(\sigma_{n_l})} .$$
So (see [@Ar] chapter 3.2.3 for properties on exterior powers),
$$V(f^{n_l} \circ \sigma_{n_l})= \int_{D^k} |\Lambda^k T_x(f^{n_l} \circ
\sigma_{n_l})| d \lambda(x) = \int_{D^k} |\Lambda^k
T_{\sigma_{n_l}(x)}f^{n_l} \circ \Lambda^k T_x \sigma_{n_l}| d \lambda(x)$$ is bounded from above by
$$\int_{D^k} |\Lambda^k
T_{\sigma_{n_l}(x)}f^{n_l} | \, | \Lambda^k T_x \sigma_{n_l}| \, d \lambda(x)
{\leqslant}\frac{ V(f^{n_l} \circ \sigma_{n_l})}{2V(\sigma_{n_l})}\int_{D^k} | \Lambda^k T_x \sigma_{n_l}| \, d \lambda(x) = \frac{ V(f^{n_l} \circ \sigma_{n_l})}{2}$$
and we obtain a contradiction.
There exists a sequence ${\varepsilon}(l)$ which converges to $0$ such that:
$$\frac{1}{n_l} \log |\Lambda^k T_{x(n_l)} f^{n_l}| {\geqslant}d_k - {\varepsilon}(l),$$ for some points $x(n_l)$ in $M$.
[**2) Second step**]{} {#second-step .unnumbered}
----------------------
In this section, we will spread out in time the previous dilation.
Let $m$ be a positive integer. We will now cut $n_l$ in $m$ different ways.
By using Euclidean division, we can find $q_l^i$ and $r_l^i$ (for $i=0, \dots ,m-1$) such that:
$$n_l=i+ m \times q_l^i + r_l^i$$ with $0 {\leqslant}r_l^i < m$.
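For concreteness, and purely as an illustration with values not taken from the text, let $n_l=17$ and $m=5$. The five decompositions are
$$17=0+5\cdot 3+2=1+5\cdot 3+1=2+5\cdot 3+0=3+5\cdot 2+4=4+5\cdot 2+3,$$
so $(q_l^0,\dots,q_l^4)=(3,3,3,2,2)$ and $(r_l^0,\dots,r_l^4)=(2,1,0,4,3)$, each remainder satisfying $0\leq r_l^i<5$.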
If $i \in \{0, \dots , m-1 \}$, we have:
$$| \Lambda^k T_{x(n_l)} f^{n_l} | {\leqslant}|\Lambda^k T_{f^{i+ mq_l^i}(x(n_l))}f^{r_l^i}| \times
\prod_{j=0}^{q_l^i-1} | \Lambda^k T_{f^{i+jm}(x(n_l))} f^m | \times |
\Lambda^k T_{x(n_l)} f^i |,$$
so, by using the previous corollary,
$$n_l(d_k- {\varepsilon}(l)) {\leqslant}\log |\Lambda^k T_{f^{i+
mq_l^i}(x(n_l))}f^{r_l^i}| + \sum_{j=0}^{q_l^i-1} \log | \Lambda^k
T_{f^{i+jm}(x(n_l))} f^m | + \log |
\Lambda^k T_{x(n_l)} f^i |.$$
If we sum over the $m$ different ways to write $n_l$, we obtain:
$$m n_l (d_k - {\varepsilon}(l)) {\leqslant}\sum_{i=0}^{m-1} \log |\Lambda^k T_{f^{i+
mq_l^i}(x(n_l))}f^{r_l^i}|+\sum_{i=0}^{m-1} \sum_{j=0}^{q_l^i-1} \log | \Lambda^k
T_{f^{i+jm}(x(n_l))} f^m | + \sum_{i=0}^{m-1} \log |
\Lambda^k T_{x(n_l)} f^i |.$$
We have to transform this estimate into a relation involving a measure. To do so, we remark that:
$$\log | \Lambda^k T_{f^p(x(n_l))} f^m | = \int \log | \Lambda^k T_y
f^m | d \delta_{f^p(x(n_l))}(y),$$ where $\delta_{f^p(x(n_l))}$ is the Dirac measure at the point $f^p(x(n_l))$.
So the previous inequality becomes:
$$d_k- {\varepsilon}(l) {\leqslant}a_l + \frac{1}{m} \int \log | \Lambda^k T_y
f^m | d \left(
\frac{1}{n_l} \sum_{i=0}^{m-1} \sum_{j=0}^{q_l^i-1}
\delta_{f^{i+mj}(x(n_l))} \right) (y) + b_l$$
with $$a_l= \frac{1}{mn_l} \sum_{i=0}^{m-1} \log |\Lambda^k T_{f^{i+
mq_l^i}(x(n_l))}f^{r_l^i}|$$ and
$$b_l =\frac{1}{m n_l} \sum_{i=0}^{m-1} \log | \Lambda^k T_{x(n_l)} f^i
|.$$
Now, because $f$ is a $C^1$-map we have: $$a_l {\leqslant}\frac{1}{m n_l} \sum_{i=0}^{m-1} \log L^{mk} {\leqslant}\frac{k
m^2}{mn_l} \log L$$ where $L = \max(\max_x |T_x f|,1)$ and: $$b_l {\leqslant}\frac{1}{m n_l} \sum_{i=0}^{m-1} \log L^{mk} {\leqslant}\frac{k
m^2}{m n_l} \log L.$$ So the sequences $a_l$ and $b_l$ are bounded from above by a sequence which converges to $0$ when $l$ goes to infinity.
In conclusion, we have:
$${\label{eq1}}
d_k- {\varepsilon}'(l) {\leqslant}\frac{1}{m} \int \log | \Lambda^k T_y f^m | d
\nu_l(y)$$
with $$\nu_l=\frac{1}{n_l} \sum_{i=0}^{m-1} \sum_{j=0}^{q_l^i-1}
\delta_{f^{i+mj}(x(n_l))},$$ and ${\varepsilon}'(l)$ a sequence which converges to $0$.
[**3) Third step**]{} {#third-step .unnumbered}
---------------------
The aim of this section is to take a limit for $\nu_l$ in the equation (\[eq1\]).
First, observe that $\nu_l= \frac{1}{n_l} \sum_{p=0}^{n_l-m}
\delta_{f^p(x(n_l))}$ and that the sequence $ \frac{1}{n_l}
\sum_{p=0}^{n_l-1} \delta_{f^p(x(n_l))} - \nu_l$ converges to $0$ (it is a positive measure of total mass at most $m/n_l$). In particular, there exists a subsequence of $\nu_l$ which converges to a probability measure $\nu$ that is invariant under $f$ and independent of $m$. We keep denoting by $\nu_l$ the subsequence which converges to $\nu$. To complete the proof of the theorem, we have to take the limit in equation (\[eq1\]). However, we have to be careful because the function $y \mapsto \log | \Lambda^k T_y f^m |$ is not continuous. Nevertheless, we have the following lemma:
$$\limsup_{l \rightarrow \infty} \frac{1}{m} \int \log | \Lambda^k T_y f^m | d \nu_l(y) {\leqslant}\frac{1}{m} \int \log |\Lambda^k T_y f^m | d \nu(y).$$
For $r \in {\mathbb{N}}$, let $\Phi_r(y)= \max (\log | \Lambda^k T_y f^m |, -r)$.
The functions $\Phi_r$ are continuous and the sequence $\Phi_r$ decreases to the map $y
\mapsto \log | \Lambda^k T_y f^m |$ when $r$ goes to infinity.
Then:
$$\frac{1}{m} \int \log | \Lambda^k T_y f^m | d \nu_l(y) {\leqslant}\frac{1}{m}
\int \Phi_r(y) d \nu_l(y),$$
and,
$$\limsup_{l \rightarrow \infty} \frac{1}{m} \int \log | \Lambda^k T_y f^m | d \nu_l(y)
{\leqslant}\frac{1}{m} \int \Phi_r(y) d \nu(y)$$
because $\Phi_r$ is continuous. Now, we obtain the lemma by using the monotone convergence theorem.
It remains to take the limit in equation (\[eq1\]). We then obtain the following:
For all $m$, we have:
$$d_k {\leqslant}\frac{1}{m} \int \log |\Lambda^k T_y f^m | d \nu(y).$$
In particular,
$$d_k {\leqslant}\int \sum_{i=1}^{k} \chi_i(y) d \nu(y)$$
where the $\chi_1 {\geqslant}\chi_2 {\geqslant}\dots {\geqslant}\chi_d$ are the Lyapunov exponents of $\nu$. Finally, by using the ergodic decomposition of $\nu$, we obtain the existence of an ergodic measure $\nu(k)$ with:
$$d_k {\leqslant}\sum_{i=1}^{k} \chi_i.$$
[00]{}
L. Arnold, *Random Dynamical Systems*, Springer Monographs in Mathematics, Springer-Verlag, (1998).
J. Buzzi, *Entropy, volume growth and Lyapunov exponents*, preprint (1996).
T.-C. Dinh and N. Sibony, *Dynamique des applications d’allure polynomiale*, J. Math. Pures Appl., [**82**]{} (2003), 367-423.
M. Gromov, *Entropy, homology and semialgebraic geometry*, Astérisque, [**145-146**]{} (1987), 225-240.
A. Katok and B. Hasselblatt, *Introduction to the modern theory of dynamical systems*, Encycl. of Math. and its Appl., vol. 54, Cambridge University Press, (1995).
O.S. Kozlovski, *An integral formula for topological entropy of $C^{\infty}$ maps*, Ergodic Theory Dynam. Systems, [**18**]{} (1998), 405-424.
S. E. Newhouse, *Entropy and volume*, Ergodic Theory Dynam. Systems, [**8**]{} (1988), 283-299.
Y. Yomdin, *Volume growth and entropy*, Israel J. Math., [**57**]{} (1987), 285-300.
Henry de Thélin\
Université Paris-Sud (Paris 11)\
Mathématique, Bât. 425\
91405 Orsay\
France
---
abstract: 'In this paper we apply the recently developed mimetic discretization method to the mixed formulation of the Stokes problem in terms of vorticity, velocity and pressure. The mimetic discretization presented in this paper and in [@kreeftpalhagerritsma2011] is a higher-order method for curvilinear quadrilaterals and hexahedrals. Fundamental is the underlying structure of oriented geometric objects, the relation between these objects through the boundary operator and how this defines the exterior derivative, representing the grad, curl and div, through the generalized Stokes theorem. The mimetic method presented here uses the language of differential $k$-forms with $k$-cochains as their discrete counterpart, and the relations between them in terms of the mimetic operators: reduction, reconstruction and projection. The reconstruction consists of the recently developed mimetic spectral interpolation functions. The most important result of the mimetic framework is the commutation between differentiation at the continuous level with that on the finite dimensional and discrete level. As a result operators like gradient, curl and divergence are discretized exactly. For Stokes flow, this implies a pointwise divergence-free solution. This is confirmed using a set of test cases on both Cartesian and curvilinear meshes. It will be shown that the method converges optimally for all admissible boundary conditions.'
address: 'Delft University of Technology, Faculty of Aerospace Engineering, Kluyverweg 2, 2629 HT Delft, The Netherlands.'
author:
- Jasper Kreeft
- Marc Gerritsma
bibliography:
- './literature.bib'
title: 'Mixed Mimetic Spectral Element Method for Stokes Flow: A pointwise divergence-free solution'
---
[^1] [^2]
Introduction {#sec:introduction}
============
We consider Stokes flow, which models a viscous, incompressible fluid flow in which the inertial forces are negligible with respect to the viscous forces, i.e. when the Reynolds number is very small, $Re\ll1$. Since $Re=UL/\nu$, small Reynolds numbers appear when considering extremely small length scales, very viscous liquids or very slow flows. Despite the simple appearance of the Stokes flow model, there exists a large number of numerical methods to simulate it. They all fall into one of two classes: methods that circumvent the Ladyzhenskaya-Babuška-Brezzi (LBB) stability condition and methods that satisfy this condition, [@francahughes1988]. The first class can roughly be split into two subclasses: the group of stabilized methods, see e.g. [@bochevdohrmanngunzburger; @hughes1986] and the references therein, and the group of least-squares methods, see e.g. [@bochevgunzburger; @jiang].
The class that satisfies the LBB condition is the group of compatible methods. In compatible methods discrete vector spaces are constructed such that they satisfy the discrete LBB condition. Best known are the curl conforming Nédélec [@nedelec1980] and divergence conforming Raviart-Thomas [@raviartthomas1977] and Brezzi-Douglas-Marini [@brezzidouglasmarini1985] spaces. A subclass of compatible methods consists of [*mimetic methods*]{}. Mimetic methods do not solely search for appropriate vector spaces, but try to mimic structures and symmetries of the continuous problem, see [@bochevhyman2006; @brezzibuffa2010; @kreeftpalhagerritsma2011; @mattiussi2000; @perot2011; @subramanian2006; @tonti1]. As a consequence of this mimicking, mimetic methods automatically preserve structures of the continuous formulation.
At the heart of the mimetic method we present is the generalized Stokes theorem, which couples the exterior derivative to the boundary operator. In vector calculus this theorem is equivalent to the classical Newton-Leibnitz, Stokes circulation and Gauss divergence theorems. These well-known theorems relate the vector operators grad, curl and div to the restriction to the boundary of a manifold. Therefore, obeying geometry and orientation will result in satisfying exactly the mentioned theorems, and consequently performing the vector operators exactly in a finite dimensional setting. This is indeed what we are looking for and what our mimetic method has in common with finite volume methods, [@harlowwelch1965; @subramanian2006]. In a three dimensional space we distinguish between four types of submanifolds, that is, points, lines, surfaces and volumes, and two types of orientation, namely, outer- and inner-orientation. The inner and outer orientations can be seen as generalizations of the concept of tangential and normal in vector calculus, respectively. This geometric structure will form the backbone of the mimetic method to be discussed in this paper. It will reappear throughout the paper in various guises. Examples of submanifolds in $\mathbb{R}^3$ are shown in together with the action of the boundary operator.
![The four geometric objects possible in $\mathbb{R}^3$, point, line, surface and volume, with outer- (above) and inner- (below) orientation. The boundary operator, $\partial$, maps $k$-dimensional objects to $(k-1)$-dimensional objects.[]{data-label="fig:manifoldswithorientation"}](orientationComplex.pdf){width="70.00000%"}
By creating a quadrilateral or hexahedral mesh, we divide the physical domain into a large number of these geometric objects, and to each geometric object we associate a discrete unknown. This implies that these discrete unknowns are *integral quantities*. Since the generalized Stokes theorem is an integral equation, it follows for example that taking a divergence in a volume is equivalent to taking the sum of the integral quantities associated to the surrounding surface elements, i.e. the fluxes. So using integral quantities as degrees of freedom to perform a vector operation like grad, curl or div is equivalent to taking the sum of the degrees of freedom located at its boundary. These relations are of purely topological nature and can be seen as the horizontal connections between the geometric objects in . The vertical connections – not shown there – describe the metric-dependent parts, which are better known as the constitutive relations.
In this work we use the language of differential geometry to identify these structures, since it clearly identifies the metric and metric-free part of the PDEs. The latter has a discrete counterpart in the language of algebraic topology. In mimetic methods we employ commuting diagrams to indicate the strong analogy between differential geometry and algebraic topology. The most important commuting property employed in this work is the commutation between the projection operator and differentiation in terms of the exterior derivative. This means that also in finite dimensional spaces, operations like gradient, curl and divergence are performed exactly. This implies, among others, and most importantly that incompressible Navier-Stokes and Stokes flow are guaranteed to be pointwise divergence-free, because the projection operator commutes with the divergence operator.
The similarities between differential geometry and algebraic topology in physical theories were first described by Tonti, [@tonti1]. A mimetic framework relating differential forms and cochains was initiated by Hyman and Scovel, [@HymanScovel1988], and extended first by Bochev and Hyman, [@bochevhyman2006], and later by Kreeft, Palha and Gerritsma [@kreeftpalhagerritsma2011]. A framework, closely related to the mentioned mimetic framework, is the finite element exterior calculus framework by Arnold, Falk and Winther [@arnoldfalkwinther2006; @arnoldfalkwinther2010]. A more geometric approach is described in the work by Desbrun et al. [@desbrun2005c; @desbrun2005]. An excellent introduction and motivation for the use of differential forms in the description of physics and the use in numerical modeling can be found in the ‘Japanese papers’ by Bossavit, [@bossavit1998; @bossavit9900].
We make use of spectral element interpolation functions as basis functions. In the past nodal spectral elements were mostly used in combination with Galerkin (GSEM) [@bernardimayday; @karniadakissherwin], and least-squares formulations (LSSEM) [@pontaza2003; @prootgerritsma2002]. The GSEM satisfies the LBB compatibility condition by lowering the polynomial degree of the pressure by two with respect to the velocity. This results in a method that is only weakly divergence-free, meaning that the divergence of the velocity field only converges to zero with mesh refinement. The LSSEM circumvents the LBB condition in order to be able to use equal order polynomials. The drawback of this method is the poor mass conservation property, [@kattelans2009; @prootgerritsma2006].
The present study uses mimetic spectral element interpolation or basis functions on curvilinear quadrilaterals and hexahedrals of arbitrary order as described in [@gerritsma2011; @kreeftpalhagerritsma2011]. The mixed mimetic spectral element method (MMSEM) satisfies the LBB condition and gives a pointwise divergence-free solution for all mesh sizes. The mimetic spectral element interpolation functions are tensor product based interpolants. In every coordinate direction either a nodal or an edge interpolation function is used. By using tensor products, we are able to interpolate points, lines, surfaces, volumes, hyper-volumes and higher degree $n$-cube manifolds.
Although mimetic spectral elements are used to simulate Stokes flow and to derive numerical properties, alternative compatible/mimetic functions could be used in combination with the mimetic framework without much change, e.g., compatible B-splines, [@buffa2011; @buffa2011b; @evans2011], and mimetic B-splines, [@back2011].
This paper is organized as follows: first in the Stokes problem in terms of vector calculus is given, with its relation to geometry and orientation. In a brief summary of the most important concepts from differential geometry is given. discusses the discretization of the Stokes model. It introduces the discrete structures of algebraic topology and a set of mimetic operators relating differential forms to cochains; the reduction operator, $\mathcal{R}$, the reconstruction operator, $\mathcal{I}$, and its composition, the bounded cochain projection, $\pi_h:=\mathcal{I}\circ\mathcal{R}$. As reconstruction functions the mimetic spectral element basis functions are used in this paper. A mixed formulation for the Stokes problem is formulated in . In numerical results are discussed that show optimal convergence of all variables on curvilinear quadrilateral meshes. Secondly, the lid-driven cavity problem is shown on a square, cubic and triangle domain. The last test case is the flow around a cylinder moving with a constant velocity.
Stokes problem in vector calculus {#sec:stokes}
=================================
Let $\Omega\subset\mathbb{R}^n$, $n=2,3$, be a bounded $n$-dimensional domain with boundary $\p\Omega$. On this domain we consider the Stokes problem, consisting of a momentum equation and the incompressibility constraint, resulting from the conservation of mass. The Stokes problem is given by
$$\begin{aligned}
\nabla\cdot\sigma&=\vec{f}\quad\mathrm{on}\ \Omega,\\
\mathrm{div}\,\vec{u}&=0\quad\mathrm{on}\ \Omega,\end{aligned}$$
where the stress tensor $\sigma$ is given by $$\sigma=-\nu\nabla\vec{u}+p I,$$ with $\vec{u}$ the velocity vector, $p$ the pressure, $\vec{f}$ the forcing term and $\nu$ the kinematic viscosity. In case of velocity boundary conditions the pressure is only determined up to a constant. So in a post-processing step either the pressure at a point in $\Omega$ can be set, or a zero average pressure can be imposed, i.e. $$\int_\Omega p\ud\Omega=0.$$
For the method we would like to present, we want to restrict ourselves to vector operations only. Therefore, instead of considering the divergence of a stress tensor, $\nabla\cdot(\nu\nabla\vec{u})$, we write this as $\nu\Delta\vec{u}$ by considering constant viscosity. Then the following vector identity is used for the vector Laplacian, $-\Delta\vec{u}=\mathrm{curl}\,\mathrm{curl}\,\vec{u}-\mathrm{grad}\,\mathrm{div}\,\vec{u}$. The vorticity-velocity-pressure formulation is obtained by introducing vorticity as an auxiliary variable, $\vec{\omega}=\mathrm{curl}\,\vec{u}$. In terms of a system of first-order PDEs, the Stokes problem becomes
\[stokessinglevector\] $$\begin{aligned}
\vec{\omega}-\mathrm{curl}\,\vec{u}&=0\quad\mathrm{on}\ \Omega,\label{stokessinglevector1}\\
\mathrm{curl}\,\vec{\omega}+\mathrm{grad}\,p&=\vec{f}\quad\mathrm{on}\ \Omega,\label{stokessinglevector2}\\
\mathrm{div}\,\vec{u}&=0\quad\mathrm{on}\ \Omega.\label{stokessinglevector3}\end{aligned}$$
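As a quick sanity check of the vector identity $-\Delta\vec{u}=\mathrm{curl}\,\mathrm{curl}\,\vec{u}-\mathrm{grad}\,\mathrm{div}\,\vec{u}$ used above, the following symbolic computation verifies it componentwise for a generic smooth vector field in Cartesian coordinates. It is an illustrative sketch, not part of the original text; it assumes SymPy is available, and the component names are placeholders.

```python
# Symbolic check of  curl(curl u) - grad(div u) = -Laplacian(u)  in R^3.
import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence, gradient

C = CoordSys3D('C')
u1, u2, u3 = [sp.Function(name)(C.x, C.y, C.z) for name in ('u1', 'u2', 'u3')]
u = u1*C.i + u2*C.j + u3*C.k

def scalar_laplacian(f):
    return sp.diff(f, C.x, 2) + sp.diff(f, C.y, 2) + sp.diff(f, C.z, 2)

# Componentwise vector Laplacian of u.
lap_u = scalar_laplacian(u1)*C.i + scalar_laplacian(u2)*C.j + scalar_laplacian(u3)*C.k

# The identity states that this residual vanishes identically.
residual = curl(curl(u)) - gradient(divergence(u)) + lap_u
print([sp.simplify(residual.dot(e)) for e in (C.i, C.j, C.k)])  # expect [0, 0, 0]
```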
Since these PDEs should hold on a certain physical domain, we will include geometry by means of integration. In that case we can relate every physical quantity to a geometric object. Starting with the incompressibility constraint we have due to Gauss’ divergence theorem, $$\int_\mathcal{V}\div\vec{u}\,\ud\mathcal{V}=\int_{\p\mathcal{V}}\vec{u}\cdot\vec{n}\,\ud\mathcal{S}=0,$$ and by means of Stokes’ circulation theorem the relation can be written as $$\int_\mathcal{S}\vec{\omega}\times\vec{n}\,\ud\mathcal{S}=\int_\mathcal{S}\curl\vec{u}\times\vec{n}\,\ud\mathcal{S}=\int_{\p\mathcal{S}}\vec{u}\cdot\vec{t}\,\ud l.$$ From the first relation it follows that $\div\vec{u}$ is associated to volumes. The association to a geometric object for velocity $\vec{u}$ is less clear. In fact it can be associated to two different types of geometric objects. In the incompressibility constraint velocity denotes a flux *through* a surface that bounds a volume, while in the circulation relation velocity is defined *along* a line that bounds the surface. We will call the velocity vector [*through*]{} a surface [*outer-oriented*]{} and the velocity [*along*]{} a line segment [*inner-oriented*]{}. Similarly, vorticity has two different representations, either as the rotation [*in*]{} a plane as Stokes’ circulation theorem above suggests, or the Biot-Savart description of rotation [*around*]{} a line. In the former case $\vec{\omega}$ is [*inner-oriented*]{} whereas in the latter case $\vec{\omega}$ is [*outer-oriented*]{}, see also . In fact both the velocity vector $\vec{u}$ and the vorticity vector $\vec{\omega}$ themselves do not have a connection with geometry. Therefore, it is the terms $\vec{u}\cdot\vec{t}\,\ud l$, $\vec{u}\cdot\vec{n}\,\ud\mathcal{S}$, $\vec{\omega}\times\vec{n}\,\ud\mathcal{S}$ and $\vec{\omega}\times\vec{t}\,\ud l$ that are the *more useful variables* when considering the Stokes problem on a physical domain.
The last equation to be considered is . This equation shows that the classical Newton-Leibnitz, Stokes circulation and Gauss divergence theorems tell only half the story. From the perspective of the classical Newton-Leibnitz theorem, the gradient acting on the pressure relates line values to their corresponding end points, while the Stokes circulation theorem shows that the curl acting on the vorticity vector relates surface values to the line segment enclosing it. So how does this fit into one equation? In fact there exist two gradients, two curls and two divergence operators. One of each is related to the mentioned theorems as explained above. The others are formal adjoint operators, so the second gradient is the adjoint of a divergence that is related to the Gauss divergence theorem, the second curl is the adjoint of the curl related to the Stokes circulation theorem and the second divergence is the adjoint of the gradient related to the classical Newton-Leibnitz theorem. Let grad, curl and div be the original differential operators associated to the mentioned theorems, then the formal Hilbert adjoint operators grad$^*$, curl$^*$ and div$^*$ are defined as, $$\big(\vec{a},\mathrm{grad}^*\,b\big):=\big(\div\vec{a},b\big),\quad \big(\vec{a},\mathrm{curl}^*\,\vec{b}\big):=\big(\curl\vec{a},\vec{b}\big),\quad \big(a,\mathrm{div}^*\,\vec{b}\big):=\big(\grad a,\vec{b}\big).$$ Adjoint operators act in the opposite geometric direction: where div relates a vector quantity associated to surfaces to a scalar quantity associated to the volume enclosed by these surfaces, its adjoint operator, grad$^*$, relates a scalar quantity associated with a volume to a vector quantity associated with its surrounding surfaces. This is illustrated in . Following , the adjoint operators consist of three consecutive steps: First, switch to the other type of orientation (inner $\rightarrow$ outer or outer $\rightarrow$ inner), then take the derivative and finally switch the result back to its original orientation.
![Geometric interpretation of the action of the boundary operators, vector differential operators and their formal Hilbert adjoint operators.[]{data-label="fig:manifoldswithorientation2"}](orientationComplex2.pdf){width="70.00000%"}
The momentum equation could then either be associated to an inner-oriented line segment by rewriting it as $$\mathrm{curl}^*\,\vec{\omega}+\grad p=\vec{f},$$ or be associated to an outer-oriented surface by rewriting it as $$\curl\vec{\omega}+\mathrm{grad}^* p=\vec{f}.$$ Without geometric considerations we could never make a distinction between grad, curl and div and their associated Hilbert adjoints div$^*$, curl$^*$ and grad$^*$. Vector calculus does not make this distinction.
Since our focus is on obtaining a pointwise divergence-free result, we decide to use a formulation associated to outer-oriented geometric objects. Then the Stokes problem becomes,
$$\begin{aligned}
\vec{\omega}-\mathrm{curl}^*\,\vec{u}&=0,\\
\curl\vec{\omega}+\mathrm{grad}^*\,p&=\vec{f},\\
\div\vec{u}&=0,\end{aligned}$$
\[eq:system\_vector\_calculus\]
where the first equation is associated to line segments, the second to surfaces and the third to volumes. In [@bochevgunzburger2009; @bochevgunzburger] the same velocity-vorticity-pressure formulation is given in terms of grad, curl, div and grad$^*$, curl$^*$ and div$^*$.
For a valid equation, the mathematical objects should be the same; we can only add vectors with vectors and scalars with scalars, but not scalars with vectors. But now we add that equations also need to be *geometrically compatible*: we can only add quantities associated with the same kind of geometry and with the same type of orientation. This lack of geometric notion in vector calculus is what motivates many to use the language of differential geometry instead, [@arnoldfalkwinther2006; @arnoldfalkwinther2010; @back2011; @bochevhyman2006; @bossavit1998; @bossavit9900; @buffa2011b; @desbrun2005c; @frankel; @HymanScovel1988; @kreefterrorestimate; @kreeftpalhagerritsma2011; @tonti1]. Other advantages of using differential geometry over vector calculus are that it possesses a clear distinction between variables associated with inner- and outer-orientation and it makes a clear distinction between topological and metric-dependent operations. All horizontal relations in the figures above are topological. Any detour along geometric objects with the other type of orientation introduces metric in the equation. In differential geometry these structures are intrinsically embedded. It naturally leads to a discretization technique that can be seen as a hybrid between finite volumes (topological part) and finite elements (metric part).
Differential geometry {#sec:differentialgeometry}
=====================
This section presents the Stokes model in the language of differential forms. Differential geometry offers significant benefits in the construction of structure-preserving spatial discretizations. For example, the generalization of differentiation in terms of the exterior derivative encodes the gradient, curl and divergence operators from vector calculus and the codifferential represents the associated Hilbert adjoint operators grad$^*$, curl$^*$ and div$^*$. The generalized Stokes theorem encapsulates their corresponding integration theorems, respectively. The coordinate-free action of the exterior derivative and generalized Stokes theorem give rise to commuting properties with respect to mappings between different manifolds. These kind of commuting properties are essential for the structure preserving behavior of the mimetic method.
Only those concepts from differential geometry which play a role in the remainder of this paper will be explained. Much more can be found in [@abrahammarsdenratiu; @flanders; @frankel; @kreeftpalhagerritsma2011].
Differential forms
------------------
Let $\Lambda^k(\Omega)$ denote a space of *differential $k$-forms* or *$k$-forms*, on a sufficiently smooth bounded $n$-dimensional oriented manifold $\Omega\subset\mathbb{R}^n$, $\mathbf{x}:=(x^1,\hdots,x^n)$, with boundary $\p\Omega$. Every element $\kdifform{a}{k}\in\Lambda^k(\Omega)$ has a unique representation of the form, $$\label{differentialform}
\kdifform{a}{k}=\sum_If_I(\mathbf{x})\ud x^{i_1}\wedge\ud x^{i_2}\wedge\cdots\wedge\ud x^{i_k},$$ where $I=i_1,\hdots,i_k$, and $1\leq i_1<\hdots<i_k\leq n$ and where $f_I(\mathbf{x})$ are continuously differentiable scalar functions. Differential forms can be seen as quantities that live under the integral sign, [@flanders], which were indicated in the previous section as the ‘more useful variables’. For $\Omega\subset\mathbb{R}^3$ with a Cartesian coordinate system $\mathbf{x}:=(x,y,z)$, the outer-oriented vorticity, velocity and pressure are $$\begin{aligned}
\kdifform{\omega}{1}&=\omega_1(\mathbf{x})\,\ud x+\omega_2(\mathbf{x})\,\ud y+\omega_3(\mathbf{x})\,\ud z,\\
\kdifform{u}{2}&=u(\mathbf{x})\,\ud y\!\wedge\!\ud z+v(\mathbf{x})\,\ud z\!\wedge\!\ud x+w(\mathbf{x})\,\ud x\!\wedge\!\ud y,\\
\kdifform{p}{3}&=p(\mathbf{x})\,\ud x\!\wedge\!\ud y\!\wedge\!\ud z.\end{aligned}$$ We see that $\omega$ is associated with line elements d$x$, d$y$ and d$z$. This is the outer-oriented representation in terms of Biot-Savart of rotation around a line segment. Similarly, velocity is associated with surface elements, $\ud y\!\wedge\!\ud z$, $\ud z\!\wedge\!\ud x$, $\ud x\!\wedge\!\ud y$, which is the outer-oriented representation of the velocity flux [through]{} a surface. Finally, writing pressure as a volume form also corresponds to an outer-oriented representation.
Differential $k$-forms are naturally integrated over $k$-dimensional manifolds, i.e. for $\kdifform{a}{k}\in\Lambda^k(\Omega)$ and $\Omega_k\subset\mathbb{R}^n$, with $k=\mathrm{dim}(\Omega_k)$, $$\label{integration}
\int_{\Omega_k}\kdifform{a}{k}\in\mathbb{R}\quad\Leftrightarrow\quad\langle \kdifform{a}{k},\Omega_k\rangle\in\mathbb{R},$$ where $\langle\cdot,\cdot\rangle$ indicates a duality pairing[^3] between the differential form and the geometry. This duality pairing is a metric-free operation, see [@frankel]. Note that the $n$-dimensional computational domain is indicated as $\Omega$, i.e. without a subscript. We would like to distinguish between $k$-forms that can be integrated over outer-oriented $k$-dimensional manifolds and $k$-forms that can be integrated over inner-oriented $k$-dimensional manifolds. To emphasize this difference, we sometimes write the space of the latter as $\tilde{\Lambda}^k(\Omega)$.
The wedge product, $\wedge$, of two differential forms $\kdifform{a}{k}$ and $\kdifform{b}{l}$ is a mapping: $\wedge:\Lambda^k(\Omega)\times\Lambda^l(\Omega)\rightarrow\Lambda^{k+l}(\Omega),\ k+l\leq n$. The wedge product is a skew-symmetric operator, i.e. $\kdifform{a}{k}\wedge \kdifform{b}{l}=(-1)^{kl}\kdifform{b}{l}\wedge \kdifform{a}{k}$. The pointwise inner-product of $k$-forms, $(\cdot,\cdot):\Lambda^k(\Omega)\times\Lambda^k(\Omega)\rightarrow\mathbb{R}$, is constructed using inner products of one-forms, that is based on the inner product on vector spaces, see [@flanders; @frankel]. The wedge product and inner product induce the Hodge-$\star$ operator, $\star:\Lambda^k(\Omega)\rightarrow\tilde{\Lambda}^{n-k}(\Omega)$, a metric operator that includes orientation, defined as $$\kdifform{a}{k}\wedge\star \kdifform{b}{l}:=\big(\kdifform{a}{k},\kdifform{b}{k}\big)\kdifform{\sigma}{n},
\label{hodgestar}$$ where $\kdifform{\sigma}{n}\in\Lambda^n(\Omega)$ is a unit volume form, $\kdifform{\sigma}{n}=\star1$. Let $(\ud x,\ud y,\ud z)$ be a basis in $\mathbb{R}^3$ for 1-forms associated with inner-oriented line segments. Then by applying the Hodge-$\star$ we retrieve a basis for 2-forms associated with outer-oriented surfaces, $$\star\ud x=\ud y\!\wedge\!\ud z,\quad \star\ud y=\ud z\!\wedge\!\ud x,\quad \star\ud z=\ud x\!\wedge\!\ud y.$$ Therefore, the Hodge operator switches between inner- and outer-orientation. The Hodge-$\star$ operation can be interpreted as the vertical relations as given in , and coincides with a constitutive relation. The space of $k$-forms on $\Omega$ can be equipped with an $L^2$ inner product, $\big(\cdot,\cdot\big)_\Omega:\Lambda^k(\Omega)\times\Lambda^k(\Omega)\rightarrow\mathbb{R}$, given by, $$\big(\kdifform{a}{k},\kdifform{b}{k}\big)_\Omega:=\int_\Omega\big(\kdifform{a}{k},\kdifform{b}{k}\big)\kdifform{\sigma}{n}=\int_\Omega \kdifform{a}{k}\wedge\star \kdifform{b}{k}.
\label{L2innerproduct}$$ The differential forms live on manifolds and transform under the action of mappings. Let $\Phi:\Omega_{\rm ref}\rightarrow\Omega$ be a mapping between two manifolds. Then we can define the pullback operator, $\Phi^\star:\Lambda^k(\Omega)\rightarrow\Lambda^k(\Omega_{\rm ref})$, expressing the $k$-form on the reference manifold, $\Omega_{\rm ref}$. The mapping, $\Phi$, and the pullback, $\Phi^\star$, are related by $$\int_{\Phi(\Omega_{\rm ref})}\kdifform{a}{k}=\int_{\Omega_{\rm ref}}\Phi^\star \kdifform{a}{k}\quad\Leftrightarrow\quad\langle \kdifform{a}{k},\Phi(\Omega_{\rm ref})\rangle=\langle\pullback \kdifform{a}{k},\Omega_{\rm ref}\rangle.$$ A special case of the pullback operator is the trace operator. The trace of $k$-forms to the boundary, $\tr:\Lambda^k(\Omega)\rightarrow\Lambda^k(\p\Omega)$, is the pullback of the inclusion of the boundary of a manifold, $\p\Omega\hookrightarrow\Omega$, see [@kreeftpalhagerritsma2011].
An important operator in differential geometry is the exterior derivative, $\ud:\Lambda^k(\Omega)\rightarrow\Lambda^{k+1}(\Omega)$. It represents the grad, curl and div (also rot in 2D) operators from vector calculus. It is induced by the *generalized Stokes’ theorem*, combining the classical Newton-Leibnitz, Stokes circulation and Gauss divergence theorems. Let $\Omega_{k+1}$ be a $(k+1)$-dimensional submanifold and $a^{(k)}\in\Lambda^k(\Omega)$, then $$\label{stokestheorem}
\int_{\Omega_{k+1}}\ud \kdifform{a}{k}=\int_{\p\Omega_{k+1}}\tr\kdifform{a}{k}\quad\Leftrightarrow\quad \langle\ud \kdifform{a}{k},\Omega_{k+1}\rangle = \langle\tr\kdifform{a}{k},\partial\Omega_{k+1}\rangle,$$ where $\partial\Omega_{k+1}$ is a $k$-dimensional manifold being the boundary of $\Omega_{k+1}$. Due to the duality pairing in , the exterior derivative is the formal adjoint of the *boundary operator* $\p:\Omega_{k+1}\rightarrow\Omega_k$ as indicated by the duality pairing, . The boundary operator *defines* the exterior derivative. The exterior derivative is independent of any metric or coordinate system. Applying the exterior derivative twice always leads to the null $(k+2)$-form, $\ud(\ud \kdifform{a}{k})=0^{(k+2)}$. On contractible domains the exterior derivative gives rise to an exact sequence, called *de Rham complex* [@frankel], and indicated by $(\Lambda,\ud)$, $$\mathbb{R}\hookrightarrow\Lambda^0(\Omega)\stackrel{\ud}{\longrightarrow}\Lambda^1(\Omega)\stackrel{\ud}{\longrightarrow}\cdots\stackrel{\ud}{\longrightarrow}\Lambda^n(\Omega)\stackrel{\ud}{\longrightarrow}0.
\label{derhamcomplex}$$ In vector calculus a similar sequence exists, where, from left to right for $\mathbb{R}^3$, the $\ud$’s denote the vector operators grad, curl and div. Both inner- and outer-oriented spaces of differential forms, $\Lambda^k(\Omega)$ and $\tilde{\Lambda}^k(\Omega)$, possess a de Rham sequence. The two are connected by the Hodge-$\star$ operator, and constitute a double de Rham complex, $$\begin{matrix}
\mathbb{R} \longrightarrow&\Lambda^{0}(\Omega)
&\stackrel{\ederiv}{\longrightarrow}& \Lambda^{1}(\Omega)
&\stackrel{\ederiv}{\longrightarrow}& \hdots \;
&\stackrel{\ederiv}{\longrightarrow}\; &\Lambda^{n}(\Omega) \;
&\stackrel{\ederiv}{\longrightarrow}\; &0 \\
&\star\updownarrow & & \star\updownarrow && &
&\star\updownarrow & & \\
0 \stackrel{\ederiv}{\longleftarrow}&{\tilde{\Lambda}}^{n}(\Omega)
&\stackrel{\ederiv}{\longleftarrow}& {\tilde{\Lambda}}^{n-1}(\Omega)
&\stackrel{\ederiv}{\longleftarrow}& \hdots \;
&\stackrel{\ederiv}{\longleftarrow}\; &{\tilde{\Lambda}}^{0}(\Omega) \;
&\stackrel{}{\longleftarrow}\; &\mathbb{R}.
\end{matrix} \label{double_deRham_complex}$$ Observe the similarity between diagram and Figures \[fig:manifoldswithorientation\] and \[fig:manifoldswithorientation2\], which is due to the fact that the exterior derivative is the adjoint of the boundary operator. The pullback operator and exterior derivative possess the following commuting property[^4], $$\Phi^\star\ud \kdifform{a}{k}=\ud\Phi^\star \kdifform{a}{k},\quad \forall \kdifform{a}{k}\in\Lambda^k(\Omega),$$ as illustrated in the following commuting diagram, $$\begin{CD}
\Lambda^k(\Omega) @>\ud>> \Lambda^{k+1}(\Omega)\\
@VV\Phi^\star V @VV\Phi^\star V \\
\Lambda^k(\Omega_{\rm ref}) @>\ud>> \Lambda^{k+1}(\Omega_{\rm ref}).
\end{CD}$$ The inner product, , gives rise to the formal Hilbert adjoint of the exterior derivative, the [*c*odifferential operator]{}, $\ud^*:\Lambda^k(\Omega)\rightarrow\Lambda^{k-1}(\Omega)$, as $\big(\ud \kdifform{a}{k-1},\kdifform{b}{k}\big)_\Omega=\big(\kdifform{a}{k-1},\ud^* \kdifform{b}{k}\big)_\Omega$, which represents the grad$^*$, curl$^*$ and div$^*$ operators. Whereas the exterior derivative is a metric-free operator, the codifferential operator is metric-dependent, and given by $\ud^*=(-1)^{n(k+1)+1}\star\ud\star$, [@frankel; @kreeftpalhagerritsma2011]. Here we see the three operations that were mentioned in the previous section and were illustrated in : Switch to the other type of orientation, $\star$, apply the derivative, d, and switch back to the original orientation, $\star$. In case of non-zero trace, and by combining and , we get $$\big(\kdifform{a}{k-1},\ud^*\kdifform{b}{k}\big)_\Omega=\big(\ud \kdifform{a}{k-1},\kdifform{b}{k})_\Omega-\int_{\p\Omega} \tr \kdifform{a}{k-1}\wedge \tr\star \kdifform{b}{k}.
\label{integrationbyparts}$$ This is better known as integration by parts and is often used in finite element methods to avoid the codifferential. Also for the codifferential, on contractible manifolds there exists an exact sequence, $$0\stackrel{\ud^*}{\longleftarrow}\Lambda^0(\Omega)\stackrel{\ud^*}{\longleftarrow}\Lambda^1(\Omega)\stackrel{\ud^*}{\longleftarrow}\cdots\stackrel{\ud^*}{\longleftarrow}\Lambda^n(\Omega)\hookleftarrow\mathbb{R}.$$ Finally, the Hodge-Laplace operator, $\Delta:\Lambda^k(\Omega)\rightarrow\Lambda^k(\Omega)$, is constructed as a composition of the exterior derivative and the codifferential operator, $$\label{laplace}
-\Delta \kdifform{a}{k}:=(\ud^*\ud+\ud\ud^*)\kdifform{a}{k}.$$
Hilbert spaces
--------------
Function spaces play an important role in the analysis of numerical methods. Of importance in this paper are the Hilbert spaces. On an oriented Riemannian manifold, we can define Hilbert spaces for differential forms. Let all $f_I(\mathbf{x})$ in be functions in $L^2(\Omega)$, then $\kdifform{a}{k}$ in is a $k$-form in the Hilbert space $L^2\Lambda^k(\Omega)$. The norm corresponding to the space $L^2\Lambda^k(\Omega)$ is $\Vert \kdifform{a}{k}\Vert_{L^2\Lambda^k}=\sqrt{(\kdifform{a}{k},\kdifform{a}{k})_\Omega}$. Although extension to higher Sobolev spaces are possible, we focus here on the Hilbert space corresponding to the exterior derivative. The Hilbert space $H\Lambda^k(\Omega)$ is defined by $$H\Lambda^k(\Omega)=\{\kdifform{a}{k}\in L^2\Lambda^k(\Omega)\;|\;\ederiv \kdifform{a}{k}\in L^2\Lambda^{k+1}(\Omega)\},$$ and the norm corresponding to $H\Lambda^k(\Omega)$ is defined as $$\Vert \kdifform{a}{k}\Vert^2_{H\Lambda^k}:=\Vert \kdifform{a}{k}\Vert^2_{L^2\Lambda^k}+\Vert\ederiv \kdifform{a}{k}\Vert^2_{L^2\Lambda^{k+1}}.$$ The Hilbert complex, $(H\Lambda,\ud)$, a special version of the de Rham complex, is the exact sequence of maps and spaces given by $$\mathbb{R}\hookrightarrow H\Lambda^0(\Omega)\stackrel{\ederiv}{\longrightarrow} H\Lambda^1(\Omega)\stackrel{\ederiv}{\longrightarrow}\cdots\stackrel{\ederiv}{\longrightarrow} H\Lambda^n(\Omega)\stackrel{\ud}{\longrightarrow}0.$$ In vector operations the Hilbert complex becomes for $\Omega\subset\mathbb{R}^3$, $$\label{3dcomplex}
H^1(\Omega)\stackrel{\rm grad}{\longrightarrow} H(\mathrm{curl},\Omega)\stackrel{\rm curl}{\longrightarrow}H(\mathrm{div},\Omega)\stackrel{\rm div}{\longrightarrow} L^2(\Omega),$$ and for $\Omega\subset\mathbb{R}^2$, either $$\label{2dcomplexes}
H^1(\Omega)\stackrel{\rm grad}{\longrightarrow} H(\mathrm{rot},\Omega)\stackrel{\rm rot}{\longrightarrow}L^2(\Omega),\quad\mathrm{or}\quad
H^1(\Omega)\stackrel{\rm curl}{\longrightarrow} H(\mathrm{div},\Omega)\stackrel{\rm div}{\longrightarrow}L^2(\Omega).$$ The two are related by the Hodge-$\star$ operator , see [@palha2010], $$\label{doublehilbertcomplex}
\begin{matrix}
H\Lambda^{0}(\Omega)\!\!&\!\!\stackrel{\ederiv}{\longrightarrow}\!\!&\!\! H\Lambda^{1}(\Omega)\!\!&\!\!\stackrel{\ederiv}{\longrightarrow}\!\!&\!\!L^2\Lambda^{2}(\Omega)\\
\star\updownarrow & & \star\updownarrow & &\star\updownarrow \\
L^2\Lambda^{2}(\Omega)\!\!&\!\!\stackrel{\ederiv}{\longleftarrow}\!\!&\!\!H\Lambda^{1}(\Omega)\!\!&\!\!\stackrel{\ederiv}{\longleftarrow}\!\!&\!\!H\Lambda^{0}(\Omega)
\end{matrix}
\quad\Leftrightarrow\quad
\begin{matrix}
H^1(\Omega)\!\!&\!\!\stackrel{\mathrm{curl}}{\longrightarrow}\!\!&\!\!H(\mathrm{div},\Omega)\!\!&\!\!\stackrel{\mathrm{div}}{\longrightarrow}\!\!&\!\!L^2(\Omega)\\
\star\updownarrow & & \star\updownarrow & &\star\updownarrow \\
L^2(\Omega)\!\!&\!\!\stackrel{\mathrm{rot}}{\longleftarrow}\!\!&\!\!H(\mathrm{rot},\Omega)\!\!&\!\!\stackrel{\mathrm{grad}}{\longleftarrow}\!\!&\!\!H^1(\Omega).
\end{matrix}$$ A similar double Hilbert complex can be constructed in $\mathbb{R}^3$. Again note the similarities between these double Hilbert complexes, the double de Rham complex, and the geometric structure depicted in Figures \[fig:manifoldswithorientation\] and \[fig:manifoldswithorientation2\].
Stokes problem in terms of differential forms
---------------------------------------------
The kind of form a variable has is directly related to the kind of manifold this variable can be integrated over. For example, from a physics point of view velocity is naturally integrated *along* a line (streamline), a 1-manifold, indicating that velocity is a 1-form. However, looking at the incompressibility constraint, velocity in incompressible (Navier)-Stokes equations is usually associated to a flux *through* a surface, indicating that velocity should be an $(n-1)$-form ($n=\mathrm{dim}(\Omega)$). The two are directly related by the Hodge duality, $u^{(n-1)}=\star \tilde{u}^{(1)}$, see[^5] . The Hodge-$\star$ not only changes the corresponding type of integral domain, but also its orientation (along a line = inner, through a surface = outer).
Note that the Hodge-$\star$ is often combined with a constitutive relation. In that case the two variables have clearly a different meaning. In incompressible flow models, mass density plays the role of material property, so we actually have $(\rho u)^{(n-1)}=\star_\rho\tilde{u}^{(1)}$. Since mass density is assumed to be equal to one in incompressible (Navier)-Stokes, this difference is less obvious.
As for the velocity, also for pressure and vorticity there exist an inner- and an outer-oriented version. The inner-oriented variables are the pressure, $\tilde{p}\in\Lambda^0(\Omega)$, associated to point values, and the vorticity, $\tilde{\omega}\in\Lambda^2(\Omega)$, associated to circulation in a surface. Alternatively, there exists the set of outer-oriented variables, being the pressure, $p\in\Lambda^n(\Omega)$, measured in a volume, and the vorticity, $\omega\in\Lambda^{n-2}(\Omega)$, corresponding to circulation around a line (both in case of $\Omega\subset\mathbb{R}^3$).
Both sets, $(\tilde{p}^{(0)},\tilde{u}^{(1)},\tilde{\omega}^{(2)})$ and $(\omega^{(n-2)},u^{(n-1)},p^{(n)})$ are used in literature to derive mixed formulations and numerical schemes. For the former see [@abboud2011; @bramble1994] and for the latter see [@bernardi2006; @dubois2002].
To obtain a pointwise divergence-free solution, the incompressibility constraint is leading, and therefore the set of outer-oriented variables are used in this paper, $(\kdifform{\omega}{n-2},\kdifform{u}{n-1},\kdifform{p}{n})$, with forcing term $\kdifform{f}{n-1}$. Then the Stokes problem in terms of differential forms becomes,
$$\begin{aligned}
-\nu\Delta\kdifform{u}{n-1}+\ud^*\kdifform{p}{n}&=\kdifform{f}{n-1},\quad\mathrm{on}\ \Omega,\\
\ud \kdifform{u}{n-1}&=0,\quad\quad\quad\ \mathrm{on}\ \Omega,\label{mass}\end{aligned}$$
where $\Delta$ is the Hodge-Laplacian defined by . Vorticity is introduced as auxiliary variable to cast this system into a system of first-order equations. Substitution of and the incompressibility constraint , gives the vorticity-velocity-pressure formulation in terms of differential forms,
\[stokeseq\] $$\begin{aligned}
\kdifform{\omega}{n-2}-\ud^* \kdifform{u}{n-1}&=0,\quad\quad\quad\ \mathrm{on}\ \Omega,\label{stokeseq1}\\
\nu\ud\kdifform{\omega}{n-2}+\ud^*\kdifform{p}{n}&=\kdifform{f}{n-1},\quad\mathrm{on}\ \Omega,\label{stokeseq2}\\
\ud \kdifform{u}{n-1}&=0,\quad\quad\quad\ \mathrm{on}\ \Omega.\label{stokeseq3}\end{aligned}$$ \[eq:system\_diff\_forms\]
Note the resemblance of this system with . Note also that whereas grad, curl and div are only defined in $\mathbb{R}^3$, is valid in $\mathbb{R}^n$ for all $n \geq 1$.
The actions of the exterior derivatives and codifferentials in this system are illustrated below for a two-dimensional domain.
\[ex:2dstokes\] Let $\Omega\subset\mathbb{R}^2$, with Cartesian coordinates $\mathbf{x}:=(x,y)$, and let the two-dimensional de Rham complex be equivalent to the second complex in . Then velocity is expressed as $$\kdifform{u}{1}=-v(\mathbf{x})\ud x+u(\mathbf{x})\ud y.$$ Applying the exterior derivative gives us a 2-form, the divergence of velocity, $$\ud \kdifform{u}{1}=\left(\frac{\p u}{\p x}+\frac{\p v}{\p y}\right)\ud x\wedge\ud y.$$ Vorticity is a 0-form, $\kdifform{\omega}{0}=\omega(\mathbf{x})\in\Lambda^0(\Omega)$, and the curl of vorticity gives, $$\ud\kdifform{\omega}{0}=\frac{\p\omega}{\p x}\ud x+\frac{\p\omega}{\p y}\ud y.$$ The gradient of pressure, $\kdifform{p}{2}=p(\mathbf{x})\ud x\wedge\ud y\in\Lambda^2(\Omega)$, is the action of the codifferential, $$\ud^*\kdifform{p}{2}=-\frac{\p p}{\p y}\ud x+\frac{\p p}{\p x}\ud y.$$ Then the momentum equation follows, $$-\left(-\frac{\p\omega}{\p x}+\frac{\p p}{\p y}\right)\ud x+\left(\frac{\p\omega}{\p y}+\frac{\p p}{\p x}\right)\ud y=-f_y(x,y)\ud x+f_x(x,y)\ud y.$$ In a similar way the vorticity-velocity relation can be obtained.
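The component computations of this example are easy to reproduce symbolically. The snippet below is an illustrative sketch, not part of the original text; it assumes SymPy is available and uses generic placeholder functions. It forms the coefficients of $\ud\kdifform{u}{1}$, $\ud\kdifform{\omega}{0}$ and $\ud^*\kdifform{p}{2}$ exactly as above and checks that applying the exterior derivative twice to the vorticity $0$-form gives zero.

```python
# Coefficients of the 2D exterior derivatives/codifferential from the example above.
import sympy as sp

x, y = sp.symbols('x y')
u = sp.Function('u')(x, y)       # u^(1) = -v dx + u dy
v = sp.Function('v')(x, y)
w = sp.Function('omega')(x, y)   # omega^(0)
p = sp.Function('p')(x, y)       # p^(2) = p dx ^ dy

div_u  = sp.diff(u, x) + sp.diff(v, y)        # d u^(1)     = (u_x + v_y) dx ^ dy
grad_w = (sp.diff(w, x), sp.diff(w, y))       # d omega^(0) = w_x dx + w_y dy
grad_p = (-sp.diff(p, y), sp.diff(p, x))      # d* p^(2)    = -p_y dx + p_x dy

# d(d omega^(0)) = (d/dx w_y - d/dy w_x) dx ^ dy, which must vanish identically.
dd_w = sp.simplify(sp.diff(grad_w[1], x) - sp.diff(grad_w[0], y))
print(div_u, grad_w, grad_p, dd_w)            # dd_w prints 0
```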
Discretization of Stokes problem {#sec:discretization}
================================
The mimetic discretization of the Stokes problem consists of three parts. First, the discrete structure is described in terms of chains and cochains from algebraic topology, the discrete counterpart of differential geometry. This discrete structure mimics many properties of differential geometry. Secondly, mimetic operators are introduced that relate the continuous formulation in terms of differential forms to the discrete representation based on cochains. Thirdly, mimetic spectral element basis functions are described which satisfy the structure defined in the algebraic topology and mimetic operators sections, Sections \[sec:algeb\_topol\] and \[mimeticoperators\], respectively. The action of the exterior derivative, i.e., of grad, curl and div, is shown, which leads, among other things, to the divergence-free solution.
Algebraic Topology {#sec:algeb_topol}
------------------
In many numerical methods, especially in finite difference and almost all finite element methods, the discrete coefficients are point values, i.e. zero-dimensional sub-manifolds. In the structure of algebraic topology, the discrete unknowns represent values on $k$-dimensional submanifolds, ranging from points to $n$-dimensional volumes, so $0\leq k\leq n$. These $k$-dimensional submanifolds are called *$k$-cells*, $\tau_{(k)}$. See [@hatcher; @kreeftpalhagerritsma2011; @munkres1984] how they are formally defined. The two most popular classes of $k$-cells in literature to describe the topology of a manifold are either in terms of *simplices*, see for instance [@munkres1984; @SingerThorpe; @whitney], or in terms of *cubes*, see [@Massey2; @spivak; @tonti1] and for an example of $k$-cubes in $\mathbb{R}^3$. From a topological point of view both descriptions are equivalent, see [@Dieudonne]. Despite this equivalence between simplicial complexes and cubical complexes, the reconstruction maps to be discussed in differ significantly. For mimetic methods based on simplices see [@arnoldfalkwinther2006; @desbrun2005c; @rapetti2009; @subramanian2006], whereas for mimetic methods based on cubes see [@arnoldboffifalk2005; @HymanShashkovSteinberg2002; @RobidouxSteinberg2011].
![Example of a 0-cell, a 1-cell, a 2-cell and a 3-cell in $\mathbb{R}^3$.[]{data-label="fig:kcells"}](kcells_v2.pdf){width="35.00000%"}
Here we list the terminology to set up a homology theory in terms of $n$-cubes as given by [@Massey2]. Consider an oriented unit $k$-cube given by $I^k = I \times I \times \dots \times I$ ($k$ factors, $k\geq 0$), where $I=[0,1]$ is a one-dimensional closed interval. By definition $I^0$ is a space consisting of a single point. Then a [*$k$-cube*]{} in an $n$-dimensional manifold $\Omega$ is a continuous map $\tau_{(k)}:I^k\rightarrow\Omega,\ 0\leq k\leq n$.
All $k$-cells are oriented. This means that we define a *default orientation*. The default orientation of the cell is implied by the orientation of the line segment $I$, which is defined positive in positive coordinate axis direction, and the map $\tau_{(k)}$. For outer-oriented cells, this for example also implies a positive way of going through a surface and rotating around a line. A $k$-cell with opposite orientation is said to have a negative orientation.
The concept of orientation shown in Figures \[fig:manifoldswithorientation\] and \[fig:manifoldswithorientation2\] gives rise to the boundary operator, $\p$, that relates a $k$-cell to a set of surrounding $(k-1)$-cells, which has either the same or opposite orientation. Examples are given in , where the faces of the $k$-cells are shown in black.
![Examples of faces of outer-oriented $k$-cells in $\mathbb{R}^3$.[]{data-label="fig:faces"}](cells_boundary.pdf){width="70.00000%"}
This definition describes the boundary which we already encountered in and . The boundary of a $k$-cell again consists of a set of $(k-1)$-cells, as illustrated in . From this we can define a *cell complex*.
[@hatcher]\[cellcomplex\] A cell complex, $\ccomplex{D}$, in a compact manifold $\Omega$ is a finite collection of cells such that:
1. The set of $n$-cells in $D$ covers the manifold $\Omega$.
2. Every face of a cell in $D$ is contained in $D$.
3. Any two $k$-cells, $\tau_{(k)}$ and $\sigma_{(k)}$ in $D$, either share a common $l$-cell, $l=0,\hdots,k-1$, in $D$, have an empty intersection, or coincide, $\tau_{(k)}=\sigma_{(k)}$.
![Example of a cell complex. Left: a three-dimensional compact manifold. Right: the $k$-cells that constitute the cell complex.[]{data-label="fig:cellcomplex"}](cell_complex_final.pdf){width="\textwidth"}
We call a cell complex an *oriented cell complex*, once we add to each $k$-cell a default orientation according to the definition of $k$-cubes. The figure above depicts an example of a cell complex in a compact manifold $\Omega\subset\mathbb{R}^3$. The ordered collection of all $k$-cells in $D$ generates a basis for the space of $k$-chains, $C_k(D)$. Then a $k$-chain, $\kchain{c}{k}\in C_k(D)$, is a formal linear combination of $k$-cells, $\tau_{(k),i}\in D$, $$\kchain{c}{k}=\sum_ic_i\tau_{(k),i}.
\label{kchain}$$ The $k$-cells, $\tau_{(k),i}$ form a basis for the $k$-chains. Once such a basis with default orientation has been chosen, any chain is completely determined by the coefficients $c_i$ which can be arranged in a column vector $\vec{c}=[c_1,c_2,\hdots]^T$. In the description of geometry, we restrict ourselves to chains with coefficients in $\mathbb{Z}/3=\{ -1,0,1\}$. The meaning of these coefficients is : 1 if the cell is in the chain with the same orientation as its default orientation in the cell complex, -1 if the cell is in the chain with the opposite orientation to the default orientation in the cell complex and 0 if the cell is not part of the chain.
We can now extend the boundary operator applied to a $k$-cell to the boundary of a $k$-chain. The boundary operator acting on $k$-chains, $\partial:\kchainspacedomain{k}{D}\spacemap\kchainspacedomain{k-1}{D}$, is defined by [@hatcher; @munkres1984], $$\label{algebraic::boundary_operator}
\partial \kchain{c}{k} = \partial \sum_{i}c^{i}\tau_{(k),i} := \sum_{i}c^{i} \partial \left ( \tau_{(k),i} \right ) \;.$$ The boundary of a $k$-cell $\tau_{(k)}$ is a $(k-1)$-chain formed by the faces of $\tau_{(k)}$. The coefficients of this ($k-1$)-chain associated to each of the faces are given by the orientations. $$\partial \tau_{(k),i} = \sum_{j}e^{j}_{i}\tau_{(k-1),j} \;,$$ with $$\left\{
\begin{array}{l}
% e^{j}_{i} = 1, \text{ if } \tau_{(k-1),j}\text{ has the same orientation of } \tau_{(k),i} \\
% e^{j}_{i} = -1, \text{ if } \tau_{(k-1),j}\text{ has the opposite orientation of } \tau_{(k),i} \\
% e^{j}_{i} = 0, \text{ if } \tau_{(k-1),j}\text{ is not a face of } \tau_{(k),i} \\
e^{j}_{i} = 1, \text{ if the orientation of } \tau_{(k-1),j} \text{ equals the default orientation,} \\
e^{j}_{i} = -1, \text{ if the orientation of } \tau_{(k-1),j} \text{ is opposite to the default orientation,} \\
e^{j}_{i} = 0, \text{ if } \tau_{(k-1),j}\text{ is not a face of } \tau_{(k),i}\;. \\
\end{array}
\right.$$ The boundary of a 0-cell is empty. In case all $k$-cells in the chain $\kchain{c}{k}$ have positive orientation, so $c^i=1$, then $$\partial \kchain{c}{k} = \sum_{i}\sum_je^{j}_{i}\tau_{(k-1),j}. \label{eq::algTop_boundary}$$ Recalling that the space of $k$-chains is a linear vector space it follows that the boundary operator can be represented as a matrix acting on the column vector $\vec{c}$ of the $k$-chain. The coefficients $e^{j}_{i}$ are the coefficients of an incidence matrix $\incidenceboundary{k-1}{k}$ that represents the boundary operator. Like the exterior derivative, applying the boundary operator twice on a $k$-chain gives the null $(k-2)$-chain, $\p\p\kchain{c}{k}=\kchain{0}{k-2}$ for all $\kchain{c}{k}\in C_k(D)$, see .
![The boundary of the boundary of a 3-cell is zero, because every edge appears twice with opposite orientations.[]{data-label="fig:boundaryboundary"}](boundary_boundary.pdf){width="70.00000%"}
This was expected, since the exterior derivative and boundary operator are related according to the generalized Stokes theorem, \eqref{stokestheorem}. This property is reflected in the incidence matrices, since they are matrix representations of the topological boundary operators. Therefore $\incidenceboundary{k-2}{k-1}\incidenceboundary{k-1}{k}=0$, where for the 3-cell of Figure \[fig:boundaryboundary\] we have $$\incidenceboundary{1}{2}=
\tiny{
\left[
\begin{array}{rrrrrr}
-1 & 0 & 1 & 0 & 0 & 0 \\
1 & 0 & 0 & -1 & 0 & 0 \\
0 & 1 & -1 & 0 & 0 & 0 \\
0 & -1 & 0 & 1 & 0 & 0 \\
1 & 0 & 0 & 0 & -1 & 0 \\
-1 & 0 & 0 & 0 & 0 & 1 \\
0 & -1 & 0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 & 0 & -1 \\
0 & 0 & -1 & 0 & 1 & 0 \\
0 & 0 & 1 & 0 & 0 & -1 \\
0 & 0 & 0 & 1 & -1 & 0 \\
0 & 0 & 0 & -1 & 0 & 1 \\
\end{array}
\right]
}, \quad\quad
\incidenceboundary{2}{3} =
\tiny{
\left[
\begin{array}{r}
-1 \\ 1 \\ -1 \\ 1 \\ -1 \\ 1 \\
\end{array}
\right]
}.$$ The set of $k$-chains and boundary operators gives rise to an exact sequence, the chain complex $(C_k(D),\p)$, $$\label{chaincomplex}
\begin{CD}
\cdots @<\p<< C_{k-1}(D) @<\p<< C_k(D) @<\p<< C_{k+1}(D) @<\p<< \cdots.
\end{CD}$$ This sequence is the algebraic equivalent of . Dual to the space of $k$-chains, $C_k(D)$, is the space of *$k$-cochains*, $C^k(D)$, defined as the set of all linear functionals, $\kcochain{c}{k}:C_k(D)\rightarrow\mathbb{R}$. The duality is expressed using the duality pairing $\langle\kcochain{c}{k},\kchain{c}{k}\rangle:=\kcochain{c}{k}(\kchain{c}{k})$. Note the resemblance between this duality pairing and the integration of differential forms, see .
Let $\{\tau_{(k),i}\}$ form a basis of $C_k(D)$, then there is a dual basis $\{\tau^{(k),i}\}$ of $C^k(D)$, such that $\tau^{(k),i}(\tau_{(k),j})=\delta^i_j$, and all $k$-cochains can be represented as linear combinations of the basis elements, $$\kcochain{c}{k}=\sum_ic_i\tau^{(k),i}.$$ The cochains are the discrete analogue of differential forms. With this duality relation between chains and cochains, we can define the formal adjoint of the boundary operator, which constitutes an exact sequence on the spaces of $k$-cochains in the cell complex. This formal adjoint is called the *coboundary operator*, $\delta:\kcochainspacedomain{k}{D}\rightarrow\kcochainspacedomain{k+1}{D}$, and is defined as $$\duality{\delta\kcochain{c}{k}}{\kchain{c}{k+1}} := \duality{\kcochain{c}{k}}{\partial\kchain{c}{k+1}}, \quad\forall\kcochain{c}{k}\in\kcochainspacedomain{k}{D} \text{ and } \,\forall\kchain{c}{k+1}\in\kchainspacedomain{k+1}{D} \;. \label{algTop_codifferential_dual}$$ The coboundary operator also satisfies $\delta\delta\kcochain{c}{k}=\kcochain{0}{k+2}$ for all $\kcochain{c}{k}\in C^k(D)$, see Figure \[fig:coboundarycoboundary\], and gives rise to an exact sequence, called the *cochain complex* $(C^k(D),\delta)$, $$\label{cochaincomplex}
\begin{CD}
\cdots @>\delta>> C^{k-1}(D)@>\delta>> C^k(D)@>\delta>>C^{k+1}(D)@>\delta>>\cdots\;.
\end{CD}$$ The coboundary operator is the discrete analogue of the exterior derivative. The coboundary operator also has a matrix representation. As a result of the duality pairing in \eqref{algTop_codifferential_dual}, the matrix representation of the coboundary operator is the transpose of the incidence matrix of the boundary operator, $\incidencederivative{k}{k-1}:=\left(\incidenceboundary{k-1}{k}\right)^T$. And again, $\incidencederivative{k+1}{k}\incidencederivative{k}{k-1}=0$. Note that expression \eqref{algTop_codifferential_dual} is nothing but a discrete generalized Stokes’ theorem. The matrices representing the coboundary operator only depend on the mesh topology. These matrices will explicitly appear in the final matrix system, \eqref{matrixsystem}.
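The transpose relation can be checked directly once chains and cochains are stored as coefficient vectors; the sketch below (an illustration, not taken from the paper) verifies the adjoint identity $\duality{\delta\kcochain{a}{0}}{\kchain{c}{1}}=\duality{\kcochain{a}{0}}{\partial\kchain{c}{1}}$ for random integer data on the one-dimensional complex used earlier.

```python
import numpy as np

# Sketch: with chains and cochains stored as coefficient vectors, the duality
# pairing is an ordinary dot product, so the coboundary matrix is forced to be
# the transpose of the boundary incidence matrix.
rng = np.random.default_rng(0)
N = 5
E01 = np.zeros((N + 1, N), dtype=int)   # boundary operator C_1 -> C_0
for i in range(N):
    E01[i, i], E01[i + 1, i] = -1, +1

a0 = rng.integers(-3, 4, size=N + 1)    # a 0-cochain
c1 = rng.integers(-1, 2, size=N)        # a 1-chain with coefficients in {-1,0,1}

lhs = (E01.T @ a0) @ c1                 # <delta a, c>
rhs = a0 @ (E01 @ c1)                   # <a, boundary c>
assert lhs == rhs                       # the discrete duality/adjoint relation
```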
![The action of twice the coboundary operator $\delta$ on a 1-cell has a zero net result on its surrounding 3-cells, because they all have both a positive and a negative contribution from its neighboring 2-cells (reproduced from [@bochevhyman2006]).[]{data-label="fig:coboundarycoboundary"}](coboundary_coboundary.pdf){width="80.00000%"}
Mimetic Operators {#mimeticoperators}
-----------------
The discretization of the flow variables involves a projection operator, $\pi_h$, from the complete space $\Lambda^k(\Omega)$ to a subspace $\Lambda^k_h(\Omega;C_k)\subset\Lambda^k(\Omega)$. In this subspace differential forms are expressed in terms of $k$-cochains defined on $k$-chains, and corresponding $k$-form interpolation functions (often called basis-functions). Usually, the subspace is a polynomial space. The projection operation actually consists of two steps: a reduction operator, $\reduction$, that integrates the $k$-forms over $k$-chains to obtain $k$-cochains, and a reconstruction operator, $\reconstruction$, that reconstructs $k$-forms from $k$-cochains using the appropriate basis-functions. These mimetic operators were introduced before in [@bochevhyman2006; @HymanScovel1988]. A composition of the two operators gives the projection operator $\projection=\reconstruction\circ\reduction$, as is illustrated below.
$$\begin{CD}
\Lambda^k(\Omega) @>\projection>> \Lambda^k_h(\Omega;C_k)\\
@VV\reduction V @AA\reconstruction A\\
C^k(D) @= C^k(D)
\end{CD}$$
We already saw the similarities between differential geometry and algebraic topology. We now impose constraints on the maps $\reduction$ and $\reconstruction$ to ensure that these structures are preserved; with these structure-preserving constraints, the three operators together set up the mimetic framework. An extensive discussion on mimetic operators can be found in [@kreeftpalhagerritsma2011]. Here only the most important properties are listed.
\[def:reduction\] The [*reduction operator*]{} $\reduction:\kformspacedomain{k}{\Omega}\rightarrow \kcochainspacedomain{k}{D}$ maps differential forms to cochains. This map is defined by integration as $$\duality{\reduction \kdifform{a}{k}}{\tau_{(k)}}:=\int_{\tau_{(k)}}\kdifform{a}{k},\quad \forall\tau_{(k)}\in C_k(D).
\label{reduction}$$ Then for all $\kchain{c}{k}\in C_k(D)$, the reduction of the $k$-form, $\kdifform{a}{k}\in\Lambda^{k}(\Omega)$, to the $k$-cochain, $\kcochain{a}{k}\in C^k(D)$, is given by $$\kcochain{a}{k}(\kchain{c}{k}):=\duality{\reduction\kdifform{a}{k}}{\kchain{c}{k}}\stackrel{\eqref{kchain}}{=}\sum_ic^i\duality{\reduction \kdifform{a}{k}}{\tau_{(k),i}}\stackrel{\eqref{reduction}}{=}\sum_ic^i\int_{\tau_{(k),i}}\kdifform{a}{k}=\int_{\kchain{c}{k}}\kdifform{a}{k}.$$
The reduction map $\reduction$ provides the [*integral quantities*]{} that were mentioned in the Introduction. It is the integration of a $k$-form over all $k$-cells in a $k$-chain that results in a $k$-cochain. A special case of reduction is integration of an $n$-form $a\in\Lambda^n(\Omega)$ over $\Omega$, then $$\int_\Omega\kdifform{a}{n}:=\duality{\reduction\kdifform{a}{n}}{\boldsymbol\sigma_{(n)}}\;,$$ where the chain $\boldsymbol\sigma_{(n)}=\sum_i\tau_{(n),i}$ (so all $c^i=+1$) covers the entire computational domain $\Omega$. The reduction map has a commuting property with respect to continuous and discrete differentiation, $$\reduction\ederiv=\dederiv\reduction\quad\mathrm{on}\ \Lambda^k(\Omega).
\label{cdp1}$$ This commutation can be illustrated as $$\begin{CD}
\Lambda^k @>\ederiv>> \Lambda^{k+1}\\
@VV\reduction V @VV\reduction V \\
C^k @>\delta>> C^{k+1}
\end{CD}$$ This property follows from the generalized Stokes’ theorem and the duality pairing of , $$\langle\reduction\ederiv \kdifform{a}{k},\kchain{c}{k}\rangle\stackrel{\eqref{reduction}}{=}\int_{\kchain{c}{k}}\ederiv \kdifform{a}{k}\stackrel{\eqref{stokestheorem}}{=}\int_{\partial\kchain{c}{k}}\kdifform{a}{k}\stackrel{\eqref{reduction}}{=}\langle\reduction \kdifform{a}{k},\partial\kchain{c}{k}\rangle\stackrel{\eqref{algTop_codifferential_dual}}{=}\langle\dederiv\reduction \kdifform{a}{k},\kchain{c}{k}\rangle.$$ The operator acting in the opposite direction to the reduction operator is the reconstruction operator, $\reconstruction$. The *reconstruction operator* $\reconstruction:\kcochainspacedomain{k}{D}\rightarrow\kformspace{k}_h(\Omega;C_k)$ maps $k$-cochains onto finite dimensional $k$-forms. The reconstructed differential forms belong to the space $\Lambda^k_h(\Omega;C_k)$, which is a proper subset of the complete $k$-form space $\Lambda^k(\Omega)$. While the reduction step is clearly defined in , in the choice of interpolation forms there exists some freedom.
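Before turning to the reconstruction, the commuting property \eqref{cdp1} can be made concrete with a small numerical sketch (an illustration under the assumption of a one-dimensional mesh on $[-1,1]$, not part of the original text): reducing the exterior derivative of a 0-form over the edges gives the same 1-cochain as applying the coboundary matrix to the reduced nodal values.

```python
import numpy as np
from scipy.integrate import quad

# Sketch: R(da) = delta(R a) on a 1D cell complex.
a  = np.cos                                     # a smooth 0-form a(xi)
da = lambda xi: -np.sin(xi)                     # its exterior derivative a'(xi) dxi
xi = np.linspace(-1.0, 1.0, 6)                  # nodes of the cell complex

Ra  = a(xi)                                     # 0-cochain: nodal values
Rda = np.array([quad(da, xi[i], xi[i + 1])[0]   # 1-cochain: edge integrals
                for i in range(len(xi) - 1)])

delta_Ra = np.diff(Ra)                          # coboundary: a_i - a_{i-1}
print(np.max(np.abs(Rda - delta_Ra)))           # ~1e-16: the diagram commutes
```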
\[reconstructionoperator\] Although the choice of a reconstruction method allows for some freedom, $\reconstruction$ must satisfy the following properties:
- Reconstruction $\reconstruction$ must be the right inverse of $\reduction$, so it returns identity ([*consistency property*]{}), $$\reduction\reconstruction=Id\quad\mathrm{on}\ C^k(D).
\label{consistency}$$
- Like $\reduction$, the reconstruction operator $\reconstruction$ must also possess a commuting property with respect to differentiation: a properly chosen reconstruction operator $\reconstruction$ satisfies a commuting property with respect to the exterior derivative and the coboundary operator, $$\ederiv\reconstruction=\mathcal{I}\dederiv\quad\mathrm{on}\ C^k(D).
\label{cdp2}$$ This commutation can be illustrated as $$\begin{CD}
\kformspaceh{k} @>\ederiv>> \kformspaceh{k+1}\\
@AA\reconstruction A @AA\reconstruction A \\
\kcochainspace{k} @>\delta>> \kcochainspace{k+1}
\end{CD}$$
Moreover, we want it to be an approximate left inverse of $\reduction$, so the result is close to identity ([*approximation property*]{}) $$\reconstruction\reduction=Id+\mathcal{O}\left(h^p\right)\quad \mathrm{in}\ \Lambda^k(\Omega).
\label{approximation}$$ where $\mathcal{O}(h^p)$ indicates a truncation error in terms of a measure of the mesh size, $h$, and a polynomial order $p$.
\[th:projection\] The composition $\reconstruction\circ\reduction$ defines the projection operator, $\projection\define\reconstruction\reduction:\kformspace{k}(\Omega)\rightarrow\kformspace{k}_h(\Omega;C_k)$, allowing for an approximate continuous representation of a $k$-form $a^{(k)}\in\Lambda^k(\Omega)$, $$\kdifformh{a}{k}=\projection\kdifform{a}{k}=\reconstruction\reduction\kdifform{a}{k}, \quad \pi_h\kdifform{a}{k}\in\kformspace{k}_h(\Omega;C_k)\subset\kformspace{k}(\Omega),
\label{projection}$$ where $\reconstruction\reduction\kdifform{a}{k}$ is expressed as a combination of $k$-cochains and interpolating $k$-forms.
A proof that $\pi_h$ is indeed a projection operator is given in [@kreeftpalhagerritsma2011]. Since $\projection\kdifform{a}{k}=\reconstruction\reduction\kdifform{a}{k}$ is a linear combination of $k$-cochains and interpolation $k$-forms, the expansion coefficients in the spectral element formulation to be discussed in Section \[sec:mimeticsem\] are the cochains which in turn are the integral quantities mentioned in the Introduction.
\[Lem:projectionextder\] There exists a commuting property for the projection and the exterior derivative, such that $$\label{projectionextder}
\ederiv\projection=\projection\ederiv\quad\mathrm{on}\ \Lambda^k(\Omega).$$ This can be illustrated as $$\begin{CD}
\Lambda^k @>\ederiv>> \Lambda^{k+1}\\
@VV\projection V @VV\projection V\\
\Lambda_h^k @>\ederiv>> \Lambda_h^{k+1}.
\end{CD}$$
This is a direct consequence of the definitions of the reduction , reconstruction and projection operators , $$\ederiv\projection\kdifform{a}{k}\stackrel{\eqref{projection}}{=}\ederiv\reconstruction\reduction\kdifform{a}{k}\stackrel{\eqref{cdp2}}{=}\reconstruction\delta\reduction\kdifform{a}{k}\stackrel{\eqref{cdp1}}{=}\reconstruction\reduction\ederiv\kdifform{a}{k}\stackrel{\eqref{projection}}{=}\projection\ederiv\kdifform{a}{k},\quad\forall \kdifform{a}{k}\in\Lambda^k(\Omega).$$
Note that it is the intermediate step $\reconstruction\delta\reduction\kdifform{a}{k}$ that is used in practice for the discretization, see Examples \[ex:curl\] and \[ex:div\]. Since we have a matrix representation of the coboundary operator in terms of incidence matrices, we expect the incidence matrices to appear explicitly in the spectral element formulation, see \eqref{matrixsystem}. Lemma \[Lem:projectionextder\] is the most important result in this paper. As a direct consequence we obtain a pointwise divergence-free solution, as illustrated in the following example.
Consider the relation $\ud\kdifform{u}{n-1}=\kdifform{g}{n}$. In vector notation, $\ud$ represents the $\mathrm{div}$ operator. Now let $\ud\kdifformh{u}{n-1}=\kdifformh{g}{n}$ be the discretization of our continuous problem. Then, by using \eqref{projectionextder}, we get $$\ud \kdifformh{u}{n-1}-\kdifformh{g}{n}=\ud\pi_h\kdifform{u}{n-1}-\pi_h\kdifform{g}{n}=\pi_h(\ud \kdifform{u}{n-1}-\kdifform{g}{n})=0.$$ It follows that our discretization of this relation is exact. In case $\kdifform{g}{n}=0$, we have a pointwise divergence-free solution $\kdifformh{u}{n-1}$.
As a direct consequence of we satisfy the LBB stability criteria, see [@brezzifortin; @giraultraviart; @kreefterrorestimate]. The projection does *not* commute with the codifferential operator. This is the main reason why we rewrite the codifferentials into exterior derivatives and boundary integrals, by means of integration by parts using .
We do not restrict ourselves to affine mappings only, as is required for many other compatible finite elements, like Nédélec and Raviart-Thomas elements and their generalizations [@arnoldfalkwinther2006; @nedelec1980; @raviartthomas1977], but also allow non-affine maps such as transfinite mappings [@gordonhall1973] or isogeometric transformations. This allows for better approximations in complex domains with curved boundaries, without the need for excessive refinement. This is possible since the projection operator $\projection$ commutes with the pullback $\pullback$, $$\label{pullbackprojection}
\pullback\projection=\projection\pullback\quad\mathrm{on}\ \Lambda^k(\Omega).$$ This commutation can be illustrated as $$\begin{CD}
\Lambda^k(\Omega) @>\pullback>> \Lambda^k(\Omega_{\rm ref})\\
@VV\projection V @VV\projection V\\
\Lambda^k_h(\Omega,C_k) @>\pullback>> \Lambda^k_h(\Omega_{\rm ref},C_k)
\end{CD}$$ An extensive proof is given in [@kreeftpalhagerritsma2011].
Mimetic spectral element basis-functions {#sec:mimeticsem}
----------------------------------------
Now that a mimetic framework is formulated using differential geometry, algebraic topology and the relations between them, the mimetic operators, we derive reconstruction functions, $\mathcal{I}$, that satisfy the properties of the mimetic operators. In combination with the reduction operator, $\mathcal{R}$, this defines the mimetic projection operator, $\pi_h$. The finite dimensional $k$-forms used in this paper are polynomials, based on the idea of spectral element methods, [@canuto1]. Spectral element methods have many desirable features such as arbitrary polynomial representation, favourable conditioning, element-wise local support, and optimal stability and approximation properties. However, the definition of the reconstruction operator requires a new set of spectral element interpolation functions. The *mimetic spectral elements* were derived independently by [@gerritsma2011; @robidoux2008], and are more extensively discussed in [@kreeftpalhagerritsma2011]. Only the most important properties of the mimetic spectral element method are presented here.
In spectral element methods the computational domain $\Omega$ is decomposed into $M$ non-overlapping, possibly curvilinear quadrilateral or hexahedral, closed sub-domains $Q_m$, $$\Omega=\bigcup_{m=1}^MQ_m,\quad Q_m\cap Q_l=\p Q_m\cap\p Q_l,\ m\neq l,$$ where in each sub-domain a Gauss-Lobatto mesh is constructed, see Figures \[fig:meshes\] and \[fig:LDC\] in the next section. The complete mesh is indicated by $\mathcal{Q}:=\sum_{m=1}^MQ_m$.
The collection of Gauss-Lobatto meshes in all elements $Q_m\in\mathcal{Q}$ constitutes the cell complex $D$. For each element $Q_m$ there exists a sub cell complex, $D_m$. Note that $D_m\cap D_l,\ m\neq l$, is not an empty set in case they are neighboring elements, but contains all $k$-cells, $k<n$, of the common boundary, see .
Each sub-domain is mapped from the reference element, $Q_{\rm ref}=[-1,1]^n$, using the mapping $\Phi_m:Q_{\rm ref}\rightarrow Q_m$. Then all flow variables defined on $Q_m$ are pulled back onto this reference element using the following pullback operation, $\Phi^\star_m:\Lambda^k_h(Q_m;C_k)\rightarrow\Lambda^k_h(Q_{\rm ref};C_k)$. In three dimensions the reference element is given by $Q_{\rm ref}:=\{(\xi,\eta,\zeta)\;|\;-1\leq\xi,\eta,\zeta\leq1\}$.
The basis-functions that interpolate the cochains on the quadrilateral or hexahedral elements are constructed using tensor products. It is therefore sufficient to derive interpolation functions in one dimension and use tensor products afterwards to construct $n$-dimensional basis functions. A similar approach was taken in [@buffa2011b]. Because the projection operator and the pullback operator commute , the interpolation functions are discussed for the reference element only.
Consider a 0-form $\kdifform{a}{0}\in\Lambda^0(Q_{\rm ref})$ on $Q_{\rm ref}:=\xi\in[-1,1]$, on which a cell complex $D$ is defined that consists of $N+1$ nodes, $\xi_i$, where $-1\leq \xi_0<\hdots<\xi_N\leq 1$, and $N$ edges, $\tau_{(1),i}=[\xi_{i-1},\xi_i]$, of which the nodes are the boundaries. Corresponding to this set of nodes (0-chains) there exists a projection using $N^{\rm th}$ order *Lagrange polynomials*, $l_i(\xi)$, to approximate a $0$-form, as $$\projection\kdifform{a}{0}=\sum_{i=0}^N a_il_i(\xi).
\label{nodalapprox}$$ Lagrange polynomials have the property that they interpolate nodal values and are therefore suitable to reconstruct the cochain $\kcochain{a}{0}=\reduction\kdifform{a}{0}$ containing the set $a_i=a(\xi_i)$ for $i=0,\hdots,N$. So Lagrange polynomials can be used to reconstruct a 0-form from a 0-cochain. Lagrange polynomials are in fact 0-forms themselves, $l_i(\xi)\in\Lambda^0_h(Q_{\rm ref};C_0)$. Lagrange polynomials are constructed such that their value is one in the corresponding point and zero in all other mesh points, $$\reduction l_i(\xi)=l_i(\xi_p)=\left\{
\begin{aligned}
&1&{\rm if}\ i=p\\ &0&{\rm if}\ i\neq p
\end{aligned}
\right..
\label{nodalproperty}$$ This satisfies \eqref{consistency}, where in this case $\reconstruction=l_i(\xi)$. Gerritsma [@gerritsma2011] and Robidoux [@robidoux2008] derived a similar projection for 1-forms, consisting of $1$-cochains and $1$-form polynomials, which is called the *edge polynomial*, $e_i(\xi)\in\Lambda^1_h(Q_{\rm ref};C_1)$.
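As an aside, the nodal Kronecker property is easy to check numerically; the sketch below (an illustration, not the paper's implementation) builds the Lagrange basis on Gauss-Lobatto-Legendre nodes for $N=4$ and evaluates it at the mesh points.

```python
import numpy as np
from numpy.polynomial import legendre

# Sketch: Lagrange basis l_i on the N+1 Gauss-Lobatto-Legendre nodes and a
# check of the Kronecker property l_i(xi_p) = delta_ip used to reconstruct
# 0-forms from 0-cochains.
N = 4
PN = legendre.Legendre.basis(N)
nodes = np.concatenate(([-1.0], np.sort(PN.deriv().roots()), [1.0]))

def lagrange(i, x, nodes=nodes):
    """i-th Lagrange polynomial through the GLL nodes, evaluated at x."""
    others = np.delete(nodes, i)
    return np.prod([(x - xj) / (nodes[i] - xj) for xj in others], axis=0)

V = np.array([[lagrange(i, xp) for xp in nodes] for i in range(N + 1)])
print(np.allclose(V, np.eye(N + 1)))    # True: R l_i = delta_ip
```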
\[lemma:edge\] Following Definitions \[def:reduction\] and \[reconstructionoperator\], applying the exterior derivative to $\pi_h\kdifform{a}{0}$ gives the 1-form $\pi_h\kdifform{b}{1}=\ederiv \pi_h\kdifform{a}{0}=\reconstruction\delta\reduction \kdifform{a}{0}$, given by $$\projection \kdifform{b}{1}=\sum_{i=1}^Nb_i e_i(\xi),$$ with 1-cochain $\kcochain{b}{1}$, where $$\begin{aligned}
\label{ucochain}
b_i&=\langle\reduction \kdifform{b}{1},\tau_{(1),i}\rangle=\int_{\tau_{(1),i}}b(\xi)=\int_{\tau_{(1),i}}\ederiv \kdifform{a}{0}=\int_{\p\tau_{(1),i}}\kdifform{a}{0},\\
&=a(\xi_i)-a(\xi_{i-1})=a_i-a_{i-1},\nonumber\end{aligned}$$ with the edge interpolation polynomial defined by $$\begin{aligned}
\label{edge}
e_i(\xi)&=-\sum_{k=0}^{i-1}\ederiv l_k(\xi)=\sum_{k=i}^{N}\ederiv l_k(\xi)=\tfrac{1}{2}\sum_{k=i}^{N}\ederiv l_k(\xi)-\tfrac{1}{2}\sum_{k=0}^{i-1}\ederiv l_k(\xi).\end{aligned}$$
See [@gerritsma2011; @kreeftpalhagerritsma2011; @robidoux2008].
The value corresponding to line segment (1-cell) $\tau_{(1),i}$ is given by $b_i=a_i-a_{i-1}$ and so $\kcochain{b}{1}=\delta\kcochain{a}{0}$ is the discrete derivative operator in 1D. This operation is purely topological, no metric is involved. It satisfies \eqref{cdp2}, since $\ederiv\reconstruction\kcochain{a}{0}=\reconstruction\delta\kcochain{a}{0}$. Note that we have $\ederiv e_i(\xi)=-\sum_{k=0}^{i-1}\ederiv\!\circ\!\ederiv l_k(\xi)=0$. The 1-form edge polynomial can also be written as below, separating the edge function into its polynomial and its basis, $$e_i(\xi)=\ve_i(\xi)\ederiv \xi,\quad\mathrm{with}\quad\ve_i(\xi)=-\sum_{k=0}^{i-1}\frac{\ederiv l_k}{\ederiv \xi}.$$ Similar to \eqref{nodalproperty}, the edge functions are constructed such that when integrating $e_i(\xi)$ over a line segment it gives one for the corresponding element and zero for any other line segment, so $$\reduction e_i(\xi)=\int_{\xi_{p-1}}^{\xi_p}e_i(\xi)=\left\{
\begin{aligned}
&1&{\rm if}\ i=p\\ &0&{\rm if}\ i\neq p
\end{aligned}
\right..
\label{intedge}$$ This also satisfies \eqref{consistency}, where in this case $\reconstruction=e_i(\xi)$. The fourth-order Lagrange and third-order edge polynomials, corresponding to a Gauss-Lobatto mesh with $N=4$, are shown in Figures \[fig:lagrange\] and \[fig:edgepoly\].
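The integral Kronecker property \eqref{intedge} can also be checked with a few lines of Python (again an illustration only, using the same GLL node construction as in the previous sketch): the edge polynomials are built from derivatives of the Lagrange basis and integrated over each line segment.

```python
import numpy as np
from numpy.polynomial import legendre, polynomial as P

# Sketch: edge polynomials e_i = -sum_{k<i} dl_k/dxi, and the check
# int_{xi_{p-1}}^{xi_p} e_i = delta_ip.
N = 4
nodes = np.concatenate(([-1.0],
                        np.sort(legendre.Legendre.basis(N).deriv().roots()),
                        [1.0]))

# Lagrange polynomials in coefficient form (degree N through the N+1 GLL nodes)
lag = [P.Polynomial.fit(nodes, np.eye(N + 1)[i], N).convert()
       for i in range(N + 1)]

# e_i(xi) = -sum_{k=0}^{i-1} d l_k / d xi,  i = 1..N
edge = [-sum(lag[k].deriv() for k in range(i)) for i in range(1, N + 1)]

M = np.array([[e.integ()(nodes[p + 1]) - e.integ()(nodes[p]) for p in range(N)]
              for e in edge])
print(np.allclose(M, np.eye(N)))        # True: R e_i = delta_ip
```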
![Lagrange polynomials on Gauss-Lobatto-Legendre mesh.[]{data-label="fig:lagrange"}](gll.pdf){width="0.9\linewidth"}
![Edge polynomials on Gauss-Lobatto-Legendre mesh.[]{data-label="fig:edgepoly"}](edge.pdf){width="0.9\linewidth"}
Now that we have developed interpolation functions in one dimension, we can extend this to the multidimensional framework by means of tensor products. This allows for the interpolation of integral quantities defined on $k$-dimensional cubes. Consider a reference element in $\mathbb{R}^3$, $Q_{\rm ref}=[-1,1]^3$. Then the interpolation functions for points, lines, surfaces and volumes are given by, $$\begin{aligned}
&\mathrm{point}:&&P^{(0)}_{i,j,k}(\xi,\eta,\zeta)=l_i(\xi)\otimes l_j(\eta)\otimes l_k(\zeta),\\
&\mathrm{line}:&&L^{(1)}_{i,j,k}(\xi,\eta,\zeta)=\{e_i(\xi)\otimes l_j(\eta)\otimes l_k(\zeta),\ l_i(\xi)\otimes e_j(\eta)\otimes l_k(\zeta),\ l_i(\xi)\otimes l_j(\eta)\otimes e_k(\zeta)\},\\
&\mathrm{surface}:&&S^{(2)}_{i,j,k}(\xi,\eta,\zeta)=\{l_i(\xi)\otimes e_j(\eta)\otimes e_k(\zeta),\ e_i(\xi)\otimes l_j(\eta)\otimes e_k(\zeta),\ e_i(\xi)\otimes e_j(\eta)\otimes l_k(\zeta)\},\\
&\mathrm{volume}:&&V^{(3)}_{i,j,k}(\xi,\eta,\zeta)=e_i(\xi)\otimes e_j(\eta)\otimes e_k(\zeta).
\end{aligned}$$ Note that $V^{(3)}_{i,j,k}$ is indeed a 3-form, since $e_i(\xi)\otimes e_j(\eta)\otimes e_k(\zeta)=\ve_i(\xi)\ve_j(\eta)\ve_k(\zeta)\,\ud\xi\wedge\ud\eta\wedge\ud\zeta$. So the approximation spaces are spanned by combinations of Lagrange and edge basis functions, $$\begin{aligned}
\Lambda^0_h(\mathcal{Q};C_0)&:=\mathrm{span}\left\{P^{(0)}_{i,j,k}\right\}_{i=0,j=0,k=0}^{N,N,N},\\
\Lambda^1_h(\mathcal{Q};C_1)&:=\mathrm{span}\left\{\big(L^{(1)}_{i,j,k}\big)_1\right\}_{i=1,j=0,k=0}^{N,N,N}\times \mathrm{span}\left\{\big(L^{(1)}_{i,j,k}\big)_2\right\}_{i=0,j=1,k=0}^{N,N,N}\times \mathrm{span}\left\{\big(L^{(1)}_{i,j,k}\big)_3\right\}_{i=0,j=0,k=1}^{N,N,N},\\
\Lambda^2_h(\mathcal{Q};C_2)&:=\mathrm{span}\left\{\big(S^{(2)}_{i,j,k}\big)_1\right\}_{i=0,j=1,k=1}^{N,N,N}\times \mathrm{span}\left\{\big(S^{(2)}_{i,j,k}\big)_2\right\}_{i=1,j=0,k=1}^{N,N,N}\times \mathrm{span}\left\{\big(S^{(2)}_{i,j,k}\big)_3\right\}_{i=1,j=1,k=0}^{N,N,N},\\
\Lambda^3_h(\mathcal{Q};C_3)&:=\mathrm{span}\left\{V^{(3)}_{i,j,k}\right\}_{i=1,j=1,k=1}^{N,N,N}.\end{aligned}$$
Lagrange interpolation by itself does not guarantee a convergent approximation [@erdos1980]; it requires a suitably chosen set of points, $-1\leq\xi_0<\xi_1<\hdots<\xi_N\leq1$. Here, the Gauss-Lobatto distribution is proposed, because of its superior convergence behaviour [@canuto1]. The convergence rates of Lagrange and edge interpolants were obtained in [@kreeftpalhagerritsma2011] and are given by, $$\begin{aligned}
\Vert \kdifform{a}{0}-\pi_h\kdifform{a}{0}\Vert_{H\Lambda^0}&\leq C\frac{h^{l-1}}{p^{m-1}}|\kdifform{a}{0}|_{H^m\Lambda^0},\label{hpconvergenceestimate}\\
\Vert \kdifform{b}{1}-\pi_h\kdifform{b}{1}\Vert_{L^2\Lambda^1}&\leq C\frac{h^{l-1}}{p^{m-1}}|\kdifform{b}{1}|_{H^{m-1}\Lambda^1},\label{edgeinterpolationerror}\end{aligned}$$ with $l=\mathrm{min}(p+1,m)$. For the variables vorticity, velocity and pressure in the VVP formulation of the Stokes problem, the $h$-convergence rates of the interpolation errors become, $$\begin{gathered}
\label{vvpinterpolationerror}
\norm{\omega-\pi_h\omega}_{L^2\Lambda^{n-2}}=\mathcal{O}(h^{N+s}),\quad\norm{\omega-\pi_h\omega}_{H\Lambda^{n-2}}=\mathcal{O}(h^N),\nonumber\\
\norm{u-\pi_h u}_{L^2\Lambda^{n-1}}=\mathcal{O}(h^N),\quad\norm{p-\pi_h p}_{L^2\Lambda^{n}}=\mathcal{O}(h^N),\end{gathered}$$ where $s=1$ for $n=2$ and $s=0$ for $n>2$, and with $N$ defined as in . Because of and , we have $\norm{u-\pi_h u}_{H\Lambda^{n-1}}=\norm{u-\pi_h u}_{L^2\Lambda^{n-1}}$.
Pointwise divergence-free discretization {#pointwisedivergencefree}
----------------------------------------
One of the most interesting properties of the mimetic method presented in this paper is that, within our weak formulation, the divergence-free constraint is satisfied pointwise. This result follows from the three commuting properties with the exterior derivative, \eqref{cdp1}, \eqref{cdp2} and \eqref{projectionextder}, as was shown in Section \[mimeticoperators\]. The corresponding commuting diagrams are repeated in the diagram below for the two dimensional case.
$$\begin{CD}
\Lambda^{0}_h(\mathcal{Q};C_0) @>\ederiv>> \Lambda^{1}_h(\mathcal{Q};C_1) @>\ederiv>> \Lambda^{2}_h(\mathcal{Q};C_2)\\
@AA\reconstruction A @AA\reconstruction A @AA\reconstruction A\\
C^{0}(D) @>\dederiv>> C^{1}(D) @>\dederiv>> C^{2}(D)
\end{CD}$$
Note that by curl we refer to the two-dimensional variant, applied to a scalar, i.e. $\mathrm{curl}\,\omega=(\p\omega/\p y,-\p\omega/\p x)^T$, see also Example \[ex:2dstokes\]; it is also called the normal gradient operator, $\mathrm{grad}^\perp$, see [@palha2010].
In the following two examples we demonstrate the action of the exterior derivative on the vorticity, $\kdifformh{\omega}{0}\in\Lambda_h^{0}(Q_{\rm ref};C_0)$, and on the velocity flux, $\kdifformh{u}{1}\in\Lambda^{1}_h(Q_{\rm ref};C_1)$. The two dimensional reconstruction is based on the tensor-product construction of the one dimensional reconstruction functions introduced above.
\[ex:curl\] Consider a flux $\kdifformh{z}{1}\in\Lambda^1_h(Q_{\rm ref};C_1)$ with $C_1$ outer-oriented, and where $\kdifformh{z}{1}=\ud\kdifformh{\omega}{0}$. Then $\kdifformh{\omega}{0}$ is expanded in the reference coordinates $(\xi,\eta)$ as $$\kdifformh{\omega}{0}=\sum_{i=0}^N\sum_{j=0}^N\omega_{i,j}l_i(\xi)l_j(\eta).$$ Applying the exterior derivative in the same way as in Lemma \[lemma:edge\] gives
$$\begin{aligned}
\kdifformh{z}{1}=\ud\kdifformh{\omega}{0}&=\sum_{i=1}^N\sum_{j=0}^N (\omega_{i,j}-\omega_{i-1,j})e_i(\xi)l_j(\eta)+\sum_{i=0}^N\sum_{j=1}^N(\omega_{i,j}-\omega_{i,j-1}) l_i(\xi)e_j(\eta),\nonumber\\
&=-\sum_{i=1}^N\sum_{j=0}^N z^\eta_{i,j}e_i(\xi)l_j(\eta)+\sum_{i=0}^N\sum_{j=1}^Nz^\xi_{i,j} l_i(\xi)e_j(\eta),\end{aligned}$$
where $z^\xi_{i,j}=\omega_{i,j}-\omega_{i,j-1}$, and $z^\eta_{i,j}=\omega_{i-1,j}-\omega_{i,j}$ can be compactly written as $\mathbf{z}^{(1)}=\delta\boldsymbol\omega^{(0)}$, with $\boldsymbol\omega^{(0)}\in C^0(D)$ and $\mathbf{z}^{(1)}\in C^1(D)$, or in matrix notation as $\mathbf{z}=\mathsf{E}^{(1,0)}\boldsymbol\omega$. This relation is exact, coordinate free and invariant under transformations.
\[ex:div\] Let $\kdifformh{u}{1}\in\Lambda^1_h(Q_{\rm ref};C_1)$ be the velocity flux defined as $$\kdifformh{u}{1}=-\sum_{i=1}^N\sum_{j=0}^Nv_{i,j}e_i(\xi)l_j(\eta)+\sum_{i=0}^N\sum_{j=1}^Nu_{i,j}l_i(\xi)e_j(\eta).$$ Compare this to the velocity flux in Example \[ex:2dstokes\]. Then the change of mass, $\kdifformh{m}{2}\in\Lambda^2_h(Q_{\rm ref};C_2)$, is equal to the exterior derivative of $\kdifformh{u}{1}$, $$\begin{aligned}
\kdifformh{m}{2}=\ud \kdifformh{u}{1}&=\sum_{i=1}^N\sum_{j=1}^N(u_{i,j}-u_{i-1,j}+v_{i,j}-v_{i,j-1})e_i(\xi)e_j(\eta).\nonumber\\
&=\sum_{i=1}^N\sum_{j=1}^Nm_{i,j}e_i(\xi)e_j(\eta),\end{aligned}$$ where $m_{i,j}=u_{i,j}-u_{i-1,j}+v_{i,j}-v_{i,j-1}$ can be compactly written as $\mathbf{m}^{(2)}=\delta\mathbf{u}^{(1)}$, with $\mathbf{u}^{(1)}\in C^1(D)$ and $\mathbf{m}^{(2)}\in C^2(D)$, or in matrix notation as $\mathbf{m}=\mathsf{E}^{(2,1)}\mathbf{u}$. Note that if the mass production is zero, as in our model problem , the incompressibility constraint is already satisfied at discrete/cochain level. Interpolation then results in a pointwise divergence-free solution.
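The two incidence matrices of these examples compose to zero, $\mathsf{E}^{(2,1)}\mathsf{E}^{(1,0)}=0$, which is exactly the pointwise divergence-free property at the cochain level. A short sketch (illustrative Python, not the paper's code) applies both operators to arbitrary integer nodal data and confirms that the result vanishes identically.

```python
import numpy as np

# Sketch: E^(1,0) (Example ex:curl) followed by E^(2,1) (Example ex:div),
# applied to cochains stored as 2D arrays; the composition is exactly zero.
rng = np.random.default_rng(1)
N = 6
omega = rng.integers(-9, 10, size=(N + 1, N + 1))   # 0-cochain omega_{i,j}

# 1-cochain z = E^(1,0) omega
z_xi  = omega[:, 1:] - omega[:, :-1]    # z^xi_{i,j}  = w_{i,j}  - w_{i,j-1}
z_eta = omega[:-1, :] - omega[1:, :]    # z^eta_{i,j} = w_{i-1,j} - w_{i,j}

# identify (u, v) with (z^xi, z^eta) and apply E^(2,1)
u, v = z_xi, z_eta
m = (u[1:, :] - u[:-1, :]) + (v[:, 1:] - v[:, :-1])  # m_{i,j} = delta u
print(np.abs(m).max())                  # 0: dd = 0, hence pointwise div-free
```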
Mixed formulation, boundary conditions and implementation {#sec:mixedformulationstokes}
=========================================================
We know how to discretize exactly the metric-free exterior derivative $\ud$ (see , , and the examples above), but it is less obvious how to treat the codifferential operator $\ud^*$. Fortunately, the two are directly related using $L^2$-inner products as seen in . Therefore the derivation of the mixed formulation of the Stokes problem consists of two steps: 1). Multiply equations - by the test functions $\kdifform{\tau}{n-2},\kdifform{v}{n-1},\kdifform{q}{n}$ using $L^2$-inner products. 2). Use integration by parts, as in , to express the codifferentials in terms of the exterior derivatives and boundary integrals. The resulting mixed formulation of the Stokes problem becomes:\
\
Find $(\kdifform{\omega}{n-2},\kdifform{u}{n-1},\kdifform{p}{n})\in\{H\Lambda^{n-2}\times H\Lambda^{n-1}\times L^2\Lambda^n\}$, given $\kdifform{f}{n-1}\in L^2\Lambda^{n-1}$, for all $(\kdifform{\tau}{n-2},\kdifform{v}{n-1},\kdifform{q}{n})\in\{H\Lambda^{n-2}\times H\Lambda^{n-1}\times L^2\Lambda^n\}$, such that
\[mixedstokes\] $$\begin{aligned}
\big(\kdifform{\tau}{n-2},\kdifform{\omega}{n-2}\big)_\Omega-\big(\ud\kdifform{\tau}{n-2},\kdifform{u}{n-1}\big)_\Omega&=-\int_{\p\Omega}\tr\kdifform{\tau}{n-2}\wedge\tr\star \kdifform{u}{n-1}, \label{mixedstokes1}\\
\big(\kdifform{v}{n-1},\ud\kdifform{\omega}{n-2}\big)_\Omega+\big(\ud \kdifform{v}{n-1},\kdifform{p}{n}\big)_\Omega&=\big(\kdifform{v}{n-1},\kdifform{f}{n-1}\big)_\Omega +\int_{\p\Omega}\tr \kdifform{v}{n-1}\wedge\tr\star \kdifform{p}{n}, \label{mixedstokes2}\\
\big(\kdifform{q}{n},\ud \kdifform{u}{n-1})_\Omega&=0. \label{mixedstokes3}\end{aligned}$$
This mixed formulation is similar to those in [@bernardi2006; @dubois2002; @giraultraviart]. The mixed formulation is well-posed, see [@giraultraviart; @kreefterrorestimate]. The discrete problem is almost the same as the continuous problem, that is: find $(\kdifformh{\omega}{n-2},\kdifformh{u}{n-1},\kdifformh{p}{n})\in\{\Lambda_h^{n-2}\times \Lambda_h^{n-1}\times \Lambda_h^n\}$, given $\kdifformh{f}{n-1}\in \Lambda_h^{n-1}$, for all $(\kdifformh{\tau}{n-2},\kdifformh{v}{n-1},\kdifformh{q}{n})\in\{\Lambda_h^{n-2}\times \Lambda_h^{n-1}\times \Lambda_h^n\}$, such that - hold. The discrete problem is also well-posed, because every subcomplex of a Hilbert complex is also a Hilbert complex, so if $(H\Lambda,\ud)$ is a Hilbert complex, so is $(\Lambda_h,\ud)$, and the projection operator from $H\Lambda^k(\Omega)$ to $\Lambda^k_h(\Omega;C_k)$ is bounded, see [@kreeftpalhagerritsma2011]. A complete proof is given in [@kreefterrorestimate].
System \eqref{mixedstokes} needs to be supplemented with boundary conditions on $\p\Omega$. There exist four possible types of boundary conditions, as follows from the boundary integrals in the mixed formulation. Subdivide the boundary into several parts, $\p\Omega=\bigcup_i\Gamma_i$, where $\Gamma_i\cap\Gamma_j=\emptyset$ for $i\neq j$. Each part of the boundary can have one of the following four boundary conditions: 1. prescribed *velocity* (such as no-slip), 2. *tangential velocity - pressure*, 3. *tangential vorticity - normal velocity*, and 4. *tangential vorticity - pressure* boundary conditions. An overview is given in Table \[tab:boundaryconditions\].
Name $\quad\quad$ Exterior Calculus $\quad\quad$ $\quad\quad$ Vector Calculus $\quad\quad$ $\quad$ Type $\quad$
---------------------- --------------------------------------------------------------------- --------------------------------------------------------------------------- ----------------------
Normal velocity $\tr\kdifform{u}{n-1}\ \Rightarrow\ \tr\kdifform{v}{n-1}=0$ $\vec{u}\cdot\vec{n}\ \Rightarrow\ \vec{v}\cdot\vec{n}=0$ essential
tangential velocity $\tr\star\kdifform{u}{n-1}$ $\vec{u}\cdot\vec{t}$ natural
Tangential velocity $\tr\star\kdifform{u}{n-1}$ $\vec{u}\cdot\vec{t}$ natural
pressure $\tr\star\kdifform{p}{n}$ $p$ natural
Tangential vorticity $\tr\kdifform{\omega}{n-2}\ \Rightarrow\ \tr\kdifform{\tau}{n-2}=0$ $\vec{\omega}\times\vec{t}\ \Rightarrow\ \vec{\tau}\times\vec{t}=\vec{0}$ essential
normal velocity $\tr\kdifform{u}{n-1}\ \Rightarrow\ \tr\kdifform{v}{n-1}=0$ $\vec{u}\cdot\vec{n}\ \Rightarrow\ \vec{v}\cdot\vec{n}=0$ essential
Tangential vorticity $\tr\kdifform{\omega}{n-2}\ \Rightarrow\ \tr\kdifform{\tau}{n-2}=0$ $\vec{\omega}\times\vec{t}\ \Rightarrow\ \vec{\tau}\times\vec{t}=\vec{0}$ essential
pressure $\tr\star\kdifform{p}{2}$ $p$ natural
: Admissible boundary conditions for Stokes flow in vorticity-velocity-pressure formulation.[]{data-label="tab:boundaryconditions"}
From the implementation point of view we would like to mention that the $L^2$ inner products and boundary integrals are evaluated using Gauss-Lobatto quadrature, which is exact for polynomials up to order $2N-1$, [@canuto1]. The resulting system matrix is a saddle point system that is given by, $$\label{matrixsystem}
\begin{bmatrix}
\mathsf{M}^{(n-2)} & \big(\mathsf{E}^{(n-1,n-2)}\big)^T\mathsf{M}^{(n-1)} & \emptyset \\
\mathsf{M}^{(n-1)}\mathsf{E}^{(n-1,n-2)} & \emptyset & \big(\mathsf{E}^{(n,n-1)}\big)^T\mathsf{M}^{(n)} \\
\emptyset & \mathsf{M}^{(n)}\mathsf{E}^{(n,n-1)} & \emptyset
\end{bmatrix}
\begin{bmatrix}
\boldsymbol\omega \\ \mathbf{u} \\ \mathbf{p}
\end{bmatrix}
=
\begin{bmatrix}
-\mathsf{B}_1(\mathbf{\star u}) \\ \mathsf{M}^{(n-1)}\mathbf{f}^{(n-1)}+\mathsf{B}_2(\mathbf{\star p}) \\ \emptyset
\end{bmatrix}$$ The final system matrix is symmetric and only consists of $L^2$ inner product matrices for $k$-forms, $\mathsf{M}^{(k)}$ (also known as mass matrices), and incidence matrices, $\mathsf{E}^{(k,k-1)}$, that are directly obtained from the mesh topology. Coordinate transformations imposed by the pullback operator appear in the $L^2$ inner products as a standard change of basis, see also [@bouman2011]. The matrices $\mathsf{B}_1$ and $\mathsf{B}_2$ represent the boundary integrals in \eqref{mixedstokes1} and \eqref{mixedstokes2}, and $(\star\mathbf{u})$ and $(\star\mathbf{p})$ are the tangential velocity and pressure boundary conditions imposed. A discussion on efficient solvers for symmetric indefinite systems that follow from saddle point problems can be found in [@benzigolubliesen2005; @rehman2011].
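As a small aside on the quadrature remark above, the following sketch (illustrative, with the standard GLL weight formula $w_i = 2/[N(N+1)P_N(\xi_i)^2]$ taken as an assumption rather than from this paper) confirms numerically that the $(N+1)$-point Gauss-Lobatto rule integrates polynomials exactly up to degree $2N-1$ and not beyond.

```python
import numpy as np
from numpy.polynomial import legendre

# Sketch: Gauss-Lobatto-Legendre nodes/weights and their degree of exactness.
N = 6
PN = legendre.Legendre.basis(N)
x = np.concatenate(([-1.0], np.sort(PN.deriv().roots()), [1.0]))   # GLL nodes
w = 2.0 / (N * (N + 1) * PN(x) ** 2)                               # GLL weights

for deg in (2 * N - 2, 2 * N):          # one degree inside, one outside 2N-1
    exact = (1.0 - (-1.0) ** (deg + 1)) / (deg + 1)   # int_{-1}^{1} x^deg dx
    print(deg, abs(np.sum(w * x ** deg) - exact))     # ~1e-16, then clearly nonzero
```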
Numerical Results {#sec:numericalresults}
=================
Now that all parts of the mixed mimetic method are treated, we can test the performance of the numerical scheme using a set of three test problems. The first one consists of an analytic solution on a unit square, where optimal $h$-convergence and exponential $p$-convergence rates are shown for both Cartesian and curvilinear meshes for all combinations of boundary conditions. The second is a lid-driven cavity flow, where results are compared with a reference solution. Finally, Stokes flow around a cylinder moving with constant velocity in a channel is considered.
Manufactured solution
---------------------
The first test case addresses the convergence for $h$- and $p$-refinement of the mixed mimetic spectral element method applied to the Stokes model. The model problem is defined on the unit square $\Omega=[0,1]^2$, with Cartesian coordinates $\mathbf{x}:=(x,y)$, with $\nu=1$ and with the right hand side $\kdifform{f}{1}\in\Lambda^1(\Omega)$ given by
\[testcase1\] $$\begin{aligned}
\kdifform{f}{1}=-&f_y(\mathbf{x})\,\ud x+f_x(\mathbf{x})\,\ud y,\nonumber\\
=-&\left(\pi\sin(\pi x)\cos(\pi y)+8\pi^2\cos(2\pi x)\sin(2\pi y)\right)\ud x\nonumber\\
+&\left(\pi\cos(\pi x)\sin(\pi y)-8\pi^2\sin(2\pi x)\cos(2\pi y)\right)\ud y.\end{aligned}$$ This right hand side results in an exact solution for the vorticity $\kdifform{\omega}{0}\in\Lambda^0(\Omega)$, velocity flux $\kdifform{u}{1}\in\Lambda^1(\Omega)$, and pressure $\kdifform{p}{2}\in\Lambda^2(\Omega)$ components of the Stokes problem, given by $$\begin{aligned}
\kdifform{\omega}{0}&=\omega(\mathbf{x})=-4\pi\sin(2\pi x)\sin(2\pi y),\\
\kdifform{u}{1}&=-v(\mathbf{x})\,\ud x+u(\mathbf{x})\,\ud y\nonumber\\
&=-\left(\cos(2\pi x)\sin(2\pi y)\right)\ud x+\left(-\sin(2\pi x)\cos(2\pi y)\right)\ud y,\\
\kdifform{p}{2}&=p(\mathbf{x})\,\ud x\!\wedge\!\ud y=\left(\sin(\pi x)\sin(\pi y)\right)\ud x\!\wedge\!\ud y.\end{aligned}$$
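As a sanity check (not part of the original text), the exact solution and the right hand side above can be verified symbolically, assuming the vorticity-velocity-pressure momentum equation $\mathrm{curl}\,\omega+\mathrm{grad}\,p=f$ with $\nu=1$ and the two-dimensional curl convention used earlier.

```python
import sympy as sp

# Sketch: symbolic verification of the manufactured Stokes solution above.
x, y = sp.symbols('x y')
u = -sp.sin(2*sp.pi*x) * sp.cos(2*sp.pi*y)
v =  sp.cos(2*sp.pi*x) * sp.sin(2*sp.pi*y)
p =  sp.sin(sp.pi*x) * sp.sin(sp.pi*y)
w =  sp.diff(v, x) - sp.diff(u, y)                 # vorticity

f_x = sp.pi*sp.cos(sp.pi*x)*sp.sin(sp.pi*y) - 8*sp.pi**2*sp.sin(2*sp.pi*x)*sp.cos(2*sp.pi*y)
f_y = sp.pi*sp.sin(sp.pi*x)*sp.cos(sp.pi*y) + 8*sp.pi**2*sp.cos(2*sp.pi*x)*sp.sin(2*sp.pi*y)

print(sp.simplify(w + 4*sp.pi*sp.sin(2*sp.pi*x)*sp.sin(2*sp.pi*y)))  # 0: vorticity matches
print(sp.simplify(sp.diff(u, x) + sp.diff(v, y)))                    # 0: div u = 0
print(sp.simplify(sp.diff(w, y) + sp.diff(p, x) - f_x))              # 0: x-momentum
print(sp.simplify(-sp.diff(w, x) + sp.diff(p, y) - f_y))             # 0: y-momentum
```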
This testcase was discussed before in [@gerritsmaphillips; @prootgerritsma2002]. Calculations were performed on both a Cartesian and a curvilinear mesh, as shown in Figure \[fig:meshes\]. The map, $(x,y)=\Phi(\xi,\eta)$, used for the curved mesh is given by
$$\begin{aligned}
x(\xi,\eta)&=\tfrac{1}{2}+\tfrac{1}{2}\left(\xi+\tfrac{1}{5}\sin(\pi\xi)\sin(\pi\eta)\right),\\
y(\xi,\eta)&=\tfrac{1}{2}+\tfrac{1}{2}\left(\eta+\tfrac{1}{5}\sin(\pi\xi)\sin(\pi\eta)\right).\end{aligned}$$
![Examples of a Cartesian and a curvilinear mesh used in the convergence analysis. The meshes shown consist of $4\times4$ spectral elements, with for each element, $N=4$. The element boundaries are indicated in red.[]{data-label="fig:meshes"}](meshes.pdf){width="70.00000%"}
![Vorticity, velocity and pressure $h$-convergence results of problem . Results in the top row correspond to Cartesian meshes, results in the bottom row are obtained on curvilinear meshes. All variables are tested on meshes with $N=2,4,6$ and 8.[]{data-label="fig:hconv"}](HCONV.pdf){width="1.\textwidth"}
![Vorticity, velocity and pressure $p$-convergence results of problem . Results in the top row correspond to Cartesian meshes, results in the bottom row are obtained on curvilinear meshes. All variables are tested on meshes with $1\times1,\ 2\times 2,\ 4\times 4$ and $8\times 8$ spectral elements.[]{data-label="fig:pconv"}](PCONV.pdf){width="1.\textwidth"}
![$L^1$, $L^2$ and $L^\infty$-error of $\mathrm{div}\,u$ on the Cartesian mesh for discontinuous piecewise linear functions, $N=2$.[]{data-label="fig:divergencefree"}](divergencefree.pdf){width="50.00000%"}
Figure \[fig:hconv\] shows the $h$-convergence and Figure \[fig:pconv\] shows the $p$-convergence of the vorticity $\kdifformh{\omega}{0}\in\Lambda_h^{0}(\mathcal{Q};C_0)$, velocity $\kdifform{u}{1}_h\in\Lambda^{1}_h(\mathcal{Q};C_1)$ and pressure $\kdifform{p}{2}_h\in\Lambda^2_h(\mathcal{Q};C_2)$. For both figures, the results of the top row are obtained on Cartesian meshes and the results depicted underneath are obtained on curvilinear meshes. The errors for the vorticity and velocity are both measured in the $L^2\Lambda^k$- and $H\Lambda^k$-norm, i.e. $\Vert\omega-\omega_h\Vert_{L^2\Lambda^{0}}$, $\Vert\omega-\omega_h\Vert_{H\Lambda^{0}}$, and $\Vert u-u_h\Vert_{L^2\Lambda^{1}}$, $\Vert u-u_h\Vert_{H\Lambda^{1}}$, respectively. Because the divergence-free constraint is satisfied pointwise, the norm $\Vert\ud( u-u_h)\Vert_{L^2\Lambda^{2}}$ is zero up to machine precision, see Figure \[fig:divergencefree\], and so the $H\Lambda^{1}$-norm is equal to the $L^2\Lambda^{1}$-norm of the velocity, i.e., $\Vert u-u_h\Vert_{H\Lambda^{1}}=\Vert u-u_h\Vert_{L^2\Lambda^{1}}$. This does not hold for the vorticity, since $\ud\kdifform{\omega}{0}\in\Lambda^{1}_h(\mathcal{Q};C_1)$ is again a function of sine and cosine functions. The norm $\Vert\ud(\omega-\omega_h)\Vert_{L^2\Lambda^{1}}$ converges one order slower than $\Vert\omega-\omega_h\Vert_{L^2\Lambda^{0}}$. More details on the convergence behavior can be found in [@kreefterrorestimate].
In Figure \[fig:hconv\] the slopes of the theoretical convergence rates, [@kreefterrorestimate], are added, which shows that the $h$-convergence rates are equal to the $h$-convergence rates of the interpolation error , on both Cartesian as well as curvilinear meshes. Figure \[fig:pconv\] shows that exponential convergence rates are obtained on both types of meshes.
It is important to remark that these results are independent of the kind of boundary conditions used. This is shown in Table \[tab:bcresults\]. This is an important result, because especially optimal convergence for the normal velocity - tangential velocity boundary condition is non-trivial in compatible methods, [@arnold2011]. The standard elements in compatible methods, the Raviart-Thomas elements, show only sub-optimal convergence for velocity boundary conditions, [@arnold2011].
--------------------- --------------------- ----------------- ------------- -------------
normal velocity tangential velocity vorticity vorticity convergence
tangential velocity pressure normal velocity pressure rate
4.0758e-01 5.4293e-01 5.4292e-01 5.4292e-01 $-$
1.9814e-01 1.9738e-01 1.9738e-01 1.9738e-01 1.46
2.4893e-02 2.4776e-02 2.4776e-02 2.4776e-02 2.99
3.1037e-03 3.0954e-03 3.0954e-03 3.0954e-03 3.00
3.8738e-04 3.8684e-04 3.8684e-04 3.8684e-04 3.00
4.8386e-05 4.8352e-05 4.8352e-05 4.8351e-05 3.00
--------------------- --------------------- ----------------- ------------- -------------
: This table shows the vorticity error $\norm{\omega-\omega_h}_{L^2\Lambda^{0}}$ obtained using the four types of boundary conditions described in Table \[tab:boundaryconditions\]. The results are obtained on a Cartesian mesh with $N=2$ and $h=\tfrac{1}{2},\tfrac{1}{4},\tfrac{1}{8},\tfrac{1}{16},\tfrac{1}{32},\tfrac{1}{64}$. All four cases show third order convergence.[]{data-label="tab:bcresults"}
Lid-driven cavity Stokes
------------------------
For many years, the lid-driven cavity flow has been considered one of the classical benchmark cases for the assessment of numerical methods and the verification of incompressible (Navier-)Stokes codes. The lid-driven cavity test case deals with a flow in a unit-square box with three solid boundaries and a moving lid as the top boundary, which moves to the right with constant velocity equal to one. Because of the discontinuities of the velocity in the two upper corners, the solution becomes singular at these corners, where both vorticity and pressure become infinite. These singularities in particular make the lid-driven cavity problem a challenging test case.
![Lid-driven cavity Stokes problem results. The top row from left to right shows the solution of the vorticity, velocity magnitude and pressure fields. The bottom row shows from left to right the solution of the stream function, the divergence of the velocity field and the $6\times 6$, $N=6$ mesh.[]{data-label="fig:LDC"}](mesh_LDC_v2.pdf){width="1.\textwidth"}
For this test case a non-uniform $6\times6$ Cartesian spectral element mesh is used. Each spectral element consists of a Gauss-Lobatto mesh for $N=6$, see Figure \[fig:LDC\]. The solutions of the vorticity, velocity, pressure and stream function are shown in Figure \[fig:LDC\], as is a plot of the divergence of the velocity. It confirms a pointwise divergence-free solution up to machine precision. The results are in perfect agreement with those in [@sahinowens2003].
Because in the mixed mimetic spectral element method no velocity unknowns are located at the upper corners – only the velocity flux [*through*]{} edges is considered – no special treatment is needed for the corner singularities, in contrast to many nodal finite-difference, finite-element and spectral element methods, [@botellapeyret1998; @evans2011; @peyrettaylor; @prootthesis]. This is due to the finite-volume-like structure of the method, as explained in the section on algebraic topology.
In Figure \[fig:centerlines\] the centerline velocities are plotted. Three different configurations are used, based on the same cell complex consisting of $9\times 9$ 2-cells:
- left: $9\times9$ spectral elements with $N=1$, resulting in piecewise constant approximations along the centerlines,
- middle: $3\times3$ spectral elements with $N=3$, resulting in piecewise quadratic approximations along the centerlines,
- right: One global spectral element with $N=9$, resulting in $8^{\rm th}$ order polynomial approximations along the centerlines.
Despite the low resolution, all approximations lie on top of those in [@sahinowens2003].
![Horizontal (top) and vertical (bottom) centerline velocities are shown in blue for a very coarse mesh of $9\times9$ 2-cells. From left to right the $9\times9$ 2-cells are used in: $9\times9$ zeroth-order elements, $3\times3$ second-order elements and one eighth-order element. In red the reference solution from [@sahinowens2003].[]{data-label="fig:centerlines"}](Stokes_centerlines.pdf){width="1.\textwidth"}
Because of the tensor-product construction of discrete unknowns and basis-functions, an extension to three dimensions is straightforward. A 3D lid-driven cavity is of interest because it not only contains corner singularities, but also line singularities. The left plot in Figure \[fig:3Dldcdivfree3D\] shows slices of the magnitude of the velocity field in a three dimensional lid-driven cavity Stokes problem, obtained on a $2\times2\times2$ element mesh with $N=8$. The slices are taken at 10%, 50% and 90% of the y-axis. The right plot in Figure \[fig:3Dldcdivfree3D\] shows slices of the divergence of the velocity field. The solution at the symmetry plane coincides with the 2D results in Figure \[fig:LDC\]. It confirms that also in three dimensions the mixed mimetic spectral element method leads to an accurate result with a divergence-free solution.
![Left: slices of the magnitude of the velocity field of a three dimensional lid-driven cavity Stokes problem obtained on a $2\times2\times2$ element mesh with $N=8$. Right: slices of the divergence of the velocity. It confirms a divergence-free velocity field.[]{data-label="fig:3Dldcdivfree3D"}](LDC_N8_E8_both.pdf){width="1.\textwidth"}
The corner singularities can be made even more severe by sharpening the corners, as happens for a lid-driven cavity problem in a triangle. Figure \[fig:triangleLDC\] shows the vorticity field and the velocity magnitude. On top of the velocity plot, stream function contours are plotted. The solutions are constructed on a 9 spectral element mesh with $N=9$. A close-up of the stream function contours is shown in the rightmost plot in Figure \[fig:triangleLDC\]. The stream function contours nicely show the first three Moffatt eddies [@moffatt].
![Lid-driven cavity Stokes flow in a triangle. Left the vorticity field, in the middle the velocity magnitude with stream function contours on top, and right the stream function contours of a close-up of the bottom corner, revealing the second and third Moffatt eddies.[]{data-label="fig:triangleLDC"}](LDC_triangle_E9_v2.pdf){width="100.00000%"}
Flow over a cylinder
--------------------
The last test case considers the flow around a cylinder moving with constant velocity to the left, as defined in [@changnelson1997]. This testcase is mostly considered in the context of least-squares finite and spectral element methods, due to their moderate performance in the case of large contraction regions, mainly in terms of conservation of mass [@changnelson1997; @deanggunzburger; @prootgerritsma2006].
The cylinder moves with unit velocity along the centerline of a narrow channel. The computational domain is defined as a rectangular box minus the cylinder, as shown in Figure \[fig:cylindercase\]. Also visible in this figure are the 12 spectral elements in which the computational domain is divided. A transfinite mapping, [@gordonhall1973], is used to define the curved elements around the cylinder. Velocity boundary conditions of $(u,v)=(1,0)$ are prescribed on the outer boundary and no-slip, $(u,v)=(0,0)$, is prescribed along the boundary of the cylinder. Solutions of the vorticity, velocity magnitude and pressure, together with streamlines, are shown in Figure \[fig:cylindercase\].
![Spectral element mesh (top left), magnitude of velocity (top right), vorticity (bottom left) and pressure (bottom right) for flow around a moving cylinder, on a 12 element, $N=6$ mesh.[]{data-label="fig:cylindercase"}](cylindercase_v2.pdf){width="1.\textwidth"}
Next consider a control volume $\Omega_c$ consisting of the 6 elements in the domain $-1.5\leq x\leq 0,\ -0.75\leq y \leq 0.75$. The control volume is chosen such that the ratio in size between inflow and outflow boundary is maximal. In this control volume conservation of mass should hold. Conservation of mass is expressed, by means of the generalized Stokes theorem \eqref{stokestheorem}, in terms of a boundary integral as $$0=\int_{\Omega_c}\ud \kdifformh{u}{1}\stackrel{\eqref{stokestheorem}}{=}\int_{\p\Omega_c}\kdifformh{u}{1}.$$ From Section \[pointwisedivergencefree\] and the results of the previous test cases we know that the solution of the velocity is divergence-free throughout the domain, independent of the chosen control volume. In Figure \[fig:velocity\_cross-section\] a comparison is made for the horizontal velocity component $u$ at the smallest cross-section above the cylinder, i.e. $x=0$, $0.5\leq y\leq0.75$, between the recently developed LSSCM, [@kattelans2009], and our MMSEM method for $N=3,6,12$. Both methods use a similar mesh of 12 spectral elements. As can be seen from this figure, the MMSEM method already performs very well for $N=3$, i.e. quadratic polynomials, whereas the LSSCM still fails for $N=6$, i.e. sixth order polynomials. This is a direct consequence of the pointwise divergence-free discretization.
![Horizontal velocity at smallest cross-section above the cylinder, on a 12 element mesh, for $N=3,6,12$.[]{data-label="fig:velocity_cross-section"}](velocity_crosssection.pdf){width="40.00000%"}
Conclusions and future aspects
==============================
In this paper we presented the mixed mimetic spectral element method, applied to the vorticity-velocity-pressure formulation of the Stokes model. At the heart lie the generalized Stokes theorem, which relates the boundary operator applied to an oriented geometric object to the exterior derivative, resembling the vector operators grad, curl and div, and the recently developed higher-order mimetic discretization for quadrilaterals and hexahedrals, [@kreeftpalhagerritsma2011]. The gradient, curl and divergence conforming method results in a point-wise divergence-free discretization of the Stokes problem, as was confirmed by a set of benchmark problems. These results also showed optimal convergence, independent of the type of boundary conditions, on orthogonal and curved meshes. More on convergence behavior and error estimates is presented in [@kreefterrorestimate]. In the near future we plan to extend the method with structure-preserving $hp$-refinement based on a compatible mortar element method.
[^1]: Jasper Kreeft is funded by STW Grant 10113.
[^2]: This paper is in final form and no version of it will be submitted for publication elsewhere.
[^3]: $\big(\cdot,\cdot\big)$ denotes metric dependent inner products, while $\langle\cdot,\cdot\rangle$ denotes metric-free duality pairing.
[^4]: Note that on the left-hand side of this equation we consider the pullback of a $(k+1)$-form, whereas on the right-hand side the pullback of a $k$-form. We could have written this as $\Phi^\star_{k+1}\ud_k\kdifform{a}{k}=\ud_k\Phi^\star_k\kdifform{a}{k}$. In order to improve readability, and knowing that the meaning of these operators is clear from the context, we do not explicitly denote this.
[^5]: With $\tilde{\cdot}$ we indicate a variable contained in the lower complex of .
|
---
author:
- 'Usman Raza, Parag Kulkarni, and Mahesh Sooriyabandara [^1]'
bibliography:
- './paper.bib'
title: 'Low Power Wide Area Networks: An Overview'
---
Internet of Things, IoT, Low Power Wide Area, LPWA, LPWAN, Machine-to-Machine Communication, Cellular
Proprietary Technologies {#sec:proprietary}
========================
In this section, we highlight and compare emerging proprietary technologies shown in Figure \[fig:proprietary\] and their technical aspects summarized in Table \[tab:specifications\]. Some of these technologies are being made compliant to the standards proposed by the different SDOs and SIGs. We dedicate Section \[sec:standards\] to briefly describe these standards and their association with any proprietary technologies discussed next.
[Proprietary LPWA Technologies]{}
Sigfox {#subsec:sigfox}
Sigfox, itself or in partnership with other network operators, offers an end-to-end LPWA connectivity solution based on its patented technologies. Sigfox Network Operators (SNOs) deploy the proprietary base stations equipped with cognitive software-defined radios and connect them to the backend servers using an IP-based network. The end devices connect to these base stations using Binary Phase Shift Keying (BPSK) modulation in an ultra narrow (100Hz) band carrier. By using UNB, Sigfox utilizes bandwidth efficiently and experiences very low noise levels, resulting in high receiver sensitivity, ultra-low power consumption, and inexpensive antenna design. All these benefits come at an expense of maximum throughput of only 100 bps. The achieved data rate clearly falls at the lower end of the throughput offered by most other LPWA technologies and thus limits the number of use-cases for Sigfox. Further, Sigfox initially supported only uplink communication but later evolved into a bidirectional technology, although with a significant link asymmetry. The downlink communication can only be preceded by uplink communication, after which the end device should wait to listen for a response from the base station. The number and size of messages over the uplink are limited to 140 12-byte messages per day to conform to the regional regulations on use of license-free spectrum [@spectrumuse]. The radio access link is asymmetric, allowing transmission of a maximum of only four 8-byte messages per day over the downlink from the base stations to the end devices. This means that acknowledging every uplink message is not supported.
Without adequate support for acknowledgments, reliability of the uplink communication is improved by using time and frequency diversity as well as redundant transmissions. A single message from an end device can be transmitted multiple times over different frequency channels. For this purpose, in Europe, the band between 868.180-868.220MHz is divided into 400 100Hz channels [@waspmote], out of which 40 channels are reserved and not used. As the base stations can scan all the channels to decode the messages, the end devices can autonomously choose a random frequency channel to transmit their messages. This simplifies the design for the end devices. Further, a single message is transmitted multiple times (3 by default) to increase the probability of successful reception by the base stations.
LoRa {#subsec:lora}
LoRa is a physical layer technology that modulates the signals in the sub-GHz ISM band using a proprietary spread spectrum technique [@lorapatent] developed and commercialized by Semtech Corporation [@semtech]. A bidirectional communication is provided by a special chirp spread spectrum (CSS) technique, which spreads a narrow band input signal over a wider channel bandwidth. The resulting signal has noise like properties, making it harder to detect or jam. The processing gain enables resilience to interference [@spreadinterferenceresilience] and noise. The transmitter makes the chirp signals vary their frequency over time without changing their phase between adjacent symbols. As long as this frequency change is slow enough to put higher energy per chirp symbol, distant receivers can decode a severely attenuated signal several dBs below the noise floor. LoRa supports multiple spreading factors (between 7-12) to decide the tradeoff between range and data rate. Higher spreading factors deliver longer range at an expense of lower data rates, and vice versa. LoRa also combines Forward Error Correction (FEC) with the spread spectrum technique to further increase the receiver sensitivity. The data rate ranges from 300 bps to 37.5 kbps depending on spreading factor and channel bandwidth. Further, multiple transmissions using different spreading factors can be received simultaneously by a base station. In essence, multiple spreading factors provide a third degree of diversity after time and frequency.
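As an aside not taken from the article, the commonly cited LoRa bit-rate relation $R_b = \mathrm{SF}\cdot \mathrm{BW}/2^{\mathrm{SF}}\cdot \mathrm{CR}$ reproduces the quoted range; the helper below is a hypothetical illustration, where the 37.5 kbps upper end corresponds to spreading factor 6 and 500 kHz bandwidth, a mode supported by the Semtech radios in addition to the 7-12 range mentioned above.

```python
# Sketch: approximate LoRa PHY bit rate from spreading factor, bandwidth and
# coding rate CR = 4/(4+n); names and defaults are illustrative assumptions.
def lora_bitrate(sf, bw_hz, cr_denominator=5):
    """Approximate LoRa bit rate in bit/s."""
    return sf * (bw_hz / 2 ** sf) * (4.0 / cr_denominator)

print(lora_bitrate(12, 125e3))   # ~293 bps  (long range, low rate)
print(lora_bitrate(6, 500e3))    # 37500 bps (short range, high rate)
```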
The messages transmitted by the end devices are received not by a single base station but by all the base stations in range, giving rise to a *“star-of-stars”* topology. By exploiting reception diversity this way, LoRa improves the ratio of successfully received messages. However, achieving this requires multiple base stations in the neighborhood, which may increase CAPEX and OPEX. The resulting duplicate receptions are filtered out in the backend system. Further, the network exploits these multiple receptions of the same message at different base stations for localization of the transmitting end device. For this purpose, a time difference of arrival (TDOA) based localization technique, supported by very accurate time synchronization between multiple base stations, is used.
A special interest group constituted by several commercial and industrial partners, dubbed the LoRa Alliance, proposed LoRaWAN, an open standard defining the architecture and layers above the LoRa physical layer. We briefly describe LoRaWAN under standards in Section \[sec:standards\].
Ingenu RPMA {#subsec:rpma}
Ingenu (formerly known as On-Ramp Wireless) proposed a proprietary LPWA technology which, unlike most other technologies, does not rely on the better propagation properties of the sub-GHz band. Instead it operates in the 2.4 GHz band and leverages more relaxed regulations on the spectrum use across different regions [@howrpmaworks; @spectrumuse]. To offer an example, the regulations in the USA and Europe do not impose a maximum limit on duty cycle for the 2.4 GHz band, enabling higher throughput and more capacity than other technologies operating in the sub-GHz band.
Most importantly, this technology uses a patented physical access scheme named Random Phase Multiple Access (RPMA) Direct Sequence Spread Spectrum [@rpmapatent], which it employs for uplink communication only. As a variation of Code Division Multiple Access (CDMA), RPMA enables multiple transmitters to share a single time slot. However, RPMA first increases the time slot duration of traditional CDMA and then scatters the channel access within this slot by adding a random offset delay for each transmitter. By not granting channel access to all transmitters exactly at once (i.e., at the beginning of a slot), RPMA reduces the overlapping between transmitted signals and thus increases the signal to interference ratio for each individual link [@howrpmaworks]. On the receiving side, the base stations employ multiple demodulators to decode signals arriving at different times within a slot. RPMA provides bidirectional communication, although with a slight link asymmetry. For downlink communication, the base stations spread the signals for individual end devices and then broadcast them using CDMA.
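The effect of the random offset delay can be illustrated qualitatively with a toy simulation. This is not a model of RPMA itself (which spreads CDMA signals rather than transmitting plain bursts), and the slot length, burst duration, and transmitter count below are arbitrary; the sketch only shows why scattering access within an enlarged slot reduces the number of colliding transmissions compared to everyone starting at the slot boundary.

```python
import random

def overlapping_pairs(starts, duration):
    """Count pairs of transmissions whose on-air intervals overlap in time."""
    count = 0
    for i in range(len(starts)):
        for j in range(i + 1, len(starts)):
            if abs(starts[i] - starts[j]) < duration:
                count += 1
    return count

random.seed(1)
n_tx, tx_duration, slot = 50, 1.0, 100.0      # illustrative numbers, not RPMA parameters

aligned = [0.0] * n_tx                         # everyone starts at the slot boundary
scattered = [random.uniform(0.0, slot - tx_duration) for _ in range(n_tx)]

print("overlapping pairs, aligned access: ", overlapping_pairs(aligned, tx_duration))
print("overlapping pairs, random offsets: ", overlapping_pairs(scattered, tx_duration))
```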
RPMA is reported to achieve a receiver sensitivity down to -142 dBm and a link budget of up to 168 dB [@howrpmaworks]. Further, the end devices can adjust their transmit power to reach the closest base station while limiting interference to nearby devices.
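As a quick sanity check of these quoted figures (the transmit power itself is not stated in the text, so the value below is only what the two numbers imply):

```python
# A 168 dB link budget together with a -142 dBm receiver sensitivity is
# consistent with a transmit power of roughly +26 dBm.
rx_sensitivity_dbm = -142.0
link_budget_db = 168.0
implied_tx_power_dbm = link_budget_db + rx_sensitivity_dbm
print(f"implied transmit power: {implied_tx_power_dbm:+.0f} dBm")
```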
Ingenu leads efforts to standardize the physical layer specifications under the IEEE 802.15.4k standard, and its RPMA technology is made compliant with these specifications.
Telensa {#section}
Telensa [@telensa] provides end-to-end solutions for LPWA applications, incorporating fully designed vertical network stacks with support for integration with third party software.
For wireless connectivity between its end devices and base stations, Telensa designed a proprietary UNB modulation technique [@telensapatent], which operates in the license-free band at low data rates. While less is known about the implementation of this wireless technology, the company aims to standardize it using the ETSI Low Throughput Networks (LTN) specifications for easy integration within applications. Telensa currently focuses on a few smart city applications such as intelligent lighting, smart parking, etc. To strengthen its LPWA offerings in the intelligent lighting business, Telensa is involved with the TALQ consortium [@talq] in defining standards for monitoring and controlling outdoor lighting systems.
{#section-1}
This operator deploys dual-mode LPWA networks combining its own proprietary UNB technology with LoRa. It provides LPWA connectivity as a service to the end users: it offers end devices, deploys the network infrastructure, develops custom applications, and hosts them in a backend cloud. Less is known, however, about the technical specifications of the underlying UNB technology and other system components.
Standards {#sec:standards}
=========
![LPWA standards and their developing organizations[]{data-label="fig:sdos"}](img/standard_bodies.pdf)
A plethora of standardization efforts are undertaken by different established standardization bodies, including the Institute of Electrical and Electronics Engineers (IEEE), the European Telecommunications Standards Institute (ETSI), and the Third Generation Partnership Project (3GPP), along with several industry consortia and special interest groups. Figure \[fig:sdos\] organizes the proposed standards according to their developing organizations, while Table \[tab:standardspecifications\] summarizes the technical specifications of the different standards. Most of these efforts also involve several proprietary LPWA connectivity providers discussed in the previous section. The objectives of these SDOs and SIGs are quite diverse. In the long run, it is hoped that adoption of these standards will reduce the fragmentation of the LPWA market and enable co-existence of multiple competing technologies.
IEEE
----
IEEE is extending range and reducing power consumption of their 802.15.4 [@802154] and 802.11 [@80211] standards with the set of new specifications for the physical and the MAC layers. Two LPWA standards are proposed as amendments to IEEE 802.15.4 base standard for Low-Rate Wireless Personal Area Networks (LR-WPANs), which we will cover in this section. Along with this, the efforts on amending IEEE 802.11 standard for wireless local area networks (WLANs) for longer range are also briefly described.
### IEEE 802.15.4k: Low Energy, Critical Infrastructure Monitoring Networks.
The IEEE 802.15.4k Task Group (TG4k) proposes a standard for low-energy critical infrastructure monitoring (LECIM) applications to operate in the bands (and 2.4 GHz). This was a response to the fact that the earlier standard falls short of the range and node densities required for LPWA applications. The IEEE 802.15.4k amendment bridges this gap by adopting DSSS and FSK as two new PHY layers. Multiple discrete channel bandwidths ranging from 100 kHz to 1 MHz can be used. The MAC layer specifications are also amended to address the new physical layers. The standard supports conventional CSMA/CA without priority channel access (PCA) as well as channel access schemes with PCA. With PCA, the devices and base stations can prioritize their traffic in accessing the medium, providing a notion of quality of service. Like most LPWA standards, end devices are connected to the base stations in a star topology and are capable of exchanging asynchronous and scheduled messages.
Ingenu, the provider of the RPMA LPWA technology [@howrpmaworks], is a proponent of this standard. The PHY and MAC layers of its LPWA technology are compliant with this standard.
### IEEE 802.15.4g: Low-Data-Rate, Wireless, Smart Metering Utility Networks
The IEEE 802.15 WPAN Task Group 4g (TG4g) proposed the first set of PHY amendments to extend the short range portfolio of the IEEE 802.15.4 base standard. The standard, released in April 2012 [@154g], addresses process-control applications such as smart metering networks, which are inherently comprised of a massive number of fixed end devices deployed across cities or countries. The standard defines three PHY layers, namely FSK, Orthogonal Frequency-Division Multiple Access (OFDMA), and offset Quaternary Phase Shift Keying (QPSK), which support multiple data rates ranging from 40 kbps to 1 Mbps across different regions. With the exception of a single licensed band in the USA, the PHY predominantly operates in the (and 2.4 GHz) bands and thus co-exists with other interfering technologies in the same bands. The PHY is designed to deliver frames of size up to 1500 bytes so as to avoid fragmenting Internet Protocol (IP) packets.
The changes in the MAC layer to support the new PHYs are defined by IEEE 802.15.4e and not by IEEE 802.15.4g standard itself.
### IEEE 802.11: Wireless Local Area Networks
The efforts for extending range and decreasing power consumption for WLANs are made by the IEEE 802.11 Task Group AH (TGah) and the IEEE 802.11 Topic Interest Group (TIG) on Long Range Low Power (LRLP).
TGah [@ah] proposed the IEEE 802.11ah specifications for the PHY and MAC to enable long range Wi-Fi operation in band.
ETSI
----
ETSI leads efforts to standardize a bidirectional low data rate LPWA standard. The resulting standard dubbed as *Low Throughput Network (LTN)* was released in 2014 in the form of three group specifications. These specifications define i) the use cases [@ltn1] ii) the functional architecture [@ltn2], and iii) the protocols and interfaces [@ltn3]. One of its primary objectives is to reduce the electromagnetic radiation by exploiting short payload sizes and low data rates of M2M/IoT communication.
Apart from the recommendation on the air interfaces, LTN defines various interfaces and protocols for the cooperation between end-devices, base stations, network server, and operational and business management systems.
Motivated by the fact that the emerging LPWA networks use both ultra narrow band (UNB) and orthogonal sequence spread spectrum (OSSS) modulation techniques, the LTN standard does not restrict itself to a single category. It gives LPWA operators the flexibility to design and deploy their own proprietary UNB or OSSS modulation schemes in band, as long as the end devices, base stations, and network servers implement the interfaces described by the LTN specifications [@ltn1; @ltn2; @ltn3]. These specifications recommend using BPSK in the uplink and GFSK in the downlink for a UNB implementation. Alternatively, any OSSS modulation scheme can be used to support bidirectional communication. Data encryption as well as user authentication procedures are defined as part of the LTN specifications.
Several providers of LPWA technologies, including Semtech, are actively involved with ETSI for standardization of their technologies.
3GPP {#sec:3gpp}
----
To address the M2M and IoT market, 3GPP is evolving its existing cellular standards to strip complexity and cost, improve range and signal penetration, and prolong battery lifetime. Its multiple licensed solutions, such as Long Term Evolution (LTE) enhancements for Machine Type Communications (eMTC), Extended Coverage GSM (EC-GSM), and Narrow-Band IoT (NB-IoT), offer different trade-offs between cost, coverage, data rate, and power consumption to address the diverse needs of IoT and M2M applications. However, a common goal of all these standards is to maximize the re-use of the existing cellular infrastructure and owned radio spectrum.
### LTE enhancements for Machine Type Communications (eMTC)
Conventional LTE end devices offer high data rate services at a cost and power consumption not acceptable for several MTC use cases. To reduce the cost while remaining compliant with LTE system requirements, 3GPP reduces the peak data rate from LTE Category 1 to LTE Category 0 and then to LTE Category M, the different stages in the LTE evolution process. Further cost reduction is achieved by supporting optional half duplex operation in Category 0, a choice that reduces the complexity of the modem and antenna design. From Category 0 to Category M1 (also known as eMTC), a more pronounced drop in the receive bandwidth from 20 MHz to 1.4 MHz, in combination with a reduced transmission power, results in a more cost-efficient and low-power design. To extend the battery lifetime for eMTC, 3GPP adopts two features, namely *Power Saving Mode (PSM)* and *extended Discontinuous Reception (eDRX)*. They enable end devices to enter a deep sleep mode for hours or even days without losing their network registration. The end devices avoid monitoring the downlink control channel for prolonged periods of time to save energy. The same power saving features are exploited in EC-GSM, described next.
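The lifetime gain from deep sleep can be illustrated with a crude average-current model. The sleep and active currents, duty cycle, and battery capacity below are hypothetical round numbers, not values from any 3GPP specification, and the model ignores battery self-discharge and leakage; it is only meant to show why spending almost all of the day in a PSM-style sleep state translates into multi-year lifetimes.

```python
def battery_lifetime_years(capacity_mah, sleep_ua, active_ma, active_s_per_day):
    """Average-current model: deep sleep for most of the day, short active bursts."""
    seconds_per_day = 86400.0
    avg_ma = (active_ma * active_s_per_day
              + (sleep_ua / 1000.0) * (seconds_per_day - active_s_per_day)) / seconds_per_day
    hours = capacity_mah / avg_ma
    return hours / 24.0 / 365.0

# Hypothetical device: 5 uA sleep current, 100 mA active for 10 s per day.
print(f"{battery_lifetime_years(5000, 5, 100, 10):.1f} years on a 5000 mAh battery")
```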
### EC-GSM
While Global System for Mobile Communications (GSM) has been announced to be decommissioned in certain regions, Mobile Network Operators (MNOs) may like to prolong its operation in a few markets. With this assumption, 3GPP is in the process of proposing the Extended Coverage GSM (EC-GSM) standard, which aims to extend GSM coverage by +20 dB using the band for better signal penetration in indoor environments. A link budget in the range of 154 dB to 164 dB is targeted, depending on the transmission power. With only a software upgrade of GSM networks, the legacy GPRS spectrum can pack the new logical channels defined to accommodate EC-GSM devices. EC-GSM exploits repetitive transmissions and signal processing techniques to improve the coverage and capacity of legacy GPRS. Two modulation techniques, namely Gaussian Minimum Shift Keying (GMSK) and 8-ary Phase Shift Keying (8PSK), provide variable data rates, with a peak rate of 240 kbps for the latter technique. The standard was released in mid 2016 and aims to support 50k devices per base station as well as enhanced security and privacy features compared to conventional GSM based solutions.
### {#section-2}
IETF
----
IETF aims to support the LPWA ecosystem of dominantly proprietary technologies by standardizing *end-to-end IP-based connectivity* for ultra-low power devices and applications. IETF has already designed the IPv6 stack for Low power Wireless Personal Area Networks (6LoWPAN). However, those standardization efforts focused on legacy IEEE 802.15.4 based wireless networks, which support relatively higher data rates, longer payload sizes, and shorter ranges than most LPWA technologies today. The distinct features of LPWA technologies pose real technical challenges for IP connectivity. Firstly, LPWA technologies are heterogeneous: every technology manipulates data in different formats using different physical and MAC layers. Secondly, most technologies use the bands, which are subject to strict regional regulations limiting the maximum data rate, time-on-air, and frequency of data transmissions. Thirdly, many technologies are characterized by a strong link asymmetry between uplink and downlink, usually limiting downlink capabilities. Thus, the proposed IP stacks should be lightweight enough to fit within these very strict limitations of the underlying technologies. Unfortunately, these challenges were not addressed in earlier IETF standardization efforts.
A working group on Low-Power Wide Area Networks (LPWAN) [@ietflpwan] under the IETF umbrella was formed in April 2016. This group identified challenges and the design space for IPv6 connectivity over LPWA technologies in [@gapanalysis]. Future efforts may culminate in multiple standards defining a full IPv6 stack for LPWA (6LPWA) that can connect LPWA devices with each other and their external ecosystem in a secure and scalable manner. More specific technical problems to be addressed by this IETF group are described as follows:
- The maximum payload size for LPWA technologies is limited. The header compression techniques should be tailored to these small payload sizes as well as sparse and infrequent traffic of LPWA devices.
- Most LPWA technologies do not natively support fragmentation and reassembly at Layer 2 (L2). Because IPv6 packets are often too big to fit in a single L2 packet, the mechanisms for fragmentation and reassembly of IPv6 packets are to be defined (a toy sketch of such a scheme is given after this list).
- To manage end devices, applications, base stations, and servers, there is a need for ultra-lightweight signaling protocols, which can operate efficiently over the constrained L2 technology. To this effect, IETF may look into efficient application-level signaling protocols [@cosol].
- The IP connectivity should preserve security, integrity, and privacy of data exchanged over LPWA radio access networks and beyond. Most LPWA technologies use symmetric key cryptography, in which end devices and the networks share the same secret key. More robust and resilient techniques and mechanisms may be investigated.
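To make the fragmentation problem concrete, the sketch below splits a packet that is larger than one LPWA frame into small fragments and reassembles it. This is a toy illustration only, not the scheme being standardized by the IETF group; the 51-byte frame limit and the two-byte (index, total) fragment header are assumptions chosen for the example.

```python
def fragment(payload: bytes, max_l2: int, frag_header: int = 2):
    """Split a packet into L2-sized fragments, each prefixed by (index, total)."""
    chunk = max_l2 - frag_header
    pieces = [payload[i:i + chunk] for i in range(0, len(payload), chunk)]
    return [bytes([idx, len(pieces)]) + p for idx, p in enumerate(pieces)]

def reassemble(frames):
    frames = sorted(frames, key=lambda f: f[0])          # order by fragment index
    assert frames and frames[0][1] == len(frames), "missing fragments"
    return b"".join(f[2:] for f in frames)

ipv6_packet = bytes(300)               # a small IPv6 packet, larger than one LPWA frame
frames = fragment(ipv6_packet, max_l2=51)                # 51 B: an assumed frame limit
assert reassemble(frames) == ipv6_packet
print(f"{len(ipv6_packet)} B packet -> {len(frames)} frames of <= 51 B")
```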
LoRaWAN {#section-3}
As described in Section \[sec:proprietary\], LoRa is a proprietary physical layer for LPWA connectivity. The upper layers and the system architecture, however, are defined by the LoRaWAN specification [@lorawan], which was released to the public in July 2015.
A simple random-access scheme is used at the MAC layer, which in combination with the physical layer enables multiple devices to communicate at the same time but using different channels and/or orthogonal codes (i.e., spreading factors). End devices can hop onto any base station without extra signaling overhead. The base stations connect the end devices via a backhaul to the network server, the brain of the system, which suppresses duplicate receptions, adapts the radio access links, and forwards data to the suitable application servers. Application servers then process the received data and perform user defined tasks.
The specification anticipates that the devices will have different capabilities as per application requirements. Therefore, it defines three different classes of end devices, all of which support bidirectional communication but with different downlink latency and power requirements. A Class A device achieves the longest lifetime but with the highest latency: it listens for a downlink communication *only shortly after* its uplink transmission. A Class B device, in addition, can *schedule* downlink receptions from the base station at certain time intervals; only at these agreed-on epochs can applications send control messages to the end devices (for possibly performing an actuation function). Lastly, a Class C device is typically mains-powered and is capable of *continuously* listening for and receiving downlink transmissions with the shortest possible latency at any time.
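The Class A behavior can be summarized as a simple state machine: one uplink, followed by two short receive windows, then sleep. The sketch below is schematic; the window delays and durations are assumed placeholder values, not the normative timing parameters of the specification.

```python
import time

RX1_DELAY_S = 1.0     # assumed delays; the real values are configuration parameters
RX2_DELAY_S = 2.0
RX_WINDOW_S = 0.2

def class_a_cycle(send_uplink, try_receive):
    """Class A: downlink is possible only in two short windows after an uplink."""
    send_uplink()
    t0 = time.monotonic()
    for delay in (RX1_DELAY_S, RX2_DELAY_S):
        time.sleep(max(0.0, t0 + delay - time.monotonic()))
        if try_receive(timeout=RX_WINDOW_S):
            return True            # downlink received in RX1 or RX2
    return False                   # otherwise the device sleeps until the next uplink

ok = class_a_cycle(lambda: print("uplink sent"),
                   lambda timeout: False)       # no downlink pending in this demo
print("downlink received:", ok)
```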
The standard uses symmetric-key cryptography to authenticate end devices with the network and to preserve the privacy of application data.
Weightless-SIG
----
The Weightless Special Interest Group [@weightless] proposed three open LPWA standards, each providing different features, range, and power consumption. These standards can operate in license-free as well as in licensed spectrum.
Weightless-W leverages the excellent signal propagation properties of the TV white-spaces. It supports several modulation schemes, including 16-Quadrature Amplitude Modulation (16-QAM) and Differential BPSK (DBPSK), and a wide range of spreading factors. Depending on the link budget, packets with sizes upwards of 10 bytes can be transmitted at rates between 1 kbps and 10 Mbps. The end devices transmit to the base stations in a narrow band but at a lower power level than the base stations to save energy. Weightless-W has one drawback: the shared access to the TV white spaces is permitted only in a few regions. Therefore, the Weightless SIG defines the other two standards in band, which is globally available for shared access.
Weightless-N is a UNB standard for one-way communication from end devices to a base station only, achieving significant energy efficiency and lower cost than the other Weightless standards. It uses a DBPSK modulation scheme in bands. One-way communication, however, limits the number of use cases for Weightless-N.
Weightless-P blends two-way connectivity with two non-proprietary physical layers. It modulates the signals using GMSK and Quadrature Phase Shift Keying (QPSK), two well known schemes adopted in different commercial products; therefore, the end devices do not require a proprietary chipset. Each single 12.5 kHz narrow channel in band offers a data rate in the range from 0.2 kbps to 100 kbps. Full support for acknowledgments and bidirectional communication capabilities enables over-the-air upgrades of firmware.
All three Weightless standards employ symmetric key cryptography for authentication of end devices and integrity of application data.
DASH7 Alliance
--------
The DASH7 Alliance is an industry consortium that defines a full vertical network stack for LPWA connectivity, known as the DASH7 Alliance Protocol (D7AP) [@dash7]. With its origin in the ISO/IEC 18000-7 standard [@isoiec2009] for the air interface of active radio frequency identification (RFID) devices, D7AP has evolved into a stack that provides *mid-range* connectivity to low-power sensors and actuators [@dash7].
D7AP employs a narrow band modulation scheme using two-level GFSK in bands. Compared to most other LPWA technologies, it has a few notable differences. First, it uses a tree topology by default, with an option to choose a star layout as well. In the former case, the end devices are first connected to duty-cycling *sub-controllers*, which then connect to the always-on base stations. This duty cycling mechanism brings more complexity to the design of the upper layers. Second, the MAC protocol forces the end devices to check the channel periodically for possible downlink transmissions, adding significant idle listening cost. By doing so, D7AP achieves much lower latency for downlink communication than other LPWA technologies, but at the expense of higher energy consumption. Third, unlike other LPWA technologies, D7AP defines a complete network stack, enabling applications and end devices to communicate with each other without having to deal with the intricacies of the underlying physical or MAC layers. D7AP also implements support for forward error correction and symmetric key cryptography.
Challenges and Open Research Directions {#sec:challenges}
=======================================
LPWA players are striving hard to innovate solutions that can deliver the so-called *carrier grade performance*. To this effect, device manufacturers, network operators, and system integration experts have concentrated their efforts on cheap hardware design, reliable connectivity, and full end-to-end application integration. On the business side, the proprietary solution providers are in a rush to bring their services to the market and capture their share across multiple verticals. In this race, it is easy but counter-productive to overlook important challenges faced by LPWA technologies. In this section, we highlight these challenges and some research directions to overcome them and improve performance in long-term.
Scaling networks to massive number of devices
---------------------------------------------
LPWA technologies will connect tens of millions of devices transmitting data at an unprecedented scale over limited and often shared radio resources. This complex resource allocation problem is further complicated by several other factors. First, the device density may vary significantly across different geographical areas, creating the so-called *hot-spot* problem. These hot-spots will put the LPWA base stations to a stress test. Second, cross-technology interference can severely degrade the performance of LPWA technologies. This problem is definitely more severe for LPWA technologies operating in the license-exempt and shared bands. Even licensed cellular LPWA technologies operating in-band with broadband services (like voice and video) are equally at risk. It is not difficult to imagine a scenario where multiple UNB channels of an LPWA technology are simultaneously interfered with by a single broadband signal. Further, most LPWA technologies use simple ALOHA or CSMA based MAC protocols, which do not scale well with the number of connected devices [@goodbyealoha].
Several research directions can be pursued to address the capacity issue of LPWA technologies. These include use of channel diversity, opportunistic spectrum access, and adaptive transmission strategies. Use of channel hopping and multi-modem base stations can exploit channel and hardware diversity and is considered already for existing LPWA technologies. Cross-layer solutions can adapt the transmission strategies to the peculiar traffic patterns of LPWA devices and mitigate the effect of cross-technology interference. Further, improvements in existing MAC protocols are required for LPWA technologies to scale them well for a large number of devices transmitting only short messages [@goodbyealoha].
In the context of cellular LPWA networks, if excessive IoT/M2M traffic starves the legacy cellular traffic, MNOs may consider deploying LPWA support in unlicensed spectrum. Such an opportunistic use of radio spectrum can benefit from use of cognitive software-defined radios (SDR). SDRs could come in handy when multiple technologies need to compete for shared spectrum.
To cater to areas with a higher device density, LPWA access networks can borrow densification techniques from cellular domain.
Interference Control and Mitigation
-----------------------------------
In the future, the number of connected devices will increase exponentially, causing higher levels of mutual interference. The devices operating in the shared bands will experience unprecedented levels of both cross-technology interference and self-interference. Some interference measurement studies [@interferencemeasurements] already point to a possible negative effect on the coverage and capacity of LPWA networks. Furthermore, many LPWA technologies resort to a simple ALOHA-like scheme to grant channel access to the low-power end devices. This choice of talking randomly without listening to others can not only deteriorate performance but also generate higher interference [@randomnesscausingstrangeinterference]. Further, densification of the base station deployments to accommodate more devices is a major source of interference across LPWA cells and requires careful deployment and design of base stations [@utz].
In an anarchy of tens of wireless technologies and massive number of devices, all sharing the same channels, interference resilient communication and efficient spectrum sharing [@policies] are key problems, both at technical and regulatory grounds. As interference varies across frequency, time, and space, devices should adapt their transmission schedules to experience the least interference and the best reliability. PHY and MAC layer designs exploiting this diversity at such a large scale need further investigation. Regulatory authorities may also need to step forward to propose rules to enable efficient sharing and cooperation between different wireless technologies in the unlicensed bands [@policies].
High data-rate modulation techniques
------------------------------------
The LPWA technologies compromise on data rates to reach long distances. Some technologies, especially those using UNB modulation in the shared bands, offer very low data rates and short payload sizes, limiting their potential business use cases. To support bandwidth hungry use cases, it is meaningful to implement multiple modulation schemes on the devices. As per application needs, devices can then switch between different modulation schemes so as to enable high energy efficiency, long range, and high data rate simultaneously.
To achieve this, there is a need for flexible and inexpensive hardware design that can support multiple physical layers, each of which can offer complementary trade-offs to match the range and data rate requirements of applications.
Interoperability between different LPWA technologies
----------------------------------------------------
Given that the market is heading towards an intense competition between different LPWA technologies, it is safe to assume that several of them will coexist in the future. Interoperability between these heterogeneous technologies is thus crucial to their long-term profitability. With little to no support for interoperability between different technologies, there is a strong need for standards that glue them together. Some of the standardization efforts across ETSI, IEEE, 3GPP, and IETF discussed in Section \[sec:standards\] will look into these interoperability issues.
However, for a complete interoperability, several directions should be explored. Firstly, IP can already connect short-range wireless devices using mesh networking. The peculiarities of LPWA technologies limit a direct implementation of the same IP stack on LPWA devices. Alternative solutions based on gateways or backend based solutions are viable candidates. However, all such solutions should scale well with number of devices without degrading performance. Secondly, use of IoT middleware and virtualization techniques can play a major role in connecting LPWA devices. IoT middleware can support multiple radio access technologies and thus make integration of LPWA technologies with rest of IoT technologies straightforward. These middleware can also consolidate data from multiple sources to offer knowledge based value-added services to end-users.
Interoperability is still an open challenge. Testbeds and open-source tool chains for LPWA technologies are not yet widely available to evaluate interoperability mechanisms.
Localization
------------
LPWA networks expect to generate significant revenue from logistics, supply chain management, and personal IoT applications, where location of mobile objects, vehicles, humans, and animals may be of utmost interest. An accurate localization support is thus an important feature for keeping track of valuables, kids, elderly, pets, shipments, vehicle fleets, etc. In fact, it is regarded as an important feature to enable new applications.
Localization of mobile devices is typically achieved using properties of the received signals [@rssiranging] and time-of-flight based measurements. All such techniques require very accurate time synchronization and a sufficient deployment density of base stations; this is rather easily achieved with careful network deployment and planning. However, the very limited channel bandwidth of LPWA technologies and the frequent absence of a direct path between end devices and base stations introduce very large localization errors [@localizationerrors; @chalmersthesis]. Thus, accurate localization using LPWA transceivers alone is a real challenge.
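The bandwidth limitation can be quantified with a rough order-of-magnitude argument: the timing resolution of a signal scales like $1/B$, so the ranging resolution scales like $c/B$. This ignores averaging over many measurements and multipath effects, so the numbers below are only indicative, but they show why narrow-band LPWA signals are poorly suited to time-of-flight ranging.

```python
C = 3.0e8  # speed of light, m/s

def ranging_resolution_m(bandwidth_hz: float) -> float:
    """Rough time-of-flight ranging resolution ~ c / B (timing resolution ~ 1/B)."""
    return C / bandwidth_hz

# UNB, LoRa-like, 802.15.4k-like, and Wi-Fi-like bandwidths for comparison.
for bw in (100.0, 125e3, 1e6, 20e6):
    print(f"B = {bw:10.0f} Hz  ->  ~{ranging_resolution_m(bw):12.1f} m")
```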
LPWA networks require new techniques that not only exploit physical layer properties [@rssiranging] but also combine other established localization techniques to ascertain that accuracy is good enough for real tracking applications.
Link optimizations and adaptability
-----------------------------------
If a LPWA technology permits, each individual link should be optimized for high link quality and low energy consumption to maximize overall network capacity. Every LPWA technology allows multiple link level configurations that introduce tradeoffs between different performance metrics such as data rate, time-on-air, area coverage, etc. This motivates a need for adaptive techniques that can monitor link quality and then readjust its parameters for better performance.
However for such techniques to work, a feedback from gateway to end devices is usually required over downlink. Link asymmetry that causes downlink of many LPWA technologies (e.g., ) to have a lower capacity than uplink is a major hurdle in this case and thus, needs to be addressed in some way.
LPWA testbeds and tools
-----------------------
LPWA technologies enable several smart city applications. A few smart city testbeds, e.g., SmartSantander [@smartsantander], have emerged in recent years. Such testbeds incorporate sensors equipped with different wireless technologies such as Wi-Fi, IEEE 802.15.4 based networks, and cellular networks. However, there are so far no open testbeds for LPWA networks. Therefore, it is not cost-effective to design LPWA systems and compare their performance at a metropolitan scale. At the time of writing, only a handful of empirical studies [@kartakis] compare two or more LPWA technologies under the same conditions. In our opinion, this is a significant barrier to entry for potential customers. Providing LPWA technologies as scientific instrumentation for the general public through city governments can act as a confidence building measure. In the meanwhile, analytical models [@trllorascale; @randomnesscausingstrangeinterference] and simulators [@lorasim; @padovathesis] have recently been proposed for the popular LPWA technologies.
Authentication, Security, and Privacy
-------------------------------------
Authentication, security, and privacy are some of the most important features of any communication system. Cellular networks provide proven authentication, security, and privacy mechanisms. Use of Subscriber Identity Modules (SIM) simplifies identification and authentication of the cellular devices. LPWA technologies, due to their cost and energy considerations, not only settle for simpler communication protocols but also depart from SIM based authentication. Techniques and protocols are thus required to provide equivalent or better authentication support for LPWA technologies. Further to assure that end devices are not exposed to any security risks over prolonged duration, a support for over-the-air (OTA) updates is a crucial feature. A lack of adequate support for OTA updates poses a great security risk to most LPWA technologies.
Margelis et al. [@bristol] highlight a few security vulnerabilities of three prominent LPWA technologies. To offer an example, end devices in some of these networks do not encrypt the application payload or the network join request [@bristol], potentially leading to eavesdropping. Further, most LPWA technologies use symmetric key cryptography, in which the end devices and the network share the same secret key. Robust and low-power mechanisms for authentication, security, and privacy need further investigation.
Mobility and Roaming
--------------------
Roaming of devices between different network operators is a vital feature responsible for the commercial success of cellular networks. Whilst some LPWA technologies do not need the notion of roaming because they operate a single network on a global scale, others do not support roaming as of the time of this writing. The major challenge is to provide roaming without compromising the lifetime of the devices. To this effect, the roaming support should put a minimal burden on the battery powered end devices. Because the end devices duty cycle aggressively, it is reasonable to assume that the low power devices cannot receive downlink traffic at all times; data exchanges over the uplink should therefore be exploited more aggressively. Network assignment is to be resolved in the backend systems as opposed to the access network. All issues related to the agility of the roaming process and efficient resource management have to be addressed.
Further billing and revenue sharing models for roaming across different networks have to be agreed upon.
International roaming across regions controlled by different spectrum regulations (e.g., USA, Europe, or China) is even more challenging. In order to comply with varying spectrum regulations, end devices should be equipped with the capability to detect the region first and then adhere to the appropriate regional requirements when transmitting data. This adds complexity to the end devices and therefore increases their cost. A simple, low cost design to support international roaming is thus required.
Support for Service Level Agreements
------------------------------------
The ability to offer certain QoS guarantees can be a competitive differentiator between different LPWA operators. While it is relatively easy to offer QoS guarantees in the licensed spectrum, most proprietary technologies opt for the license-exempt spectrum for a faster time to market. As a result, they have to adhere to regional regulations on the use of shared spectrum, which may limit the radio duty cycle and transmitted RF power. Cross-technology interference also influences the performance of LPWA technologies. Providing carrier grade performance on a spectrum shared across multiple uncoordinated technologies and tens of thousands of devices per base station is a significant challenge. Service Level Agreements (SLAs) are likely to be violated due to the factors outside the control of network operators. Therefore, the support for SLAs is expected to be limited in license-exempt bands. Studying such extremely noisy environments to know if some relaxed statistical service guarantees can be provided is a good potential research direction.
Co-existence of LPWA technologies with other wireless networks
--------------------------------------------------------------
Each application has a unique set of requirements, which may vary over different time scales and contexts. If connectivity of the end-devices is supplemented with LPWA technologies in addition to the cellular or wireless LANs, operation of applications can be optimized. Conflicting goals like energy efficiency, high throughput, ultra-low latency and wide area coverage can be achieved by leveraging the benefits of each technology [@ahlpwa; @ltelpwa]. System-level research is needed to explore benefits of such opportunistic and contextual network access.
There can be different use cases where multiple technologies can cooperate with each other. The ETSI LTN specification [@ltn1] lists a few of these use cases for cellular/LPWA cooperation. To offer an example, when cellular connectivity is not available, LPWA technologies can still be used as a fall-back option for sending only low data rate critical traffic. Further, the periodic keep-alive messages of cellular networks can be delegated to energy-efficient LPWA networks [@ltn1]. There can be other novel ways for cooperation between LPWA and cellular networks. For instance, LPWA technologies can assist route formation for the device-to-device communication in cellular networks. When some devices outside the cellular coverage need to build a multi-hop route to reach cellular infrastructure, LPWA connectivity can assist in detecting proximity to other serviced devices. These use-cases may have a strong appeal for public safety applications. Further, as we know, LPWA technologies are designed specifically for ultra low data rates. A need of occasionally sending large traffic volumes can be met with a complementary cellular connection, which can be activated only on demand.
A joint ownership of LPWA and cellular networks combined with a drop in prices of LPWA devices and connectivity make a strong business case for the above-mentioned use cases. However, there is a need to overcome many systems related challenges.
Support for Data Analytics
--------------------------
Compared to a human subscriber, the average revenue generated by a single connected M2M/IoT device is rather small. Therefore, network operators see a clear incentive in extending their business beyond the pure connectivity for sake of a higher profitability. One way to do so is by augmenting LPWA networks with sophisticated data analytics support that can convert the raw collected data into contextually relevant information for the end-users. Such knowledge can support end users in making intelligent decisions, earning higher profits, or bringing their operational costs down. Network operators thus can monetize this by selling knowledge to end users.
There are however enormous challenges associated with providing a LPWA network as a service to the end-users. It requires a unified management of business platform and a scalable integration with the cloud. One of the main challenges is also to offer custom-tailored services to many different vertical industries, effectively covering different use cases ideally by a single LPWA technology.
Conclusion {#sec:conclusion}
==========
Wide area coverage, low power consumption, and inexpensive wireless connectivity blend together in LPWA technologies to enable a strong business case for low throughput IoT/M2M applications that do not require ultra-low latency. However, this combination of often conflicting goals is the result of carefully designed physical and MAC layer techniques, surveyed in this paper. To tap into the huge IoT/M2M market, several commercial providers exploit different innovative techniques in their LPWA connectivity solutions. The variety of these solutions has resulted in a fragmented market, highlighting a dire need for standards. We provided a comprehensive overview of many such standardization efforts led by several SDOs and SIGs. We observe that most standards focus on the physical and MAC layers; the gap at the upper layers (application, transport, network, etc.) is yet to be bridged. Further, we point out important challenges that LPWA technologies face today and possible directions to overcome them. We encourage further developments in LPWA technologies to push the envelope of connecting a massive number of devices in the future.
[^1]: All authors are with Toshiba Research Europe Limited, UK, Email: {usman.raza, parag.kulkarni, mahesh.sooriyabandara}@toshiba-trel.com.
|
---
abstract: 'We study the end-point of the Electroweak phase transition using the auxiliary mass method. The end point is $m_H\sim40$ (GeV) in the case $m_t=0$ (GeV) and strongly depends on the top quark mass. A first order phase transition disappears at $m_t\sim 160$ (GeV). The renormalization effect of the top quark is significant.'
address: |
Institute for Cosmic Ray Research, University of Tokyo, Tanashi, Tokyo 188-8502, Japan\
$^*$Department of Physics, School of Science, University of Tokyo, Tokyo 113, Japan
author:
- 'Kenzo [Ogure]{} and Joe [Sato]{}$^{*}$'
title: 'End-point of the Electroweak Phase Transition using the auxiliary mass method\'
---
The Electroweak phase transition is one of the most important phase transitions in the early universe, since it may account for the baryon number of the present universe[@Kuz]. This phase transition was first investigated using the perturbation theory of finite temperature field theory, which predicts a first order phase transition from the effective potential[@Car; @Arn]. The perturbation theory, however, has difficulty due to an infrared divergence caused by light Bosons and cannot give reliable results in the case where the Higgs Boson mass, $m_H$, is comparable to or greater than the Weak Boson mass. Lattice Monte Carlo simulations, therefore, have become the most powerful method and are still used to investigate details of the phase transition[@Rum; @Cas; @Ilg; @Kar; @Aok]. According to these results, the Electroweak phase transition is of the first order if $m_H$ is less than an end-point value $m_{H,c}\sim70$ (GeV). It turns out to be of second order exactly at the end-point. Beyond the end-point there is no phase transition, meaning that no observable quantity has a discontinuity. As far as we know, three other non-perturbative methods predict the existence of the end-point[@Buc; @Tet; @Hub]. All three methods place the end-point below 100 GeV.
The auxiliary-mass method is a new method to avoid the infrared divergence at a finite temperature T[@Dru; @Ina; @Ogu; @Ogu2]. This method is based on a simple idea as follows. We first add a large auxiliary mass to light Bosons, which cause the infrared divergence, and calculate an effective potential at the finite temperature. Due to the auxiliary mass, the effective potential is reliable at any temperature. We next extrapolate this effective potential to the true mass by integrating an evolution equation, which we show later. We applied this method to the $Z_2$-invariant scalar model and the $O(N)$-invariant scalar model, and obtained satisfactory results[@Ina; @Ogu; @Ogu2].
We apply the method to the Standard Model and investigate the Electroweak phase transition in the present paper. We add an auxiliary mass $M\gtrsim T$ only to the Higgs Boson, which becomes very light owing to a cancellation between its negative tree mass and positive thermal mass for small field expectation values around the critical temperature. We notice that the infrared divergence from the Higgs Boson is always serious if the phase transition is of the second order or of the weakly first order[@Arn]. In the standard model, transverse modes of the gauge fields also have small masses at small field expectation values since they do not have the thermal mass at one loop order. It is however, expected that they do have a thermal mass ($\sim g^2 T$) at the two loop order[@Kar; @Buc; @Ebe]. Here, $g$ is a gauge coupling constant. If so, the loop expansion parameter[@Arn] is $\frac{g^2 T}{M_G}\lesssim 1$, even if the field expectation value is zero. Here, $M_G$ is the mass of the gauge Boson, which is a sum of a zero-temperature mass and a thermal mass. We assume that this actually occurs and the infrared divergence from the gauge Bosons is not serious. Since this small thermal mass for the transverse modes will bring only a slight change to a one-loop effective potential, we use the one-loop effective potential without this small mass for the transverse modes.
An effective potential is then calculated as follows in the Landau gauge[@Car; @Arn], $$\begin{aligned}
V(M^2)
&=&
\frac{M^2}{2}\phi^2+\frac{\lambda}{4!}\phi^4
+f_{BT}(m_H^2(\phi))
+3f_{BT}(m_{NG}^2(\phi))\nonumber\\
&&+4f_{BT}(M_W^2(\phi))+4f_{G0}(M_W^2(\phi))\nonumber\\
&&+2f_{BT}(M_{WL}^2(\phi))+2f_{G0}(M_{WL}^2(\phi)) \nonumber\\
&&+2f_{BT}(M_Z^2(\phi))+2f_{G0}(M_Z^2(\phi))\nonumber\\
&&+f_{BT}(M_{ZL}^2(\phi))+f_{G0}(M_{ZL}^2(\phi))\nonumber\\
&&+f_{BT}(M_{\gamma L}^2(\phi))+f_{G0}(M_{\gamma L}^2(\phi))\nonumber\\
&&+12f_{FT}(m_t^2(\phi))+12f_{F0}(m_t^2(\phi))
\label{ini}\end{aligned}$$ here, $$\begin{aligned}
m_H^2(\phi)&=&M^2+\frac{\lambda}{2}\phi^2,\
m_{NG}^2(\phi)=M^2+\frac{\lambda}{6}\phi^2,\nonumber\\
M_W^2(\phi)&=&\frac{g_2^2\phi^2}{4},\
M_{WL}^2(\phi)=\frac{g_2^2\phi^2}{4}+\frac{11g_2^2T^2}{6},\nonumber\\
M_Z^2(\phi)&=&\frac{(g_2^2+g_1^2)\phi^2}{4},\
m_t^2(\phi)=\frac{g_Y^2 \phi^2}{2}\label{mass}\nonumber\end{aligned}$$ $$\begin{aligned}
\left(
\begin{array}{cc}
M_{ZL}^2&0\\
0&M_{\gamma L}^2
\end{array}
\right)
&=&
{\bf T^{\dagger}}
\left(
\begin{array}{cc}
\frac{g_2^2 \phi^2}{4}+\frac{11g_2^2 T^2}{6}&-\frac{g_1 g_2 \phi^2}{4}\\
-\frac{g_1 g_2 \phi^2}{4}&\frac{g_1^2 \phi^2}{4}+\frac{11g_1^2 T^2}{6}
\end{array}
\right)
{\bf T}\nonumber\end{aligned}$$ $$\begin{aligned}
\label{func}
f_{BT}(m^2)&=&\frac{T}{2 \pi^2}\int_0^{\infty}dk \ k^2
\log{\{1-\exp{(-\frac{\sqrt{k^2+m^2}}{T})}\} }\nonumber\\
f_{FT}(m^2)&=&\frac{T}{2 \pi^2}\int_0^{\infty}dk \ k^2
\log{\{1+\exp{(-\frac{\sqrt{k^2+m^2}}{T})}\} }\nonumber\\
f_{G0}(m^2)&=&\frac{m^4}{64\pi^2}
\{\log{(\frac{m^2}{\bar\mu^2})}-\frac{5}{6}\}\nonumber\\
f_{F0}(m^2)&=&-\frac{m^4}{64\pi^2}
\{\log{(\frac{m^2}{\bar\mu^2})}-\frac{3}{2}\}.\nonumber\end{aligned}$$ In the above equations, $\lambda$, $g_2$, $g_1$ and $g_Y$ are coupling constants for the Higgs Boson, SU(2) gauge field, U(1) gauge field and top Yukawa respectively. The matrix ${\bf T}$ is orthogonal and diagonalizes the mass matrix for the Z Boson and photon at finite temperature. We renormalized the effective potential using the $\overline{MS}$ scheme with a renormalization scale $\bar\mu$. A zero-temperature contribution from the Higgs Boson is neglected since it is small in the mass region we consider. The ring diagrams are added only to the Weak Bosons and the Z-Boson since the Higgs Bosons have auxiliary large mass and do not need the resummation. We then extrapolate this effective potential at the auxiliary mass squared $M^2$ to that of the true mass squared $-\nu^2$ using an evolution equation. Since we add the auxiliary mass only to the Higgs Boson, the evolution equation is same as that for O(4)-invariant scalar model, which was constructed in[^1] [@Ogu2], $$\begin{aligned}
\frac{\partial V}{\partial m^{2}}&=&
\frac{1}{2}\bar{\phi}^{2}+\frac{1}{4\pi^{2}}
\int^{\infty}_{0}dk \frac{k^2}{ \sqrt{k^2
+\frac{\partial^{2}V}{\partial\bar\phi^{2}}}}
\frac{1}{ e^{\frac{1}{T}\sqrt{k^2
+\frac{\partial^{2}V}{\partial\bar\phi^{2}}}}-1}\nonumber\\
&&+\frac{3}{4\pi^{2}}
\int^{\infty}_{0}dk \frac{k^2}{ \sqrt{k^2
+\frac{1}{\bar\phi}\frac{\partial V}{\partial\bar\phi}}}
\frac{1}{ e^{\frac{1}{T}\sqrt{k^2
+\frac{1}{\bar\phi}
\frac{\partial V}{\partial\bar\phi}}}-1}.
\label{evo}\end{aligned}$$ A non-perturbative effective potential free from the infrared divergence can be obtained by solving the evolution equation (\[evo\]) with an initial condition Eq.(\[ini\]) numerically.
Before showing our numerical results, we relate the parameters $\nu^2$, $\lambda$, $g_2$, $g_1$ and $g_Y$ to physical quantities at the zero-temperature[@Arn], $$\begin{aligned}
\label{para}
\lambda
&=&
\frac{3m_{H0}^2}{\phi_0^2}-\frac{3}{32\pi^2}
\left[
\frac{3}{2}g_2^4
\left\{\log{\left(\frac{M_{W0}^2}{\bar\mu^2}\right)+\frac{2}{3}}
\right\}\right.
\nonumber\\
&&+
\frac{3}{4}\left(g_1^2+g_2^2\right)^2
\left\{\log{\left(\frac{M_{Z0}^2}{\bar\mu^2}\right)}+\frac{2}{3}
\right\}\nonumber\\
&&-\left.12g_Y^4\log{\left(\frac{m_{t0}^2}{\bar\mu^2}\right)}
\right] \nonumber\\
\nu^2
&=&
\frac{m_{H0}^2}{2}
-\frac{\phi_0^2}{64\pi^2}
\left\{
\frac{3}{2}g_2^4
+\frac{3}{4}(g_1^2+g_2^2)^2
-12g_Y^4
\right\}\\
M_{W0}^2
&=&
\frac{g_2^2 \phi_0^2}{4},
M_{Z0}^2
=
\frac{(g_2^2+g_1^2) \phi_0^2}{4},\nonumber\\
m_{t0}^2
&=&
\frac{g_Y^2 \phi_0^2}{2},
\phi_0=246 \ {\rm (GeV)}\nonumber\end{aligned}$$ Radiative corrections at the one-loop order are included in the equations for $\nu^2$ and $\lambda$ since they are large, especially in the case where the Higgs Boson mass is small. The effective potential Eq.(\[ini\]) does not depend on $\bar\mu$ using $\lambda$ in Eq.(\[para\]) in this order. We fix the masses of the Weak Bosons and the Z-Boson as $M_{W0}=80$ (GeV) and $M_{Z0}=92$ (GeV) below.
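For orientation, the relations in Eq.(\[para\]) can be inverted numerically. The sketch below keeps only the leading (tree-level) terms of $\lambda$ and $\nu^2$ and uses illustrative values of $m_{t0}$ and $m_{H0}$; the one-loop corrections discussed above are deliberately omitted, so this is not the full matching used in the paper.

```python
import numpy as np

phi0, MW0, MZ0 = 246.0, 80.0, 92.0        # GeV, as fixed in the text
mt0, mH0 = 175.0, 40.0                    # GeV, illustrative top and Higgs masses

g2 = 2.0 * MW0 / phi0                               # from M_W0^2 = g2^2 phi0^2 / 4
g1 = np.sqrt(4.0 * MZ0**2 / phi0**2 - g2**2)        # from M_Z0^2 = (g1^2+g2^2) phi0^2 / 4
gY = np.sqrt(2.0) * mt0 / phi0                      # from m_t0^2 = gY^2 phi0^2 / 2

lam_tree = 3.0 * mH0**2 / phi0**2                   # leading terms of Eq. (para)
nu2_tree = mH0**2 / 2.0

print(f"g2 = {g2:.3f}, g1 = {g1:.3f}, gY = {gY:.3f}")
print(f"tree-level lambda = {lam_tree:.4f}, nu^2 = {nu2_tree:.1f} GeV^2")
```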
We first investigate a $SU(2)\times U(1)$ gauge plus Higgs theory, corresponding to the case $m_t=0$. We show results obtained by setting $M=T$, since similar results were obtained by setting $M=\frac{T}{2}$ and $M=2T$, as in the case of [@Ina]. This is quite natural since the restriction on $M$ is $M\gtrsim T$. The effective potentials at the critical temperature are shown in Fig.\[cripot1\] for $m_H=$15, 30, 45 (GeV), respectively. The first order phase transition becomes weaker for larger values of the Higgs mass and finally disappears. These potentials are compared to the effective potentials obtained by the ring-resummed perturbation theory at the one-loop order without the high temperature expansion in Fig.\[cripot2\]. We find clearly that they are similar for smaller values of $m_H$ and different for larger values of $m_H$. This is consistent with the fact that the ring-resummed perturbation theory is reliable only for smaller values of the Higgs mass, $m_H\ll M_W$[@Arn]. We plot the ratio of the critical field expectation value to the critical temperature, $\phi_c/T_c$, as a function of $m_H$ in Fig.\[sp\]. This quantity indicates the strength of the first order phase transition and is important in estimating the sphaleron rate, which plays a very important role in Electroweak Baryogenesis[@Man; @Man2]. The end-point is determined as $m_{H,c}=38$ (GeV) from Fig.\[sp\]. This figure also shows that the results obtained by the auxiliary mass method and by the perturbation theory are similar for smaller values of $m_H$ and different for larger values, $m_H \gtrsim 30$ (GeV).
We next investigate more realistic cases in which the top quark mass is finite. The same ratios are shown in Fig.\[sp2\] for various values of $m_t$. This figure shows that the strength of the first order phase transition is almost the same for $m_t\lesssim 100$ (GeV) and rapidly becomes weaker for $m_t\gtrsim 100$ (GeV). The end-points are then shown in Fig.\[end\] as a function of $m_t$. The graph labeled “1-loop” is obtained using Eq.(\[ini\]) and Eq.(\[para\]), which take into account the zero-temperature radiative corrections from the top quark and the gauge fields; the contribution from the top quark is much larger than that of the gauge fields. On the other hand, the graph labeled “tree” is obtained without the zero-temperature radiative corrections, omitting the contributions from $f_{G0}$ and $f_{F0}$ in Eq.(\[ini\]) and keeping only the first terms of Eq.(\[para\]) for $\lambda$ and $\nu^2$. The two are not very different for smaller values of the top quark mass, $m_t\lesssim 100$ (GeV). Their behavior, however, differs drastically for larger values of the top quark mass, $m_t\gtrsim 100$ (GeV). Surprisingly, the end-point vanishes for $m_t\gtrsim 160$ (GeV) in the “1-loop” results, though it increases in the “tree” results. These results tell us that fermionic degrees of freedom play a significant role in the phase transition through renormalization effects at zero temperature. We also conclude that there is no first order phase transition for $m_t= 175$ (GeV), no matter how small the Higgs Boson mass is.
In the present paper, we have calculated the effective potentials of the Standard Model at finite temperature using the auxiliary mass method. We first investigated a $SU(2)\times U(1)$ gauge plus Higgs theory, corresponding to the case $m_t=0$. The phase transition was of the first order and similar to the results obtained by the perturbation theory for smaller $m_H\sim 15$ (GeV). The phase transition became weaker for larger $m_H\sim 30$ (GeV) and finally disappeared, in contrast to the results from perturbation theory. We found that the end-point is at $m_{H,c}=38$ (GeV) in this case. This is qualitatively consistent with the results of the Lattice Monte Carlo simulations [@Cas; @Ilg; @Kar; @Aok] and the other non-perturbative methods[@Buc; @Tet; @Hub]. The value of the end-point, however, was smaller than those obtained by these methods. This may be caused by the approximations used to construct the evolution equation (\[evo\]), or by those used in the other papers. The two-loop effect from the gauge fields may also shift our results due to the slow convergence of the perturbation theory. We next investigated the more realistic case in which the top quark mass is finite. We found that the end-point depends strongly on $m_t$ and disappears for $m_t\gtrsim 160$ (GeV). The renormalization effects from the top quark are significant. Lattice Monte Carlo simulations, however, do not show this behavior[@Rum]. We think of two possible reasons: (1) Since our results differ quantitatively from those of the Lattice Monte Carlo simulations by factors of 2 in the $SU(2)\times U(1)$ gauge plus Higgs theory, similar behavior may be found at a larger top quark mass in the Lattice Monte Carlo simulations. (2) Since the one-loop correction to the [*effective potential*]{} at zero temperature is significant, the 3D effective theory, which has no fermionic degrees of freedom, may not reflect this effect appropriately.
Finally, the strongly first order phase transition necessary for the Electroweak Baryogenesis was not found in the Standard Model. We will apply this method to extensions of the Standard Model.
The authors are supported by JSPS fellowship.
[99]{} V. Kuzmin, V. Rubakov and M. E. Shaposhnikov, Phys. Lett.[**B155**]{} (1985) 36.
M. E. Carrington, Phys. Rev. [**D45**]{} (1992) 2933.
P. Arnold and O. Espinosa, Phys. Rev.[**D47**]{} (1993) 3546.
K. Rummukainen, M. Tsypin, K. Kajantie, M. Laine and M. E. Shaposhnikov, Nucl. Phys. [**B532**]{} (1998) 283 (and references therein)
F. Csikor, Z. Fodor and J. Heitger, Phys.Rev.Lett. [**82**]{} (1999) 21-24 (references therein)
E.- M. Ilgenfritz, A. Schiller and C. Strecha, Eur.Phys.J. [**C8**]{} (1999) 135-150 (references therein)
F. Karsch, T. Neuhaus, A. Patkós and J. Rank, Nucl. Phys. [**B474**]{} (1996) 217
Y. Aoki, F. Csikor, Z. Fodor, and A. Ukawa, hep-lat/9901021
W. Buchmuller and O. Philipsen, Phys. Lett. [**B354**]{} (1995) 403
N. Tetradis, Phys.Lett. [**B409**]{} (1997) 355
S. J. Huber, A. Laser, M. Reuter and M. G. Schmidt, Nucl.Phys. [**B539**]{} (1999) 477
I. T. Drummond, R. R. Horgan, P. V. Landshoff and A. Rebhan, Phys. Lett.[**B398**]{} (1997) 326.
T. Inagaki, K. Ogure and J. Sato, Prog.Theor.Phys. [**99**]{} (1998) 119
K. Ogure and J. Sato, Phys.Rev. [**D57**]{} (1998) 7460
K. Ogure and J. Sato, Phys.Rev. [**D58**]{} (1998) 085010
F. Eberlein hep-ph/9811513
N. S. Manton, Phys. Rev. [**D28**]{} (1983) 2019
F. R. Klinkhamer and N. S. Manton, Phys. Rev. [**D30**]{} (1984) 2212
[^1]: We neglected the momentum dependence of a full self-energy in Ref.[@Ogu]. This corresponds to the local potential approximation of the systematic derivative expansion of the effective action.
|
---
abstract: 'Recent studies have suggested that the tearing instability may play a significant role in magnetic turbulence. In this work we review the theory of the magnetohydrodynamic tearing instability in the general case of an arbitrary tearing parameter, which is relevant for applications in turbulence. We discuss a detailed derivation of the results for the standard Harris profile and accompany it by the derivation of the results for a lesser known sine-shaped profile. We devote special attention to the exact solution of the inner equation, which is the central result in the theory of tearing instability. We also briefly discuss the influence of shear flows on tearing instability in magnetic structures. Our presentation is self-contained; we expect it to be accessible to researchers in plasma turbulence who are not experts in magnetic reconnection.'
address:
- '${~}^1$Department of Physics, University of Wisconsin, Madison, WI 53706, USA'
- '${~}^2$Space Science Institute, Boulder, Colorado 80301, USA'
- '${~}^3$Plasma Science and Fusion Center, Massachusetts Institute of Technology, Cambridge MA 02139, USA'
author:
- 'Stanislav Boldyrev$^{1,2}$ and Nuno F. Loureiro$^{3}$'
title: Calculations in the theory of tearing instability
---
Introduction
============
Numerical simulations and analytic models have suggested that magnetic plasma turbulence tends to form anisotropic, sheet-like current structures at small scales [@matthaeus_turbulent_1986; @biskamp2003; @servidio_magnetic_2009; @servidio_magnetic_2011; @wan2013; @zhdankin_etal2013; @tobias2013; @zhdankin_etal2014; @davidson2017; @chen2017]. Such structures are not necessarily associated with the dissipation scale of turbulence. Rather, a hierarchy of sheet-like turbulent eddies is formed throughout the whole inertial interval [@boldyrev2005; @boldyrev_spectrum_2006; @chen_3D_2012; @chandran_intermittency_2015; @mallet_measures_2016]. Recently, it has been realized that given large enough Reynolds number, such anisotropic structures may become unstable to the tearing mode at scales well above the Kolmogorov-like dissipation scale [@loureiro2017; @mallet2017; @boldyrev_2017; @loureiro2017a; @mallet2017a; @comisso2018]. The Reynolds numbers for which such effects become significant are very large ($Re\gtrsim 10^6$), so their definitive study is beyond the capabilities of modern computers. Nevertheless, the rapid progress in in situ measurements of space plasma brings interest to small scales of magnetic turbulence, where such effects may be observed, e.g., [@vech2018]. It is, therefore, highly desirable to develop an understanding of the linear tearing theory in magnetic profiles such as those one might expect to find throughout the inertial range of turbulence, but not necessarily those associated with dissipative current sheets.
This brings attention to the two facets of the theory of tearing instability that are not traditionally covered in textbooks on magnetohydrodynamics or plasma physics. One is the theory of reconnection beyond the well-known Furth, Killeen & Rosenbluth regime of small tearing parameter [@FKR]. This regime assumes limited anisotropy of a reconnecting magnetic profile, so it is not applicable to very anisotropic tearing modes relevant for our study. The other is the theory of tearing instability for magnetic profiles that are different from the canonical $\tanh$-like Harris profile [@harris_1962]. Such a profile assumes that the reconnecting magnetic field is uniform in space except for the region where it reverses its direction. This is, arguably, not a general situation encountered in turbulence, where the magnetic field strength varies in space on similar scales both inside and outside the reconnection region. Different magnetic profiles may lead to different scalings of the corresponding tearing growth rates, e.g., [@boldyrev_2017; @loureiro2017a; @walker2018; @loureiro2018; @pucci2018]. To the best of our knowledge, there are currently no texts methodically covering these aspects of tearing instability. Rather, various relevant analytical results are scattered over the literature, e.g., [@coppi1966; @coppi_resistive_1976; @abc1978; @porcelli_viscous_1987; @loureiro_instability_2007].
In this work we review the derivation of the standard Harris-profile tearing mode, and accompany it with a parallel derivation of the results for the sine-shaped profile. We devote special attention to the discussion of the exact solution of the inner equation, for which we use a method different from those previously adopted in the literature [@coppi1966; @coppi_resistive_1976; @abc1978; @porcelli_viscous_1987; @loureiro_instability_2007]. Although our work is mostly devoted to the tearing instability in magnetic profiles not accompanied by velocity fields, at the end of our presentation we discuss to what extent shear flows, typically present within turbulent eddies, can modify the results. The goal of our work is to give a self-contained presentation of the theory of tearing effects that are most relevant for applications to turbulence. We believe it will be useful for researchers in turbulence who are not necessarily experts in reconnection.
Equations for the tearing mode
==============================
We assume that the background uniform magnetic field is in the $z$ direction, and the current sheet thickness, $a$, and length, $l$, are measured in the field-perpendicular plane. The current sheets are strongly anisotropic, $a\ll l$. We denote the reconnecting magnetic field, that is, the variation of the magnetic field across the current sheet, as $B$. Such structures can be created in turbulence if their lifetimes are comparable to the Alfvénic time $\tau_A\sim l/V_{A}$, where $V_{A}$ is the Alfvén speed associated with $B$. They are formed at all scales; the thinner the structure, the more anisotropic it becomes. A theory describing a hierarchy of such magnetic fluctuations, or turbulent eddies, in MHD turbulence suggests that their anisotropy increases as their scale decreases, $a/l\propto a^{1/4}$ [@boldyrev2005; @boldyrev_spectrum_2006; @mason_cb06; @mason2012; @perez_etal2012; @chandran_15].
It has been proposed that at small enough scales the tearing instability of very anisotropic eddies can compete with their Alfvénic dynamics.[^1] This means that below a certain critical scale the tearing time should become comparable to $\tau_A$, so that the turbulence is mediated by the tearing instability [@loureiro2017; @mallet2017; @boldyrev_2017]. Such a picture has received some numerical and observational support [@walker2018; @dong2018; @vech2018]. The theory of tearing instability required to describe strongly anisotropic current sheets goes beyond the simplified FKR theory and generally requires one to analyze structures that are different from Harris-type current sheets.
It should be acknowledged that the first analysis of the tearing instability in structures formed by MHD turbulence dates back to 1990 [@carbone1990].[^2] That analysis was based on the Iroshnikov-Kraichnan model of MHD turbulence [@iroshnikov_turbulence_1963; @kraichnan_inertial_1965] that treats turbulent fluctuations as essentially isotropic (that is, characterized by a single scale) weakly interacting Alfvén wave packets. The anisotropy of turbulent fluctuations has therefore not been quantified in [@carbone1990]. Moreover, their model assumed the presence of a significant velocity shear in the current layer and adopted the tearing-mode growth rate calculated in the shear-modified FKR regime [@hofmann1975; @dobrowolny1983; @einaudi1986]. As a result, the approach of [@carbone1990] was qualitatively different from that of the recent studies [@loureiro2017; @mallet2017; @boldyrev_2017].
In our discussion we do not impose any limitations on the anisotropy of current structures, that is, we assume them to be anisotropic enough to accommodate the fastest growing tearing mode; this assumption is consistent with the model of MHD turbulence [@boldyrev_spectrum_2006] adopted in [@loureiro2017; @boldyrev_2017]. We do, however, make several important simplifications. First, we assume that the background magnetic field has only one component, $B_y(x)$. An optional uniform guide field in the $z$-direction may also be present; it has no effect on the problem. Second, in most of the work we assume that the configuration is static, that is, there are no background flows. In section \[shear\_flow\] we briefly discuss the effects of a shear flow, where, similarly to the magnetic field, the shearing velocity field is assumed to have only one component, $v_y(x)$. (We refer the reader to, e.g., [@chen_morrison1990; @tolman2018; @loureiro2013] for a broader discussion of the possible effects of background flows and strong outflows). Finally, our analysis is limited to the MHD framework.
To obtain the equations governing the tearing mode, we follow the standard procedure and represent the magnetic field as ${\bf B}(x,y)=B_0f(x){\hat {\bf y}}+{\bf b}(x,y)$, where $f(x)$ describes the profile of the reconnecting field, see Fig. (\[B\_profile\]). Its typical scale, the thickness of the reconnection layer, is $a$. The weak perturbation field can be represented through the magnetic potential ${\bf b}=-{\hat z}\times \nabla \psi=\left(\partial\psi/\partial y, -\partial\psi/\partial x \right)$. We assume that the background velocity is zero. The incompressible velocity perturbation is represented through the stream function ${\bf v}=\left(\partial \phi/\partial y, -\partial\phi/\partial x\right)$. We will neglect the effects of viscosity, but will keep the magnetic diffusivity. The magnetic induction and momentum equations take the form (see, e.g., [@biskamp2003]): $$\begin{aligned}
\frac{\partial \psi}{\partial t}=\frac{\partial \phi}{\partial y} B_0f+\eta\nabla^2\psi, \\
\frac{\partial}{\partial t}\nabla^2\phi=B_0f\frac{\partial}{\partial y}\nabla^2\psi-B_0f''\frac{\partial\psi}{\partial y}.\end{aligned}$$ We can use the Fourier transform in the $y$ direction, and represent the fluctuating fields as $$\begin{aligned}
\psi={\tilde \psi}(\xi)\exp(ik_0 y)\exp(\gamma t), \\
\phi=-i{\tilde \phi}(\xi)\exp(ik_0 y)\exp(\gamma t).\end{aligned}$$ In these expressions and in what follows we will use the dimensionless variables $$\begin{aligned}
\xi=x/a,\label{xi}\\
{\tilde \eta}=\eta/\left(k_0V_Aa^2\right),\label{eta}\\
\lambda=\gamma/\left(k_0V_A\right),\label{lambda}\\
\epsilon=k_0a,\label{epsilon}\end{aligned}$$ where $V_A$ is the Alfvén velocity associated with $B_0$. The anisotropy parameter $\epsilon$ can be arbitrary. In applications to turbulence, however, the most relevant cases are those corresponding to $\epsilon\ll 1$, as only very anisotropic turbulent eddies become significantly affected by the tearing instability, e.g., [@loureiro2017; @mallet2017; @boldyrev_2017; @loureiro2017a; @mallet2017a; @comisso2018; @walker2018]. We, therefore, assume $\epsilon\ll 1$ in our discussion, which is the most difficult case to analyze. The results we derive can be extended to $\epsilon\approx 1$ if necessary.
![Sketch of a general profile of the background magnetic field.[]{data-label="B_profile"}](B_profile.eps){width="\columnwidth"}
In what follows we will use the tilded variables and omit the tilde sign. The dimensionless equations take a simple form: $$\begin{aligned}
\lambda \psi=f\phi +\eta\left[\psi''-\epsilon^2\psi\right],\label{general_eq_1}\\
\lambda\left[\phi''-\epsilon^2\phi\right]=-f\left(\psi''-\epsilon^2\psi \right)+f''\psi,\label{general_eq_2}\end{aligned}$$ where we denote by primes the derivatives with respect to $\xi$. An additional simplification of these equations can be obtained from the following consideration. We assume that $\eta$ and $\lambda$ are small parameters (this assumption can be verified a posteriori, from the obtained solution). The range of scales where the terms including these parameters can be neglected will be called the [*outer region*]{}. The range of scales where they become significant will be called the [*inner region*]{}.
We will solve equations (\[general\_eq\_1\]) and (\[general\_eq\_2\]) in the outer and inner regions separately, and then asymptotically match the solutions. In the inner region, where, as we will see, $\xi\ll 1$, we have $\partial^2/\partial \xi^2\gg 1$. Due to the smallness of $\eta$ and $\lambda$, the terms in the square brackets can be relevant only in the inner region, and, therefore, the small $\epsilon^2$ terms can always be neglected in the square brackets. The equations then take the form $$\begin{aligned}
\lambda \psi-f\phi =\eta \psi'',\label{eqs1} \\
-f\left(\psi''-\epsilon^2\psi \right)+f''\psi=\lambda\phi''.\label{eqs2}\end{aligned}$$ Those are the equations describing the tearing instability in the very anisotropic case $\epsilon \ll 1$, and they are the main equations we are going to discuss in this work. The right-hand-side terms in these equations are relevant only in the inner region. In the outer region, they can be neglected.
The outer equation
==================
We start with the outer region. We need to solve the equations $$\begin{aligned}
\phi=\frac{\lambda}{f}\psi,\label{out2}\\
\psi''=\left(\frac{f''}{f}+\epsilon^2\right)\psi, \label{out1}\end{aligned}$$ subject to the boundary conditions $\psi, \phi\to 0$ at $\xi\to\pm\infty$ (or to the periodic boundary conditions if the magnetic profile $f(\xi)$ is periodic). Before we discuss the solution we note that Eq. (\[out1\]) is a Schrödinger equation with zero energy. In general, it does not have a solution satisfying the given boundary conditions. The solutions should therefore be found separately for $\xi>0$ and $\xi<0$, excluding the region of small $\xi$ where this equation is not applicable. The solutions thus found will not, therefore, match smoothly, but will have a discontinuity in the derivative (a break) at $\xi=0$.
We consider two exactly solvable model cases that correspond to particular profiles $f(\xi)$ of the reconnecting magnetic field. The first case is $f(\xi)=\tanh(\xi)$. It is the so-called Harris profile [@harris_1962]. It corresponds to a magnetic field whose magnitude does not change in space except in a region of width $\xi\sim 1$, where it reverses its direction. The second solvable case is $f(\xi)=\sin(\xi)$ [@ottaviani1993]. In this case the magnetic field changes its strength and reverses its direction on the same scales. The latter case is arguably more relevant for the structures encountered in turbulence, and it is especially convenient for numerical studies as it allows for periodic boundary conditions.
In the first case, $f=\tanh(\xi)$, the solution of Eq. (\[out1\]) for the magnetic field is (e.g., [@white1983]): $$\begin{aligned}
\psi(\xi)=Ae^{-\epsilon \xi }\,\left[1+\frac{1}{\epsilon}\tanh(\xi)\right], \quad \xi\geq 0,\label{case1}\\
\psi(-\xi)=\psi(\xi).\end{aligned}$$ The solution for the velocity function $\phi(\xi)$ is then easily found from Eq. (\[out2\]). In order to match with the inner solution, it is important to know the asymptotic forms of the velocity and magnetic fields for $\xi\ll 1$. Taking into account that $\epsilon \ll 1$, one obtains by expanding the $\tanh(\xi)$ in Eq. (\[case1\]) that $\phi'(\xi)\sim -{A\lambda}/{\xi^2}+(2A\lambda \xi)/(3\epsilon)$. The second term can be neglected when $\xi^3\ll \epsilon$. The velocity $\phi'$ can then be formally expressed in this limit as $$\begin{aligned}
\phi'(\xi)\sim -\frac{\lambda}{\xi^2}\psi(0).\label{asympt_tanh}\end{aligned}$$ As we will see later, in order to match the magnetic field one can define the tearing parameter $$\begin{aligned}
\Delta'=\frac{\psi'(\xi)-\psi'(-\xi)}{\psi(0)}, \quad \xi>0.\end{aligned}$$ It is easy to see that in the region $\xi\ll 1$, the tearing parameter approaches a constant value $$\begin{aligned}
\Delta'= \frac{2}{\epsilon}.\label{delta_tanh}\end{aligned}$$
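For completeness, differentiating the solution (\[case1\]) at $\xi=0^+$ gives $\psi'(0^+)/\psi(0)=1/\epsilon-\epsilon$, so that the exact value for the Harris profile is $$\begin{aligned}
\Delta'=2\left(\frac{1}{\epsilon}-\epsilon\right)\approx\frac{2}{\epsilon},\quad \epsilon\ll 1.\end{aligned}$$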
In the second case, $f(\xi)=\sin(\xi)$, the solution periodic in $[-\pi, \pi]$ is: $$\begin{aligned}
\psi(\xi)=A \sin\left[\sqrt{1-\epsilon^2} \left( \xi+\frac{\pi}{2\sqrt{1-\epsilon^2}}-\frac{\pi}{2}\right) \right], \quad \xi\geq 0,\\
\psi(-\xi)=\psi(\xi).\end{aligned}$$ The derivative of the $\phi$ function is then found as $\phi'\sim -(A\lambda/\xi^2)(\pi\epsilon^2/4)+(A\lambda\xi/3)$. The second term can be neglected when $\xi^3\ll \epsilon^2$, in which case the $\phi'$ function has the asymptotic behavior that is formally identical to that obtained for the first case, $$\begin{aligned}
\phi'\sim -\frac{\lambda}{\xi^2}\psi(0).\label{asympt_sin}\end{aligned}$$ Indeed, in this case $\psi(0)=A\pi\epsilon^2/4$. In the region $\xi\ll 1$, the tearing parameter approaches a constant value $$\begin{aligned}
\Delta'= \frac{8}{\pi\epsilon^2}.\label{delta_sin}\end{aligned}$$ Note the different scaling of this parameter with $\epsilon$ as compared to the previous result (\[delta\_tanh\]).
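The limiting expressions (\[delta\_tanh\]) and (\[delta\_sin\]) are easy to verify numerically. The following minimal sketch (not part of the original derivation; it uses numpy and sets the amplitude $A=1$) evaluates the tearing parameter directly from the closed-form outer solutions; the agreement with the asymptotic formulas improves as $\epsilon$ decreases.

```python
import numpy as np

def delta_prime(psi, h=1e-6):
    # Delta' = (psi'(0+) - psi'(0-)) / psi(0) = 2 psi'(0+) / psi(0) for an even psi(xi);
    # psi'(0+) is approximated by a one-sided finite difference.
    return 2.0 * (psi(h) - psi(0.0)) / (h * psi(0.0))

for eps in (0.1, 0.03, 0.01):
    tanh_psi = lambda xi, e=eps: np.exp(-e * xi) * (1.0 + np.tanh(xi) / e)
    s = np.sqrt(1.0 - eps**2)
    sine_psi = lambda xi, s=s: np.sin(s * (xi + np.pi / (2.0 * s) - np.pi / 2.0))
    print(eps,
          delta_prime(tanh_psi), 2.0 / eps,                 # Harris profile vs 2/eps
          delta_prime(sine_psi), 8.0 / (np.pi * eps**2))    # sine profile vs 8/(pi eps^2)
```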
It is important to check how deeply into the asymptotic region $\xi\ll 1$ the outer solutions can extend. In this region we estimate $f''\sim f \sim \xi$ for both tanh- and sine-shaped magnetic profiles. From Eq. (\[out1\]) we have $\psi''\sim \psi$. Then from Eq. (\[out2\]) we estimate $\phi''\sim (\lambda/\xi^3)\psi$. The right-hand sides in Eqs. (\[eqs1\], \[eqs2\]) are, therefore, small if $$\begin{aligned}
\eta\ll \lambda,\label{constr1}\\
\lambda^2\ll \xi^4. \label{constr2}\end{aligned}$$
The inner equation
==================
In the inner region we need to keep the right hand sides of Eqs. (\[eqs1\]) and (\[eqs2\]). For that the second derivatives of the fields should be large. For instance, we have to assume $\psi''\gg \psi$, which holds for $\xi\ll 1$. This allows us to simplify Eqs. (\[eqs1\],\[eqs2\]) in the following way: $$\begin{aligned}
\lambda \psi-\xi \phi=\eta \psi''\label{inner1},\\
-\xi \psi'' = \lambda\phi''. \label{inner2}\end{aligned}$$ By differentiating Eq. (\[inner1\]) twice, we get $\lambda\psi''=\xi\phi''+2\phi'+\eta\psi''''$. We now exclude $\psi''$ and $\psi''''$ from this equation by using Eq. (\[inner2\]), $\psi''=-(\lambda/\xi) \phi''$. A few lines of algebra allow one to cast the resulting equation in the form $$\begin{aligned}
\lambda^2\phi''+\left(\xi^2\phi' \right)'-\lambda\eta\left[\phi''''-2\left\{\frac{\phi''}{\xi} \right\}' \right]=0.\end{aligned}$$ This equation can trivially be integrated once. Also, noting that it contains only derivatives of $\phi$, we may reduce the order by denoting $Y\equiv \phi'$. We then get: $$\begin{aligned}
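For readers reproducing these few lines of algebra: the substitution gives $-\lambda^2\phi''/\xi=\xi\phi''+2\phi'-\eta\lambda\left(\phi''/\xi\right)''$, and multiplying by $-\xi$ and using the identity $$\begin{aligned}
\xi\left(\frac{\phi''}{\xi}\right)''=\phi''''-2\left(\frac{\phi''}{\xi}\right)'\end{aligned}$$ leads to the form quoted above.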
Y''-\frac{2}{\xi}Y'-\frac{1}{\eta\lambda}\left(\lambda^2+\xi^2\right)Y=C.\label{the_eq}\end{aligned}$$ Before we analyze this equation further, we note that the first two terms in the left-hand side come from the resistive term in the induction equation. Also, one can directly verify that the term $\lambda^2$ in the parentheses would be absent if we used a common approximation treating $\psi(\xi)$ as a constant in the inner region, the so-called “constant-$\psi$” approximation.
In order to find the constant of integration $C$ we need to match asymptotically the solution of the inner Eq. (\[the\_eq\]) with the solution of the outer equation. We know that the outer solution exists in the region $\xi^4\gg \lambda^2$, see Eq. (\[constr2\]), which, for $\xi<1$, also implies that $\xi^2\gg \lambda^2$. What asymptotic behavior does the solution of Eq. (\[the\_eq\]) have in the region (\[constr2\])? There are two possibilities: $Y\sim \exp\left\{\xi^2/\sqrt{4\lambda\eta}\right\}$ and $Y\sim -C\eta\lambda/\xi^2 $. By evaluating different terms in Eq. (\[the\_eq\]) for these asymptotic solutions, one can check that they hold for $\xi^4\gg \eta\lambda$, which is less restrictive than Eq. (\[constr2\]).
Obviously, the first asymptotic is not the solution we need, since the outer solution does not have an exponential growth at these scales. We, therefore, are interested in the inner solution with the asymptotic $$\begin{aligned}
Y\sim -C\frac{\eta\lambda}{\xi^2}. \label{y_asympt}\end{aligned}$$ In order to match this asymptotic expression with the outer solution $Y\equiv \phi'=-(\lambda/\xi^2)\psi(0)$, see expressions (\[asympt\_tanh\]) or (\[asympt\_sin\]), we need to choose $C=\psi(0)/\eta$.
We note that the region where we asymptotically matched the two solutions is $\lambda^2\ll \xi^4\ll \epsilon^{4/3}$ in the case of the $\tanh$-profile, and $\lambda^2\ll \xi^4\ll \epsilon^{8/3}$ in the case of the sine-profile. Our solution, therefore, makes sense only when $$\begin{aligned}
&\lambda^2/\epsilon^{4/3}\ll 1, \quad \mbox{for tanh-shaped profile},\\
&\lambda^2/\epsilon^{8/3}\ll 1, \quad \mbox{for sine-shaped profile},\end{aligned}$$ which, as can be checked when the solution is obtained, are not restrictive conditions.
So far, we have matched the solutions for the velocity field, the $\phi'$ functions. To complete the asymptotic matching of the inner and outer solutions, we now need to match the magnetic fields, that is, the $\psi'$ function. This can be done in the following way. From Eq. (\[inner2\]) we get for the inner solution $\psi''=-(\lambda/\xi)\phi''$. Integrating this equation from $\xi\ll-\sqrt{\lambda}$ to $\xi \gg \sqrt{\lambda}$, which for the inner solution is equivalent to integrating from $-\infty$ to $\infty$, we obtain $$\begin{aligned}
-\lambda\int\limits_{-\infty}^{+\infty}\frac{\phi''}{\xi}d\xi=\int\limits_{-\infty}^{+\infty}\psi''d\xi,\end{aligned}$$ which, recalling that $\phi'\equiv Y$ and that the right-hand side is the jump of $\psi'$ across the layer, $\psi(0)\Delta'$, can be rewritten as $$\begin{aligned}
\int\limits_{-\infty}^{+\infty}\frac{Y^\prime}{\xi}d\xi=-\frac{\psi(0)}{\lambda}\Delta'.\label{condition}\end{aligned}$$ This asymptotic matching condition will define the growth rate $\lambda$.
It is convenient to change the variables in the following way. Let us introduce a function $G$ such that $Y=(\psi(0)/\lambda)G$, and the independent variable $\zeta=\xi^2/\lambda^2$. Then, the velocity-function equation Eq. (\[the\_eq\]) takes the form $$\begin{aligned}
4\zeta G''-2G'-\beta^2\left(1+\zeta\right)G=\beta^2,\label{the_equation}\end{aligned}$$ where primes denote the derivatives with respect to $\zeta$, and we have denoted $\beta^2\equiv \lambda^3/\eta$. The matching condition (\[condition\]) then takes the form $$\begin{aligned}
-2\int\limits_0^\infty \frac{1}{\sqrt{\zeta}}\frac{\partial G}{\partial\zeta}d\zeta=\lambda \Delta'.\label{boundary_condition}\end{aligned}$$ We thus need to solve Eq. (\[the\_equation\]), and then find the tearing growth rate $\lambda$ from the matching condition (\[boundary\_condition\]).
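For completeness, the chain rule for this change of variables reads $$\begin{aligned}
\frac{d}{d\xi}=\frac{2\xi}{\lambda^2}\frac{d}{d\zeta},\qquad \frac{d^2}{d\xi^2}=\frac{2}{\lambda^2}\frac{d}{d\zeta}+\frac{4\zeta}{\lambda^2}\frac{d^2}{d\zeta^2},\end{aligned}$$ so that Eq. (\[the\_eq\]), after the substitution $Y=(\psi(0)/\lambda)G$ and multiplication by $\lambda^3/\psi(0)$, reduces to Eq. (\[the\_equation\]); the constant $C=\psi(0)/\eta$ produces the right-hand side $\beta^2$.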
The analytic theory of the tearing mode is essentially based on Eqs. (\[the\_equation\]) and (\[boundary\_condition\]). These equations can be solved exactly. Historically, however, the better known case is the simpler limit $\beta\ll 1$, the so-called FKR case [@FKR]. The simplifying assumptions going into the FKR derivation are, however, easier to understand if one knows the exact solution of the problem. Here we, therefore, first concentrate on the exact solution.
Solution of the inner equation
==============================
Here we present the exact solution of the tearing equation (\[the\_equation\]). This is an inhomogeneous equation, so its solution is a linear combination of a particular solution of the original inhomogeneous equation (\[the\_equation\]) and a general solution of the homogeneous equation $$\begin{aligned}
4\zeta {G_0}''-2{G_0}'-\beta^2\left(1+\zeta\right){G_0}=0.\label{the_homogeneous_equation}\end{aligned}$$ The solution of the homogeneous equation (\[the\_homogeneous\_equation\]) has two possible asymptotics, $G_0\propto \exp{(\pm\beta\zeta/2)}$, at $\zeta\to\infty$. We obviously need to consider only the solutions behaving as $G_0\propto \exp{(-\beta\zeta/2)}$.
In order to figure out what this boundary condition means in terms of the original function $Y(\xi)$, we need to study the solutions of Eq. (\[the\_homogeneous\_equation\]) in more detail. As can be checked directly, at small $\zeta$ the solution has the following asymptotic behavior $$\begin{aligned}
G_0(\zeta)\sim a_0\left(1-\frac{\beta^2}{2} \zeta +\frac{\beta^2}{4}\left\{1-\frac{\beta^2}{2}\right\}\zeta^2+\dots\right)+b_0\,\zeta^{3/2}\left( 1+\frac{\beta^2}{10}\zeta + \dots\right), \label{G_asympt}\end{aligned}$$ where $a_0$ and $b_0$ are two arbitrary parameters. The “$a_0$-part” of the solution, which is a regular function at $\zeta=0$, corresponds to a solution of the homogeneous version of the original velocity equation (\[the\_eq\]) that is even in $\xi$, while the singular “$b_0$-part” corresponds to a solution odd in $\xi$.[^3] Equation (\[the\_eq\]) is symmetric with respect to $\xi\to-\xi$; therefore, each solution of (\[the\_eq\]) is a linear combination of even and odd solutions.
One can see from the asymptotic behavior (\[G\_asympt\]) and from Eq. (\[the\_homogeneous\_equation\]) itself that in the odd solutions, the signs of the first and second derivatives are the same and they do not change on positive or negative axes. This means that every odd solution of the homogeneous version of equation (\[the\_eq\]) diverges at both $\xi\to \infty$ and $\xi\to -\infty$. In the case of even solutions, the same analysis can be applied to the function $G_0(\zeta)\exp\left(\beta^2\zeta/2\right)$ when $\beta<1$, from which it follows that every even solution of homogeneous Eq. (\[the\_eq\]) diverges at both infinity limits as well.[^4] Only by choosing a particular relation between $a_0$ and $b_0$ can one cancel these divergences either at positive or negative infinity (but not at both).
The method that we will use reproduces all the solutions that decline at $\zeta\to\infty$. Therefore, as we will see, our derived expression for $G_0$ will contain both even and odd parts, with a rigid relation between $a_0$ and $b_0$ that cancels the growing exponential at $\zeta\to\infty$. We will need to remove such solutions since, as has been explained, they diverge either at $\xi\to-\infty$ or at $\xi\to\infty$. Later, we will use this condition to uniquely define the solution.
In order to find the general solution of Eq. (\[the\_equation\]) we use the following method. The tearing equation (\[the\_equation\]) is defined only for positive $\zeta$. We can, however, consider this equation on the whole $\zeta-$axis by formally extending the function $G$ to the negative values of the argument. For that we define $$\begin{aligned}
G(\zeta)=G_1(\zeta)\theta(\zeta)+G_2(\zeta)\left(1-\theta(\zeta)\right),\label{Gextended}\end{aligned}$$ where $G_1$ and $G_2$ are solutions of Eq. (\[the\_equation\]) such that $G_1\to 0$ at $\zeta\to +\infty$, and $G_2\to 0$ at $\zeta\to -\infty$, and $\theta(\zeta)$ is the Heaviside step function. These solutions at positive and negative arguments are defined up to arbitrary solutions of the homogeneous equation that decline at infinity; therefore, they can always be chosen so that their amplitudes match at the origin, $G_1(0)=G_2(0)$. This provides a formal extension of the $G$ function to negative arguments. Note that we match only the values of the functions $G_1$ and $G_2$ at $\zeta=0$, but not their derivatives.
If we consider the operator $$\begin{aligned}
{\hat L}=4\zeta \frac{\partial^2}{\partial \zeta^2}-2\frac{\partial}{\partial \zeta}-\beta^2(1+\zeta),\end{aligned}$$ we can directly verify that $$\begin{aligned}
{\hat L}G=\theta {\hat L}G_1+\left(1-\theta\right){\hat L}G_2=\beta^2.\end{aligned}$$ Therefore, the extended function (\[Gextended\]) satisfies the same equation (\[the\_equation\]) on the entire real axis. This function declines at $\pm\infty$, therefore, we can Fourier transform equation (\[the\_equation\]) using the standard definition $$\begin{aligned}
G(\zeta)=\frac{1}{2\pi}\int\limits_{-\infty}^{\infty}G(k)e^{ik\zeta}dk.\label{fourier_transform}\end{aligned}$$ In the Fourier space the equation takes the form $$\begin{aligned}
\left(4k^2+\beta^2\right)G'+\left(10k-i\beta^2\right)G=2\pi i\beta^2\delta(k).\label{fourier}\end{aligned}$$ The first-order ordinary differential equation (\[fourier\]) can easily be solved for $k<0$ and $k>0$. The solutions are $$\begin{aligned}
G_{\pm}(k)=2\pi i {A}_{\pm}\left[1+\frac{4k^2}{\beta^2} \right]^{-\frac{5}{4}}\left[\frac{1+\frac{2ik}{\beta}}{1-\frac{2ik}{\beta}} \right]^{\frac{\beta}{4}},\end{aligned}$$ where ${A}_{\pm}$ are two complex constants, and $\pm$ signs stand for the solutions defined on the positive and negative real $k$ axes, respectively. The function $G(\zeta)$ is real, therefore ${A}_{-}=-{A}_{+}^*$.
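One can verify this solution directly: away from $k=0$, Eq. (\[fourier\]) gives $d\ln G/dk=-(10k-i\beta^2)/(4k^2+\beta^2)$, and integrating the two parts produces $-\frac{5}{4}\ln\left(4k^2+\beta^2\right)$ and $\frac{i\beta}{2}\arctan(2k/\beta)$, which reproduce the two factors in the expression above (up to constants absorbed in ${A}_{\pm}$).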
The delta-function in the right-hand side of Eq. (\[fourier\]) implies that the function $G(k)$ is discontinuous on the real $k$ axis at $k=0$, with the discontinuity condition ${A}_+-{A}_-=1$. In what follows we will simply denote ${A}_+=A$ and ${A}_-=-A^*$, so that the discontinuity condition reads $A+A^*=1$. As a result, the function $G$ can be represented as: $$\begin{aligned}
G(\zeta)=i A\int\limits_{0}^{+\infty}\left[1+\frac{4k^2}{\beta^2} \right]^{-\frac{5}{4}}\left[\frac{1+\frac{2ik}{\beta}}{1-\frac{2ik}{\beta}} \right]^{\frac{\beta}{4}}e^{ik\zeta}\,dk
-i A^*\int\limits_{0}^{+\infty}\left[1+\frac{4k^2}{\beta^2} \right]^{-\frac{5}{4}}\left[\frac{1-\frac{2ik}{\beta}}{1+\frac{2ik}{\beta}} \right]^{\frac{\beta}{4}}e^{-ik\zeta}\,dk.\,\,\label{Gintegral1}\end{aligned}$$ In these integrals the branches of the integrands must be chosen so that they coincide at $k=0$, and the integration is performed along the positive real line in the complex plane, see Fig. (\[k\_contour\]). We note that the discontinuity condition defines only the real part of the complex coefficient $A$, but leaves its imaginary part arbitrary. This reflects the fact that the solution is not defined uniquely, but only up to an arbitrary solution of the homogeneous equation (\[the\_homogeneous\_equation\]).[^5]
![Contour of integration in the $k$ plane in formula (\[Gintegral1\]). Possible branch cuts necessary to define analytic continuations of the integrands in (\[Gintegral1\]) are also shown.[]{data-label="k_contour"}](k_plane_contour.eps){width="\columnwidth"}
For practical calculations, it is convenient to modify Eq. (\[Gintegral1\]) further. In the first integral of Eq. (\[Gintegral1\]) we change the variable of integration $$\begin{aligned}
k=\left(\frac{\beta}{2i}\right)\frac{q-1}{q+1},\label{subst1}\end{aligned}$$ while in the second integral we choose $$\begin{aligned}
k=-\left(\frac{\beta}{2i}\right)\frac{p-1}{p+1}.\label{subst2}\end{aligned}$$ Expression (\[Gintegral1\]) now takes the form $$\begin{aligned}
G(\zeta)=-A\frac{\beta}{4\sqrt{2}}\int\limits_{-1}^{1}\frac{(q+1)^{\frac{1}{2}}}{q^{\frac{5}{4}}}q^{\frac{\beta}{4}}e^{\frac{\beta}{2}\frac{q-1}{q+1}\zeta}\,dq -A^*\frac{\beta}{4\sqrt{2}}\int\limits_{-1}^{1}\frac{(p+1)^{\frac{1}{2}}}{p^{\frac{5}{4}}}p^{\frac{\beta}{4}}e^{\frac{\beta}{2}\frac{p-1}{p+1}\zeta}\,dp.\label{Gintegral2}\end{aligned}$$
The integrals in Eq. (\[Gintegral2\]) look identical. However, they differ by the contours of integration that follow from the changes of variables (\[subst1\]) and (\[subst2\]). If, in each integral of Eq. (\[Gintegral1\]), the contours of integration lie along the real axis in the complex $k$ plane, see Fig. (\[k\_contour\]), then the corresponding contours in the $q$ and $p$ planes are defined as shown in Fig. (\[qp\_contour\]).
![Contours of integration in the $q$ and $p$ complex planes in formula (\[Gintegral2\]).[]{data-label="qp_contour"}](qp_plane.eps){width="\columnwidth"}
![Equivalent contours of integration in the $q$ and $p$ complex planes in formula (\[Gintegral3\]).[]{data-label="qp_contour_new"}](qp_plane_contour.eps){width="\columnwidth"}
It is easy to see that for $\zeta>0$, these integrals will not change if we deform the contours to coincide with the real axis, as shown in Fig. (\[qp\_contour\_new\]). This way, one of the contours of integration has to go above the branch cut, and the other one below. It is also useful to integrate by parts once, in order to avoid dealing with too strong a singularity at the origin, $$\begin{aligned}
G(\zeta)=\frac{A \beta}{1-\beta}\left[1-\frac{1}{\sqrt{2}}\int\limits_{-1}^1q^{\frac{\beta-1}{4}}\frac{d}{dq}\left\{ \left(q+1\right)^{\frac{1}{2}}e^{\frac{\beta}{2}\zeta\frac{q-1}{q+1}}\right\}dq\right]\nonumber \\
+\frac{A^* \beta}{1-\beta}\left[1-\frac{1}{\sqrt{2}}\int\limits_{-1}^1q^{\frac{\beta-1}{4}}\frac{d}{dq}\left\{ \left(q+1\right)^{\frac{1}{2}}e^{\frac{\beta}{2}\zeta\frac{q-1}{q+1}}\right\}dq\right].
\label{Gintegral3}\end{aligned}$$
In the interval $(0, 1]$ both integrals are the same, and their sum can be simplified since $A+A^*=1$. In the interval $[-1, 0)$, however, the integrals have different phases. The integrals over the dashed lines add up to $A\exp\left[i\pi(\beta-1)/4\right]+A^*\exp\left[-i\pi(\beta-1)/4\right]$. Since the imaginary part of $A$ is arbitrary, this sum is arbitrary as well (we assume $\beta<1$). As a result the solution for $\zeta>0$ can be represented as $$\begin{aligned}
G(\zeta)=\frac{\beta}{1-\beta}\left[1-\frac{1}{\sqrt{2}}\int\limits_0^1q^{\frac{\beta-1}{4}}\frac{d}{dq}\left\{ \left(q+1\right)^{\frac{1}{2}}e^{\frac{\beta}{2}\zeta\frac{q-1}{q+1}}\right\}dq\right]\nonumber \\
+\, C_0\int\limits_{-1}^0 |q|^{\frac{\beta-1}{4}}\frac{d}{dq}\left\{ \left(q+1\right)^{\frac{1}{2}}e^{\frac{\beta}{2}\zeta\frac{q-1}{q+1}}\right\}dq, \label{Gintegral4}\end{aligned}$$ where $C_0$ is an arbitrary parameter. The first term in this expression is a particular solution of the inner equation (\[the\_equation\]). It was originally derived in [@coppi1966; @coppi_resistive_1976; @abc1978], where a different approach, involving an expansion in Laguerre polynomials, was used. The second term, including a free parameter $C_0$, represents the solution of the homogeneous equation (\[the\_homogeneous\_equation\]), the zero mode.
It is easy to see that the particular solution is an analytic function at $\zeta=0$, so it describes an even solution of Eq. (\[the\_eq\]). Also, this solution converges at $\zeta\to\infty$. Therefore, it represents a solution of Eq. (\[the\_eq\]) that converges at $\xi\to \pm\infty$. The zero mode, on the contrary, is non-analytic at $\zeta=0$, since its second derivative diverges there. It is therefore a combination of odd and even solutions of homogeneous equation (\[the\_eq\]). According to what was said in the beginning of this section, a solution of the homogeneous equation diverges at either $\xi \to -\infty$ or $\xi \to +\infty$. We thus have to require $C_0=0$, which removes these divergences. The zero mode is therefore absent and the solution is given by the first term in expression (\[Gintegral4\]).
Tearing rate in the limit $\beta\ll 1$ (the FKR case)
=====================================================
Now that we have obtained the general solution for the inner region, we can find the tearing mode growth rate by substituting this solution into the matching condition (\[boundary\_condition\]). Before considering the general case, however, we discuss the important limit of $\beta\ll 1$, the so-called FKR case [@FKR] (we recall that $\beta^2=\lambda^3/\eta$). In this limit, we can approximate $q^{(\beta-1)/4}\approx q^{-1/4}$ in Eq. (\[Gintegral4\]). It is possible to show that this approximation is equivalent to the “constant-$\psi$” approximation discussed after Eq. (\[the\_eq\]), which demonstrates the equivalence of the constant-$\psi$ approximation to the FKR case. We see that in this case the integral depends on $\zeta$ only through the combination $\beta\zeta$. The matching condition (\[boundary\_condition\]) now reads $$\begin{aligned}
C\beta^{3/2}=\lambda\Delta',\label{bc_fkr}\end{aligned}$$ where the constant $C$ is given by the integral $$\begin{aligned}
C=\int\limits_0^{\infty} \sqrt{\frac{2}{x}}\frac{d}{dx}\left[\int\limits_0^1q^{-\frac{1}{4}}\frac{d}{dq}\left\{ \left(q+1\right)^{\frac{1}{2}}e^{\frac{x}{2}\frac{q-1}{q+1}}\right\}dq\right]dx.\quad\quad\end{aligned}$$ One can easily do this integral by changing the order of integration. The answer is expressed in terms of gamma functions, $$\begin{aligned}
C=\left(\frac{\pi}{2}\right)\frac{\Gamma\left(\frac{3}{4}\right)}{\Gamma\left(\frac{5}{4}\right)}\approx 2.12.\label{c_fkr}\end{aligned}$$
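As a quick numerical check (a minimal sketch using scipy; not part of the original text), the constant (\[c\_fkr\]) and the prefactor $C^{-4/5}$ appearing in the growth rate below can be evaluated as follows.

```python
from math import pi
from scipy.special import gamma

# FKR matching constant C = (pi/2) * Gamma(3/4) / Gamma(5/4) from Eq. (c_fkr).
C = (pi / 2.0) * gamma(0.75) / gamma(1.25)
print(C)            # ~2.12
print(C ** (-0.8))  # ~0.55, the prefactor C^(-4/5) in the FKR growth rate
```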
From Eq. (\[bc\_fkr\]) we find the growth rate of the tearing mode in the FKR regime of $\beta\ll 1$, $$\begin{aligned}
\lambda=C^{-4/5}\eta^{3/5}{\Delta'}^{\,4/5}.\label{lambda_fkr}\end{aligned}$$ In the [*dimensional*]{} form, this expression can be rewritten as: $$\begin{aligned}
\gamma=2^{4/5}C^{-4/5}\eta^{3/5}k_0^{-2/5}V_A^{2/5}a^{-2}\label{gamma_tanh}\end{aligned}$$ for the $\tanh$-shaped magnetic profile, and $$\begin{aligned}
\gamma=8^{4/5}\pi^{-4/5}C^{-4/5}\eta^{3/5}k_0^{-6/5}V_A^{2/5}a^{-14/5}\label{gamma_sin}\end{aligned}$$ for the sine-shaped magnetic profile. In order to obtain these results we have substituted the expressions (\[delta\_tanh\]) and (\[delta\_sin\]) for the corresponding parameters $\Delta'$.
Two important points should be made about the FKR solution. First, since in this limit the inner function $G(\zeta)$ depends on scale only through the combination $\beta\zeta$, this function has a characteristic length scale, $\zeta=1/\beta$, which is the so-called [*inner scale*]{}. In terms of the $\xi$ variable, this scale is $\xi=(\lambda\eta)^{1/4}\gg \lambda$. In dimensional form the inner scale $x=\delta$ is given by $$\begin{aligned}
\delta=\left(\frac{\gamma\eta a^2}{k_0^2 V_A^2} \right)^{1/4},\label{inner_scale}\end{aligned}$$ where the growth rate $\gamma$ is given by either (\[gamma\_tanh\]) or (\[gamma\_sin\]) depending on the chosen magnetic profile.
Second, we see that the growth rate and the inner scale formally diverge for $k_0\to 0$. This is obviously an unphysical behavior. It reflects the fact that the approximation $\beta\ll 1$ is not valid in this case. We will now present the solution for the growth rate in the general case, where the inner function $G(\zeta)$ is given by the exact expression (\[Gintegral4\]). We will see that the growth rate does not diverge, but reaches a maximal value at a certain small value of $k_0a$.
Tearing rate in the general case
================================
Here we analyze the general case, the so-called Coppi case [@coppi1966; @coppi_resistive_1976; @abc1978]. We need to evaluate the integral in the left-hand-side of (\[boundary\_condition\]) using the exact expression for the $G$ function given by (\[Gintegral4\]). This can be easily done in the same way as we obtained (\[c\_fkr\]), $$\begin{aligned}
-2\int\limits_0^\infty \frac{1}{\sqrt{\zeta}}\frac{\partial G}{\partial\zeta}d\zeta=-\frac{\pi}{8}\beta^{3/2}\frac{\Gamma\left(\frac{\beta-1}{4}\right)}{\Gamma\left(\frac{\beta+5}{4}\right)}.\end{aligned}$$ The growth rate is found from the transcendental equation [@coppi_resistive_1976]: $$\begin{aligned}
-\frac{\pi}{8}\beta^{3/2}\frac{\Gamma\left(\frac{\beta-1}{4}\right)}{\Gamma\left(\frac{\beta+5}{4}\right)}=\lambda\Delta'.\label{general_condition}\end{aligned}$$ We see that the left-hand-side of this expression is positive when $\beta<1$, and, therefore, the instability is possible only in this case. In the limit of $\beta\ll 1$, we recover the results discussed in the previous section. The left-hand-side of Eq. (\[general\_condition\]) is small in this limit. The low-$\beta$ approximation, however, breaks down at $k_0\to 0$. As we will see momentarily, in this limit the solution corresponds to $\beta\to 1$. Indeed, equation (\[general\_condition\]) can be approximated in this case as $$\begin{aligned}
\sqrt{\pi}\frac{\beta^{3/2}}{1-\beta}=\lambda\Delta'.\end{aligned}$$ Recalling now that $\beta^2\equiv \lambda^3/\eta$, we arrive at the equation for the growth rate $$\begin{aligned}
\sqrt{\pi}\frac{\beta^{5/6}}{1-\beta}=\eta^{1/3}\Delta'.\end{aligned}$$ From the definitions of the dimensionless parameters $\eta$ and $\Delta'$, we see that as $k_0a\to 0$, the right-hand-side of this equation diverges. This is possible only if $\beta\to 1$ for such solutions. We, therefore, see that $\lambda^3=\eta$ is the equation that defines the tearing growth rate in this case. In the [*dimensional*]{} units, this equation gives $$\begin{aligned}
\gamma=\eta^{1/3}V_A^{2/3}k_0^{2/3}a^{-2/3},\label{gamma_coppi}\end{aligned}$$ which is termed the Coppi solution in [@loureiro_instability_2007]. The remarkable fact is that as $k_0$ decreases, the Coppi growth rate decreases as well. This is opposite to the behavior of the FKR growth rate, which increases with decreasing $k_0$. This means that there must exist a maximal growth rate of the tearing instability, $\gamma_*$, attainable at a certain wave number ${k_0}_*$. One can [*define*]{} this critical wave number as the wave number at which the Coppi growth rate (\[gamma\_coppi\]) formally matches the FKR growth rate (\[gamma\_tanh\]) for the $\tanh$-shaped magnetic profile or (\[gamma\_sin\]) for the sine-shaped profile.
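As an illustration (a minimal numerical sketch with hypothetical parameter values; not part of the original text), the transcendental equation (\[general\_condition\]) can be solved for the dimensionless growth rate $\lambda$ at given dimensionless $\eta$ and $\Delta'$. Since the instability requires $\beta<1$, the root is bracketed by $0<\lambda<\eta^{1/3}$.

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def dispersion(lam, eta, delta_p):
    # Left-hand side minus right-hand side of Eq. (general_condition),
    # with beta^2 = lambda^3 / eta.
    beta = np.sqrt(lam**3 / eta)
    lhs = -(np.pi / 8.0) * beta**1.5 * gamma((beta - 1.0) / 4.0) / gamma((beta + 5.0) / 4.0)
    return lhs - lam * delta_p

eta, delta_p = 1.0e-6, 50.0   # hypothetical illustrative values
lam = brentq(dispersion, 1e-12, 0.999999 * eta**(1.0 / 3.0), args=(eta, delta_p))
print(lam, np.sqrt(lam**3 / eta))   # growth rate lambda and the corresponding beta
```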
For the tanh-shaped magnetic profile, simple algebra gives for the critical wavenumber and the corresponding maximal tearing growth rate: $$\begin{aligned}
{k_0}_*=(2/C)^{3/4}\eta^{1/4}V_A^{-1/4}a^{-5/4},\\
\gamma_*=(2/C)^{1/2}\eta^{1/2}V_A^{1/2}a^{-3/2}.\label{gamma_tanh_coppi}\end{aligned}$$ For the sine-shaped profile, the answer is: $$\begin{aligned}
{k_0}_*=(8/\pi C)^{3/7}\eta^{1/7}V_A^{-1/7}a^{-8/7}, \\
\gamma_*=(8/\pi C)^{2/7}\eta^{3/7}V_A^{4/7}a^{-10/7}.\label{gamma_sin_coppi}\end{aligned}$$ In the general case, the inner solution $G(\zeta)$ is not a universal function depending only on $\beta\zeta$. However, for $\beta\approx 1$, this function approaches its asymptotic behavior $G(\zeta)\sim 1/\zeta$ at $\zeta\gg 1$. The typical scale (the inner scale) of this solution is therefore $\zeta=1$, which in terms of the $\xi$ variable reads $\xi=\lambda$. In dimensional variables, the inner scale in this case is $$\begin{aligned}
\delta=\frac{\gamma a}{k_0V_A},\end{aligned}$$ where $\gamma$ is given by Eq. (\[gamma\_coppi\]), or by expressions (\[gamma\_tanh\_coppi\]) or (\[gamma\_sin\_coppi\]) for the fastest growing modes.
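To make these scalings concrete (a minimal sketch with illustrative, hypothetical Lundquist numbers; not part of the original text), the fastest-growing-mode estimates (\[gamma\_tanh\_coppi\]) and (\[gamma\_sin\_coppi\]) can be evaluated in units $a=V_A=1$, so that $\eta=1/S$ with $S=aV_A/\eta$.

```python
import numpy as np
from scipy.special import gamma

C = (np.pi / 2.0) * gamma(0.75) / gamma(1.25)   # FKR constant, ~2.12

def fastest_mode(S, profile):
    """Return (k0* a, gamma* a/V_A) for the fastest growing mode, with a = V_A = 1."""
    eta = 1.0 / S
    if profile == "tanh":
        return (2.0 / C)**0.75 * eta**0.25, (2.0 / C)**0.5 * eta**0.5
    # sine-shaped profile
    return ((8.0 / (np.pi * C))**(3.0 / 7.0) * eta**(1.0 / 7.0),
            (8.0 / (np.pi * C))**(2.0 / 7.0) * eta**(3.0 / 7.0))

for S in (1e5, 1e6, 1e7):
    print(S, fastest_mode(S, "tanh"), fastest_mode(S, "sine"))
```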
Tearing rates in the presence of a shear flow {#shear_flow}
=============================================
In turbulent systems, magnetic field fluctuations are accompanied by velocity fluctuations, so that both magnetic and velocity shears are present in a turbulent eddy, e.g., [@boldyrev_spectrum_2006]. It is therefore useful to comment on the influence of a velocity shear on the tearing instability. A velocity shear across the current layer is expected to be less intense than the magnetic shear; otherwise, such a layer would be destroyed by the Kelvin-Helmholtz instability, e.g., [@biskamp2003; @walker2018]. In MHD turbulence, fluctuations indeed have an excess of magnetic energy over kinetic energy; the difference between the kinetic and magnetic energies, the so-called residual energy, is negative in the inertial interval, e.g., [@boldyrev_res2011; @wang_res2011; @boldyrev_res2012; @boldyrev_res2012a; @chen_res2013].
We assume that, similarly to the magnetic field, the background velocity has the structure ${\bf v}_0(x,y)=v_0(x){\hat {\bf y}}$. In the presence of this velocity field, the dimensionless system of equations (\[general\_eq\_1\]), (\[general\_eq\_2\]) becomes $$\begin{aligned}
\lambda \psi= f\phi +\eta\left[\psi''-\epsilon^2\psi\right]-i{\tilde v}_0\psi,\label{v_general_eq_1}\\
\lambda\left[\phi''-\epsilon^2\phi\right]=-f\left(\psi''-\epsilon^2\psi \right)+f''\psi -i{\tilde v}_0\left(\phi''-\epsilon^2\phi\right)+i{\tilde v}^{\prime\prime}_0\phi,\label{v_general_eq_2}\end{aligned}$$ where ${\tilde v}_0=v_0(x)/V_A$ is the background velocity profile normalized by the Alfvén speed associated with the magnetic profile. Similarly to equations (\[general\_eq\_1\]), (\[general\_eq\_2\]), the modified equations can be simplified in the case of small $\eta$ and $\lambda$ as: $$\begin{aligned}
\lambda \psi= f\phi +\eta\psi''-i{\tilde v}_0\psi,\label{v_eqs_1}\\
\lambda\phi''=-f\left(\psi''-\epsilon^2\psi \right)+f''\psi -i{\tilde v}_0\left(\phi''-\epsilon^2\phi\right)+i{\tilde v}^{\prime\prime}_0\phi.\label{v_eqs_2}\end{aligned}$$
In the [*outer region*]{}, we have from Eqs. (\[v\_eqs\_1\]), (\[v\_eqs\_2\]): $$\begin{aligned}
\lambda \psi= f\phi -i{\tilde v}_0\psi,\label{v_eqs_1a}\\
-f\left(\psi''-\epsilon^2\psi \right)+f''\psi -i{\tilde v}_0\left(\phi''-\epsilon^2\phi\right)+i{\tilde v}^{\prime\prime}_0\phi=0.\label{v_eqs_2a}\end{aligned}$$ A general analysis of the problem is, unfortunately, not very transparent, e.g., [@chen_morrison1990]. A simplified but quite informative treatment is, however, possible when the velocity profile is similar to that of the magnetic field [@hofmann1975]. We therefore assume that ${\tilde v}_0(x)=\alpha f(x)$, where $-1<\alpha <1$. We see from Eq. (\[v\_eqs\_1a\]) that the shear velocity introduces a Doppler shift that competes with the growth rate (${\tilde v}(\xi)/\lambda=k_0v(\xi)/\gamma$ in dimensional units). Since in the outer region $v\lesssim V_A$, the Doppler shift dominates, and we can neglect the term containing $\lambda$ in Eq. (\[v\_eqs\_1a\]). Expressing $\phi$ from Eq. (\[v\_eqs\_1a\]), and substituting it into Eq. (\[v\_eqs\_2a\]) one obtains after simple algebra [@hofmann1975]: $$\begin{aligned}
\left(1-\alpha^2\right)\left\{\psi''-\epsilon^2\psi-\frac{f''}{f}\psi \right\}=0. \end{aligned}$$ Since $\alpha^2\neq 1$, the magnetic-field outer equation in this case is identical to the outer equation without a velocity shear (\[out1\]). The asymptotic behavior of the outer solution for the velocity field is, however, different from Eqs. (\[asympt\_tanh\], \[asympt\_sin\]), and it can be found from Eq. (\[v\_eqs\_1a\]) to the zeroth order in $\lambda/{\tilde v}_0$: $$\begin{aligned}
\phi (\xi)\sim i\alpha \psi(\xi), \quad \xi\ll 1.\label{v_phi_asympt}\end{aligned}$$ This expression holds in the asymptotic matching region $\delta \ll \xi \ll 1$, where $\delta$ is the inner scale.
In the [*inner region*]{} $\delta\sim \xi\ll 1$ we have $$\begin{aligned}
\lambda\psi +i\alpha\xi\psi=\xi\phi+\eta\psi'' ,\label{v_inner_1}\\
\lambda\phi''+i\alpha\xi\phi''=-\xi\psi'' .\label{v_inner_2}\end{aligned}$$ In order to match the inner solution for the velocity field with the outer solution, we derive from Eq. (\[v\_inner\_2\]): $$\begin{aligned}
\lambda \int\limits^{+\infty}_{-\infty}\frac{\phi''}{\xi}d\xi=-i\alpha\phi'\vert^{+}_{-}-\psi'\vert^{+}_{-},\end{aligned}$$ where the integral goes from $\xi\ll -\delta$ to $\xi\gg \delta$, and we denote by $\vert^+_-$ the jumps of the quantities across the inner layer between the indicated limits. We can use the asymptotic form (\[v\_phi\_asympt\]) to evaluate the jump of $\phi'$. From Eq. (\[v\_phi\_asympt\]) we get $$\begin{aligned}
\lambda \int\limits^{+\infty}_{-\infty}\frac{\phi''}{\xi}d\xi \,\sim \, -\left(1-\alpha^2\right)\psi(0)\Delta'.\label{matching}\end{aligned}$$ In order to do the integral in Eq. (\[matching\]) we need to know the velocity function $\phi(\xi)$ in the inner region. To the best of our knowledge, the exact solution of the inner equations (\[v\_inner\_1\], \[v\_inner\_2\]) is not available. We may, however, estimate the integral in the left-hand-side of Eq. (\[matching\]) in the following way. We note that if $\lambda\gg \alpha\delta$, then the shear flow does not affect the inner region of the tearing mode. The inner solution (and the resulting scaling of the growth rate with the Lundquist number $S=aV_A/\eta$ and the anisotropy parameter $k_0a$) can, therefore, be qualitatively affected by the shear flow only if $\lambda \ll \alpha \delta$, which is essentially the FKR limit. We thus assume this limit in what follows.
Similarly to the case without a flow, the shear-modified FKR limit implies the “constant-$\psi$” approximation, which reads to the zeroth order in the small parameter $\lambda/(\alpha\delta)$: $\phi(\xi)\sim i\alpha\psi(\xi)\sim$ const at $\xi\ll 1$. This solution trivially satisfies Eqs. (\[v\_inner\_1\], \[v\_inner\_2\]) to the zeroth order. Obviously, the zeroth order solution does not contribute to the integral in (\[matching\]), so in order to evaluate this integral we need to go to the first order in $\lambda/(\alpha\delta)$. From Eq. (\[v\_inner\_2\]) we estimate $\psi''\sim -i\alpha \phi''$ and substituting this into Eq. (\[v\_inner\_1\]), we get $$\begin{aligned}
\lambda\psi(0) +\alpha^2\xi\phi=\xi\phi-i{\eta}{\alpha}\phi'' ,\label{b_inner_1}\end{aligned}$$ where in the first term in the left-hand-side we have substituted the zeroth order solution. In the region $\xi \gg \delta$, the resistive term is not important, and we obtain the asymptotic form for the first-order velocity field: $$\begin{aligned}
\phi\sim \frac{\lambda\psi(0)}{(1-\alpha^2)\xi}.\label{v_inner_asympt}\end{aligned}$$ This expression does contribute to the integral in Eq. (\[matching\]). It diverges as $\xi$ decreases, until $\xi$ becomes as small as $\delta$, at which scale the resistive term becomes important and $\phi$ does not grow anymore. We may then estimate the integral in Eq. (\[matching\]) as $$\begin{aligned}
\lambda\int\limits^{+\infty}_{-\infty}\frac{\phi''}{\xi}d\xi\,\,\sim \,\, 2\lambda\int\limits^{+\infty}_{\delta}\frac{\phi''}{\xi}d\xi
\,\,\sim \,\, -\frac{\lambda^2\psi(0)}{(1-\alpha^2)\delta^3}\,,\label{estimate}\end{aligned}$$ where the inner scale $\delta$ is, in turn, estimated from balancing the resistive term in Eq. (\[b\_inner\_1\]) with the other terms: $\delta^3\sim \alpha\eta/(1-\alpha^2)$. Substituting this into Eq. (\[estimate\]) and then into Eq. (\[matching\]) we finally obtain $$\begin{aligned}
\lambda \sim \left[\eta\,|\alpha | \left(1-\alpha^2\right)\Delta'\right]^{1/2}.\end{aligned}$$ This result coincides with the more detailed derivations performed in [@hofmann1975; @chen_morrison1990] up to a numerical coefficient of order unity.
![Sketch of the tearing-mode growth rate as a function of $(k_0a)$ for the tanh-like magnetic profile (upper panel) and the sine-like profile (lower panel). The solid lines correspond to the growth rates without a shear flow. The dashed lines show the growth rates in the shear-modified FKR regimes. Here $S=aV_A/\eta$ and $\kappa=|\alpha|\left(1-\alpha^2 \right)<1$.[]{data-label="v_gammas"}](gamma_flow_tanh.eps "fig:"){width="\columnwidth"} ![Sketch of the tearing-mode growth rate as a function of $(k_0a)$ for the tanh-like magnetic profile (upper panel) and the sine-like profile (lower panel). The solid lines correspond to the growth rates without a shear flow. The dashed lines show the growth rates in the shear-modified FKR regimes. Here $S=aV_A/\eta$ and $\kappa=|\alpha|\left(1-\alpha^2 \right)<1$.[]{data-label="v_gammas"}](gamma_flow_sine.eps "fig:"){width="\columnwidth"}
In [*dimensional*]{} variables, this growth rate takes the following form for the tanh-like magnetic profile: $$\begin{aligned}
\gamma\sim \kappa^{1/2}\eta^{1/2}V_A^{1/2}a^{-3/2},\label{v_gamma_tanh}\end{aligned}$$ while for the sine-like profile we obtain $$\begin{aligned}
\gamma\sim \kappa^{1/2}\eta^{1/2}V_A^{1/2}a^{-2}k_0^{-1/2},\end{aligned}$$ where we denote $\kappa\equiv |\alpha | \left(1-\alpha^2\right)< 1$. These solutions are shown in Fig. (\[v\_gammas\]), together with the expressions (\[gamma\_tanh\]), (\[gamma\_sin\]), and (\[gamma\_coppi\]), corresponding to the cases without a flow. We see that a shear flow changes the scaling of the growth rate in the FKR regime, but not in the Coppi regime. In particular, it does not affect the scaling of the fastest growing mode.
Note that the growth rate (\[v\_gamma\_tanh\]) corresponding to the tanh-like profile is degenerate in that it is independent of the anisotropy parameter $k_0 a$. This explains why the early analysis of [@carbone1990] based on the Iroshnikov-Kraichnan theory of MHD turbulence, which assumes isotropy of the turbulent fluctuations (i.e., it implies $k_0a\sim$ const), formally led to the same scaling of the fastest growing mode, $\gamma_* \sim S^{-1/2}$, as the analysis of [@loureiro2017; @mallet2017] for the tanh-like magnetic profile, cf. Eq. (\[gamma\_tanh\_coppi\]). In the non-degenerate sine-like case, however, the fastest growth rate scales as $\gamma_*\sim S^{-3/7}$, cf. Eq. (\[gamma\_sin\_coppi\]), which is different from $\gamma_* \sim S^{-1/2}$ assumed in [@carbone1990].
Conclusions
===========
We have reviewed the derivation of the anisotropic tearing mode by considering in detail two solvable cases corresponding to the tanh-shaped and sine-shaped magnetic shear profiles. Given large enough anisotropy, the dominant tearing mode has the characteristic size $\sim 1/{k_0}_*$ and grows at the rate $\gamma_*$. We see that these parameters depend on the assumed magnetic shear profile, and they are not universal. Their derivation requires one to go beyond the simplified FKR model generally covered in textbooks. We have presented an effective method for solving the inner equation for the current layer in the general case. We have also discussed the influence on the tearing instability of shear flows, which typically accompany magnetic profiles generated by turbulence. Our work provides a detailed and self-contained discussion of the methods required for the study of tearing effects in turbulent systems.
SB was partly supported by the NSF grant no. PHY-1707272, NASA Grant No. 80NSSC18K0646, and by the Vilas Associates Award from the University of Wisconsin - Madison. NFL was supported by the NSF-DOE Partnership in Basic Plasma Science and Engineering, award no. DE-SC0016215 and by the NSF CAREER award no. 1654168.
References {#references .unnumbered}
==========
[Matthaeus]{} W H and [Lamkin]{} S L 1986 [*Physics of Fluids*]{} [**29**]{} 2513–2534
[Biskamp]{} D 2003 [*[Magnetohydrodynamic Turbulence]{}*]{} (Cambridge, UK: Cambridge University Press)
[Servidio]{} S, [Matthaeus]{} W H, [Shay]{} M A, [Cassak]{} P A and [Dmitruk]{} P 2009 [*Physical Review Letters*]{} [**102**]{} 115003
[Servidio]{} S, [Dmitruk]{} P, [Greco]{} A, [Wan]{} M, [Donato]{} S, [Cassak]{} P A, [Shay]{} M A, [Carbone]{} V and [Matthaeus]{} W H 2011 [*Nonlinear Processes in Geophysics*]{} [**18**]{} 675–695
[Wan]{} M, [Matthaeus]{} W H, [Servidio]{} S and [Oughton]{} S 2013 [*Physics of Plasmas*]{} [**20**]{} 042307
[Zhdankin]{} V, [Uzdensky]{} D A, [Perez]{} J C and [Boldyrev]{} S 2013 [*[The Astrophysical Journal]{}*]{} [**771**]{} 124 (*Preprint* )
[Tobias]{} S M, [Cattaneo]{} F and [Boldyrev]{} S 2013 [*[MHD Dynamos and Turbulence, [*in*]{} Ten Chapters in Turbulence, ed. P.A. Davidson, Y. Kaneda, and K.R. Sreenivasan : Cambridge University Press, p. 351-404. ]{}*]{} (*Preprint* )
[Zhdankin]{} V, [Boldyrev]{} S, [Perez]{} J C and [Tobias]{} S M 2014 [*[The Astrophysical Journal]{}*]{} [**795**]{} 127 (*Preprint* )
Davidson P A 2017 [*An introduction to magnetohydrodynamics*]{} 2nd ed vol 25 (Cambridge University Press)
[Chen]{} C H K and [Boldyrev]{} S 2017 [*The Astrophysical Journal*]{} [**842**]{} 122 (*Preprint* )
[Boldyrev]{} S 2005 [*[The Astrophysical Journal]{}*]{} [**626**]{} L37–L40 (*Preprint* )
[Boldyrev]{} S 2006 [*Physical Review Letters*]{} [**96**]{} 115002 (*Preprint* )
[Chen]{} C H K, [Mallet]{} A, [Schekochihin]{} A A, [Horbury]{} T S, [Wicks]{} R T and [Bale]{} S D 2012 [*Astrophysical Journal*]{} [**758**]{} 120 (*Preprint* )
[Chandran]{} B D G, [Schekochihin]{} A A and [Mallet]{} A 2015 [*The Astrophysical Journal*]{} [**807**]{} 39 (*Preprint* )
[Mallet]{} A, [Schekochihin]{} A A, [Chandran]{} B D G, [Chen]{} C H K, [Horbury]{} T S, [Wicks]{} R T and [Greenan]{} C C 2016 [*Monthly Notices of the Royal Astronomical Society*]{} [**459**]{} 2130–2139 (*Preprint* )
Loureiro N F and Boldyrev S 2017 [*Physical Review Letters*]{} [**118**]{}(24) 245101
[Mallet]{} A, [Schekochihin]{} A A and [Chandran]{} B D G 2017 [*[Monthly Notices of the Royal Astronomical Society]{}*]{} [**468**]{} 4862–4871 (*Preprint* )
[Boldyrev]{} S and [Loureiro]{} N F 2017 [*[The Astrophysical Journal]{}*]{} [**844**]{} 125 (*Preprint* )
[Loureiro]{} N F and [Boldyrev]{} S 2017 [*[The Astrophysical Journal]{}*]{} [**850**]{} 182 (*Preprint* )
[Mallet]{} A, [Schekochihin]{} A A and [Chandran]{} B D G 2017 [*Journal of Plasma Physics*]{} [**83**]{} 905830609 (*Preprint* )
[Comisso]{} L, [Huang]{} Y M, [Lingam]{} M, [Hirvijoki]{} E and [Bhattacharjee]{} A 2018 [*[The Astrophysical Journal]{}*]{} [**854**]{} 103 (*Preprint* )
[Vech]{} D, [Mallet]{} A, [Klein]{} K G and [Kasper]{} J C 2018 [*[The Astrophysical Journal]{}*]{} [**855**]{} L27 (*Preprint* )
Furth H P, Killeen J and Rosenbluth M N 1963 [*Physics of Fluids*]{} [**6**]{} 459–484 ISSN 1070-6631
[Harris]{} E G 1962 [*Il Nuovo Cimento*]{} [**23**]{} 115–121
[Walker]{} J, [Boldyrev]{} S and [Loureiro]{} N 2018 [*[Physical Review E]{}*]{} (*Preprint* )
[Loureiro]{} N F and [Boldyrev]{} S 2018 [*ArXiv e-prints*]{} (*Preprint* )
[Pucci]{} F, [Velli]{} M, [Tenerani]{} A and [Del Sarto]{} D 2018 [*Physics of Plasmas*]{} [**25**]{} 032113 (*Preprint* )
[Coppi]{} B, [Greene]{} J M and [Johnson]{} J L 1966 [*Nuclear Fusion*]{} [**6**]{} 101
[Coppi]{} B, [Galvão]{} R, [Pellat]{} R, [Rosenbluth]{} M and [Rutherford]{} P 1976 [*Soviet Journal of Plasma Physics*]{} [**2**]{} 961–966
[Ara]{} G, [Basu]{} B, [Coppi]{} B, [Laval]{} G, [Rosenbluth]{} M N and [Waddell]{} B V 1978 [*Annals of Physics*]{} [**112**]{} 443–476
[Porcelli]{} F 1987 [*Physics of Fluids*]{} [**30**]{} 1734–1742
Loureiro N F, Schekochihin A A and Cowley S C 2007 [*Physics of Plasmas*]{} [**14**]{} 100703–100703–4 ISSN 1070664X
[Mason]{} J, [Cattaneo]{} F and [Boldyrev]{} S 2006 [*Physical Review Letters*]{} [**97**]{} 255002 (*Preprint* )
[Mason]{} J, [Perez]{} J C, [Boldyrev]{} S and [Cattaneo]{} F 2012 [*Physics of Plasmas*]{} [**19**]{} 055902–055902 (*Preprint* )
[Perez]{} J C, [Mason]{} J, [Boldyrev]{} S and [Cattaneo]{} F 2012 [*Physical Review X*]{} [**2**]{} 041005 (*Preprint* )
[Chandran]{} B D G, [Schekochihin]{} A A and [Mallet]{} A 2015 [*The Astrophysical Journal*]{} [**807**]{} 39 (*Preprint* )
Pucci F and Velli M 2014 [*The Astrophysical Journal Letters*]{} [**780**]{} L19 ISSN 2041-8205 <http://iopscience.iop.org/2041-8205/780/2/L19>
D A and [Loureiro]{} N F 2016 [*Physical Review Letters*]{} [ **116**]{} 105003 (*Preprint* )
A, [Velli]{} M, [Pucci]{} F, [Landi]{} S and [Rappazzo]{} A F 2016 [ *Journal of Plasma Physics*]{} [**82**]{} 535820501 (*Preprint* )
C, [Wang]{} L, [Huang]{} Y M, [Comisso]{} L and [Bhattacharjee]{} A 2018 [ *ArXiv e-prints*]{} (*Preprint* )
V, [Veltri]{} P and [Mangeney]{} A 1990 [*Physics of Fluids A*]{} [ **2**]{} 1487–1496
P S 1963 [*Astron. Zh.*]{} [**40**]{} 742
R H 1965 [*Physics of Fluids*]{} [**8**]{} 1385–1387
I 1975 [*Plasma Physics*]{} [**17**]{} 143–157
M, [Veltri]{} P and [Mangeney]{} A 1983 [*Journal of Plasma Physics*]{} [**29**]{} 393–407
G and [Rubini]{} F 1986 [*Physics of Fluids*]{} [**29**]{} 2563–2568
X L and [Morrison]{} P J 1990 [*Physics of Fluids B*]{} [**2**]{} 495–507
E A, [Loureiro]{} N F and [Uzdensky]{} D A 2018 [*Journal of Plasma Physics*]{} [**84**]{} 905840115 (*Preprint* )
N F, [Schekochihin]{} A A and [Uzdensky]{} D A 2013 [*[Physical Review E]{}*]{} [ **87**]{} 013102 (*Preprint* )
M and [Porcelli]{} F 1993 [*Physical Review Letters*]{} [**71**]{} 3802–3805
R B 1983 [*[Resistive instabilities and field line reconnection.]{}*]{} (Handbook of plasma physics. Vol. 1: Basic plasma physics I.. A. A. Galeev, R. N. Sudan (Editors).North-Holland Publishing Company, Amsterdam - New York - Oxford. 19+751 pp. (1983).)
S, [Perez]{} J C, [Borovsky]{} J E and [Podesta]{} J J 2011 [*[The Astrophysical Journal]{}*]{} [**741**]{} L19 (*Preprint* )
Y, [Boldyrev]{} S and [Perez]{} J C 2011 [*[The Astrophysical Journal]{}*]{} [**740**]{} L36 (*Preprint* )
S, [Perez]{} J C and [Zhdankin]{} V 2012 [*American Institute of Physics Conference Series*]{} ([*American Institute of Physics Conference Series*]{} vol 1436) ed [Heerikhuisen]{} J, [Li]{} G, [Pogorelov]{} N and [Zank]{} G pp 18–23 (*Preprint* )
S, [Perez]{} J C and [Wang]{} Y 2012 [*Numerical Modeling of Space Plasma Flows (ASTRONUM 2011)*]{} ([*Astronomical Society of the Pacific Conference Series*]{} vol 459) ed [Pogorelov]{} N V, [Font]{} J A, [Audit]{} E and [Zank]{} G P p 3 (*Preprint* )
C H K, [Bale]{} S D, [Salem]{} C S and [Maruca]{} B A 2013 [*[The Astrophysical Journal]{}*]{} [ **770**]{} 125 (*Preprint* )
[^1]: These ideas stem from the observation that if current sheets were allowed to have arbitrarily large aspect ratios, they would be tearing unstable at rates that diverge when the Lundquist number tends to infinity [@pucci_reconnection_2014; @uzdensky_loureiro2016; @tenerani2016].
[^2]: We were not aware of this important early work at the time when our previous studies [@loureiro2017; @boldyrev_2017] were published.
[^3]: This follows from the fact that the velocity function $Y(\xi)$ is analytic at $\xi=0$.
[^4]: Indeed, the function $P_0=G_0\exp\left(\beta^2\zeta/2\right)$ satisfies the equation $4\zeta P_0''=\left[2+4\zeta\beta^2\right]P_0'+\left[\beta^2-\beta^4\right]\zeta P_0$. Its even solution has the asymptotic behavior $P_0\sim a_0+a_0\left(\beta^2/4\right)\left(1-\beta^2\right)\zeta^2$ at small $\zeta$. So, when $\beta<1$, the function itself and, due to the equation it satisfies, its first and second derivatives have the same sign at $\zeta>0$. Therefore, the function $P_0$ diverges at infinity, which means that $G_0$ must diverge as well.
[^5]: It is easy to see that the solution of the homogeneous equation is given by the integral $$\begin{aligned}
G_0(\zeta)=A_0\int\limits_{-\infty}^{+\infty}\left[1+\frac{4k^2}{\beta^2} \right]^{-\frac{5}{4}}\left[\frac{1+\frac{2ik}{\beta}}{1-\frac{2ik}{\beta}} \right]^{\frac{\beta}{4}}\exp(ik\zeta)\,dk,\end{aligned}$$ where $A_0$ is an arbitrary real constant.
|
There is growing evidence that electron-phonon coupling plays an important role in determining exotic properties of novel materials such as colossal magnetoresistance [@jin] and high-$T_c$ compounds [@bednorz]. Since electrons in these materials are strongly correlated, the interplay between an attractive electron phonon interaction and Coulomb repulsion may be important in determining physics at finite doping. In particular, when the electron-phonon interaction is local, as is the case in the Holstein model, finite Coulomb repulsion leads to the formation of an intra-site bipolaron [@proville; @fehske0; @bonca1], with an effective mass of the order of the polaron effective mass [@bonca1].
It has been recently discovered that a longer-range electron-phonon interaction leads to a decrease in the effective mass of a polaron in the strong-coupling regime [@alex; @fehske]. The lower mass can have important consequences, because lighter polarons and bipolarons are more likely to remain mobile, and less likely to trap on impurities or from mutual repulsion. Motivated by this discovery, we investigate a simplified version of the Fröhlich model in the case of two electrons, $$\begin{aligned}
H = &-&t \sum_{js} ( c_{j+1,s}^\dagger c_{j,s} + H.c.) \label{ham}\\
&-&\omega g_0\sum_{jls} f_l(j) c_{j,s}^\dagger c_{j,s}
( a_l + a_l^\dagger) \nonumber \\
&+& \omega \sum_j a_j^\dagger a_j +
U\sum_{j}n_{j\uparrow}n_{j\downarrow},\nonumber\end{aligned}$$ where $c_{j,s}^\dagger$ creates an electron of spin $s$ and $a_{j}^\dagger$ creates a phonon on site $j$. The second term represents the coupling of an electron on site $j$ with an ion on site $l$, where $g_0$ is the dimensionless electron-phonon coupling constant. While in general a long-range electron-phonon coupling $f_l(j)$ is considered [@alex; @fehske], we further simplify this model by placing ions in the interstitial sites located between Wannier orbitals, as occurs in certain oxides [@tsuda], shown in Fig. (\[model\]a). In this case it is natural to investigate a simplified model, where an electron located on site $j$ couples only to its two neighboring ions, [*i.e.*]{} $l=j\pm 1/2$. We describe such coupling with $f_{j\pm1/2}(j)=1$ and 0 otherwise, and refer to this model as the extended Holstein-Hubbard model (EHHM). We can view the EHHM as the simplest model with longer range than a single site, and use it to explore the qualitative change in physics in the simplest possible setting. While it is clear that in comparison to the Fröhlich model, our simplified EHHM lacks long-range tails in the electron-phonon interaction, the physical properties that depend predominantly on the short-range interaction should be similar. For example, calculating the polaron energy of the original Fröhlich model as defined in Refs. [@alex; @fehske], one finds that 94% of the total polaron energy comes from the first two sites.
In the case when $f_l(j)=\delta_{l,j}$, the model in Eq. (\[ham\]) maps onto a Holstein-Hubbard model (HHM) (see also Fig.(\[model\]b)). The last two terms in Eq. (\[ham\]) represent the energy of the Einstein oscillator with frequency $\omega$ and the on-site Coulomb repulsion between two electrons. We consider the case where two electrons with opposite spins ($S_z = 0$) couple to dispersionless optical phonons with polarization perpendicular to the chain.
In this Letter we use a recently developed, highly accurate numerical technique [@bonca; @bonca1], combined with a strong coupling expansion to study the simplified EHHM. Our main goal is to calculate physical properties such as the binding energy, effective mass, isotope effect, and the phase diagram of the EHHM bipolaron and compare them to the Holstein bipolaron that has been thoroughly studied recently [@bonca1]. Even though the two models appear very similar, we find profound differences between the physical properties of bipolarons within the EHHM and the HHM.
The numerical method that we use creates a systematically expandable variational space of phonon excitations in the vicinity of the two electrons [@bonca; @bonca1]. The variational method is defined on an infinite lattice and is not subject to finite-size effects. It allows the calculation of physical properties at any wavevector $k$. In the intermediate coupling regime where it is most accurate, it provides results that are variational in the thermodynamic limit and gives energies accurate to 14 digits for the polaron case and up to 7 digits for the bipolaron case.
To investigate the strong coupling regime of the EHHM, we use a Lang-Firsov [@lang] unitary transformation $\tilde H = e^S H e^{-S}$, where $S = g_0\sum_{jls} f_l(j) n_{js}(a_l-a_l^\dagger)$. This incorporates the exact distortion and interaction energies for static electrons into $H_0$, and leads to a transformed Hamiltonian $$\begin{aligned}
\tilde H &=& H_0 + T, \label{hamtilde}\\
H_0&=&\omega\sum_j a_j^\dagger a_j
-\omega g_0^2\sum_{ijl}f_l(i)f_l(j)n_i n_j
+ U\sum_{j}n_{j\uparrow}n_{j\downarrow} , \nonumber \\
T &=& -t e^{-\tilde g^2}\sum_{js} c_{j+1,s}^\dagger c_{j,s}
e^{-g_0\sum_l\left(f_l(j+1) - f_l(j)\right)a_l^\dagger} \nonumber \\
&&e^{ g_0\sum_l\left(f_l(j+1) - f_l(j)\right)a_l}
+ {\rm H.c.},\nonumber\end{aligned}$$ where $n_{j} = n_{j \uparrow} + n_{j \downarrow}$ and $\tilde g^2 = g_0^2\sum_l [ f_l(0)^2-f_l(0)f_l(1) ];$ $ \tilde g = g_0$ for the EHHM. The second term in $H_0$ gives the polaron energy, which in the EHHM case is $\epsilon_p=2\omega g_0^2$, while for the HHM, $\epsilon_p=\omega
g_0^2$. This term also includes the interaction between electrons located on neighboring sites, a consequence of the non-local electron-phonon interaction. As noted by Alexandrov and Kornilovitch [@alex], in the strong coupling regime a Fröhlich polaron has a much smaller effective mass than a Holstein polaron with the same polaron energy $\epsilon_p$. The reason for the lower mass in the Fröhlich case (as well as in the EHHM) is that the effective electron-phonon coupling that renormalizes hopping, $\tilde g^2=\gamma \epsilon_p/\omega$, is smaller (in the EHHM, $\gamma = 1/2$) than in the case of the HHM with $\gamma=1$. In the strong coupling EHHM polaron, the phonon is displaced on two sites. It is identical on one of these sites in the initial and the final state after the electron hop, resulting in a smaller mass enhancement from phonon overlap.
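For orientation, a minimal numerical sketch (illustrative only, with $\omega=g_0=1$ in the code) that simply evaluates the definitions quoted above for the two choices of $f_l(j)$ makes the difference explicit:

```python
# Illustrative sketch: evaluate the polaron energy eps_p = omega*g0^2*sum_l f_l(0)^2
# and the hopping-renormalization exponent gtilde^2 = g0^2*sum_l[f_l(0)^2 - f_l(0)*f_l(1)]
# for the EHHM and HHM coupling functions defined in the text (units omega = g0 = 1).
import math

def f_EHHM(l, j):
    # electron on site j couples only to the two interstitial ions at l = j +/- 1/2
    return 1.0 if abs(l - j) == 0.5 else 0.0

def f_HHM(l, j):
    # Holstein limit: purely local coupling, l = j
    return 1.0 if l == j else 0.0

def couplings(f, ion_sites, omega=1.0, g0=1.0):
    eps_p = omega * g0**2 * sum(f(l, 0)**2 for l in ion_sites)
    gtilde2 = g0**2 * sum(f(l, 0)**2 - f(l, 0) * f(l, 1) for l in ion_sites)
    return eps_p, gtilde2

ions_ehhm = [i + 0.5 for i in range(-3, 4)]   # interstitial ion positions
ions_hhm = list(range(-3, 4))                 # on-site ion positions

for name, f, ions in (("EHHM", f_EHHM, ions_ehhm), ("HHM", f_HHM, ions_hhm)):
    eps_p, gt2 = couplings(f, ions)
    # gamma = gtilde^2 * omega / eps_p  (omega = 1 here)
    print(f"{name}: eps_p = {eps_p} omega*g0^2, gtilde^2 = {gt2} g0^2, gamma = {gt2 / eps_p}")

# at equal polaron energy, e.g. eps_p/omega = 4, the hopping factor exp(-gtilde^2) is
print(math.exp(-4 / 2), "for the EHHM vs", math.exp(-4), "for the HHM")
```

At equal polaron energy, the EHHM hopping is thus suppressed only by $e^{-\epsilon_p/2\omega}$ instead of $e^{-\epsilon_p/\omega}$, which is the origin of the lighter polaron.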
In the anti-adiabatic limit where $g_0\to 0$ and $\omega\to\infty$ with $\omega g_0^2$ constant, the phonon interaction is instantaneous and our simplified EHHM model maps onto a generalized Hubbard model $$\begin{aligned}
H &=& -t \sum_{js} ( c_{j+1,s}^\dagger c_{j,s} + H.c.) \nonumber \\
&+&\tilde U\sum_{j}n_{j\uparrow}n_{j\downarrow}
+V\sum_{j}n_{j}n_{j+1},
\label{hubb}\end{aligned}$$ with an effective Hubbard interaction $\tilde U = U-4\omega g_0^2$ and $V=-2\omega g_0^2$. In the case of two electrons an analytical solution can be found. As many as three bound states may exist: two singlets and a triplet. In the case when $U=0$ there is always at least one singlet bound state. A triplet bound state with an energy $E=-2\omega
g_0^2-2t^2/\omega g_0^2$ exists only when $\omega g^2_0 > t$.
In the strong coupling limit, $T$ in Eq. (\[hamtilde\]) may be considered as a perturbation. In the case when $U<2\omega g_0^2$, the single site or S0 bipolaron, defined as $\phi_{S0}=c^\dagger_{0\uparrow}c^\dagger_{0\downarrow}\vert0\rangle$, has the lowest energy to zeroth order. In this regime the binding energy is $\Delta=E_{bi}^{S0}-2\epsilon_p=U-4\omega g_0^2$, where $E_{bi}^{S0}$ denotes the S0 bipolaron energy and $\epsilon_p$ is the energy of a polaron in zeroth order. In the opposite regime, when $U>2\omega g_0^2$, the inter-site or S1 bipolaron, $\phi_{S1}^{S=0,1}={1\over \sqrt 2}
(c^\dagger_{0\uparrow}c^\dagger_{1\downarrow}\pm
c^\dagger_{0\downarrow} c^\dagger_{1\uparrow})\vert0\rangle$, has the lowest energy. Its binding energy $\Delta=-2\omega g_0^2$ does not depend on $U$, which also leads to a degeneracy between the spin-singlet (S=0) and the spin-triplet (S=1) state. This simple analysis predicts that a EHHM bipolaron (EHB) remains bound in the strong coupling regime even in the limit when $U\to\infty$.
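This zeroth-order crossover can be tabulated with a short sketch (parameter values are arbitrary and serve only to illustrate the formulas above):

```python
# Sketch of the zeroth-order binding energy Delta(U) quoted above (omega = 1 and an
# arbitrary illustrative coupling g0); below U = 2*omega*g0^2 the S0 bipolaron wins,
# above it the S1 bipolaron takes over with a U-independent binding energy.
import numpy as np

omega, g0 = 1.0, 1.5

def binding_energy(U):
    delta_s0 = U - 4 * omega * g0**2       # on-site (S0) bipolaron
    delta_s1 = -2 * omega * g0**2          # inter-site (S1) bipolaron
    return min(delta_s0, delta_s1)         # the more strongly bound configuration

for U in np.linspace(0.0, 4 * omega * g0**2, 9):
    print(f"U = {U:5.2f}   Delta = {binding_energy(U):6.2f}")
# Delta saturates at -2*omega*g0^2 for large U: the EHB stays bound as U -> infinity.
```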
It is worth stressing that in the limit $U\to\infty$, singlet and triplet bipolarons become degenerate. We can therefore predict the existence of a singlet and a triplet bipolaron, where at finite $U$ the singlet bipolaron has lower energy. It is also obvious that the energy of the triplet bipolaron should not depend on $U$. In contrast to these predictions, a triplet Holstein bipolaron (HB) is never stable, and furthermore in the limit $U\to\infty$ no bound HB exists [@bonca1].
Next, we focus on the effective mass of the EHB in the strong coupling regime. First order perturbation theory does not lead to energy corrections for the S0 EHB. Second order perturbation theory gives $${m^*_{S0}}^{-1} = {4t^2e^{-2g^2}}
\sum_{n= 0}{(-2g^2)^n\over n!}{1\over
\epsilon_p-U + n\omega},
\label{mbi}$$ where $m^{* -1} \equiv d^2 E(k) / dk^2 $. Equation (\[mbi\]) is only valid in the limit when $1/\lambda \equiv 2t/\epsilon_p\to 0$ and $U \ll \epsilon_p$. In the limit of large $g$ and $U=0$, ${m^*_{S0}}\propto \exp(2\epsilon_p/\omega)$, which should be compared to the HB effective mass that scales as ${m^*_{S0}}\propto \exp(4\epsilon_p/\omega)$ [@bonca1; @kabanov]. In the strong coupling regime the EHB should be much lighter than the Holstein bipolaron. There is a particularly interesting EHB regime when $U=\epsilon_p$. In this case the zero order energies of the $\phi_{S0}$ and $\phi_{S1}^{S=0,1}$ bipolarons are degenerate. Degenerate first order perturbation theory can be applied to the spin-singlet EHB in this case, which leads to a substantial decrease in the effective mass $$m_{EHB}^*(U=\epsilon_p)=
{\sqrt 2\over t} e^{\epsilon_p/2\omega}.
\label{massf1}$$ The EHB in this regime consists of a superposition of $\phi_{S0}$ and $\phi_{S1}^{S=0}$, and moves through the lattice in a crab-like motion. Its binding energy is $\Delta = -\epsilon_p - 2 \sqrt 2 t \exp(-\epsilon_p/2\omega)$.
In the $U\to\infty$ limit we apply the second-order perturbation theory to the S1 bipolaron. We take into account processes where one of the electrons within the S1 bipolaron jumps to the left (right) and then the other follows. This leads to $$m_{EHB}^*(U=\infty)={\lambda\over t}e^{\epsilon_p/\omega}.
\label{massf2}$$ The strong coupling approach thus predicts a nonmonotonic dependence of the EHB effective mass on $U$, as can be seen from the different exponents in Eqs. (\[mbi\],\[massf1\],\[massf2\]).
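As a rough orientation, the strong-coupling expressions can be evaluated directly; the sketch below (assuming $\omega=t=1$ and the coupling $\lambda=1.45$ used in the $U$-scan discussed next) already displays the predicted nonmonotonic trend, keeping in mind that such estimates may be off in absolute value:

```python
# Rough evaluation (sketch only) of the strong-coupling mass estimates at
# lambda = 1.45, with omega = t = 1 assumed for illustration; eps_p = 2*t*lambda.
import math

t, omega, lam = 1.0, 1.0, 1.45
eps_p = 2 * t * lam

m_U_eps = math.sqrt(2) / t * math.exp(eps_p / (2 * omega))   # Eq. (massf1), U = eps_p
m_U_inf = lam / t * math.exp(eps_p / omega)                   # Eq. (massf2), U -> infinity

print(f"m*(U = eps_p)  ~ {m_U_eps:5.1f}/t")    # ~ 6
print(f"m*(U -> inf)   ~ {m_U_inf:5.1f}/t")    # ~ 26
# the U = 0 S0 bipolaron scales as m* ~ exp(2*eps_p/omega) ~ exp(5.8) ~ 3e2, so the
# mass first decreases towards U = eps_p and then increases again at larger U.
```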
We next present numerical results. To achieve sufficient accuracy, we have used up to $ 3~\times~10^6$ variational states. We use units where the bare hopping constant is $t=1$. The ground state energy of the EHB at $\lambda = 0.5$, $\omega = U = 1$, is E = -5.822621, which is accurate to the number of digits shown. (For the same parameters, $U=0$, the Holstein bipolaron energy is E = -5.4246528.) The accuracy of our plotted results in the thermodynamic limit is well within the line-thickness. In Fig. (\[fmass1\]a) we present the inverse effective masses of the EHB and the HB at $U=0$ and of the EHB at $U=\epsilon_p$. Our results for the bipolaron mass are in qualitative agreement with results for the polaron effective mass by Alexandrov and Kornilovitch [@alex]. In the weak coupling regime we find the EHB slightly heavier than the HB, while in the strong coupling regime the opposite is true. Setting the Coulomb interaction to $U=\epsilon_p$, the effective mass becomes even lighter, which is a consequence of the smaller exponent in Eq. (\[massf1\]). In the strong coupling regime ($\lambda \geq 1$), we find good agreement with our strong coupling predictions in Eqs. (\[mbi\],\[massf1\]), depicted by thin lines. While the absolute values may differ by up to a factor of 4 (in the case of $U=\epsilon_p$), the strong coupling approach almost perfectly predicts the exponential dependence (seen as parallel straight lines in Fig. (\[fmass1\]a)) of the effective masses on $\epsilon_p = 2t \lambda $.
To obtain a better understanding of the effect of on-site Coulomb repulsion on the bipolaron effective mass in the strong coupling regime, we present in Fig. (\[fmass1\]b) the effective masses of the EHB and the HB at fixed coupling strength $\lambda=1.45$ as a function of $U$. The most prominent finding is that the EHB is two orders of magnitude lighter than the HB when $U=0$. While the effective mass of the HB decreases monotonically with $U$, the EHB effective mass reaches a shallow minimum near $U=\epsilon_p$, as predicted by the strong coupling approach. At larger $U>\epsilon_p$ we observe a slight increase in the effective mass. In the same regime the HB effective mass drops below the EHB effective mass. This crossing coincides with a substantial decrease of the HB binding energy and, consequently, with the dissociation of the HB into two separate polarons. Numerical results for the EHB agree reasonably well with the analytical predictions for small $U$, Eq. (\[mbi\]), and also in the limit of large $U$, Eq. (\[massf2\]).
To gain an insight into the symmetry of the bound EHB state, we have calculated the binding energy $\Delta^{(0,1)}=E_{bi}^{(0,1)}-2E_{po}$ where $E_{bi}^{(0,1)}$ are the ground state and the first excited energy of the EHHM or HHM for two electrons with opposite spins, $S_z=0$, and $E_{po}$ is the ground state energy of the corresponding model with one electron. In Fig. (\[fbind\]a) we present binding energies of the bipolaron ground and first excited states as a function of $U$. An important difference between the HHM and the EHHM is that in the former case a critical $U_c$ exists for any coupling strength $\lambda$ when the HB unbinds, while the EHB remains bound even in the limit $U\to\infty$ when $\lambda>\lambda_c=0.76$. At small $U$ excited states of both models correspond to bipolaronic singlets, spaced approximately $\omega$ above the ground state. Singlets can be recognized by the fact that their binding energies depend on $U$. As $U$ increases, the excited state of the HB unbinds while the excited state of the EHB undergoes a transition from a singlet to a triplet state which is also bound.
By solving $\Delta^{(0,1)}(\lambda,U_c)=0$ we arrive at the phase diagram $(U_c,\lambda)$ of the EHHM calculated at fixed $\omega=1$, presented in Fig. (\[fbind\]b). We indicate three different regimes. For small $\lambda$ and large $U$ no bound bipolarons exist. With increasing $\lambda$ there is a phase transition into a bound singlet bipolaron state. Increasing $\lambda$ even further, a triplet bipolaron becomes bound as well at $\lambda=\lambda_c$. For comparison we also include the phase boundary of the HHM (open circles). Note that only a singlet bipolaron exists in the HHM.
In Fig. (\[fbind\]c) we present a cross section through the phase diagram in Fig. (\[fbind\]b) at fixed $U=5$, and plot ${m^*}^{-1}$ and the isotope effect $\alpha \equiv d \ln m_{bi}/ d \ln
M $ vs. $\lambda$ (see also discussion of the isotope effect in Ref. [@bonca1]). The effective mass increases by approximately a factor of 2.5 from its noninteracting value in the regime where only a spin-singlet bipolaron exists (between the two vertical dashed lines). The increase of the effective mass is followed by an increase in the isotope effect. The binding energy (not plotted) reaches a value $\Delta\sim -0.5 t$ at $\lambda=\lambda_c=0.76$.
To conclude, we have shown that a light EHB exists even in the strong coupling regime, with an effective mass that can be a few orders of magnitude smaller than the HB effective mass at small $U$. At finite $U=\epsilon_p$ a regime of extremely light EHB is found, where the bipolaron effective mass scales with the same exponent as the polaron effective mass. This mobile bipolaron arises as a superposition of a $\phi_{S0}$ and a $\phi_{S1}$ state and it moves through the lattice in a crab-like motion. As found in Ref. [@bonca1], the HB becomes very light with increasing $U$ close to the transition into two unbound polarons at $U=U_c$. Near this transition, its binding energy diminishes substantially and reaches $\Delta=0$ at the transition point $U_c$. In contrast, the EHB can have a small effective mass even in the regime where its binding energy is large (in the strong coupling regime $\Delta$ approaches $\Delta = -\epsilon_p$). Furthermore, the EHB remains bound in the limit when $U\to\infty$. As a consequence of the longer-range electron-phonon interaction, a bound spin-triplet bipolaron exists in the EHHM for $\lambda>\lambda_c$. The difference between the binding energies of the spin-singlet and the spin-triplet bipolaron is proportional to $1/U$. In the weak to intermediate coupling regime of the EHHM ($\lambda<\lambda_c$ and finite $U$) $S=0$ bipolarons exist with substantial binding energy close to $\lambda\sim \lambda_c$, and an effective mass of the order of the noninteracting electron mass.
The existence of a singlet and a triplet EHB state has important implications in the case of finite doping. As was established previously, there is no phase separation in the low-density limit of the HHM despite a substantially renormalized bandwidth [@bonca1]. The reason is in part that a triplet bipolaron is always unstable. The lack of phase separation in the low-density limit and in the strong coupling regime has a simple intuitive explanation: a third particle, added to a bound singlet bipolaron, introduces a triplet component to the wavefunction. The opposite is true in the strong coupling limit of the EHHM, where singlet and triplet bipolarons coexist. In this case, the third added particle simply attaches to the existing singlet bipolaron and thus gains potential energy. We therefore expect that the EHHM phase separates in the case of finite doping for $\lambda$ sufficiently large. To stabilize a system of EHHM bipolarons against phase separation, a long-range Coulomb repulsion should be taken into account. This prediction is in agreement with recent findings by Alexandrov and Kabanov [@kabanov1], which state that there is no phase segregation in the Fröhlich model in the presence of long-range Coulomb interactions.
J.B. gratefully acknowledges the support of Los Alamos National Laboratory where part of this work has been performed, and financial support by the Slovene Ministry of Science, Education and Sport. This work was supported in part by the US DOE.
S. Jin [*et al.*]{}, Science [**264**]{}, 413 (1994).
J.G. Bednorz and K.A. Müller, Z. Phys. B [**64**]{}, 189 (1986).
L. Proville and S. Aubry, Physica D [**133**]{}, 307 (1998); Eur. Phys. J. B [**11**]{}, 41 (1999).
H. Fehske, H. Röder, G. Wellein, and A. Mistriotis, Phys. Rev. B [**51**]{}, 16582 (1995).
J. Bonča, T. Katrašnik, and S. A. Trugman, Phys. Rev. Lett. [**84**]{}, 3153 (2000).
A.S. Alexandrov and P.E. Kornilovitch, Phys. Rev. Lett. [**82**]{}, 807 (1999).
H. Fehske, J. Loos, and G. Wellein, Phys. Rev. B [**61**]{}, 8016 (2000).
N. Tsuda, K. Nasu, A. Yanase, and K. Siratori, [*Electronic Conduction in Oxides*]{}, Springer-Verlag (1991).
J. Bonča, S. A. Trugman and I. Batistič, Phys. Rev. B [**60**]{}, 1633 (1999).
I. G. Lang and Yu. A. Firsov, Sov. Phys. JETP [**16**]{}, 1301 (1963); Sov. Phys. Solid State [**5**]{}, 2049 (1964).
A.S. Alexandrov and V.V. Kabanov, Sov. Phys. Solid State [**28**]{}, 631 (1986). A.S. Alexandrov and V.V. Kabanov, JETP Lett. [**72**]{}, 569 (2000).
|
---
abstract: 'We theoretically study the phonon-mediated intersurface electron-electron interactions between the pseudo-two-dimensional metallic states at the two surfaces of a three-dimensional topological insulator. From a model of a three-dimensional topological insulator including phonon excitations, we derive the effective Lagrangian which describes the two surface metallic states and the interaction between them. The intersurface electron-electron interaction can be either repulsive or attractive depending on parameters such as the temperature, the speed of the phonon, and the Fermi velocity of the surface states. The attractive interaction removes the Dirac nodes from the two surface states as a result of spontaneous symmetry breaking. On the basis of the calculated results, we also discuss how to tune the intersurface interaction.'
author:
- Tetsuro Habe
bibliography:
- 'TI.bib'
title: 'Intersurface interaction via phonon in three-dimensional topological insulator'
---
Introduction
============
Three-dimensional (3D) topological insulators (TIs) host two-dimensional (2D) massless electronic states on their surfaces [@Fu2007; @Fu2007-2; @Moore2007; @Chen2009]. The massless surface state, represented by a Dirac Hamiltonian as in high-energy physics, is well preserved as long as the $\mathbb{Z}_2$ topological number of the bulk states remains nontrivial in the presence of time-reversal symmetry [@Tanaka2009; @Hsieh2009; @Liu2009; @Hor2010; @Chen2010; @Wray2010; @Habe2012]. The statement, however, is no longer valid when electron-electron interactions drive spontaneous symmetry breaking (SSB). In fact, a general theory of the Dirac particle [@Nambu1961; @Nambu1961-2] indicates that a mass term appears in the Dirac Hamiltonian in association with the SSB of the chiral $U(1)$ symmetry. Two key features are required for the SSB: the presence of two degenerate Dirac cones and an attractive interaction between them. According to the theory, intersurface interactions are necessary for opening a gap in the surface metallic states of time-reversal invariant 3D TIs, because only a single Dirac cone resides on each surface.
So far, the SSB has been theoretically discussed in thin films of TIs, where the intersurface interactions are mediated by the repulsive Coulomb interaction [@Seradjeh2009; @Wang2011; @LiuM2011; @Cho2011; @Moon2012; @Sodemann2012; @Efimkin2012]. In this situation, to realize an attractive interaction between the two Dirac cones, a large enough gate voltage, i.e., a difference of Fermi energies between the two surfaces, must be applied so that electrons appear on one surface and holes on the other. As a result, the attractive interaction between an electron and a hole enables the formation of an excitonic condensate. In real TIs, however, the electron-electron interactions are mainly mediated by phonons because the dielectric constant in TIs is usually much larger than that in vacuum. The effects of electron-electron interactions via phonons have been an active issue in recent studies of 3D TIs [@Cheng2011; @Giraud2011; @Pan2012; @Zhu2011; @Giraud2012; @Zhu2012]. Theoretical results assuming phonon-mediated interactions show good agreement with experimental results [@Cheng2011; @Hatch2011].
In this paper, we theoretically discuss the intersurface interactions of TIs on a simple model in which two quasi two-dimensional surface states interact with each other via phonons excited in the bulk. The coupling constant of the inter-surface electron-electron interactions is estimated by the perturbation expansion with respect to the electron-phonon interactions. We find that the inter-surface interaction via phonons can be attractive when the phonon speed comes close to the Fermi velocity of the surface state at low temperature. The attractive inter-surface interaction implies a possibility of the phase transition from the metallic surface state to the insulating state breaking the chiral $U(1)$ gauge symmetry.
This paper is organized as follows. In Sec. II, we explain our theoretical model. In Sec. III, we derive the coupling constant of the inter-surface electron-electron interaction. The comparison between our results and the real materials or the experimental situation is discussed in Sec. IV. The conclusion is given in Sec. V.
Theoretical model
=================
We consider the two quasi (2+1)-dimensional surface states staying on the different surfaces of a 3D TI as shown in Fig. \[fig:1\]. The three-dimensional wave function is represented by $$\begin{aligned}
\Psi
=&\begin{pmatrix}
f_+(x_3)&0\\
0& f_-(x_3)
\end{pmatrix}\psi\label{3Dwave}\end{aligned}$$ where $f_\pm(x_3)$ localizes at $x_3=\pm L/2$ and exponentially decreases into the bulk insulating region and $$\begin{aligned}
\psi=&\begin{pmatrix}
\xi(t,x_1x_2)\\
\eta(t,x_1x_2)
\end{pmatrix},\end{aligned}$$ contains the spinor wave functions $\xi$ and $\eta$ in a two-dimensional plane as shown in Fig. \[fig:1\].
![The conceptual scheme of two surface states $\xi$ and $\eta$ which are two-dimensional components at the surface of TIs. The two cones represent the energy dispersions of the surface states. The arrows on the cones show the direction of spin direction locked by the momentum. []{data-label="fig:1"}](Fig1.eps){width="80mm"}
We assume that the penetration length $\lambda$ is a constant value for each quasi two-dimensional state on the opposite surface. In this case, the $f_\pm(x_3)$ can be written by, $$\begin{aligned}
f_\pm(x_3)=&\frac{1}{\sqrt{\lambda}}\exp\left[~\mp\frac{1}{2\lambda}\left(x_3\pm \frac{L}{2}\right)~\right].\nonumber\end{aligned}$$ with the thickness $L$ along the $x_3$ axis. The Lagrangian of the surface states is equivalent to that of the Weyl field as $$\begin{aligned}
\mathscr{L}_{e}=&i\bar{\Psi}\gamma^{\mu}\tilde{\partial}_{\mu}\Psi\label{Weyl},\\
\bar{\Psi}=&\Psi^\dagger \gamma^0\nonumber\end{aligned}$$ where the index which appears twice in a single term means the summation for $\mu=0,1,2$, and $\gamma^\mu$ is Dirac gamma matrices for $\mu=0-3$, and $5$. The $\tilde{\partial}_\mu$ represents the derivative for time $\partial_0=\tilde{\partial}_0=\partial/\partial t$ and in-plane coordinates $\tilde{\partial}_j=v_F\partial_j=v_F\partial/\partial x_j$ for $j=1$ and $2$ with the Fermi velocity $v_F$. The Lagrangian has no mass term represented by $m\gamma^3$ or $m\gamma^5$ with a constant $m$. In the presence of time-reversal symmetry, the mass term proportional to $\gamma^3$ vanishes because it represents magnetization or magnetic field. The other mass term proportional to $\gamma^5$ mixes the two surface states $f_+$ and $f_-$ on either surface in Eq. (\[3Dwave\]). In the absence of electron-electron interactions, the mass term is negligible in a thick TI because $\lambda/L\ll1$ is satisfied. Namely, the direct hopping between the two two-dimensional metallic states on the opposite surface is suppressed. Therefore the inter-surface interaction is intermediated only by the phonon field with a large dielectric constant.
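As a quick check of this suppression, the direct overlap of the two envelope functions can be integrated numerically (a sketch only; the integration range $-L/2\le x_3\le L/2$ is assumed here for definiteness):

```python
# Illustrative check that the direct overlap of the two surface envelopes
#   f_+(x3) = exp(-(x3 + L/2)/(2*lam))/sqrt(lam),  f_-(x3) = exp(+(x3 - L/2)/(2*lam))/sqrt(lam)
# decays exponentially with L/lam, so direct inter-surface hopping can be neglected.
import numpy as np

def overlap(L, lam, n=20001):
    x3 = np.linspace(-L / 2, L / 2, n)
    f_plus = np.exp(-(x3 + L / 2) / (2 * lam)) / np.sqrt(lam)
    f_minus = np.exp((x3 - L / 2) / (2 * lam)) / np.sqrt(lam)
    return np.trapz(f_plus * f_minus, x3)

lam = 1.0
for L in (5, 10, 20, 40):
    analytic = (L / lam) * np.exp(-L / (2 * lam))   # the product f_+f_- is constant in x3
    print(f"L/lam = {L:3d}   overlap = {overlap(L, lam):.3e}   (L/lam)*exp(-L/(2*lam)) = {analytic:.3e}")
```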
Effective 2D Lagrangian of electron-phonon coupling
---------------------------------------------------
We estimate the coupling constant of the interaction between the pseudo two-dimensional electric states and the three-dimensional phonon. In the bulk of the TI, the electron-phonon interaction between electron and phonon can be ignored because the Fermi energy lies in the insulating gap. Thus, in the bulk of TIs, the three-dimensional phonon can be written by the bare scalar boson field $\Phi$. The scalar boson’s Lagrangian is a Klein-Gordon-type one, $$\begin{aligned}
\mathscr{L}_{p}=\left(\partial_{0}\Phi\right)^2-\sum_{j=1-3}\left(v_B\partial_{j}\Phi\right)^2-\frac{m^2}{2}\Phi^2
,\label{Lp}\end{aligned}$$ with the speed $v_B$ and the mass $m$ of the scalar boson. The mass term (the third term in Eq. (\[Lp\])) appears in the case of the optical phonon or the acoustical phonon under pressure[@Cheng2011]. The bare scalar boson field $\Phi$ has the eigen energy $\omega_k=\sqrt{{v_B}^2|\boldsymbol{k}|^2+m^2}$ with the momentum $\boldsymbol{k}$. The wave function is represented by a product of the plane wave along the $x_3$ axis and the in-plane wave component of $\phi(t,x_1,x_2,k_3)$ as $$\begin{aligned}
\Phi(t,x)=\int \frac{dk_3}{\sqrt{2\pi}}\phi(t,x_1,x_2,k_3)\frac{1}{\sqrt{2\pi}}\sin k_3x_3,\end{aligned}$$ under the Dirichlet boundary condition of $\Phi(t,x)=0$ at $x_3=0$.
At the vicinity of the surface, the amplitude of the electron-phonon coupling is much larger than that in the bulk because of the metallic property at the surface. The Lagrangian of the electron-phonon interaction is calculated by the overlap integration of the wave functions for the surface states and that of the phonon, $$\begin{aligned}
L_{ep}=\int d^3\boldsymbol{x}U&\left\{\xi^\dagger(x)\xi(x){f_{+}}^2(x_3)\right.\nonumber\\
&\left.+\eta^\dagger(x)\eta(x){f_{-}}^2(x_3)\right\}\Phi,\label{Lep}\end{aligned}$$ where $U$ is the coupling function and the overlap is integrated in $(x_1,x_2,x_3)$. In the limit of a small penetration length $\lambda$ as $L/\lambda\rightarrow\infty$ and $1+k_3\lambda\rightarrow1$, the overlap integration along the $x_3$-axis can be written by the correction of the coupling constant $\chi$: $$\begin{aligned}
\chi=&\frac{1}{2\pi\lambda}\int^{L}_{0}dx_3\sin k_3x_3\exp\left[-\frac{x_3}{\lambda}\right]\nonumber\\
=&\frac{1}{2\pi} \frac{1}{1+(k_3\lambda)^2}\left(e^{-\frac{L}{\lambda}}[\sin k_3L-k_3\lambda\cos k_3L]+k_3\lambda\right)\nonumber\\
\simeq&\frac{k_3\lambda}{2\pi}.\end{aligned}$$ Thus, the interaction Lagrangian $L_{ep}$ is independent of $L$, $$\begin{aligned}
L_{ep}=&\int dx_1dx_2~\tilde{\phi}(x)\{\xi^\dagger(x)\xi(x)+\eta^\dagger(x)\eta(x)\},\label{efLep}\\
\tilde{\phi}(x)=&\int dk_3\chi(k_3) U(x,k_3)\phi(x,k_3),\end{aligned}$$ with $x=(t,x_1,x_2)$.
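The limiting form of $\chi$ can be verified numerically; the sketch below (with illustrative parameter values) compares the defining integral with the approximation $k_3\lambda/(2\pi)$ for $L\gg\lambda$ and $k_3\lambda\ll 1$:

```python
# Illustrative numerical check of the overlap correction
#   chi(k3) = (1/(2*pi*lam)) * integral_0^L sin(k3*x3) * exp(-x3/lam) dx3,
# which should approach k3*lam/(2*pi) for L >> lam and k3*lam << 1.
import numpy as np
from scipy.integrate import quad

def chi(k3, lam, L):
    val, _ = quad(lambda x3: np.sin(k3 * x3) * np.exp(-x3 / lam), 0.0, L, limit=400)
    return val / (2 * np.pi * lam)

lam, L = 1.0, 50.0
for k3 in (0.02, 0.05, 0.1, 0.2):
    print(f"k3*lam = {k3 * lam:4.2f}   chi = {chi(k3, lam, L):.5f}"
          f"   k3*lam/(2*pi) = {k3 * lam / (2 * np.pi):.5f}")
```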
Intersurface interaction of Weyl field
======================================
From the interaction Lagrangian in Eq. (\[efLep\]), we will derive the effective intersurface interaction represented by $$\begin{aligned}
V=-g~\xi^\dagger(x)\xi(x)~\eta^\dagger(x)\eta(x),\label{int}\end{aligned}$$ where the coupling constant $g$ reflects the dynamics of the phonon and depends on temperature, phonon mass, and the ratio of the Fermi velocity to the phonon speed.
In the lowest order of the perturbation expansions with respect to the phonon field, the bare inter-surface interaction is represented by the Lagrangian of $$\begin{aligned}
\mathscr{L}_{\mathrm{int}}=-\int dk_3&{u}^2(x,k_3)\xi^\dagger(x)\xi(x)\nonumber\\
&\times D(x-x')\zeta^\dagger(x')\zeta(x'),\label{Lint}\end{aligned}$$ with the bare vertex function $u^2(x,k_3)=\chi^2U(x,k_3)U(x',k_3)$ and the phonon propagator $D(x-x')$ (see Appendix \[AP-effective action\]). The Lagrangian of interaction can be interpreted easily as the exchange of phonons between the two electrons staying on the different surfaces. The event represented by the Lagrangian is schematically shown by using a Feynman diagram in Fig.\[fig:SE1\]-(b).
![The Feynman diagrams of (a) the vertex function in Eq.(\[Lep\]) and (b) the lowest order term for electron-electron interaction via the phonon exchanging are shown. The label $\zeta$ denotes the surface Weyl field for $\zeta=\xi,~\eta$. In Fig. (b), $\zeta$ and $\bar{\zeta}$ represent opposite surface field operators, [i.e.]{}, $(\zeta,\bar{\zeta})=(\xi,\eta)$ or $(\zeta,\bar{\zeta})=(\eta,\xi)$. The diagram of (b) represents the first-order effective inter-surface interaction through a phonon. []{data-label="fig:SE1"}](Fig2.eps){width="80mm"}
The effective inter-surface interaction in Eq. (\[int\]) can be calculated by a summation of the bare intersurface interaction in Eq. (\[Lint\]) and the one-particle irreducible(1PI) vertex function $\kappa^{[\mathrm{1PI}]}$ $$\begin{aligned}
V=-\mathscr{L}_{\mathrm{int}}-\kappa^{[\mathrm{1PI}]},\end{aligned}$$ where the 1PI vertex function contains the loop diagrams consisting of the bare phonon propagators and fermion propagators e.g. the diagrams in Fig. \[fig:SE2\]. To calculate the coupling constant in Eq. (\[int\]), we consider the effective potential instead of the one-loop 1PI vertex function as shown in Fig. \[fig:SE2\]. The effective potential is the vertex function under the condition that the initial state and the final state have the same energy and the same momentum. In this case, the interaction between any states $\xi$ and $\eta$ in momentum space can be represented by using a single coupling constant $g$ as $$\begin{aligned}
V=-g(p')\xi^\dagger(p_1)\xi(p_1+p')\eta^\dagger(p_2)\eta(p_2-p')\label{eq:15}\end{aligned}$$ where $p=(p_0,\boldsymbol{p})$ is the vector with the energy $p_0$ and the two-dimensional momentum $\boldsymbol{p}=(p_1,p_2)$. The intersurface interaction with $p'=0$ is dominant in the low energy theory, even if the coupling constant contains no divergent term in the infrared region. At $p'=0$, the Fourier transformation of Eq. (\[eq:15\]) is essentially equivalent to Eq. (\[int\]). Therefore, the coupling constant $g$ in Eq. (\[int\]) is calculated from $g(0)$ in Eq. (\[eq:15\]).
![The Feynman diagrams of the second order electron-phonon interaction are shown. []{data-label="fig:SE2"}](Fig3.eps){width="80mm"}
We introduce the propagators of the Weyl field, those of the phonon, and the bare vertex function $u=\chi U$ in momentum space. The propagators of the Weyl field $G(p_0,\boldsymbol{p})$ and the phonon $D(k_0,\boldsymbol{k})$ in energy-momentum space are represented by $$\begin{aligned}
G(p_0,\boldsymbol{p})=&\frac{1}{p_0\sigma^0-v\boldsymbol{p}\cdot\boldsymbol{\sigma}+i\epsilon},\\
D(p_0,\boldsymbol{p})=&\frac{2\sqrt{{v_B}^2\boldsymbol{p}^2+\tilde{m}^2}}{{p_0}^2-{v_B}^2\boldsymbol{p}^2-(\tilde{m}^2-i\epsilon)}\label{phonon},\\
\tilde{m}=&m+{v_B}^2{k_3}^2\end{aligned}$$ where $\sigma^{\nu}$ is $2\times2$ unit matrix for $\nu=0$ and Pauli matrix for $\nu=1-3$. We consider two types of the electron-phonon coupling: the deformation coupling and polar one[@Mahan2000]. The piezoelectric coupling is suppressed by inversion symmetry of the materials[@Huang2008; @Giraud2011]. Thus we ignore the piezoelectric coupling in this paper. According to Ref. , the two electron-phonon couplings are characterized by the different bare vertex functions $M_p^{(D)}$ and $M_p^{(P)}$ for the deformation and polar couplings, respectively. The bare vertex functions are the Fourier components of $u$ and are represented by $$\begin{aligned}
\left(M_p^{(D)}\right)^2=&C_1\chi^2\frac{v_F|\boldsymbol{p}|^2}{p_D}\label{CPd}\\
\left(M_p^{(P)}\right)^2=&C_2\chi^2\sqrt{{v_B}^2|\boldsymbol{p}|^2+\tilde{m}^2}\label{CPp},\end{aligned}$$ where $C_1$ and $C_2$ are the coupling constants. The coupling constants are $$\begin{aligned}
C_1=&\frac{p_D}{v_F}R^2\\
C_2=&2\pi e^2\left(\frac{1}{\varepsilon_{\infty}}-\frac{1}{\varepsilon}\right)\end{aligned}$$ where $R$ is the deformation constant, $p_D$ is a Debye momentum, and $\varepsilon_{\infty}$ and $\varepsilon$ are the dielectric constants for a high-frequency and a low-frequency, respectively[@Mahan2000]. We consider both couplings in this paper because it is unclear which coupling is more dominant in TIs. The deformation coupling comes from the interaction between the electron and the acoustical phonon. On the other hand, the polar coupling accounts for the interaction between the electron and the optical phonon. Some experiments suggest that the optical phonon play a dominant role[@Shahil2010; @Qi-J2010], however another experiment[@Hatch2011] on the life-time of a quasi-particle has good agreement with the theory of the acoustical phonon[@Giraud2011].
Deformation coupling
--------------------
At first, we calculate the coupling constant $g$ for the interaction via the deformation coupling. In the no-loop calculation, the deformation coupling constant vanishes, $$\begin{aligned}
\mathscr{L}_{\mathrm{int}}=&g^{(0)}\xi^\dagger\xi(p)\zeta^\dagger\zeta(-p)\\
g^{(0)}=&-\lim_{p_\mu\rightarrow0}\int dk_3{M_p^{(D)}}^2D(p_0,\boldsymbol{p})=0,\end{aligned}$$ where the superscript of $g^{(0)}$ denotes the number of loops. Thus we consider the one-loop effective potentials $V_a^{(1)}$ and $V_b^{(1)}$, where the subscript denotes two diagrams in Fig. \[fig:SE2\]. The one-loop effective potentials are represented by $$\begin{aligned}
V_a^{(1)}=&\int dk_3\int \frac{d^3p}{(\sqrt{2\pi})^3}{M_p^{(D)}}^4\left(\xi^\dagger G(p_0,\boldsymbol{p})\xi\right)\nonumber\\
&\times\left(\eta^\dagger G(-p_0,-\boldsymbol{p})\eta\right)D(p_0,\boldsymbol{p})^2\\
V_b^{(1)}=&\int dk_3\int \frac{d^3p}{(\sqrt{2\pi})^3}{M_p^{(D)}}^4\left(\xi^\dagger G(p_0,\boldsymbol{p})\xi\right)\nonumber\\
&\times\left(\eta^\dagger G(p_0,\boldsymbol{p})\eta\right)D(p_0,\boldsymbol{p})^2,\end{aligned}$$ where $\xi$ and $\eta$ are independent of $(p_0,\boldsymbol{p})$. The net effective potential is $$\begin{aligned}
V^{(1)}=\int dk_3\int \frac{d^3p}{(\sqrt{2\pi})^3}&{M_p^{(D)}}^4(~
{p_0}^2\Gamma_1(p)\xi^\dagger\xi~\eta^\dagger\eta\nonumber\\
&+{p_i}^2\Gamma_2(p)\xi^\dagger\sigma^i\xi~\eta^\dagger\sigma^i\eta~
).
\label{inp}\end{aligned}$$ Using the Matsubara method, only the spin independent term remains at finite temperature $T=\beta^{-1}$, $$\begin{aligned}
V^{(1)}=-g^{(1)}\xi^\dagger\xi~\eta^\dagger\eta\label{effp},\end{aligned}$$ where the second term in Eq. (\[inp\]) disappears. The effective coupling constant $g^{(1)}$ is $$\begin{aligned}
g^{(1)}=&\frac{1}{2}\int dk_3\int dp~p(\omega_p(M_p^{(D)})^2)^2\Gamma_1(p)\label{eq27}\\
\Gamma_1(p)=&\Gamma_{FB}(p)+\Gamma_F(p)+\Gamma_B(p)\nonumber\\
\Gamma_{FB}(p)=&\beta[\tilde{D}_A(p)^2+\tilde{D}_R(p)^2]\nonumber\\
&\times[n_F(\varepsilon_p)n_F(-\varepsilon_p)+n_B(\omega_p)n_B(-\omega_p)]\\
\Gamma_F(p)=&-\frac{1}{\varepsilon_p}\left([1-4{\varepsilon_p}^2\tilde{D}_A(p)]\tilde{D}_A(p)^2n_F(\varepsilon_p)\right.\nonumber\\
&\left.-[1-4{\varepsilon_p}^2\tilde{D}_R(p)]\tilde{D}_R(p)^2n_F(-\varepsilon_p)\right)\\
\Gamma_B(p)=&\frac{1}{\omega_p}\left([1+4{\omega_p}^2\tilde{D}_A(p)]\tilde{D}_A(p)^2n_B(\omega_p)\right.\nonumber\\
&\left.-[1+4{\omega_p}^2\tilde{D}_R(p)]\tilde{D}_R(p)^2n_B(-\omega_p)\right)\end{aligned}$$ where $\omega_p=\sqrt{{v_B}^2|\boldsymbol{p}|^2+\tilde{m}^2}$ and $\varepsilon_p=v_F|\boldsymbol{p}|$ are the energy of the phonon and the surface Weyl field respectively. The integral range of momentum is $0<\sqrt{p^2+{k_3}^2}<p_D$ with the Debye momentum $p_D$. The Bose and Fermi distribution functions $n_B(z)$ and $n_F(z)$ are represented by $$\begin{aligned}
n_B(z)=&\frac{1}{e^{\beta z}-1},\\
n_F(z)=&\frac{1}{e^{\beta z}+1}.\end{aligned}$$ The propagator $\tilde{D}_{A(R)}(p)$ is defined by $$\begin{aligned}
\tilde{D}_{A(R)}(p)=\frac{1}{{\omega_p}^2-{\varepsilon_p}^2-(+)i\epsilon}.\end{aligned}$$ In Fig. \[fig:4\], we show $g^{(1)}$ calculated numerically with a unit of $\varepsilon_{\mathrm{def}}={C_1}^2p_D/v_F$ as a function of three parameters $\beta$, $\gamma=v_B/v_F$, and $m$; $\beta$ and $m$ are normalized by using the energy $\omega_D=\sqrt{m^2+{v_B}^2{p_D}^2}$.
![The coupling constant $g^{(1)}$ of the inter-surface interaction via deformation in Eq. \[effp\] is shown as a function of temperature $\beta^{-1}$ and the phonon mass $m$. The potential becomes attractive in the red region and repulsive in the blue region. The zero-potential is represented by the green line. []{data-label="fig:4"}](Fig4.eps){width="80mm"}
In real TIs, $\gamma$ is estimated to be of the order of $10^{-2}$; i.e., the phonon speed is much smaller than the surface Fermi velocity, $v_B\ll v_F$. For small $\gamma$, the phonons give a repulsive intersurface interaction, as shown in Fig. \[fig:4\]-(a). As the speed ratio $\gamma$ increases, however, the region of repulsive intersurface interaction shrinks, as seen in Fig. \[fig:4\]-(b). Eventually, the intersurface interaction becomes attractive for any mass $m$ and temperature $\beta^{-1}$, as shown in Fig. \[fig:4\]-(c).
Polar coupling
--------------
Next, we consider the coupling constant $g$ of the interaction via the polar coupling. The interaction is attractive in the no-loop approximation, $$\begin{aligned}
g^{(0)}=&\lim_{p_\mu\rightarrow0}\int dk_3{M_p^{(P)}}^2D(p_0,\boldsymbol{p})=2{C_2}\kappa,\label{opt}\\
\kappa=&\frac{\lambda^2}{8{v_B}^3}\left(
v_Bp_D{\omega_D}(2{\omega_D}^2-m^2)-m^4\ln\frac{v_Bp_D+\omega_D}{m}
\right),\nonumber\end{aligned}$$ where $\kappa$ is always a positive value. The no-loop term of the phonon has the opposite sign to that of the photon, i.e., the electromagnetic field(see Appendix \[AP1\]). We also calculate the one-loop correction $g^{(1)}$ with a unit of $\varepsilon_{\mathrm{pol}}={C_2}^2p_D/v_F$ shown in Fig. \[fig:5\]. For the polar coupling, the crossover between a repulsive interaction and an attractive one occurs at lower $\gamma$ than that of the deformation coupling.
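As a simple check of the sign, $\kappa$ in Eq. (\[opt\]) can be evaluated directly; the sketch below (placeholder parameters, not material values) confirms $\kappa>0$ and hence an attractive no-loop contribution:

```python
# Direct evaluation (sketch; placeholder parameters, not material values) of kappa
# from Eq. (opt), with omega_D = sqrt(m^2 + (v_B*p_D)^2); kappa stays positive,
# so the no-loop polar contribution g^(0) = 2*C_2*kappa is attractive.
import numpy as np

def kappa(lam, v_B, p_D, m):
    w_D = np.sqrt(m**2 + (v_B * p_D)**2)
    return lam**2 / (8 * v_B**3) * (
        v_B * p_D * w_D * (2 * w_D**2 - m**2)
        - m**4 * np.log((v_B * p_D + w_D) / m)
    )

for m in (0.01, 0.1, 0.5, 1.0):
    print(f"m = {m:4.2f}   kappa = {kappa(lam=1.0, v_B=1.0, p_D=1.0, m=m):.4f}")
```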
![The coupling constant $g^{(1)}$ with an unit of $\varepsilon_{\mathrm{pol}}={C_2}^2p_D/v_F$ of the inter-surface interaction via the polar coupling in Eq. \[effp\] is shown. The potential is attractive in red region and repulsive in the blue region. The zero-potential is represented by the green line. []{data-label="fig:5"}](Fig5.eps){width="80mm"}
It is difficult to decide whether the interaction via the deformation coupling or that via the polar coupling is dominant, because the coupling constants $C_i$ in Eqs. (\[CPd\]) and (\[CPp\]) are unknown in TIs. According to our calculation, the interaction becomes attractive in both cases when the material hosts a slow Dirac mode or a fast phonon mode, i.e., when the speed ratio $\gamma$ is larger than in ordinary 3D TIs such as Bi$_2$Se$_3$, Bi$_2$Te$_3$, and Sb$_2$Te$_3$. In Fig. \[fig:6\], the coupling constant $g^{(1)}$ is shown as a function of $\gamma$ and the temperature $T=\beta^{-1}$. The interaction becomes attractive when the speed ratio $\gamma$ is larger than $0.25$ for the deformation coupling and $0.015$ for the polar coupling.
![The coupling constants of the inter-surface interaction via the two couplings are shown. These figures are calculated by use of (a) deformation and (b) polar coupling with $m=10^{-2}[\omega_D]$. The interaction becomes attractive in the red region and repulsive in the blue region, and the zero-potential is represented by the green line. []{data-label="fig:6"}](Fig6.eps){width="80mm"}
At the end of this section, we show the resultant Lagrangian density describing the electric states on the opposite surface, $$\begin{aligned}
\mathscr{L}=i\bar{\psi}\gamma^{\mu}\tilde{\partial}_{\mu}\psi(x)+\frac{g}{4}\left[\{\bar{\psi}\psi(x)\}^2+\{\bar{\psi}i\gamma^{5}\psi(x)\}^2\right].\end{aligned}$$ where the coupling constant of the interaction is the sum $g=g^{(0)}+g^{(1)}$. The model Lagrangian density is the so-called Nambu–Jona-Lasinio model in 2+1 dimensions, $x=(t,x_1,x_2)$. The model shows dynamical mass generation, i.e., dynamical gap generation, for a large positive $g$ in association with the spontaneous breaking of the chiral $U(1)$ symmetry [@Nambu1961; @Nambu1961-2].
Discussion
==========
We have shown that the intersurface interaction via phonons can open a gap in the energy dispersion of the surface electronic states in association with spontaneous symmetry breaking. The dynamical gap generation occurs when the intersurface interaction is attractive. In this section, we compare the conditions for the attractive interaction in the present theory with those of real materials. The Fermi velocity of the surface states of 3D TIs, e.g., Bi$_2$Se$_3$, Bi$_2$Te$_3$ and Sb$_2$Te$_3$, is approximately $v_F\sim 10^{4}$-$10^{5}$ m/s according to first-principles calculations [@Liu2010]. The phonon speed $v_B$ in 3D TIs is approximately $10^{3}$ m/s. Therefore, since $\gamma=v_B/v_F$ is of the order of $10^{-2}$-$10^{-3}$, the attractive intersurface interaction is possible in such materials when the polar coupling is dominant. Even when the deformation coupling is dominant, the attractive coupling is possible at sub-Kelvin temperatures. Generally speaking, the Fermi velocity is proportional to the strength of the spin-orbit interaction. Therefore, the dynamical gap generation may also occur in TIs whose spin-orbit coupling is weak enough to yield a large $\gamma$.
Conclusion
==========
We have studied the intersurface electron-electron interaction mediated by phonons propagating in the bulk region of a three-dimensional topological insulator. The sign and magnitude of the coupling constant of the interaction strongly depend on parameters such as the temperature, the phonon mass, and the ratio of the Fermi velocity of the surface Dirac states to the phonon speed. When the phonon speed is much smaller than the Fermi velocity, the intersurface interaction is repulsive. When the phonon speed is not much smaller than the Fermi velocity, on the other hand, the intersurface interaction can be attractive within an accessible temperature range. The attractive intersurface interaction opens a gap in the metallic surface states in association with spontaneous symmetry breaking of the electronic states at the surface.
I would like to thank Yasuhiro Asano for helpful comments.
Effective action of Weyl field {#AP-effective action}
==============================
In this appendix, we derive the effective intersurface interaction like Eq. (\[Lint\]) from the Lagrangian of Eqs. (\[Lp\]) and (\[efLep\]) by the path-integral method. The effective action which contains the effective interaction is acquired from the generating functional of a propagator $Z$ represented by $$\begin{aligned}
Z[z,\bar{z},J]&=\int \mathscr{D}\psi\mathscr{D}\bar{\psi}\mathscr{D}\phi
\exp i\int d^3x \mathscr{L}\\
\mathscr{L}&=
\mathscr{L}_e+\mathscr{L}_p+u_0(\bar{\psi}\gamma_0\psi)\phi
+J\phi+\bar{z}\psi+z\bar{\psi},\label{Lagrangian}\end{aligned}$$ where $z(x)$, $\bar{z}(x)$ and $J(x)$ are the virtual external sources and we introduce the coupling constant $u_0$. The effective action is the generating functional of the vertex function[@Negele1998]. To omit an explicit phonon field, we consider that there is no phonon source as $J=0$ and calculate the path-integral for $\phi$, $$\begin{aligned}
Z[z,\bar{z}]=&\int \mathscr{D}\psi\mathscr{D}\bar{\psi}
\exp\left[ i\int d^3x \mathscr{L}_e'\right]Z_0[j(x)],\\
\mathscr{L}_e'=&\mathscr{L}_e+\bar{z}\psi+z\bar{\psi},\nonumber\\
Z_0[j(x)]=&\exp\left[-i\int d^3x\int d^3x'j(x)
D(x-x')j(x')\right],\nonumber\\
j(x)=&u(\bar{\psi}\gamma_0\psi)(x),\nonumber\end{aligned}$$ with the phonon propagator $D(x)$. The calculation is the same as in the case of a non-interacting scalar boson with external source of $j(x)$ and provides the generating functional of the phonon propagator as $Z_0[j(x)]$. We obtain the four point term in $Z_0$ which represents an effective interaction in the Weyl field, $$\begin{aligned}
\mathscr{L}_{\mathrm{int}}=-j(x)D(x-x')j(x')\label{AP2-1}.\end{aligned}$$ The interaction can be interpreted easily as exchanging the phonon. The Lagrangian contains an intra-surface interaction proportional to the fourth-order term of $\xi$ or $\eta$ in Eq. (\[int\]) and an intersurface interaction which consists of the product of the quadratic term of $\xi$ and the quadratic term of $\eta$. The effective action can be derived as a summation of one-particle irreducible vertex functions which consist of the four-field potential in Eq. (\[AP2-1\]).
The Coulomb interaction {#AP1}
=======================
The interaction Lagrangian via a photon in Feynman gauge is represented by $$\begin{aligned}
\mathscr{L}_{\mathrm{int}}=&-e^2\bar{\psi}\gamma^{\mu}\psi D_{\mu\nu}(p)\bar{\psi}\gamma^{\nu}\psi\\
D_{\mu\nu}(p)=&\frac{-g_{\mu\nu}}{{p_0}^2-|\boldsymbol{p}|^2+i\varepsilon}
\label{photon},\end{aligned}$$ where the speed of light is unity $c=1$ and $g_{\mu\nu}$ is the metric tensor as $$\begin{aligned}
g_{\mu\nu}=\mathrm{diag}[1,-1,-1,-1],\end{aligned}$$ with $\mu,~\nu=0-3$. Using the longitudinal component given by $D_{00}(p)$, the interaction without energy transfer $p_0=0$ can be represented by $$\begin{aligned}
\mathscr{L}_{\mathrm{int}}=&-(\xi^\dagger\xi+\eta^\dagger\eta)\frac{e^2}{|\boldsymbol{p}|^2}(\xi^\dagger\xi+\eta^\dagger\eta)\\
=&-\frac{e^2}{|\boldsymbol{p}|^2}
\left(\sum_{\zeta=\xi,\eta}\zeta^\dagger\zeta\zeta^\dagger\zeta
+2\xi^\dagger\xi\eta^\dagger\eta\right).\end{aligned}$$ The first and second terms are the intra-surface and inter-surface Coulomb interactions, respectively. This interaction is repulsive in contrast to that mediated by phonons in Eq. (\[opt\]) because the photon propagator in Eq. (\[photon\]) is opposite in sign to the phonon propagator in Eq. (\[phonon\]).
|
---
abstract: 'Two basic classes of electromagnetic media, recently defined and labeled as those of P media and Q media, are generalized to define the class of PQ media. Plane wave propagation in the general PQ medium is studied and the quartic dispersion equation is derived in analytic form applying four-dimensional dyadic formalism. The result is verified by considering various special cases of PQ media for which the dispersion equation is known to decompose to two quadratic equations or be identically satisfied (media with no dispersion equation). As a numerical example, the dispersion surface of a PQ medium with non-decomposable dispersion equation is considered.'
author:
- 'I.V. Lindell'
date: |
Department of Radio Science and Engineering\
Aalto University, School of Electrical Engineering\
Espoo, Finland\
[ismo.lindell@aalto.fi]{}\
title: |
Plane-Wave Propagation in\
Electromagnetic PQ Medium
---
Introduction
============
The most general linear (bi-anisotropic) electromagnetic medium can be represented in terms of 36 scalar medium parameters in different representations. In engineering form, applying Gibbsian vector fields and 3D dyadics, it is typical to apply the form [@Kong; @Methods] $$\begin{pmatrix}\#D\\ \#B\end{pmatrix}=\begin{pmatrix}\=\epsilon & \=\xi\\ \=\zeta & \=\mu\end{pmatrix}\cdot\begin{pmatrix}\#E\\ \#H\end{pmatrix}.\label{DB}$$ Another form, favored by the physicists, is [@Hehl] $$\begin{pmatrix}\#D\\ \#H\end{pmatrix}=\begin{pmatrix}\=\alpha & \=\epsilon{}'\\ \=\mu{}^{-1} & \=\beta\end{pmatrix}\cdot\begin{pmatrix}\#B\\ \#E\end{pmatrix}.\label{DH}$$ A more compact form is obtained by applying the 4D formalism in terms of differential forms [@Deschamps; @Difform], where the electromagnetic fields are characterized by two six-dimensional quantities, the two-forms $\%\Psi$ and $\%\Phi$, $$\%\Psi= \#D - \#H\W\ve_4, \qquad \%\Phi=\#B + \#E\W\ve_4.$$ In a linear medium they are related by the medium bidyadic $\=M$ as $$\%\Psi= \=M|\%\Phi.$$ The medium bidyadic can be expanded in terms of 3D medium dyadics as $$\=M = \=\alpha+ \=\epsilon{}'\W\#e_4 + \ve_4\W\=\mu{}^{-1} + \ve_4\W\=\beta\W\#e_4.$$ For details in the present notation, [@Difform] or [@MDEM] should be consulted.
It is not easy to get a grasp of the most general medium defined by the bidyadic $\=M$. This is why many classes of media with $\=M$ restricted by special forms involving less than 36 parameters, and bearing strange names, have been defined and studied. Because $\=M$ corresponds to a $6\x6$ matrix, some classes have been based on expressing the bidyadic $\=M$ in terms of a dyadic corresponding to a $4\x4$ matrix involving only 16 parameters. As the most obvious one, the class of P media [@P] has been defined in terms of a dyadic $\=P\in\SE_1\SF_1$ as the double-wedge square $$\=M= \=P{}^{(2)T} = \tfrac{1}{2}\,\=P{}^T{\wedge\wedge}\,\=P{}^T.$$ As another obvious example, the class of Q media [@416] has been defined through the modified medium bidyadic $$\=M_m = \#e_N\lfloor\=M = \=Q{}^{(2)}\in\SE_2\SE_2,$$ in terms of a (quasi-) metric dyadic $\=Q\in\SE_1\SE_1$. Properties of both medium classes have been recently studied. For a plane wave, a field of the dependence $\exp(\%\nu|\#x)$ of the spacetime variable $\#x$, the wave one-form $\%\nu$ is normally restricted by a dispersion equation $$D(\%\nu)=0, \label{Dnu}$$ which is a quartic equation in general [@PIER05], and its form depends on the medium bidyadic $\=M$. For example, for the Q medium the dispersion equation is of the form $$D(\%\nu)= \De_Q(\=Q||\%\nu\%\nu)^2=0, \label{DQ}$$ which means that, for a dyadic of full rank satisfying $\De_Q=\ve_N\ve_N||\=Q{}^{(4)}\not=0$, the quartic equation reduces to a quadratic equation. On the other hand, one can show that, for a P medium, the dispersion equation is actually an identity which is satisfied by any one-form $\%\nu$ [@P]. Also, if $\=Q$ is of rank lower than 4, we have $\De_Q=0$ and (\ref{Dnu}) is satisfied identically. Media of this kind have previously been called NDE media (media with no dispersion equation) [@NDE].
As simple generalizations of the classes of P and Q media we can define $$\=M = \=P{}^{(2)T} + \ve_N\lfloor\#D\#C, \label{Pext}$$ and $$\=M_m = \=Q{}^{(2)} + \#A\#B, \label{Qext}$$ which have respectively been called extended (or generalized) P [@P] and Q [@421] media. Here, $\#A,\#B,\#C$ and $\#D$ are any bivectors. The number of medium parameters in both cases has been increased from 16 to 23.
Plane waves propagating in extended P and Q media have been previously studied and the dispersion equations have been shown to take the respective forms [@P; @421] $$\begin{aligned}
D(\%\nu) &=& \De_P\,((\#D\lfloor\=P{}^T)||\%\nu\%\nu)\,((\#C\lfloor\=P{}^{-1T})||\%\nu\%\nu)=0, \label{DPext}\\
D(\%\nu) &=& \De_Q\,(\=Q||\%\nu\%\nu)\,(\=Q + \#A\#B\lfloor\lfloor\=Q{}^{-1T})||\%\nu\%\nu=0. \label{DQext}\end{aligned}$$ Here we denote $\De_P=\tr\=P{}^{(4)}$. In both cases, for full-rank dyadics $\=P,\=Q$, the dispersion equation can be decomposed into two quadratic equations. Such media are two examples of what have been called decomposable media [@deco]. For dyadics $\=P$ and $\=Q$ of rank less than 3, the two media again fall in the class of NDE media, with dispersion equations satisfied identically for any $\%\nu$ [@NDE]. For ranks equaling 3, some quadratic factors of $D(\%\nu)$ may decompose into products of linear functions of $\%\nu$.
Class of PQ Media
=================
In the present study we make a further generalization of both P and Q media by considering medium bidyadics of the composite form $$\=M = \=P{}^{(2)T} + \ve_N\lfloor\=Q{}^{(2)}. \label{PQ}$$ Media defined by bidyadics of this form will be called PQ media. It is easy to see that medium bidyadics of this form do not cover all possible linear media. In fact, the number of parameters must certainly be less than $2\x16=32$, which falls short of the 36 parameters of the most general medium.
To find a 3D representation of the PQ medium bidyadic the dyadics $\=P$ and $\=Q$ can be expanded in terms of spatial vectors $\#a_s,\#b_s,\#p_s$, a spatial one-form $\%\pi_s$ and spatial dyadics $\=Q_s,\=P_s$ as Q &=& Q\_s + \#e\_4\#a\_s + \#b\_s\#e\_4 + c\#e\_4\#e\_4,\
P &=& P\_s + \#e\_4%\_s + \#p\_s\_4 + p\#e\_4\_4. The spatial medium dyadics of the Q medium can be expressed as [@Difform] =&=& \_[123]{}ŁQ\_s\#a\_s, ł[AQ]{}\
=’ &=& \_[123]{}Ł(cQ\_s -\#b\_s\#a\_s),\
=\^[-1]{} &=& -\_[123]{}ŁQ\_s\^[(2)]{}, ł[MQ]{}\
=&=& \_[123]{}Ł(\#b\_sQ\_s), ł[BQ]{} while those of the P medium have the form [@P] =&=& P\_s\^[(2)T]{}, ł[AP]{}\
=’ &=& -%\_sP\_s\^T,\
=\^[-1]{}&=& -P\_s\^T\#p\_s, ł[MP]{}\
=&=& %\_s\#p\_s -pP\_s\^T. ł[PB]{}The medium dyadics of the PQ medium are obtained by summing up the corresponding expressions. To write the medium equations in the Gibbsian form requires that the dyadic $\=\M$ exist. Denoting the Gibbsian dyadics by the subscript $()_g$, they are obtained by inserting the previous expansions in the expressions [@Difform] =\_g &=& \#e\_[123]{}Ł(=’-=|=|=) ,\
=\_g &=& \#e\_[123]{}Ł=|=,\
=\_g &=& -\#e\_[123]{}Ł=|=,\
=\_g &=&\#e\_[123]{}Ł=. The dyadic denoted here by $\=\M$ stands for the inverse of the sum of the two $\=\M{}^{-1}$ dyadics of and . The full analytic form of the Gibbsian medium dyadics of the PQ medium would have quite an extensive form.
Plane Wave in PQ medium
=======================
The main task of this study is to find properties of a plane wave propagating in the general PQ medium. Expressing the field two-form of a plane wave in terms of a potential one-form as $\%\Phi=\%\nu\W\%\phi$, the potential satisfies %%&=& %M|(%%) =0\
&=& %(P\^[(2)T]{} + \_NŁQ\^[(2)]{})|(%%)\
&=& %(P\^T|%)(P\^T|%) + \_NŁ(Q\^[(2)]{}%%)|%. Operating this by $\#e_N\L$ yields the equation \#e\_NŁ(%%) &=& \#e\_NŁ(%(%|P)P\^T)|%+ (Q\^[(2)]{}%%)|%\
&=& (\#FŁP\^T +Q\^[(2)]{}%%)|%=0. ł[eq]{}Here we have introduced the bivector \#F = \#F(%)= \#e\_NŁ(%(%|P)), which is simple since it satisfies [@MDEM] \#F\#F=0, (\#FŁĪ\^T)\^[(2)]{}=\#F\#F, (\#FŁĪ\^T)\^[(3)]{}=0, and it can be expressed in the form $\#F=\#a\W\#b$ in terms of two vectors. The equation for the potential, D(%)|%=0, ł[Dnuphi]{}is defined by the dispersion dyadic D(%) = \#FŁP\^T +Q\^[(2)]{}%%. Because of the potential equation and $\=D(\%\nu)|\%\nu=0$, the rank of $\=D(\%\nu)$ must be less than three, provided $\%\phi$ and $\%\nu$ are linearly independent, i.e., when $\%\Phi=\%\nu\W\%\phi\not=0$, which is assumed here. As a consequence, $\%\nu$ is restricted by the dyadic dispersion equation [@MDEM] D\^[(3)]{}(%) &=& (\#FŁP\^T +Q\^[(2)]{}%%)\^[(3)]{}\
&=& (\#FŁĪ\^T)\^[(3)]{}|P\^[(3)T]{} + ((\#FŁĪ\^T)\^[(2)]{}|P\^[(2)T]{})(Q\^[(2)]{}%%)\
&&+ (\#FŁP\^T)(Q\^[(2)]{}%%)\^[(2)]{} + (Q\^[(2)]{}%%)\^[(3)]{}\
&=& (\#F\#F|P\^[(2)T]{})(Q\^[(2)]{}%%) + (\#FŁP\^T)(Q\^[(2)]{}%%)\^[(2)]{} + (Q\^[(2)]{}%%)\^[(3)]{}\
&=& 0. ł[Disp3]{}Applying the expansion rules [@MDEM] Q\^[(2)]{}%%&=& (Q||%%)Q - (Q|%)(%|Q)\
(Q\^[(2)]{}%%)\^[(2)]{} &=& (Q||%%)((Q||%%)Q\^[(2)]{} - Q(Q|%)(%|Q))\
&=& (Q||%%)(Q\^[(3)]{}%%)\
(Q\^[(2)]{}%%)\^[(3)]{} &=& (Q||%%)\^2((Q||%%)Q\^[(3)]{} - Q\^[(2)]{}(Q|%)(%|Q)\
&=& (Q||%%)\^2(Q\^[(4)]{}%%)= \_Q(Q||%%)\^2(\#e\_N\#e\_N%%), with $\De_Q=\ve_4\ve_4||\=Q{}^{(4)}$, the dispersion equation can be written as D\^[(3)]{}(%) = C\_1(%) + C\_2(%) + C\_3(%) =0, ł[D3nu]{}where we denote C\_1(%) &=& (\#F\#F|P\^[(2)T]{})(Q\^[(2)]{}%%)\
C\_2(%) &=& (Q||%%)(\#FŁP\^T)(Q\^[(3)]{}%%)\
C\_3(%) &=& \_Q(Q||%%)\^2(\#e\_N\#e\_N%%). Now one can show that the dyadic dispersion equation is equivalent to a scalar dispersion equation. For that we expand the three dyadics $\=C_i(\%\nu)$ as follows.
- For the dyadic $\=C_1(\%\nu)$ we apply the identity \#A\_i(\#B\_iŁ%) &=& (\#A\_i\#B\_i)Ł%- \#B\_i(\#A\_iŁ%)\
&=& \_N|(\#A\_i\#B\_i)(\#e\_NŁ%) - \#B\_i(\#A\_iŁ%), valid for any bivectors $\#A_i,\#B_i$ and one-form $\%\A$. Assuming that $\=A_i$ and $\%\A$ satisfy $\=A_i\L\%\A=0$, we can construct the dyadic rule \#A\_1\#A\_2(\#B\_1\#B\_2%%) = \_N\_N||(\#A\_1\#A\_2\#B\_1\#B\_2)(\#e\_N\#e\_N%%). Because of $\#F\L\%\nu=0$ and $(\#F|\=P{}^{(2)T})\L\%\nu = (\#F\L(\%\nu|\=P))\L\=P{}^T=0$, we can set $\#A_1=\#F$, $\#A_2=\#F|\=P{}^{(2)T}$ and $\%\A=\%\nu$. Since the rule is linear in the dyadic $\#B_1\#B_2$, we can set $\#B_1\#B_2=\=Q{}^{(2)}$ and apply the rule as C\_1(%) = (\#F\#F|P\^[(2)T]{})(Q\^[(2)]{}%%) = D\_1(%)(\#e\_N\#e\_N%%), with D\_1(%) &=& \_N\_N||((\#F\#F|P\^[(2)T]{})(Q\^[(2)]{})) =(\_N\_NQ\^[(2)]{})||(\#F\#F|P\^[(2)T]{})\
&=&\_Q Q\^[(-2)T]{}||(\#F\#F|P\^[(2)T]{}) = \_Q\#F\#F||(P\^T|Q\^[-1]{})\^[(2)]{} . ł[D1]{} In the last expression we have assumed that $\=Q$ is of full rank, $\De_Q\not=0$.
- To expand the dyadic $\=C_2(\%\nu)$ we proceed as C\_2(%) &=& (Q||%%)(Q\^[(3)]{}%%)(\#FŁP\^T)\
&=& \_Q(Q||%%)((\#e\_N\#e\_N(Q\^[-1T]{}%%))(\#FŁP\^T)\
&=& \_Q(Q||%%)\#e\_N\#e\_N((Q\^[-1T]{}%%)(\#FŁP\^T))\
&=& \_Q(Q||%%)(\#e\_N\#e\_N%%)(Q\^[-1T]{}||(\#FŁP\^T))\
&=& D\_2(%)(\#e\_N\#e\_N%%), with D\_2(%) = \_Q(Q||%%)(\#FŁP\^T|Q\^[-1]{}).
- Finally, we have C\_3(%) = D\_3(%)(\_N\_N%%), with D\_3(%) = \_Q (Q||%%)\^2. ł[D3]{}
Because each of the dyadics $\=C_i(\%\nu)$ is a scalar multiple of $\#e_N\#e_N\LL\%\nu\%\nu$, the dyadic dispersion equation reduces to the scalar dispersion equation D(%) &=& D\_1(%) + D\_2(%) + D\_3(%)\
&=& \_Q\#F\#F||(P\^T|Q\^[-1]{})\^[(2)]{} + \_Q(Q||%%)(\#FŁP\^T|Q\^[-1]{})\
&&+\_Q (Q||%%)\^2 =0. ł[Disp]{}Substituting $\#F=\#e_N\L(\%\nu\W(\%\nu|\=P))$, the quartic form of the dispersion equation can be obtained. This is the main result of the present paper.
For $\De_Q\ra0$ we must replace \_Q Q\^[(-2)]{} \_N\_NQ\^[(2)T]{}, \_QQ\^[-1]{} \_N\_NQ\^[(3)T]{}, ł[DeQ]{}in the expression of the dispersion equation.
Special Cases
=============
Let us evaluate the expression obtained above for a few special cases of the PQ medium for which we know the dispersion equation.
1. For the pure P medium case $\=Q=0$, after inserting the replacements valid for $\De_Q\ra0$ we obtain the identity $D(\%\nu)=0$ for all $\%\nu$. This proves that a pure P medium belongs to the class of NDE media [@NDE].
2. For the pure Q medium with $\=P=0$, the dispersion equation is reduced to the known quadratic dispersion equation of the Q medium.
3. The case $\=P = \A\=I$ corresponds to a Q-axion medium. Since we now have $\#F=\A\#e_N\L(\%\nu\W\%\nu)=0$, only the last term of the dispersion equation survives. This, again, yields the dispersion equation of the Q medium. In fact, it is well known that adding an axion term $\A\=I{}^{(2)T}$ to the medium bidyadic does not change the dispersion equation. Here one should note that the P-axion medium $\=M=\=P{}^{(2)T}+ \A\=I{}^{(2)T}$ with $\A\not=0$ is not a special case of the PQ medium.
4. Choosing $\=Q=\#a_1\#b_1+ \#a_2\#b_2$ we obtain M = P\^[(2)T]{} + \_NŁ(\#a\_1\#a\_2)(\#b\_1\#b\_2), which yields a special case of the extended P medium with $\#D\#C=(\#a_1\W\#a_2)(\#b_1\W\#b_2)$. Applying the replacement rule for $\De_Q\ra0$ in the dispersion equation yields a form that can be expanded as D(%) &=& \#F|(P\^[(2)T]{}|Q\^[(2)T]{})|\#F\
&=& \#F|P\^[(2)T]{}|(\_NŁ(\#b\_1\#b\_2))((\#a\_1\#a\_2)\_N)|\#F\
&=& (%(%|P))\#e\_N|P\^[(2)T]{}|\_NŁ(\#b\_1\#b\_2))((\#a\_1\#a\_2)|(%(%|P))\
&=& \_P(%(%|P)|P\^[(-2)]{}|(\#b\_1\#b\_2))((\#a\_1\#a\_2)|(%(%|P))\
&=& \_P((%|P\^[-1]{})%)|(\#b\_1\#b\_2))((\#a\_1\#a\_2)|(%(%|P)) =0. Assuming $\De_P=\tr\=P{}^{(4)}\not=0$, the equation is split into two quadratic equations as %|(P%)|(\#a\_1\#a\_2) &=& %|(P(\#a\_1\#a\_2))|%=0,\
%|(P\^[-1]{}%)|(\#b\_1\#b\_2) &=& %|(P\^[-1]{}(\#b\_1\#b\_2))|%=0 , ł[nuP-1]{}which coincide with the dispersion equations of the extended P medium for $\#D\#C=(\#a_1\W\#a_2)(\#b_1\W\#b_2)$. In the case $\De_P=0$ we must replace $\=P{}^{-1}$ by $\#e_N\ve_N\LL\=P{}^{(3)T}$ in the latter equation.
5. Choosing $\=P = \#a_1\%\A_1+ \#a_2\%\A_2$ we have $\=P{}^{(3)}=0$ and M= \_NŁQ\^[(2)T]{} +(%\_1%\_2)(\#a\_1\#a\_2), which corresponds to a special case of the extended Q medium with $\#A\#B=\#e_N\L(\%\A_1\W\%\A_2)(\#a_1\W\#a_2)$. Expanding \#F|P\^[(2)T]{} = \#e\_NŁ(%(P\^T|%))|P\^[(2)T]{}= \#e\_N|(%(P\^[(3)T]{}Ł%)) =0, the first term of the dispersion equation obviously vanishes. Expanding further \#FŁP\^T &=& \#e\_NŁ(%(P\^T|%)P\^T) =\#e\_NŁ(%(P\^[(2)T]{}Ł%))\
&=& -%(\#e\_NŁP\^[(2)T]{})Ł%= (\#e\_NŁP\^[(2)T]{})%%, we have (\#FŁP\^T|Q\^[-1]{}) = (\#e\_NŁP\^[(2)T]{})%%)||Q\^[-1T]{} = ((\#e\_NŁP\^[(2)T]{})Q\^[-1T]{})||%%, whence the dispersion equation can be written as D(%) = \_Q(Q||%%)(Q + (\#e\_NŁP\^[(2)T]{})Q\^[-1T]{})||%%=0, which coincides with the dispersion equation of the extended Q medium.
6. Finally, let us assume that $\=Q$ is an antisymmetric dyadic, which can be expressed in terms of some bivector $\#A$ in the form Q = \#AŁĪ\^T. This implies $\=Q||\%\nu\%\nu=0$, whence the dispersion equation reduces to D(%) = \_Q\#F|P\^[(2)T]{}|Q\^[(-2)]{}|\#F=0. ł[Disp1]{} Applying the expansion [@MDEM] (\#AŁĪ\^T)\^[(2)]{} = \#A\#A +\#e\_NŁĪ\^[(2)T]{}, =-\_N|(\#A\#A), the PQ medium bidyadic has the form of an extended P-axion medium bidyadic [@deco] M = P\^[(2)T]{} + \_NŁ\#A\#A + Ī\^[(2)T]{}. Applying the rule [@MDEM] Q\^[(-2)]{} = \_N\_NQ\^[(2)T]{} = (\_N\_N\#A\#A + \_NŁĪ\^[(2)]{}), one can show that its last term has no effect on the dispersion equation. In fact, inserting \#F|P\^[(2)T]{} &=& \#e\_N|(%(P\^T|%)P\^[(2)T]{}) =%|(\#e\_NŁP\^[(3)T]{})Ł%\
&=& \_P%|(P\^[-1]{}\#e\_N)Ł%, in we have \#F|P\^[(2)T]{}|(\_NŁĪ\^[(2)]{})|\#F &=& \_P(%|(P\^[-1]{}\#e\_N)Ł%)|(\_NŁ(\#e\_NŁ(%(P\^T|%)))\
&=& \_P(%|(P\^[-1]{}\#e\_N)|(%%(P\^T|%))=0. Thus, is reduced to D(%) &=& \_Q\#F|P\^[(2)T]{}|Q\^[(-2)]{}|\#F\
&=&\_P%|(P\^[-1]{}\#e\_N)Ł%|(\_N\_N\#A\#A)|(\#e\_NŁ(%(P\^T|%))\
&=&\_P%|(P\^[-1]{}\#e\_N)Ł%|(\_NŁ\#A)(\#A|(%(P\^T|%)))\
&=&\_P((%|P\^[-1]{}%)|\#A)(\#A|(%(P\^T|%)))\
&=&\_P((P\^[-1]{}\#A)||%%)((P\#A)||%%)=0. Since the axion component does not affect the dispersion equation, the result coincides with that of the extended P medium for the special case $\#C=\#D=\#A$.
Example
=======
As a numerical example of a PQ medium let us consider a special one by restricting the dyadic $\=P$ as P = P(Ī + P\_o), P\_o=\#a%, \#a|%=0, and assuming that $\=Q$ is a symmetric dyadic, Q=S, S\^T=S. In the corresponding medium bidyadic M = P\^2Ī\^[(2)T]{} + P\^2%\#aĪ\^T + \_NŁS\^[(2)]{} ł[MS2]{}the three terms can actually be recognized as components of the Hehl-Obukhov decomposition, respectively called the axion, skewon and principal components [@Hehl; @MDEM]. In fact, the axion part is a multiple of the unit bidyadic $\=I{}^{(2)T}$ while the skewon and principal parts are trace free. Also, any skewon bidyadic is known to be of the form $(\=B_o\WW\=I)^T$ with a trace-free dyadic $\=B_o$ while any bidyadic $\=C\in\SF_2\SE_2$ satisfying $\=C\LL\=I=0$ is a principal bidyadic. Actually, one can show that $(\ve_N\L\=S{}^{(2)})\LL\=I$ vanishes for any symmetric dyadic $\=S$ [@MDEM].
To find the dispersion equation for the present PQ medium from the general result above, let us first expand \#F|P\^[(2)T]{} &=& \#e\_N|(%(P\^[(3)T]{}Ł%))\
&=& P\^3\#e\_N|(%(Ī\^[(3)T]{}+ %\#aĪ\^[(2)T]{}))Ł%\
&=& P\^3\#e\_NŁ(%%Ī\^[(2)T]{}) + P\^3 \#e\_N|(%(%(\#a|%)Ī\^[(2)T]{}\
&& + %(%\#a)Ī\^T)\
&=& P\^3(\#a|%)\#e\_NŁ(%%), ł[FP2T]{}\
\#FŁP\^T&=& \#e\_NŁ(%(P\^[(2)T]{}))Ł%)\
&=& P\^2\#e\_NŁ(%(Ī\^[(2)T]{}+%\#aĪ\^T)Ł%)\
&=& P\^2\#e\_NŁ(%%Ī\^T+%(%(\#a|%)Ī\^T+ %%\#a))\
&=& P\^2(\#a|%)(\#e\_NŁ(%%))ŁĪ\^T. Since the last expression is an antisymmetric dyadic and $\De_S\=S{}^{-1}= \ve_N\ve_N\LL\=S{}^{(3)}$ is a symmetric dyadic or zero, we have \_Q(\#FŁP\^T|Q\^[-1]{}) = \_S(\#FŁP\^T)||S\^[-1]{}=0. Thus, the dispersion equation becomes D(%) = \_S\#F|(P\^T|S\^[-1]{})\^[(2)]{}|\#F +\_S (S||%%)\^2. Finally, applying the expansion of $\#F|\=P{}^{(2)T}$ above, we can expand \_S\#F|(P\^T|S\^[-1]{})\^[(2)]{}|\#F&=& P\^4(\#a|%)(\#e\_NŁ(%%))|S\^[(2)]{}|(\#e\_NŁ(%(Ī\^T+%\#a)|%)))\
&=& P\^4(\#a|%)\^2(%%)|S\^[(2)]{}|(%%))\
&=& P\^4(\#a|%)\^2(%%S\^[(2)]{})||%%, whence the dispersion equation for the special PQ medium has the form D(%) &=& P\^4(\#a|%)\^2(%%S\^[(2)]{})||%%+\_S (S||%%)\^2\
&=& P\^4(\#a|%)\^2(%%||S)(S||%%) +P\^4(\#a|%)\^2(%|S|%)\^2\
&&+\_S (S||%%)\^2=0. ł[DispSpec]{}Unlike for all the special cases considered above, the quartic dispersion equation corresponding to the medium defined by the present bidyadic does not necessarily decompose into two quadratic equations. In the special case when the dyadic $\=S$ satisfies $\=S|\%\A=\la\#a$, whence we have $\=S||\%\A\%\A=0$, the dispersion equation is decomposable.
Setting $\=S=0$ in the dispersion equation we obtain $D(\%\nu)=0$ for any $\%\nu$, a property shared by all skewon-axion media. For $\#a\%\A=0$ we are left with $\De_S(\=S||\%\nu\%\nu)^2=0$, valid for the Q-axion medium.
To be able to work on this example numerically, let us choose S = SG\_s - s\#e\_4\#e\_4, where $\=G_s$ denotes the metric dyadic G\_s = \#e\_1\#e\_1 + \#e\_2\#e\_2+ \#e\_3\#e\_3. Thus, we have \_S = S\^[(4)]{} = -S\^3s. Let us further choose %=\_2, \#a=\#e\_3. Inserting these, the dispersion equation becomes D(%) &=& P\^4\_3\^2S(S(\_1\^2+\_2\^2+\_3\^2)-sk\^2) +P\^4\_3\^2S\^2\_2\^2\
&&-S\^3s (S(\_1\^2+\_2\^2+\_3\^2)-sk\^2)\^2=0. ł[DispSpec1]{}For a given value of $k$, this corresponds to a quartic dispersion surface in the three-dimensional space spanned by $\nu_i$.
Let us assume that $S,s$ and $P$ are real and positive quantities and simplify the expressions by denoting $x_i=\sqrt{S}\,\nu_i$, $K = \sqrt{s}\,k$ and $\t=P^4/(S^3s)$, with $0\leq\t\leq1$. The dispersion equation then takes the form (x\_1\^2+x\_2\^2+(1-\t)x\_3\^2-K\^2)(x\_1\^2+x\_2\^2+x\_3\^2-K\^2) -\t x\_3\^2x\_2\^2 =0. ł[DispSpec2]{}
An idea of the dispersion surface can be obtained by considering the three main sections.
- Assuming $x_3=0$, the dispersion equation leads to x\_1\^2+x\_2\^2-K\^2=0, which corresponds to a circle of radius $K$, i.e., of unit radius in the normalized variables $x_i/K$.
- Assuming $x_2=0$, the dispersion equation yields (x\_1\^2+(1-\t)x\_3\^2-K\^2)(x\_1\^2+x\_3\^2-K\^2)=0. This splits into two separate curves, one of which is a circle of unit radius, x\_1\^2+x\_3\^2-K\^2=0, and the other a quadratic curve, x\_1\^2+(1-\t)x\_3\^2-K\^2=0. For $\t<1$ the latter defines an ellipse with axial ratio $\sqrt{1-\t}$. For $\t>1$ the curve is a hyperbola.
- Finally, assuming $x_1=0$, the dispersion equation yields (x\_2\^2+(1-\t)x\_3\^2-K\^2)(x\_2\^2+x\_3\^2-K\^2) -\t x\_3\^2x\_2\^2 =0. ł[Dx1]{}This corresponds to a curve of the fourth order, the form of which depends on the parameter $\t$. For $\t<1$ the curve is closed and for $\t\geq1$ it is open. For $\t\ra0$ the curve approaches a circle of unit radius, in which case the PQ medium approaches a Q medium.
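The behaviour of these three sections follows directly from the normalized dispersion equation and is easy to check numerically. The following Python sketch is only an illustration added here for convenience, assuming the example value $\t=0.7$ used in Fig. \[fig:EM26a08\] below and $K$ normalized to unity: it evaluates the quartic function on the three coordinate planes and confirms that the $x_3=0$ and $x_2=0$ sections vanish on the circle and on the ellipse given above, while the circle is no longer a solution when $x_1=0$.

```python
import numpy as np

def D(x1, x2, x3, K=1.0, t=0.7):
    """Normalized quartic dispersion function of this special PQ medium."""
    q1 = x1**2 + x2**2 + (1.0 - t) * x3**2 - K**2
    q2 = x1**2 + x2**2 + x3**2 - K**2
    return q1 * q2 - t * x3**2 * x2**2

t = 0.7
phi = np.linspace(0.0, 2.0 * np.pi, 400)

# x3 = 0: the equation reduces to (x1^2 + x2^2 - K^2)^2, a single circle.
print(np.max(np.abs(D(np.cos(phi), np.sin(phi), 0.0))))

# x2 = 0: the equation factors into a circle and an ellipse.
print(np.max(np.abs(D(np.cos(phi), 0.0, np.sin(phi)))))                     # circle
print(np.max(np.abs(D(np.cos(phi), 0.0, np.sin(phi) / np.sqrt(1.0 - t)))))  # ellipse

# x1 = 0: the term -t*x3^2*x2^2 spoils the factorization, so the circle
# x2^2 + x3^2 = K^2 is no longer a solution of the quartic equation.
print(np.max(np.abs(D(0.0, np.cos(phi), np.sin(phi)))))
```

The first three printed residuals vanish up to rounding, while the last one is of order $\t/4$, in agreement with the discussion above.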
The cross sections $x_1=0$ and $x_2=0$ are depicted for the parameter value $\t=0.7$ in Fig. \[fig:EM26a08\] in terms of normalized axis parameters ${\rm nu}_i=x_i/K$. It is seen that, for this particular PQ medium, there is no birefringence for waves whose wave one-form satisfies $\#e_3|\%\nu=0$.
![Two cross sections of the quartic dispersion surface corresponding to $\nu_1=\#e_1|\%\nu=0$ and $\nu_2=\#e_2|\%\nu=0$. For $\nu_2=0$ the cross-section reduces to a circle and an ellipse. For $\nu_3=0$ the dispersion surface reduces to a single circle of unit radius. Here the parameter value $\t=P^4/S^3s=0.7$ has been assumed.[]{data-label="fig:EM26a08"}](EM26a07.pdf){width="\textwidth"}
Conclusion
==========
A novel class of electromagnetic media, called that of PQ media, was introduced as a generalization of the previously studied classes of P media and Q media. Plane-wave propagation in the general PQ medium was studied and the quartic dispersion equation was derived in analytic form. The equation was verified for six special cases of PQ media for which the analytic form was known from previous studies. In all of these special cases the quartic equation either reduces to two quadratic equations or becomes an identity. As an example of a medium yielding a more general quartic dispersion equation, another special case of the PQ medium was considered.
Acknowledgment
==============
Discussion with Dr. Alberto Favaro on the topic of this paper is acknowledged.
[99]{}
Kong, J. A. [*Electromagnetic Wave Theory*]{}, Cambridge MA: EMW Publishing, 2005.
Lindell, I. V., [*Methods for Electromagnetic Field Analysis*]{}, 2nd ed., Oxford: University Press, 1995.
Hehl, F. W. and Yu. N. Obukhov, [*Foundations of Classical Electrodynamics*]{}, Boston: Birkhäuser, 2003.
Deschamps, G. A., “Electromagnetics and differential forms,” [*Proc. IEEE*]{}, Vol. 69, No. 6, pp. 676–696, 1981.
Lindell, I. V., [*Differential Forms in Electromagnetics*]{}, New York: Wiley, 2004.
Lindell, I. V., [*Multiforms, Dyadics, and Electromagnetic Media*]{}, Hoboken, N.J.: Wiley, 2015.
Lindell, I. V., L. Bergamin and A. Favaro, “The class of electromagnetic P-media and its generalization,” [*Prog. Electro. Res*]{} B, vol.28, pp.143–162, 2011.
Lindell, I. V. and H. Wallén, “Differential-form electromagnetics and bi-anisotropic Q-media,” [*J. Electromagn. Waves Appl.*]{}, vol. 18, no.7, 957–968, 2004.
Lindell, I. V. “Electromagnetic wave equation in differential-form representation,” [*Prog. Electro. Res*]{}, vol.54, pp.321–333, 2005.
Lindell, I. V. and A. Favaro, “Electromagnetic media with no dispersion equation,” [*Prog. Electro. Res B*]{}, vol.51, pp.269–289, 2013.
Lindell, I. V. and H. Wallén, “Generalized Q-media and field decomposition in differential-form approach,” [*J. Electromagn. Waves Appl.*]{}, vol. 18, no.8, 1045–1056, 2004.
Lindell, I. V., L. Bergamin and A. Favaro, “Decomposable medium condition in four-dimensional representation,” [*IEEE Trans. Antennas Propag.*]{}, vol.60, no.1, pp.367–376, Jan. 2011.
Lindell, I. V., and A. Favaro, “Decomposition of Electromagnetic Q and P media,” [*Prog. Electro. Res. B*]{}, vol.63, pp.79–93, 2015.
|
Introduction
============
In a granular medium at rest the grains can be disposed in an enormous number of different configurations. A weak external disturbance, but powerful enough to overcome locally the friction force between two grains, allows the granular system to rearrange and to switch between these ”blocked” configurations. The macroscopic behavior of a weakly disturbed granular medium is, therefore, essentially controlled by statistical properties of such transitions. It is of great interest to study this kind of problem, since it may be a prototype of slow dynamics behaviors observed in other physical systems[@Barrat2000]. The slow dynamics of weakly disturbed granular media has been evidenced by the classical compaction experiments of Knight [*et al.*]{}[@07; @Knight]. Another approach is the experiment of Albert [*et al*]{}.[@03; @Albert], in which a large solid object is pulled slowly through a granular medium. The motion of the object appears to be resisted by chains of jammed particles[@01; @Cates][@02; @Liu], which support compressive stress. Beyond an elastic regime at very small pulling force, the macroscopic motion of the object is a succession of stick-slip events, where compressive stress is continuously built up in particle chains, and abruptly released. At these successive unjamming events the system switches between blocked configurations.
In this paper we implement a novel experimental method to study the problem: we exploit a forced torsion oscillator[@04; @DAnna; @oscillator] immersed in the granular medium, and [*in the presence*]{} of external weak vibration, as shown in Fig. 1. We use a [*dynamic method*]{}, which provides more information than the simple increase of the forcing toward the unjamming threshold. In fact, the amplitude of the angular displacement of the oscillator increases sharply when the forcing torque amplitude approaches the unjamming threshold, and at the same time the angular displacement lags behind the sinusoidal torque. This ”phase lag” is determined by the energy dissipation which occurs in the granular medium when grains start to slip one against the other. The dynamic method gives access to both elastic and dissipation parameters of the granular material during the slow dynamics.
Experimental
============
In the experiment, we hold a granular medium at a given high-frequency vibration intensity, quantified by the normalized acceleration $\Gamma
=a_{s}\omega _{s}^{2}/g$, with $a_{s}$ and $f_{s}=\omega _{s}/2\pi $ the amplitude and frequency of the vertical sinusoidal vibration, $g$ the acceleration of gravity. At the same time, we measure the complex frequency response, $G$, of the granular medium (or the susceptibility $\chi=G^{-1}$) by a low-frequency forced torsion oscillator[@04; @DAnna; @oscillator], at the forcing frequency $f_{p}=\omega _{p}/2\pi $, with $f_{p}\ll f_{s}$. In the oscillator method (see Fig. 1), the rotating probe of the oscillator is immersed at a depth $L$ into a large metallic bucket (height 96 mm, diameter 94 mm) filled with glass beads of diameter $d=1.1\pm 0.05$ mm with smoothly polished surfaces. The probe is covered by a layer of beads, glued on by an epoxy, and its effective radius is $R_{e}$. All data presented here are obtained with $L=20$ mm and $R_{e}\approx 2$ mm. We perform dynamic experiments: the oscillator is forced into torsion oscillation by a torque $%
T\left( t\right) =T_{o}\exp (i\omega _{p}t)$ of frequency in the range $%
10^{-4}$ Hz to 5 Hz, and the angular displacement, $\theta (t)$, is optically detected. An analyzer measures the complex frequency response of the oscillator, given by $G=T/\theta$. Typically we record the argument, $\arg (G_{1})$, and the absolute modulus, $%
|G_{1}|$, of the first harmonic, as a function of either $T_{0}$, $\Gamma $, or $f_{p}$. We report the quantity $\tan [\arg (G_{1})]$, which for a linear system coincides with the loss factor. The oscillator, when not immersed, can be assumed elastic, with $T=G_{p}\theta $, where $G_{p}=18\times 10^{-3}$ N m/rad is the torsion constant of the suspension wires. Notice that in this work the maximum displacement of a point at the surface of the $2$ mm probe is of the order of $0.1$ mm, i.e., much smaller than the glass bead diameter.
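For readers who wish to reproduce this kind of analysis, the complex first-harmonic response can be obtained from a sampled displacement record by projecting it onto the forcing frequency, in the spirit of a digital lock-in. The following Python sketch illustrates only this processing step; the function name is illustrative, the signal is synthetic, and the amplitude, phase lag and noise level are placeholder values rather than measured data.

```python
import numpy as np

def first_harmonic(signal, t, f_p):
    """Complex amplitude of the first harmonic of signal(t) at the forcing frequency f_p."""
    w = 2.0 * np.pi * f_p
    # Projection onto exp(i*w*t), averaged over an integer number of forcing periods.
    return 2.0 * np.trapz(signal * np.exp(-1j * w * t), t) / (t[-1] - t[0])

# Synthetic example -- all numbers below are placeholders, not measured values.
f_p, T0 = 1.0, 3.2e-5                      # forcing frequency [Hz] and torque amplitude [N m]
t = np.linspace(0.0, 10.0 / f_p, 20001)    # ten forcing periods
theta = 1.5e-3 * np.cos(2.0 * np.pi * f_p * t - 0.4) + 1.0e-5 * np.random.randn(t.size)

theta1 = first_harmonic(theta, t, f_p)     # complex displacement amplitude
G1 = T0 / theta1                           # complex response G_1 = T/theta
print("|G1| =", abs(G1), "  tan[arg(G1)] =", np.tan(np.angle(G1)))
```

For a sinusoidal torque $T(t)=T_{0}\exp (i\omega _{p}t)$, the ratio of the torque amplitude to the complex displacement amplitude gives $G_{1}$, whose argument is the phase lag discussed above.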
An accelerometer provides a precise measurement of $\Gamma $, which can be varied from $2\times 10^{-3}$ to above 1. The minimum value of $\Gamma $ is limited by the accelerometer sensitivity. We vary $\Gamma $ by changing $%
a_{s}$ at fixed $f_{s}$, while $f_{s}$ is selected in the range 50 Hz to 200 Hz. The whole system is placed on an anti-seismic table. Moisture-induced ageing effects[@07; @Bocquet][@04; @DAnna; @oscillator], and interstitial gas effects[@Pak], are not observed for the large bead size used here, and measurements are performed at uncontrolled ambient air. In order to control compaction effects[@07; @Knight], all measurements are taken in the same conditions, e.g., starting from a granular material shaken at high $\Gamma $ and low $f_{s}$ for several minutes. Compaction effects are apparently negligible in the time scale of the experiments for $f_{p}>0.01$ Hz, but may be present in the data at very-low frequency.
Results
=======
A typical experimental result is reported in Fig. 2, which shows $\tan [\arg
(G_{1})]$ and $|G_{1}|$, measured as a function of the amplitude of the applied torque $T_{0}$, for different $\Gamma $. With the vibrator [*off*]{}, that is for $\Gamma <2\times 10^{-3}$, the response is similar to the one reported previously[@04; @DAnna; @oscillator], with a typical loss factor peak at a torque denoted $T_{0}^{*}$ and a modulus step between two levels denoted $G_{jam}$ and $G_{p}$. The dependence of the loss peak on the geometrical parameters of the experiment is summarized[@04; @DAnna; @oscillator] by the empirical relation $T_{0}^{*}\propto \mu _{s}L^{2}R_{e}^{2}$, with $\mu _{s}$ the coefficient of static friction between the glass beads. In this ”zero temperature-like” conditions, the loss factor peak can be easily explained: at very low applied torque, $T_{0}\ll T_{0}^{*}$, the oscillator probe is jammed into the granular material, and only elastic deformations arise, resulting in a purely elastic dynamic response, with a negligible loss factor and a constant absolute modulus $G_{jam}$. By increasing the torque amplitude, the oscillator probe unjams as the local force between a pair of glass beads somewhere in the medium becomes large enough for the two beads to slip one against the other, dissipating energy by solid friction. The maximum ratio of dissipated over furnished energy, that is a maximum of the loss factor, arises at $T_{0}^{*}$, which can be seen as the average torque at which the oscillator probe unjams. At high torque amplitude, $T_{0}\gg T_{0}^{*}$, the oscillator slides almost freely into the granular medium, and the absolute modulus tends to the torsion constant of the suspension wires, $G_{p}$.
With the vibrator [*on*]{}, one expects that the external vibration facilitates the unjamming of the oscillator. By increasing $\Gamma $, the modulus $|G_{1}|$ decreases monotonically, while $\tan [\arg (G_{1})]$ first increases and then decreases, going through a maximum, as clearly visible in Fig. 2. The fact that the external vibration drives the system through a maximum in the loss factor is evidence that the vibration-induced fluctuations can unjam the oscillator probe. Moreover, below $T_{0}^{*}$ the response is essentially independent of $T_{0}$ , i.e., there is a linear regime. The linearity is confirmed also by a negligible high harmonics signal (not shown) for all $\Gamma $.
The general behavior in the linear regime is better rendered by Fig. 3 which shows the previous maximum in the loss factor as characteristic “jamming” peaks observed as a function of $\Gamma $ for various forcing frequencies $%
f_{p}$. Fig. 3 shows that, for a low torque amplitude selected in the linear regime, i.e., $T_{0}\ll T_{0}^{*}$, and at low $\Gamma$, the applied torque alone is unable to unjam the oscillator probe and the response is elastic. However, by increasing $\Gamma $, unjamming is induced by the external vibration, and the response displays a loss factor peak. The data shown in Fig. 3 are collected by decreasing $\Gamma $, but no difference is observed in subsequent runs if $\Gamma $ is successively increased, decreased and so on, as shown in Fig. 4 for one of the curves of Fig. 3. We say that the response is ”reversible”, although at a mesoscopic level energy is continuously dissipated. The loss factor peak can be seen as the crossover between two different behaviours in the dynamics of the vibrated granular system: at the time-scale set by the forced oscillator, i.e., $ 1/f_{p}$, the granular system appears solid-like at low-$\Gamma $, while it appears fluid-like at high-$\Gamma $. It is a kind of glass, or jamming transition[@NicodemiJamT], where the oscillator gets stuck in the glassy granular medium.
Of course, since the ”jamming” peak in Fig. 3 shifts with $f_{p},$ the same peak can be observed as a function of the forcing frequency $f_{p}
$, as shown in Fig. 5, for various $\Gamma $. At high forcing frequency (but still $f_{p}\ll f_{s}),$ the applied torque alone is unable to unjam the oscillator probe and the response is elastic, with negligible loss factor and modulus $G_{jam}.$ However, by decreasing $f_{p},$ i.e., by increasing the time scale of the probing oscillator, the response evolves toward the one of the unjammed oscillator, with modulus $G_{p}.$
From the shift of the previous “jamming” peaks with $f_{p}$ or $\Gamma $, an Arrhenius-like semilogarithmic plot can be obtained, as shown in Fig. 6. The data points, for a given $f_{s}$, obey an exponential behavior over four decades in frequency. This is strong evidence for the underlying unjamming process to be a statistical, activated-like hopping process and that some of the usual statistical concepts of thermal systems can be extended to a vibrated granular material. A first, obvious approach consists of formally writing the rate of the hopping process as $R=\nu _{0}\exp (-\Gamma
_{j}/\Gamma )$, with $\Gamma _{j}$ a characteristic normalized acceleration at which unjamming occurs, $\nu _{0}$ an attempt frequency, and $\Gamma $ the vibration intensity, playing the role of a temperature-like parameter. In the linear regime, a peak in the loss factor is expected to arise when the forcing frequency matches the hopping rate, i.e., when $\omega _{p}=R$, and $%
\Gamma _{j}$ appears as the ”slope” of a straight line in a plot of $\ln R$ against $1/\Gamma $; however, then $\Gamma _{j}$ depends on $f_{s}$ (see Fig. 6), which means that $\Gamma _{j}$ is not an intrinsic property of the granular material.
To overcome this difficulty, we search a ”scaling” of $\Gamma $ and $f_{s}$ which eliminates the $f_{s}$ dependence. We find that as a function of the inverse of $\Gamma ^{n}/\omega _{s}$, with $n=0.5$, the data for different $%
f_{s}$ have almost the same ”slope”, as shown in Fig. 7. Hence, we write the hopping probability per unit time as $R=\nu _{0}\exp (-\tau
_{j}/\tau )$, with $\tau _{j}$ a characteristic time of the unjamming process, and $\tau =\Gamma ^{1/2}/\omega _{s}$ a parameter which has the unit of time[@notaGamma]. (Fitting values are given in Fig. 7.) Alternatively (Fig. 8), we also find that a ”scaling” of the form $\tau =\Gamma ^{n}/\omega _{s}$, with $n=0.57$, almost collapses the data of Fig. 6 onto the same straight line. This empirical definition is appealing since both parameters $\tau $ and $\tau _{j}$ we introduce are independent of the vibration frequency $f_{s}$. The unjamming time $\tau
_{j}$ experimentally is an [*intrinsic*]{} parameter of the granular system, which for a given $L$ and $R_{e}$ is likely to depend on ”mesoscopic” parameters such as the grain size and shape, and on ”microscopic” parameters controlling the nature of the contact forces between grains. For $n=0.5$ the externally controlled parameter $\tau $ appears as a typical time of the vibrated granular system in the gravitational field: $\Gamma ^{1/2}/\omega
_{s}=(a_{s}/g)^{1/2}$ is the time of flight of a body, initially at rest, falling over a distance $a_{s}$. (Up to a numerical factor, it is also the period of a simple pendulum of length $a_{s}$ swinging freely.) Notice that $\tau $ is not a kinetic energy-type temperature, as defined for vigorously vibrated gas-like granular phases; indeed, for weakly excited granular systems, configurational statistics on slow degrees of freedom can be decoupled from fast kinetic aspects[@Barrat2000][@Edwards][@Clement][@Cugliandolo2000][@Nicodemi][@Mehta2000].
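In practice, extracting $\nu _{0}$ and $\tau _{j}$ from the positions of the ”jamming” peaks amounts to a linear fit of $\ln (2\pi f_{p})$ against $1/\tau $, with slope $-\tau _{j}$ and intercept $\ln \nu _{0}$. The short Python sketch below makes this step explicit; the function is illustrative and the peak-position arrays are placeholders, not the measured values reported in Figs. 6–8.

```python
import numpy as np

def fit_unjamming(f_p, Gamma, f_s, n=0.5):
    """Fit 2*pi*f_p = nu_0 * exp(-tau_j / tau) with tau = Gamma**n / omega_s."""
    omega_s = 2.0 * np.pi * f_s
    tau = Gamma**n / omega_s
    # Linear regression of ln(2*pi*f_p) against 1/tau: slope = -tau_j, intercept = ln(nu_0).
    slope, intercept = np.polyfit(1.0 / tau, np.log(2.0 * np.pi * f_p), 1)
    return np.exp(intercept), -slope       # (attempt rate nu_0, unjamming time tau_j)

# Hypothetical peak positions (placeholders, not the data of Figs. 6-8), at f_s = 200 Hz.
f_p   = np.array([0.03, 0.1, 0.3, 1.0, 3.0])      # forcing frequencies of the peaks [Hz]
Gamma = np.array([0.05, 0.07, 0.10, 0.15, 0.25])  # corresponding peak positions Gamma*
nu_0, tau_j = fit_unjamming(f_p, Gamma, f_s=200.0)
print("nu_0 =", nu_0, " tau_j =", tau_j)
```

The same routine applied with the simple Arrhenius form (replacing $\tau $ by $\Gamma $) recovers the $f_{s}$-dependent values of $\Gamma _{j}$ discussed above.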
Discussion
==========
That the rate of our ”activated-like” process is better given by a probability factor involving the ratio of characteristic times, $\exp
(-\tau _{j}/\tau )$, and not, e.g., by the ratio of vibration intensities, $%
\exp (-\Gamma _{j}/\Gamma )$, is not surprising since the fundamental phenomenon is the rate of energy dissipation. What is the mesoscopic nature of the hopping processes at the length scale of the glass beads? Since we observe an elastic regime at low $\Gamma $, we conclude that the oscillator probe is completely jammed and no dissipative events (i.e., no slipping events) arise. We can suppose the system to oscillate elastically around one unique blocked configuration. In this picture, below the loss factor peak, the external high-frequency vibration propagates in the system as elastic fluctuations only.
A single slipping event possibly triggers a large-scale non-elastic rearrangement of the beads, that is an internal micro-avalanche involving two or more beads. (We emphasize that a massive object moving into a granular medium can introduce local fluidization. The inertia acquired by the object during a slip may be large enough to overcome the resisting force of the granular medium, and the object moves further by successive failure, or “inertial” fluidisation, of the resisting grain arrangement. This effect can be reduced by immersing the object deep enough, so that the resisting force is larger than the inertial force.) Afterwards, the oscillator probe gets jammed by a new force-chain network in a slightly different position. The macroscopic slow rotation of the oscillator is a sequence of stick-slips, where large fluctuations causing a slip are rare events if compared to the numerous elastic fluctuations. Hence, the dynamics is controlled by the extreme fluctuations in the force-chain network, even if elastic, or under-critical, fluctuations occur in much larger number. Since a slipping event involves inelastic microscopic processes at the interface between grains, such as plastic and viscoplastic deformation, fatigue, surface fracture, blow out of capillary bridges, and other forms of localized dissipative processes[@Bowden], a slipping event requires a [*minimum finite time*]{} to occur. Let this time be $\tau _{j}$. The ”thermal” time $\tau $ can be seen as the [*average*]{} time-window during which the grains have some freedom to rearrange their position and, possibly, reach a critical slipping configuration and unjam. As a consequence, unjamming is determined by the occurrence probability of a window time $\tau _{a}$ greater than, or equal to $%
\tau _{j}$. Even though we do not know the probability distribution of $\tau
_{a},$ according to extreme order statistics theory[@Galambos][@9; @Vinokur][@11; @Sollich][@12; @Bouchaud], we can speculate the occurrence probability of unjamming events to be $\exp (-\tau _{j}/\tau )$. This gives the rate of the extreme fluctuations, namely $R=\nu _{0}\exp (-\tau
_{j}/\tau )$.
In summary, we observe a peak in the loss factor as a function of the empirical control parameters $\tau $ or $\Gamma .$ The peak can be viewed as a crossover, at the time-scale $ 1/f_{p}$ set by the forced oscillator, in the dynamics of the vibrated granular medium: such a crossover separates a ”low-temperature” (short $\tau ,$ or small $\Gamma $) solid-like behaviour where the oscillator probe is jammed in the granular medium, from a ”high-temperature” (long $\tau ,$ or large $\Gamma $) fluid-like dynamic behaviour. This crossover follows an Arrhenius-like form in the $\ln f$ vs. $1/\Gamma $ plane, reminiscent of the mechanical response of usual glass-forming materials.
A. Barrat, J. Kurchan, V. Loreto, and M. Sellitto, Phys. Rev. Lett. [**85**]{}, 5034 (2000).
J. B. Knight, C. G. Fandrich, C. N. Lau, H. M. Jaeger, and S. R. Nagel, Phys. Rev. E [**51**]{}, 3957 (1995).
I. Albert, [*et al.*]{}, Phys. Rev. Lett. [**84**]{}, 5122 (2000).
M. E. Cates, J. P. Wittmer, J.-P. Bouchaud, and P. Claudin, Phys. Rev. Lett. [**81**]{}, 1841 (1998).
A. J. Liu, and S. R. Nagel, Nature [**396**]{}, 21 (1998).
G. D’Anna, Phys. Rev. E [**62**]{}, 982 (2000).
L. Bocquet, E. Charlaix, S. Ciliberto, and J. Crassous, Nature [**396**]{}, 735 (1998).
H. K. Pak, E. Van Doorn, and R. P. Behringer, Phys. Rev. Lett. [**74**]{}, 4643 (1995).
When the vibration frequency is not a control parameter, one can write the “activated” behavior in terms of probability factors involving $\Gamma $ rather than $\Gamma ^{n}/\omega _{s}.$ As shown in Fig. 6, for a fixed vibration frequency $f_{s},$ formally writing the rate of the activated-like hopping process as $R=\nu _{0}\exp (-\Gamma _{j}/\Gamma )$ suffices.
S. F. Edwards, in: [*Granular Matter*]{}, (ed. A. Mehta) 121-140 (Springer-Verlag, New York 1994).
E. Clement, and J. Rajchenbach, Europhys. Lett. [**16**]{}, 133 (1991).
L. Berthier, L. F. Cugliandolo, and J. L. Iguain, Phys. Rev. E [**63**]{}, 51302 (2001).
M. Nicodemi, and A. Coniglio, Phys. Rev. Lett. [**82**]{}, 916 (1999).
A. Mehta, and G. C. Barker, J. Phys.: Condens. Matter [**12**]{}, 6619 (2000).
F. P. Bowden, and D. Tabor, [*The friction and lubrication of solids*]{}, 4th ed. (Clarendon Press, Oxford, 1986).
J. Galambos, [*The asymptotic theory of extreme order statistics*]{} (Wiley, New York, 1978).
V. M. Vinokur, M. C. Marchetti, and L.-W. Chen, Phys. Rev. Lett. [**77**]{}, 1845 (1996).
P. Sollich, F. Lequeux, P. Hébraud, and M. E. Cates, Phys. Rev. Lett. [**78**]{}, 2020 (1997).
J. P. Bouchaud, Preprint cond-mat/9910387; J. P. Bouchaud, and M. Mézard, J. Phys. A [**30**]{}, 7997 (1997).
A. Coniglio, and M. Nicodemi, J. Phys.: Condens. Matter [**12**]{}, 6601 (2000).
Figure captions
FIG. 1. Sketch of the forced torsion oscillator immersed at a depth $L$ into a granular medium of glass beads. A single layer of glass beads is glued to the oscillator probe, and the effective radius is $R_{e}$. The container, filled with the granular material, is shaken vertically by a vibrator at the intensity of vibration, $\Gamma $, below the fluidization limit. The method provides a measure of the complex frequency response of the granular medium while the vibrator mimics ”thermal” fluctuations. 1=suspension wires; 2=permanent magnet; 3=external coils; 4=mirror; 5=probe; 6=vibrator.
FIG. 2. Oscillator frequency response, at $f_{p}=1$ Hz, as a function of the amplitude of the applied torque $T_{0}$, for different vibration intensities $%
\Gamma $, for $f_{s}=200$ Hz. (a) The loss factor, given by $\tan [\arg
(G_{1})]$, versus $T_{0}$. The position of the peak obtained at $\Gamma
<2\times 10^{-3}$ (i.e., with the vibration off) is denoted $T_{0}^{*}$. (b) The modulus $|G_{1}|$ versus $T_{0}$. The two extreme levels on the curve obtained at $\Gamma <2\times 10^{-5}$ are denoted $G_{p}$ and $G_{jam}$ respectively. For $T_{0}\ll T_{0}^{*}$, the response is independent of $%
T_{0} $ , i.e., there is a linear regime, confirmed also by a negligible high harmonics signal (not shown) for all $\Gamma $.
FIG. 3. Oscillator frequency response as a function of the vibration intensity $%
\Gamma $, with $f_{s}=200$ Hz, for different forcing frequencies $f_{p}$ of the oscillator, at a given low torque amplitude $T_{0}=3.2\times 10^{-5}$ N m selected in the linear regime, i.e., $T_{0}\ll T_{0}^{*}$. (a) The loss factor $\tan [\arg (G_{1})]$, versus $\Gamma $. (b) The modulus $|G_{1}|$ versus $\Gamma $. For each $f_{p}$ a peak with a maximum at a vibration intensity $\Gamma ^{*}$ can be seen. The data shown in Fig. 3 are collected by decreasing $\Gamma $, but no difference is observed in following runs if $%
\Gamma $ is successively increased, decreased and so on (see Fig. 4.).
FIG. 4. Similar to Fig. 3, for $f_{p}=1$ Hz, but for $\Gamma $ successively decreased, increased, decreased and so on.
FIG. 5. Oscillator frequency response as a function of the forcing frequency $%
f_{p}$, for different vibration intensity $\Gamma $, with $f_{s}=200$ Hz, at a given torque amplitude $T_{0}=3.2\times 10^{-5}$ N m. (a) The loss factor $%
\tan [\arg (G_{1})]$, versus $f_{p}$. (b) The modulus $|G_{1}|$ versus $%
f_{p} $. One can see for each $\Gamma $ a peak with a maximum at a frequency $f_{p}^{*}$. The data shown are collected by decreasing $f_{p}$. A Debye peak of equation $C\omega _{p}\tau _{c}/(1+\omega _{p}^{2}\tau _{c}^{2})$ with $C=1.9$ and $\tau _{c}=R^{-1}=0.6$ is shown in (a). As a function of the frequency, the shape of the Debye peak is independent of the exact definition of the temperature-like parameter entering the rate $R$. The observed ”jamming” peaks are much larger than the pure Debye peak, suggesting that the underlying dynamics is glassy in nature.
FIG. 6. The semilogarithmic Arrhenius-like plot reporting the forcing frequency $f_{p}$ versus $1/\Gamma $ of ”jamming” peaks similar to the ones in Figs. 3 and 5 (filled symbols from measurements vs. $\Gamma $; open symbols from measurements vs. $f_{p}$), for different vibration frequencies $%
f_{s}$. The data are fitted (dashed lines) by $2\pi f_{p}=\nu _{0}\exp
(-\Gamma _{j}/\Gamma )$, which gives $\nu _{0}\approx $70 Hz and $\Gamma
_{j}\approx $0.014 for $f_{s}=50$ Hz, $\nu _{0}\approx $66 Hz and $\Gamma
_{j}\approx $0.042 for $f_{s}=$100 Hz, and $\nu _{0}\approx $49 Hz and $%
\Gamma _{j}\approx $0.14 for $f_{s}=$200 Hz.
FIG. 7. The forcing frequency $f_{p}$ versus the inverse of the empirical control parameter $1/\tau $, i.e., versus $\omega _{s}/\Gamma ^{n}$, with $n=1/2$. The data are fitted (plain and dashed lines) by $2\pi f_{p}=\nu _{0}\exp
(-\tau _{j}/\tau )$, which gives $\nu _{0}\approx $1336 Hz and $\tau
_{j}\approx 1.3\times 10^{-3}$ s for $f_{s}=50$ Hz, $\nu _{0}\approx $1465 Hz and $\tau _{j}\approx 1.2\times 10^{-3}$ s for $f_{s}=100$ Hz, $\nu
_{0}\approx $2698 Hz and $\tau _{j}\approx 1.2\times 10^{-3}$ s for $%
f_{s}=200$ Hz. The straight lines have almost the same ”slope” $\tau _{j}$. The average is $\left\langle \tau _{j}\right\rangle =1.26\times 10^{-3}$ s. $\nu _{0}$ is seen as a natural vibration frequency of the granular medium. Considering the present precision of the data, no clear relationship between $\nu _{0}$ and $f_{s}$ can be found, even though $\nu _{0}$ increases as $f_{s}$ increases.
FIG. 8. Similar to Fig. 7, but with $n=0.57$. The data are fitted (plain and dashed lines) by $2\pi f_{p}=\nu _{0}\exp
(-\tau _{j}/\tau )$, which gives $\nu _{0}\approx $648 Hz and $\tau
_{j}\approx 8.0\times 10^{-4}$ s for $f_{s}=50$ Hz, $\nu _{0}\approx $683 Hz and $\tau _{j}\approx 7.8\times 10^{-4}$ s for $f_{s}=100$ Hz, $\nu
_{0}\approx $1019 Hz and $\tau _{j}\approx 8.7\times 10^{-4}$ s for $%
f_{s}=200$ Hz.
|
---
abstract: 'We identify the presence of typically quantum effects, namely [*superposition*]{} and [*interference*]{}, in what happens when human concepts are combined, and provide a quantum model in complex Hilbert space that faithfully represents experimental data measuring the situation of combining concepts. Our model shows how ‘interference of concepts’ explains the effects of underextension and overextension when two concepts combine into the disjunction of these two concepts. This result supports our earlier hypothesis that human thought has a superposed two-layered structure, one layer consisting of [*classical logical thought*]{} and a superposed layer consisting of [*quantum conceptual thought*]{}. Possible connections with recent findings of a [*grid-structure*]{} for the brain are analyzed, and influences on the mind/brain relation and consequences for applied disciplines, such as artificial intelligence and quantum computation, are considered.'
author:
- |
Diederik Aerts and Sandro Sozzo\
*Center Leo Apostel for Interdisciplinary Studies\
*Brussels Free University\
*Krijgskundestraat 33, 1160 Brussels, Belgium\
E-Mails: [diraerts@vub.ac.be,ssozzo@vub.ac.be](diraerts@vub.ac.be,ssozzo@vub.ac.be)\
***
title: 'Quantum Interference in Cognition: Structural Aspects of the Brain'
---
Keywords: concept theory; quantum cognition; cognitive processes; interference; brain structure
Introduction\[intro\]
=====================
In recent years it has become clear that quantum structures do not only appear within situations in the micro world, but that also situations of the macro world exhibit a quantum behavior [@aertsaerts1994]–[@aertsgaborasozzoveloz2011]. Mainly in domains such as cognitive science (decision theory, concept theory), biology (evolution theory, ecology, population dynamics) and computer science (semantic theories, information retrieval, artificial intelligence), aspects have been identified where the application of classical structures is problematic while the application of quantum structures is promising. The aspects of these domains where classical theories fail, and quantum structures are successful, reveal quite systematically four specific and very characteristic quantum effects, namely [*interference*]{}, [*contextuality*]{}, [*emergence*]{} and [*entanglement*]{}. Sometimes it has been possible to use the full quantum apparatus of linear operators in complex Hilbert space to model these effects as they appear in these situations. However, in quite some occasions a mathematical formalism more general than standard quantum mechanics in complex Hilbert space is needed. We have introduced in [@aertssozzo2012] a general modeling scheme for contextual emergent entangled interfering entities. In the present article we instead focus on the identification of quantum superposition and interference in cognition to explain ‘how’ and ‘why’ interference models the well documented effects of [*overextension*]{} and [*underextension*]{} when concepts combine in disjunction [@hampton1988]. Possible connections with some recent and interesting research on the structure of the brain and technological applications to symbolic artificial intelligence and computation are also presented.
Interference effects have been studied in great detail and are very common for quantum entities, the famous ‘double slit situation’ being an archetypical example of them [@young1802]–[@ArndtNairzVos-AndreaeKellervanderZouwZeilinger1999]. Also for concepts we have studied some effects related to the phenomenon of interference in earlier work [@aerts2009; @aertssozzo2012], [@aerts2010c]–[@aerts2007b]. In the present article, we concentrate on the situation where two concepts, more specifically the concepts [*Fruits*]{} and [*Vegetables*]{} are combined by using the logical connective ‘or’ into a new concept [*Fruits or Vegetables*]{}. Such disjunctive combinations of concepts have been studied intensively by James Hampton [@hampton1988]. Hampton collected experimental data from subjects being asked to estimate the typicality of a collection of exemplars with respect to [*Fruits*]{} and with respect to [*Vegetables*]{}. Then he asked the subjects also to estimate the typicality of the same exemplars with respect to the combination [*Fruits or Vegetables*]{}. By using the data of these experiments we identify interference between the concepts [*Fruits*]{} and [*Vegetables*]{}, and explain how this interference accounts for the effects of underextension and overextension identified by Hampton.
In Sec. \[interferencesuperposition\] we consider the set of data collected by Hampton, and work out a quantum description modeling these data. In Sec. \[graphics\] we illustrate the phenomenon of interference as it appears in the considered conceptual combination, and in Sec. \[explanation\] we present an explanation for the occurrence of this quantum effect by comparing it with the interference typical of the two-slit experiment. This modeling suggests the hypothesis in Sec. \[layers\] that a [*quantum conceptual layer*]{} is present in human thought which is superposed to the usually assumed [*classical logical thought*]{}, the former being responsible for deviations from classically expected behavior in cognition. Finally, we present in Sec. \[brain\] a suggestion inspired by recent research where a [*grid*]{}, rather than a [*neural network*]{}, pattern is identified in the structure of the brain [@brain2012]. More specifically, we put forward the hypothesis, albeit speculative, that the interference we identify between concepts, and the complex Hilbert space that we structurally use to model this interference, might contain elements that have their isomorphic counterparts in the dynamics of the brain. Aspects of the impact of this hypothesis on the modeling and formalizing of natural and artificial knowledge, as well as the implications for artificial intelligence, robotics and quantum computation, are also examined.
Fruits interfering with Vegetables\[interferencesuperposition\]
===============================================================
Let us consider the two concepts [*Fruits*]{} and [*Vegetables*]{}, and their combination [*Fruits or Vegetables*]{}, and work out a quantum model for the data collected by J. Hampton for this situation [@hampton1988; @aerts2010c]. The concepts [*Fruits*]{} and [*Vegetables*]{} are two exemplars of the concept [*Food*]{}. And we consider a collection of exemplars of [*Food*]{}, more specifically those listed in Tab. 1. Then we consider the following experimental situation: Subjects are asked to respond to the following three elements: [*Question $A$*]{}: ‘Choose one of the exemplars from the list of Tab. 1 that you find a good example of [*Fruits*]{}’. [*Question $B$*]{}: ‘Choose one of the exemplars from the list of Tab. 1 that you find a good example of [*Vegetables*]{}’. [*Question $A$ or $B$*]{}: ‘Choose one of the exemplars from the list of Tab. 1 that you find a good example of [*Fruits or Vegetables*]{}’. Then we calculate the relative frequency $\mu(A)_k$, $\mu(B)_k$ and $\mu(A\ {\rm or}\ B)_k$, i.e the number of times that exemplar $k$ is chosen divided by the total number of choices made in response to the three questions $A$, $B$ and $A\ {\rm or}\ B$, respectively, and interpret this as an estimate for the probabilities that exemplar $k$ is chosen for questions $A$, $B$ and $A\ {\rm or}\ B$, respectively. These relative frequencies are given in Tab. 1.
For example, for [*Question $A$*]{}, from 10,000 subjects, 359 chose [*Almond*]{}, hence $\mu(A)_1=0.0359$, 425 chose [*Acorn*]{}, hence $\mu(A)_2=0.0425$, 372 chose [*Peanut*]{}, hence $\mu(A)_3=0.0372$, $\ldots$, and 127 chose [*Black Pepper*]{}, hence $\mu(A)_{24}=0.0127$. Analogously for [*Question $B$*]{}, from 10,000 subjects, 133 chose [*Almond*]{}, hence $\mu(B)_1=0.0133$, 108 chose [*Acorn*]{}, hence $\mu(B)_2=0.0108$, 220 chose [*Peanut*]{}, hence $\mu(B)_3=0.0220$, $\ldots$, and 294 chose [*Black Pepper*]{}, hence $\mu(B)_{24}=0.0294$, and for [*Question $A\ {\rm or}\ B$*]{}, 269 chose [*Almond*]{}, hence $\mu(A\ {\rm or}\ B)_1=0.0269$, 249 chose [*Acorn*]{}, hence $\mu(A\ {\rm or}\ B)_2=0.0249$, 269 chose [*Peanut*]{}, hence $\mu(A\ {\rm or}\ B)_3=0.0269$, $\ldots$, and 222 chose [*Black Pepper*]{}, hence $\mu(A\ {\rm or}\ B)_{24}=0.0222$.
Let us now explicitly construct a quantum mechanical model in complex Hilbert space for the pair of concepts [*Fruit*]{} and [*Vegetable*]{} and their disjunction ‘[*Fruit or Vegetable*]{}’, and show that quantum interference models the experimental data gathered in [@hampton1988]. We represent the measurement of ‘a good example of’ by means of a self-adjoint operator with spectral decomposition $\{M_k\ \vert\ k=1,\ldots,24\}$ where each $M_k$ is an orthogonal projection of the Hilbert space ${\cal H}$ corresponding to item $k$ from the list of items in Tab. 1.
[|llllllll|]{} $k$ & Exemplar & $\mu(A)_k$ & $\mu(B)_k$ & $\mu(A\ {\rm or}\ B)_k$ & ${1 \over 2}(\mu(A)_k+\mu(B)_k)$ & $\lambda_k$ & $\phi_k$\
\
1 & [*Almond*]{} & 0.0359 & 0.0133 & 0.0269 & 0.0246 & 0.0218 & 83.8854$^\circ$\
2 & [*Acorn*]{} & 0.0425 & 0.0108 & 0.0249 & 0.0266 & -0.0214 & -94.5520$^\circ$\
3 & [*Peanut*]{} & 0.0372 & 0.0220 & 0.0269 & 0.0296 & -0.0285 & -95.3620$^\circ$\
4 & [*Olive*]{} & 0.0586 & 0.0269 & 0.0415 & 0.0428 & 0.0397 & 91.8715$^\circ$\
5 & [*Coconut*]{} & 0.0755 & 0.0125 & 0.0604 & 0.0440 & 0.0261 & 57.9533$^\circ$\
6 & [*Raisin*]{} & 0.1026 & 0.0170 & 0.0555 & 0.0598 & 0.0415 & 95.8648$^\circ$\
7 & [*Elderberry*]{} & 0.1138 & 0.0170 & 0.0480 & 0.0654 & -0.0404 & -113.2431$^\circ$\
8 & [*Apple*]{} & 0.1184 & 0.0155 & 0.0688 & 0.0670 & 0.0428 & 87.6039$^\circ$\
9 & [*Mustard*]{} & 0.0149 & 0.0250 & 0.0146 & 0.0199 & -0.0186 & -105.9806$^\circ$\
10 & [*Wheat*]{} & 0.0136 & 0.0255 & 0.0165 & 0.0195 & 0.0183 & 99.3810$^\circ$\
11 & [*Root Ginger*]{} & 0.0157 & 0.0323 & 0.0385 & 0.0240 & 0.0173 & 50.0889$^\circ$\
12 & [*Chili Pepper*]{} & 0.0167 & 0.0446 & 0.0323 & 0.0306 & -0.0272 & -86.4374$^\circ$\
13 & [*Garlic*]{} & 0.0100 & 0.0301 & 0.0293 & 0.0200 & -0.0147 & -57.6399$^\circ$\
14 & [*Mushroom*]{} & 0.0140 & 0.0545 & 0.0604 & 0.0342 & 0.0088 & 18.6744$^\circ$\
15 & [*Watercress*]{} & 0.0112 & 0.0658 & 0.0482 & 0.0385 & -0.0254 & -69.0705$^\circ$\
16 & [*Lentils*]{} & 0.0095 & 0.0713 & 0.0338 & 0.0404 & 0.0252 & 104.7126$^\circ$\
17 & [*Green Pepper*]{} & 0.0324 & 0.0788 & 0.0506 & 0.0556 & -0.0503 & -95.6518$^\circ$\
18 & [*Yam*]{} & 0.0533 & 0.0724 & 0.0541 & 0.0628 & 0.0615 & 98.0833$^\circ$\
19 & [*Tomato*]{} & 0.0881 & 0.0679 & 0.0688 & 0.0780 & 0.0768 & 100.7557$^\circ$\
20 & [*Pumpkin*]{} & 0.0797 & 0.0713 & 0.0579 & 0.0755 & -0.0733 & -103.4804$^\circ$\
21 & [*Broccoli*]{} & 0.0143 & 0.1284 & 0.0642 & 0.0713 & -0.0422 & -99.6048$^\circ$\
22 & [*Rice*]{} & 0.0140 & 0.0412 & 0.0248 & 0.0276 & -0.0238 & -96.6635$^\circ$\
23 & [*Parsley*]{} & 0.0155 & 0.0266 & 0.0308 & 0.0210 & -0.0178 & -61.1698$^\circ$\
24 & [*Black Pepper*]{} & 0.0127 & 0.0294 & 0.0222 & 0.0211 & 0.0193 & 86.6308$^\circ$\
The concepts [*Fruits*]{}, [*Vegetables*]{} and ‘[*Fruits or Vegetables*]{}’ are represented by unit vectors $|A\rangle$, $|B\rangle$ and ${1 \over \sqrt{2}}(|A\rangle+|B\rangle)$ of the Hilbert space ${\cal H}$, where $|A\rangle$ and $|B\rangle$ are orthogonal, and ${1 \over \sqrt{2}}(|A\rangle+|B\rangle)$ is their normalized superposition. Following standard quantum rules we have $\mu(A)_k=\langle A|M_k|A\rangle$, $\mu(B)_k=\langle B|M_k|B\rangle$, hence $$\mu(A\ {\rm or}\ B)_k={1 \over 2}\langle A+B|M_k|A+B\rangle={1 \over 2}(\mu(A)_k+\mu(B)_k)+\Re\langle A|M_k|B\rangle, \label{muAorB}$$ where $\Re\langle A|M_k|B\rangle$ is the interference term. Let us introduce $|e_k\rangle$ the unit vector on $M_k|A\rangle$ and $|f_k\rangle$ the unit vector on $M_k|B\rangle$, and put $\langle e_k|f_k\rangle=c_ke^{i\gamma_k}$. Then we have $|A\rangle=\sum_{k=1}^{24}a_ke^{i\alpha_k}|e_k\rangle$ and $|B\rangle=\sum_{k=1}^{24}b_ke^{i\beta_k}|f_k\rangle$, which gives $$\label{ABequation}
\langle A|B\rangle=(\sum_{k=1}^{24}a_ke^{-i\alpha_k}\langle e_k|)(\sum_{l=1}^{24}b_le^{i\beta_l}|f_l\rangle)=\sum_{k=1}^{24}a_kb_kc_ke^{i\phi_k}$$ where we put $\phi_k=\beta_k-\alpha_k+\gamma_k$. Further we have $\mu(A)_k=a_k^2$, $\mu(B)_k=b_k^2$, $\langle A|M_k|B\rangle=a_kb_kc_ke^{i\phi_k}$, which gives, by using (\[muAorB\]), $$\label{muAorBequation}
\mu(A\ {\rm or}\ B)_k={1 \over 2}(\mu(A)_k+\mu(B)_k)+c_k\sqrt{\mu(A)_k\mu(B)_k}\cos\phi_k$$ We choose $\phi_k$ such that $$\label{cosequation}
\cos\phi_k={2\mu(A\ {\rm or}\ B)_k-\mu(A)_k-\mu(B)_k \over 2c_k\sqrt{\mu(A)_k\mu(B)_k}}$$ and hence (\[muAorBequation\]) is satisfied. We now have to determine $c_k$ in such a way that $\langle A|B\rangle=0$. Recall that from $\sum_{k=1}^{24}\mu(A\ {\rm or}\ B)_k=1$ and (\[muAorBequation\]), and with the choice of $\cos\phi_k$ that we made in (\[cosequation\]), it follows that $\sum_{k=1}^{24}c_k\sqrt{\mu(A)_k\mu(B)_k}\cos\phi_k=0$. Taking into account (\[ABequation\]), which gives $\langle A|B\rangle=\sum_{k=1}^{24}a_kb_kc_k(\cos\phi_k+i\sin\phi_k)$, and making use of $\sin\phi_k=\pm\sqrt{1-\cos^2\phi_k}$, we have $\langle A|B\rangle=0$ $\Leftrightarrow$ $\sum_{k=1}^{24}c_k\sqrt{\mu(A)_k\mu(B)_k}(\cos\phi_k+i\sin\phi_k)=0$ $\Leftrightarrow$ $\sum_{k=1}^{24}c_k\sqrt{\mu(A)_k\mu(B)_k}\sin\phi_k=0$ $\Leftrightarrow$ $$\label{conditionequation}
\sum_{k=1}^{24}\pm\sqrt{c_k^2\mu(A)_k\mu(B)_k-(\mu(A\ {\rm or}\ B)_k-{\mu(A)_k+\mu(B)_k \over 2})^2}=0$$ We introduce the following quantities $$\label{lambdak}
\lambda_k=\pm\sqrt{\mu(A)_k\mu(B)_k-(\mu(A\ {\rm or}\ B)_k-{\mu(A)_k+\mu(B)_k \over 2})^2}$$ and choose $m$ the index for which $|\lambda_m|$ is the biggest of the $|\lambda_k|$’s. Then we take $c_k=1$ for $k\not=m$. We explain now the algorithm that we use to choose a plus or minus sign for $\lambda_k$ as defined in (\[lambdak\]), with the aim of being able to determine $c_m$ such that (\[conditionequation\]) is satisfied. We start by choosing a plus sign for $\lambda_m$. Then we choose a minus sign in (\[lambdak\]) for the $\lambda_k$ for which $|\lambda_k|$ is the second biggest; let us call the index of this term $m_2$. This means that $0\le\lambda_m+\lambda_{m_2}$. For the $\lambda_k$ for which $|\lambda_k|$ is the third biggest – let us call the index of this term $m_3$ – we choose a minus sign in case $0\le\lambda_m+\lambda_{m_2}+\lambda_{m_3}$, and otherwise we choose a plus sign, and in this case we have $0\le\lambda_m+\lambda_{m_2}+\lambda_{m_3}$. We continue this way of choosing, always considering the next biggest $|\lambda_k|$, and hence arrive at a global choice of signs for all of the $\lambda_k$, such that $0\le\lambda_m+\sum_{k\not=m}\lambda_k$. Then we determine $c_m$ such that (\[conditionequation\]) is satisfied, or more specifically such that $$\label{cmequation}
c_m=\sqrt{{(-\sum_{k\not=m}\lambda_k)^2+(\mu(A\ {\rm or}\ B)_m-{\mu(A)_m+\mu(B)_m \over 2})^2 \over \mu(A)_m\mu(B)_m}}$$ We choose the sign for $\phi_k$ as defined in (\[cosequation\]) equal to the sign of $\lambda_k$. The result of the specific solution that we have constructed is that we can take $M_k({\cal H})$ to be rays of dimension 1 for $k\not=m$, and $M_m({\cal H})$ to be a plane. This means that we can make our solution still more explicit. Indeed, we take ${\cal H}={\mathbb{C}}^{25}$ the canonical 25 dimensional complex Hilbert space, and make the following choices $$\label{vectorA}
|A\rangle=(\sqrt{\mu(A)_1},\ldots,\sqrt{\mu(A)_m},\ldots,
\sqrt{\mu(A)_{24}},0)$$ $$|B\rangle=(e^{i\beta_1}\sqrt{\mu(B)_1},\ldots,c_me^{i\beta_m}\sqrt{\mu(B)_m},\ldots,
e^{i\beta_{24}}\sqrt{\mu(B)_{24}},\sqrt{\mu(B)_m(1-c_m^2)}) \label{vectorB}$$ $$\label{anglebetan}
\beta_m=\arccos({2\mu(A\ {\rm or}\ B)_m-\mu(A)_m-\mu(B)_m \over 2c_m\sqrt{\mu(A)_m\mu(B)_m}}) \\$$ $$\label{anglebetak}
\beta_k=\pm\arccos({2\mu(A\ {\rm or}\ B)_k-\mu(A)_k-\mu(B)_k \over 2\sqrt{\mu(A)_k\mu(B)_k}})$$ where the plus or minus sign in (\[anglebetak\]) is chosen following the algorithm we introduced for choosing the plus and minus sign for $\lambda_k$ in (\[lambdak\]). Let us construct this quantum model for the data given in Tab. 1. The exemplar which gives rise to the biggest value of $|\lambda_k|$ is [*Tomato*]{}, and hence we choose a plus sign and get $\lambda_{19}=0.0768$. The exemplar giving rise to the second biggest value of $\lambda_k$ is [*Pumpkin*]{}, and hence we choose a minus sign, and get $\lambda_{20}=-0.0733$. Next comes [*Yam*]{}, and since $\lambda_{19}+\lambda_{20}-0.0615<0$, we choose a plus sign for $\lambda_{18}$. Next is [*Green Pepper*]{}, and we look at $0\le\lambda_{19}+\lambda_{20}+\lambda_{18}-0.0503$, which means that we can choose a minus sign for $\lambda_{17}$. The fifth exemplar in the row is [*Apple*]{}. We have $\lambda_{19}+\lambda_{20}+\lambda_{18}+\lambda_{17}-0.0428<0$, which means that we need to choose a plus sign for $\lambda_8$. Next comes [*Broccoli*]{} and verifying shows that we can choose a minus sign for $\lambda_{21}$. We determine in an analogous way the signs for the exemplars [*Raisin*]{}, plus sign, [*Elderberry*]{}, minus sign, [*Olive*]{}, plus sign, [*Peanut*]{}, minus sign, [*Chili Pepper*]{}, minus sign, [*Coconut*]{}, plus sign, [*Watercress*]{}, minus sign, [*Lentils*]{}, plus sign, [*Rice*]{}, minus sign, [*Almond*]{}, plus sign, [*Acorn*]{}, minus sign, [*Black Pepper*]{}, plus sign, [*Mustard*]{}, minus sign, [*Wheat*]{}, plus sign, [*Parsley*]{}, minus sign, [*Root Ginger*]{}, plus sign, [*Garlic*]{}, minus sign, and finally [*Mushroom*]{}, plus sign. In Tab. 1 we give the values of $\lambda_k$ calculated following this algorithm, and from (\[cmequation\]) it follows that $c_{19}=0.7997$.
Making use of (\[vectorA\]), (\[vectorB\]), (\[anglebetan\]) and (\[anglebetak\]), and the values of the angles given in Tab. 1, we put forward the following explicit representation of the vectors $|A\rangle$ and $|B\rangle$ in ${\mathbb{C}}^{25}$ representing the concepts [*Fruits*]{} and [*Vegetables*]{} $$\begin{aligned}
|A\rangle&=&(0.1895, 0.2061, 0.1929, 0.2421, 0.2748, 0.3204, 0.3373, 0.3441, 0.1222, 0.1165, 0.1252, 0.1291, \nonumber \\
&& 0.1002, 0.1182, 0.1059, 0.0974, 0.1800, 0.2308, 0.2967, 0.2823, 0.1194, 0.1181, 0.1245, 0.1128, 0) \nonumber \\
|B\rangle&=&(0.1154e^{i83.8854^\circ}, 0.1040e^{-i94.5520^\circ}, 0.1484e^{-i95.3620^\circ}, 0.1640e^{i91.8715^\circ}, 0.1120e^{i57.9533^\circ}, \nonumber \\
&& 0.1302e^{i95.8648^\circ}, 0.1302e^{-i113.2431^\circ}, 0.1246e^{i87.6039^\circ}, 0.1580e^{-i105.9806^\circ},0.1596e^{i99.3810^\circ}, \nonumber \\
&& 0.1798e^{i50.0889^\circ}, 0.2112e^{-i86.4374^\circ}, 0.1734e^{-i57.6399^\circ}, 0.2334e^{i18.6744^\circ}, 0.2565e^{-i69.0705^\circ}, \nonumber \\
&& 0.2670e^{i104.7126^\circ}, 0.2806e^{-i95.6518^\circ}, 0.2690e^{i98.0833^\circ}, 0.2606e^{i100.7557^\circ}, 0.2670e^{-i103.4804^\circ}, \nonumber \\
&& 0.3584e^{-i99.6048^\circ}, 0.2031e^{-i96.6635^\circ}, 0.1630e^{-i61.1698^\circ}, 0.1716e^{i86.6308^\circ}, 0.1565). \label{interferenceangles}\end{aligned}$$ This proves that we can model the data of [@hampton1988] by means of a quantum mechanical model, and such that the values of $\mu(A\ {\rm or}\ B)_k$ are determined from the values of $\mu(A)_k$ and $\mu(B)_k$ as a consequence of quantum interference effects. For each $k$ the value of $\phi_k$ in Tab. 1 gives the quantum interference phase of the exemplar number $k$.
Graphics of the interference patterns\[graphics\]
=================================================
In [@aerts2010c] we worked out a way to ‘chart’ the quantum interference patterns of the two concepts when combined into conjunction or disjunction. Since it helps our further analysis in the present article, we put forward this ‘chart’ for the case of the concepts [*Fruits*]{} and [*Vegetables*]{} and their disjunction ‘[*Fruits or Vegetables*]{}’. More specifically, we represent the concepts [*Fruits*]{}, [*Vegetables*]{} and ‘[*Fruits or Vegetables*]{}’ by complex-valued wave functions of two real variables $\psi_A(x,y), \psi_B(x,y)$ and $\psi_{A{\rm or}B}(x,y)$. We choose $\psi_A(x,y)$ and $\psi_B(x,y)$ such that the real part of both wave functions is a Gaussian in two dimensions, which is always possible since we have to fit in only 24 values, namely the values of $\psi_A$ and $\psi_B$ for each of the exemplars of Tab. 1. The squares of these Gaussians are graphically represented in Figs. 1 and 2, and the different exemplars of Tab. 1 are located in spots such that the Gaussian distributions $|\psi_A(x,y)|^2$ and $|\psi_B(x,y)|^2$ properly model the probabilities $\mu(A)_k$ and $\mu(B)_k$ in Tab. 1 for each one of the exemplars. For example, for [*Fruits*]{} represented in Fig. 1, [*Apple*]{} is located in the center of the Gaussian, since [*Apple*]{} was the exemplar most frequently chosen by the test subjects when asked [*Question A*]{}. [*Elderberry*]{} was the second most frequently chosen, and hence closest to the top of the Gaussian in Fig. 1.
![The probabilities $\mu(A)_k$ of a person choosing the exemplar $k$ as a ‘good example’ of [*Fruits*]{} are fitted into a two-dimensional quantum wave function $\psi_A(x,y)$. The numbers are placed at the locations of the different exemplars with respect to the Gaussian probability distribution $|\psi_A(x,y)|^2$. This can be seen as a light source shining through a hole centered on the origin, illuminating the regions where the different exemplars are located. The brightness of the light source in a specific region corresponds to the probability that this exemplar will be chosen as a ‘good example’ of [*Fruits*]{}. ](Figure01)
![The probabilities $\mu(B)_k$ of a person choosing the exemplar $k$ as an example of [*Vegetables*]{} are fitted into a two-dimensional quantum wave function $\psi_B(x,y)$. The numbers are placed at the locations of the different exemplars with respect to the probability distribution $|\psi_B(x,y)|^2$. As in Fig. 1, it can be seen as a light source shining through a hole centered on point 21, where [*Broccoli*]{} is located. The brightness of the light source in a specific region corresponds to the probability that this exemplar will be chosen as a ‘good example’ of [*Vegetables*]{}. ](Figure02)
Then come [*Raisin*]{}, [*Tomato*]{} and [*Pumpkin*]{}, and so on, with [*Garlic*]{} and [*Lentils*]{} as the least chosen ‘good examples’ of [*Fruits*]{}. For [*Vegetables*]{}, represented in Fig. 2, [*Broccoli*]{} is located in the center of the Gaussian, since [*Broccoli*]{} was the exemplar most frequently chosen by the test subjects when asked [*Question B*]{}. [*Green Pepper*]{} was the second most frequently chosen, and hence closest to the top of the Gaussian in Fig. 2. Then come [*Yam*]{}, [*Lentils*]{} and [*Pumpkin*]{}, and so on, with [*Coconut*]{} and [*Acorn*]{} as the least chosen ‘good examples’ of [*Vegetables*]{}. Metaphorically, we could regard the graphical representations of Figs. 1 and 2 as the projections of two light sources each shining through one of two holes in a plate and spreading out their light intensity following a Gaussian distribution when projected on a screen behind the holes.
![The probabilities $\mu(A\ {\rm or}\ B)_k$ of a person choosing the exemplar $k$ as an example of ‘[*Fruits or Vegetables*]{}’ are fitted into the two-dimensional quantum wave function ${1 \over \sqrt{2}}(\psi_A(x,y)+\psi_B(x,y))$, which is the normalized superposition of the wave functions in Figs. 1 and 2. The numbers are placed at the locations of the different exemplars with respect to the probability distribution ${1 \over 2}|\psi_A(x,y)+\psi_B(x,y)|^2={1 \over 2}(|\psi_A(x,y)|^2+|\psi_B(x,y)|^2)+|\psi_A(x,y)\psi_B(x,y)|\cos\phi(x,y)$, where $\phi(x,y)$ is the quantum phase difference at $(x,y)$. The values of $\phi(x,y)$ are given in Tab. 1 for the locations of the different exemplars. The interference pattern is clearly visible. ](FruitsVegetablesInterferenceFinal)
The center of the first hole, corresponding to the [*Fruits*]{} light source, is located where exemplar [*Apple*]{} is at point $(0, 0)$, indicated by 8 in both figures. The center of the second hole, corresponding to the [*Vegetables*]{} light source, is located where exemplar [*Broccoli*]{} is at point $(10, 4)$, indicated by 21 in both figures.
In Fig. 3 the data for ‘[*Fruits or Vegetables*]{}’ are graphically represented. This is not ‘just’ a normalized sum of the two Gaussians of Figs. 1 and 2, since it is the probability distribution corresponding to ${1 \over \sqrt{2}}(\psi_A(x,y)+\psi_B(x,y))$, which is the normalized superposition of the wave functions in Figs. 1 and 2. The numbers are placed at the locations of the different exemplars with respect to the probability distribution ${1 \over 2}|\psi_A(x,y)+\psi_B(x,y)|^2={1 \over 2}(|\psi_A(x,y)|^2+|\psi_B(x,y)|^2)+|\psi_A(x,y)\psi_B(x,y)|\cos\phi(x,y)$, where $|\psi_A(x,y)\psi_B(x,y)|\cos\phi(x,y)$ is the interference term and $\phi(x,y)$ the quantum phase difference at $(x,y)$. The values of $\phi(x,y)$ are given in Tab. 1 for the locations of the different exemplars. The interference pattern shown in Fig. 3 is very similar to well-known interference patterns of light passing through an elastic material under stress. In our case, it is the interference pattern corresponding to ‘[*Fruits or Vegetables*]{}’. Bearing in mind the analogy with the light sources for Figs. 1 and 2, in Fig. 3 we can see the interference pattern produced when both holes are open.
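To illustrate numerically how such an interference landscape arises, the following sketch superposes two complex wave packets with Gaussian moduli and evaluates ${1 \over 2}|\psi_A+\psi_B|^2$; the centres, widths and phase factors are placeholders, not the fitted wave functions of [@aerts2010c].

```python
import numpy as np

# Grid of the (x, y) plane on which the 'chart' is drawn
x, y = np.meshgrid(np.linspace(-10, 20, 400), np.linspace(-10, 15, 300))

def packet(x0, y0, sx, sy, kx=0.0, ky=0.0):
    """Complex wave packet: Gaussian modulus times a plane-wave phase.
    Centres, widths and wave vectors are illustrative placeholders."""
    amp = np.exp(-((x - x0) ** 2 / (4 * sx ** 2) + (y - y0) ** 2 / (4 * sy ** 2)))
    return amp * np.exp(1j * (kx * x + ky * y))

# psi_A centred where 'Apple' sits (0, 0); psi_B where 'Broccoli' sits (10, 4)
psi_a = packet(0.0, 0.0, 3.0, 3.0, kx=0.0, ky=0.0)
psi_b = packet(10.0, 4.0, 3.0, 3.0, kx=1.5, ky=0.5)

# 1/2|psi_A+psi_B|^2 = 1/2(|psi_A|^2+|psi_B|^2) + |psi_A psi_B| cos(phi(x, y)),
# where phi(x, y) is the (here linearly varying) phase difference
classical = 0.5 * (np.abs(psi_a) ** 2 + np.abs(psi_b) ** 2)
quantum = 0.5 * np.abs(psi_a + psi_b) ** 2
interference = quantum - classical
```

Displaying `quantum` next to `classical` reproduces, qualitatively, the contrast between the fringed pattern of Fig. 3 and the smooth average of Fig. 5.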
![A three-dimensional representation of the interference landscape of the concept ‘[*Fruits [or]{} Vegetables*]{}’ as shown in Fig. 3. Exemplars are represented by little green balls, and the numbers refer to the numbering of the exemplars in Tab. 1 and in Figs. 1, 2 and 3. ](FruitOrVegetableGraph05Final)
![Probabilities $1/2(\mu(A)_k+\mu(B)_k)$, which are the probability averages for [*Fruits*]{} and [*Vegetables*]{} shown in Figs. 1 and 2. This would be the resulting pattern in case $\phi(x,y)=90^\circ$ for all exemplars. It is called the classical pattern for the situation since it is the pattern that, without interference, results from a situation where classical particles are sent through two slits. These classical values for all exemplars are given in Tab. 1. ](Figure04)
Fig. 4 represents a three-dimensional graphic of the interference pattern of Fig. 3, and, for the sake of comparison, in Fig. 5, we have graphically represented the averages of the probabilities of Figs. 1 and 2, i.e. the values measured if there were no interference. For the mathematical details – the exact form of the wave functions and the explicit calculation of the interference pattern – and for other examples of conceptual interference, we refer to [@aerts2010c].
Explaining quantum interference\[explanation\]
==============================================
The foregoing section showed how the typicality data of two concepts and their disjunction are quantum mechanically modeled such that the quantum effect of interference accounts for the measured values. We also showed that it is possible to metaphorically picture the situation such that each of the concepts is represented by light passing through a hole and the disjunction of both concepts corresponds to the situation of the light passing through both holes (see Fig. 6).
![A typical interference pattern of a quantum two-slit situation with slits $A$ and $B$. The ‘[*A open B closed*]{}’ curve represents the probability of detection of the quantum entity in case only [*Slit A*]{} is open; the ‘[*B open A closed*]{}’ curve reflects the situation where only [*Slit B*]{} is open; and the ‘[*A and B open classical*]{}’ curve is the average of both. The ‘[*A and B open quantum*]{}’ curve represents the probability of detection of the quantum entity if both slits are open. ](QuantInfFraunFraun02)
This is indeed where interference is best known from in the traditional double-slit situation in optics and quantum physics. If we apply this to our specific example by analogy, we can imagine the cognitive experiment where a subject chooses the most appropriate answer for one of the concepts, e.g., [*Fruits*]{}, as follows: ‘The photon passes with the [*Fruits*]{} hole open and hits a screen behind the hole in the region where the choice of the person is located’. We can do the same for the cognitive experiment where the subject chooses the most appropriate answer for the concept [*Vegetables*]{}. This time the photon passes with the [*Vegetables*]{} hole open and hits the screen in the region where the choice of the person is located. The third situation, corresponding to the choice of the most appropriate answer for the disjunction concept ‘[*Fruits or Vegetables*]{}’, consists in the photon passing with both the [*Fruits*]{} hole and the [*Vegetables*]{} hole open and hitting the screen where the choice of the person is located. This third situation is the situation of interference, viz. the interference between [*Fruits*]{} and [*Vegetables*]{}. These three situations are clearly illustrated in Figs. 1, 2 and 3.
In [@aerts2009; @aerts2007a; @aerts2007b] we analyzed the origin of the interference effects that are produced when concepts are combined, and we provided an explanation that we investigated further in [@aertsdhooghe2009].
Let us now take a closer look at the experimental data and how they are produced by interference. The exemplars for which the interference is a weakening effect, i.e. where $\mu(A\ {\rm or}\ B) < 1/2(\mu(A)+\mu(B))$, or equivalently $90^\circ \le \phi$ or $\phi \le -90^\circ$, are the following: [*Elderberry*]{}, [*Mustard*]{}, [*Lentils*]{}, [*Pumpkin*]{}, [*Tomato*]{}, [*Broccoli*]{}, [*Wheat*]{}, [*Yam*]{}, [*Rice*]{}, [*Raisin*]{}, [*Green Pepper*]{}, [*Peanut*]{}, [*Acorn*]{} and [*Olive*]{}. The exemplars for which interference is a strengthening effect, i.e. where $1/2(\mu(A)+\mu(B)) < \mu(A\ {\rm or}\ B)$, or equivalently $-90^\circ < \phi < 90^\circ$, are the following: [*Mushroom*]{}, [*Root Ginger*]{}, [*Garlic*]{}, [*Coconut*]{}, [*Parsley*]{}, [*Almond*]{}, [*Chili Pepper*]{}, [*Black Pepper*]{}, and [*Apple*]{}. Let us consider the two extreme cases, viz. [*Elderberry*]{}, for which interference is the most weakening ($\phi=-113.2431^\circ$), and [*Mushroom*]{}, for which it is the most strengthening ($\phi=18.6744^\circ$). For [*Elderberry*]{}, we have $\mu(A)=0.1138$ and $\mu(B)=0.0170$, which means that test subjects have classified [*Elderberry*]{} very strongly as [*Fruits*]{} ([*Apple*]{} is the most strongly classified [*Fruits*]{}, but [*Elderberry*]{} is next and close to it), and quite weakly as [*Vegetables*]{}. For [*Mushroom*]{}, we have $\mu(A)=0.0140$ and $\mu(B)=0.0545$, which means that test subjects have weakly classified [*Mushroom*]{} as [*Fruits*]{} and moderately as [*Vegetables*]{}. Let us suppose that $1/2(\mu(A)+\mu(B))$ is the value estimated by test subjects for ‘[*Fruits or Vegetables*]{}’. In that case, the estimates for [*Fruits*]{} and [*Vegetables*]{} taken separately would be carried over in a determined way to the estimate for ‘[*Fruits or Vegetables*]{}’, just by applying this formula. This is indeed what would be the case if the decision process taking place in the human mind worked as if a classical particle passing through the [*Fruits*]{} hole or through the [*Vegetables*]{} hole hit the mind and left a spot at the location of one of the exemplars. More concretely, suppose that we ask subjects first to choose which of the questions they want to answer, [*Question A*]{} or [*Question B*]{}, and then, after they have made their choice, we ask them to answer this chosen question. This new experiment, which we could also indicate as [*Question A*]{} or [*Question B*]{}, would have $1/2(\mu(A)+\mu(B))$ as outcomes for the weight with respect to the different exemplars. In such a situation, it is indeed the mind of each of the subjects that chooses randomly between the [*Fruits*]{} hole and the [*Vegetables*]{} hole, subsequently following the chosen hole. There is no influence of one hole on the other, so that no interference is possible. However, in reality the situation is more complicated. When a test subject makes an estimate with respect to ‘[*Fruits or Vegetables*]{}’, a new concept emerges, namely the concept ‘[*Fruits or Vegetables*]{}’. For example, in answering the question whether the exemplar [*Mushroom*]{} is a good example of ‘[*Fruits or Vegetables*]{}’, the subject will consider two aspects or contributions. The first is related to the estimation of whether [*Mushroom*]{} is a good example of [*Fruits*]{} and to the estimation of whether [*Mushroom*]{} is a good example of [*Vegetables*]{}, i.e. to estimates of each of the concepts separately. It is covered by the formula $1/2(\mu(A)+\mu(B))$.
The second contribution concerns the test subject’s estimate of whether or not [*Mushroom*]{} belongs to the category of exemplars that cannot readily be classified as [*Fruits*]{} or [*Vegetables*]{}. This is the class characterized by the newly emerged concept ‘[*Fruits or Vegetables*]{}’. And as we know, [*Mushroom*]{} is a typical case of an exemplar that is not easy to classify as either [*Fruits*]{} or [*Vegetables*]{}. That is why [*Mushroom*]{}, although only slightly covered by the formula $1/2(\mu(A)+\mu(B))$, has an overall high score as ‘[*Fruits or Vegetables*]{}’. The effect of interference allows adding to $1/2(\mu(A)+\mu(B))$ the extra value resulting from the fact that [*Mushroom*]{} scores well as an exemplar that is not readily classified as either [*Fruits*]{} or [*Vegetables*]{}. This explains why [*Mushroom*]{} receives a strengthening interference effect, which adds to the probability of it being chosen as a good example of ‘[*Fruits or Vegetables*]{}’. [*Elderberry*]{} shows the contrary. The formula $1/2(\mu(A)+\mu(B))$ produces a score that is too high compared to the experimentally tested value of the probability of its being chosen as a good example of ‘[*Fruits or Vegetables*]{}’. The interference effect corrects this, subtracting a value from $1/2(\mu(A)+\mu(B))$. This corresponds to the test subjects considering [*Elderberry*]{} ‘not at all’ to belong to a category of exemplars hard to classify as [*Fruits*]{} or [*Vegetables*]{}, but rather the contrary. As a consequence, with respect to the newly emerged concept ‘[*Fruits or Vegetables*]{}’, the exemplar [*Elderberry*]{} scores very low, and hence the $1/2(\mu(A)+\mu(B))$ needs to be corrected by subtracting the second contribution, the quantum interference term. A similar explanation of the interference of [*Fruits*]{} and [*Vegetables*]{} can be put forward for all the other exemplars. The following is a general statement of this mechanism. ‘For two concepts $A$ and $B$, with probabilities $\mu(A)$ and $\mu(B)$ for an exemplar to be chosen as a good example of $A$ and of $B$, respectively, the interference effect allows taking into account the specific probability contribution for this exemplar to be chosen as a good example of the newly emerged concept ‘$A\ {\rm or}\ B$’, adding to or subtracting from the value $1/2(\mu(A)+\mu(B))$, which is the average of $\mu(A)$ and $\mu(B)$.’
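As a back-of-the-envelope illustration of this correction (the number below is computed from the values quoted above, not read from Tab. 1), inverting the relation behind (\[anglebetak\]) for [*Elderberry*]{}, which has $c_k=1$, $\mu(A)=0.1138$, $\mu(B)=0.0170$ and $\phi=-113.2431^\circ$, gives $$\mu(A\ {\rm or}\ B)={\mu(A)+\mu(B) \over 2}+\sqrt{\mu(A)\mu(B)}\,\cos\phi\approx 0.0654-0.0440\times0.3946\approx 0.048,$$ i.e. the interference term lowers the classical average $0.0654$ by roughly a quarter, which is exactly the weakening role described above.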
To conclude we observe that ‘[*Fruits or Vegetables*]{}’ is not the only case where quantum interference explains deviations from classically expected behavior. Various examples have been found, for disjunctions, as well as for conjunctions, of concepts [@aerts2009].
A two-layered structure in human thought\[layers\]
==================================================
The detection of quantum structures in cognition has led us to put forward the hypothesis that two specifically structured and superposed layers can be identified in human thought as a process [@aerts2009; @aertsdhooghe2009], as follows.
\(i) A [*classical logical layer*]{}. The thought process in this layer is given form by an underlying classical logical conceptual process. The manifest process itself may be, and generally will be, indeterministic, but the indeterminism is due to a lack of knowledge about the underlying deterministic classical process. For this reason the process within the classical logical layer can be modeled by using a classical Kolmogorovian probability description.
\(ii) A [*quantum conceptual layer*]{}. The thought process in this layer is given form under the influence of the totality of the surrounding conceptual landscape, where the different concepts figure as individual entities, also when they are combinations of other concepts, at variance with the classical logical layer where combinations of concepts figure as classical combinations of entities and not as individual entities. In this sense one can speak of a [*conceptual emergence*]{} taking place in this quantum conceptual layer, certainly so for combinations of concepts. Quantum conceptual thought has been identified in different domains of knowledge and science related to different, often paradoxically conceived, problems in these domains. The sorts of measurable quantities able to experimentally identify quantum conceptual thought have differed across these domains, depending on which aspect of the conceptual landscape was most obvious or most important for the identification of the deviation from the classically expected values of these quantities. For example, in a domain of cognitive science where representations of concepts are studied, and hence where concepts and combinations of concepts, and relations of items, exemplars, instances or features with concepts are considered, measurable quantities such as ‘typicality’, ‘membership’, ‘similarity’ and ‘applicability’ have been studied and used to experimentally put into evidence the deviation from what would classically be expected for the values of these quantities. In decision theory measurable quantities such as ‘representativeness’, ‘qualitative likelihood’, ‘similarity’ and ‘resemblance’ have played this role. The quantum conceptual thought process is indeterministic in essence, i.e. there is not necessarily an underlying deterministic process independent of the context. Hence, if analyzed deeper with the aim of finding more deterministic sub-processes, unavoidably effects of context will come into play. Since all concepts of the interconnected web that forms the landscape of concepts and combinations of them contribute as individual entities to the influences reigning in this landscape, and more so since this happens dynamically in an environment where, structurally speaking, they are all quantum entangled, the nature of quantum conceptual thought contains aspects that we strongly identify as holistic and synthetic. However, the quantum conceptual thought process is not unorganized or irrational. Quantum conceptual thought is as firmly structured as classical logical thought, though in a different way. We believe that the reason why science has hardly uncovered the structure of quantum conceptual thought is that it has been believed to be intuitive, associative, irrational, etc., meaning ‘rather unstructured’.
The assumed existence of a quantum conceptual layer in the mind fits in with some impressive achievements that have recently been obtained in neuroscience [@brain2012], as we will see in the next section.
Quantum cognition and the structure of the brain\[brain\]
=========================================================
A traditional view of the relation between brain and mind is based on the [*neuroscience paradigm*]{} [@paradigm], according to which the architecture of the brain is determined by connections between neurons, their inhibitory/excitatory character, and the strength of their connections. Following this view, roughly speaking, the brain can be seen as a [*parallel distributed computer*]{} containing many billions of neurons, that is, elementary processors interconnected into a complex neural network. In this architecture, the mind and the brain constitute one single unit, which is characterized by a complementary dualism. In this approach, the mind is understood as a program carried out by the brain, the program being specified by the neural network architecture. Distributed representations of cognitive structures are studied in such an approach (see, e.g., [*holographic reduced representations*]{} [@gabor1968]–[@plate2003]).
Although the holographic approach is inspired by waves and interference, it is not able to model the complex type of interference that quantum entities undergo. By considering the values of the interference angles of the pattern we obtain (see equation (\[interferenceangles\])), it can be seen that the modeling for the concept ‘[*Fruits or Vegetables*]{}’ is intrinsically quantum mechanical and cannot be reduced to interference of classical waves. This means that, although along the same lines as the holographic memory view [@gabor1968], our approach can introduce a way to consider and study the brain as a quantum mechanical interference-producing entity. Concretely, we produce a projection of a multi-dimensional complex Hilbert space – 25-dimensional for the ‘[*Fruits or Vegetables*]{}’ case – into three-dimensional real space, which is the environment where the bio-mass of the brain is located.
In this respect it is worth mentioning a recent finding [@brain2012], where relationships of adjacency and crossing between cerebral fiber pathways in primates and humans were analyzed by using diffusion magnetic resonance imaging. The cerebral fiber pathways have been found to form a rectilinear three-dimensional grid continuous with the three principal axes of development. Cortico-cortical pathways formed parallel sheets of interwoven paths in the longitudinal and medio-lateral axes, in which major pathways were local condensations. Cross-species homology was strong and showed emergence of complex gyral connectivity by continuous elaboration of this grid structure. This architecture naturally supports functional spatio-temporal coherence, developmental path-finding, and incremental rewiring with correlated adaptation of structure and function in cerebral plasticity and evolution [@brain2012]. The three-dimensional layered structure schematized above calls into question the ‘neural network’ modeling of the brain, together with some aspects of the neuroscience paradigm, and the brain/mind relation. Such a highly mathematically structured grid would be much closer to what one expects as an ideal medium for interference than is the case for the structure of a traditional network.
At first sight it might seem that the layered structures that have been detected [@brain2012] are too simple to give rise to complex cognition, even if interference is allowed to play a prominent role, but that is misleading. Indeed, one should not look upon the brain as ‘a container of complex cognition’, but rather as ‘the canvas for the potentiality of emergence of such complex cognition’. That makes all the difference. Indeed, we know how the rather simple mathematical structures of superposition in a linear vector space and of the tensor product of linear vector spaces give rise to both emergence and entanglement in quantum mechanics. There too, this mathematical structure plays the role of a canvas, where the emergent and entangled states can find a seat to be realized. This is exactly what the role of the recently detected grid could be: due to its rather simple mathematical structure, at least compared to the structure of a network, it could make available in a mathematically systematic way the canvas where emergent states of new concepts can find their seat. This is then a mechanism fundamentally different from what one expects in networks, where ‘new connections are only made when they are needed’. Structures that have generative power can shape ‘empty space’ for potentiality and for the ‘creation of the new’, so that emergence can take place in a much more powerful way. Of course, there will be a bias coming from the generating structures, which is a drawback compared to the network way. This bias could be exactly an explanation for the functioning of the human brain leading to automated aspects of conceptual reasoning such as the ‘disjunction and conjunction effects’. The above analysis is highly relevant for representations of genuine cognitive models in technology, for example as attempted in artificial intelligence and robotics [@penrose1990]–[@dongchenzhangchen2006].
[99]{}
D. Aerts and S. Aerts, “Applications of quantum statistics in psychological studies of decision processes,” [*Found. Sci.*]{}, vol. 1, pp. 85–97, 1995.
D. Widdows, “Orthogonal negation in vector spaces for modelling word-meanings and document retrieval,” in [*Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics*]{}, 2003, pp. 136–143.
K. van Rijsbergen, [*The Geometry of Information Retrieval*]{}, Cambridge, UK: Cambridge University Press, 2004.
D. Aerts and M. Czachor, “Quantum aspects of semantic analysis and symbolic artificial intelligence,” [*J. Phys. A-Math. Gen.*]{}, vol. 37, pp. L123–L132, 2004.
D. Aerts and L. Gabora, “A theory of concepts and their combinations I & II,” [*Kybernetes*]{}, vol. 34, pp. 167–191 & 192–221, 2005.
P. D. Bruza and R. J. Cole, “Quantum logic of semantic space: An exploratory investigation of context effects in practical reasoning,” in [*We Will Show Them: Essays in Honour of Dov Gabbay*]{}, S. Artemov [*et al.*]{}, Eds., College Publications, 2005.
D. Widdows, [*Geometry and Meaning*]{}, CSLI Publications, IL: University of Chicago Press, 2006.
J. R. Busemeyer, Z. Wang, and J. T. Townsend, “Quantum dynamics of human decision-making,” [*J. Math. Psych.*]{}, vol. 50, pp. 220–241, 2006.
P. D. Bruza, K. Kitto, D. McEvoy, and C. McEvoy, “Entangling words and meaning,” in [*Proceedings of the Second Quantum Interaction Symposium*]{}, Oxford, UK, Oxford University Press, 2008, pp. 118–124.
D. Aerts, “Quantum structure in cognition,” [*J. Math. Psych.*]{}, vol. 53, pp. 314–348, 2009.
D. Aerts, M. Czachor, and B. De Moor, “Geometric analogue of holographic reduced representation,” [*J. Math. Psych.*]{}, Vol. 53, pp. 389–398, 2009.
P. D. Bruza, K. Kitto, D. Nelson, and C. McEvoy, “Extracting spooky-activation-at-a-distance from considerations of entanglement,” in [*Proceedings of QI 2009-Third International Symposium on Quantum Interaction*]{}, P. D. Bruza, D. Sofge, W. Lawless, C. J. van Rijsbergen, and M. Klusch, Eds., LNCS vol. 5494, Berlin, Heidelberg: Springer, 2009, pp. 71–83.
E. M. Pothos and J. R. Busemeyer, “A quantum probability explanation for violations of ‘rational’ decision theory,” [*Proc. Roy. Soc. B*]{}, vol. 276, pp. 2171–2178, 2009.
A. Y. Khrennikov and E. Haven, “Quantum mechanics and violations of the Sure-Thing Principle: The use of probability interference and other concepts,” [*J. Math. Psych.*]{}, vol. 53, pp. 378–388, 2009.
D. Aerts, B. D’Hooghe, and E. Haven, “Quantum experimental data in psychology and economics,” [*Int. J. Theor. Phys.*]{}, vol. 49, pp. 2971–2990, 2010.
D. Aerts and S. Sozzo, “Quantum structure in cognition: Why and how concepts are entangled,” in [*Proceedings of QI 2011-Fourth International Symposium on Quantum Interaction*]{}, D. Song, M. Melucci, and I. Frommholz, Eds., Berlin, Heidelberg: Springer, 2011, LNCS, vol. 7052, pp. 116–127.
D. Aerts, M. Czachor, and S. Sozzo, “Quantum interaction approach in cognition, artificial intelligence and robotics,” in [*Proceedings of the Fifth International Conference on Quantum, Nano and Micro Technologies (ICQNM 2011)*]{}, V. Privman and V. Ovchinnikov, Eds., IARIA, 2011, pp. 35–40.
D. Aerts, L. Gabora, S. Sozzo, and T. Veloz, “Quantum interaction approach in cognition, artificial intelligence and robotics,” in [*Proceedings of the Fifth International Conference on Quantum, Nano and Micro Technologies (ICQNM)*]{}, V. Privman and V. Ovchinnikov, Eds., IARIA, 2011, pp. 57–62.
D. Aerts and S. Sozzo, “A general modeling scheme for contextual emergent entangled interfering entities,” Submitted to the Proceedings of QI 2012-Fifth International Symposium on Quantum Interaction, 2012.
J. A. Hampton, “Disjunction of natural concepts,” [*Memory & Cognition*]{}, vol. 16, pp. 579–591, 1988.
T. Young, “On the theory of light and colours,” [*Phil. Trans. Roy. Soc.*]{}, vol. 92, pp. 12–48, 1802. Reprinted in part in: Crew, H. (ed.) The Wave Theory of Light, New York (1990).
L. de Broglie, “Ondes et quanta,” [*Comptes Rendus*]{}, vol. 177, pp. 507–510, 1923.
E. Schrödinger, “Quantisierung als Eigenwertproblem (Erste Mitteilung),” [*Ann. Phys.*]{}, vol. 79, pp. 361–376, 1926.
R. P. Feynman, [*The Feynman Lectures on Physics*]{}, New York: Addison–Wesley, 1965.
C. Jönsson, “Electron diffraction at multiple slits,” [*Am. J. Phys.*]{}, vol. 42, pp. 4–11, 1974.
M. Arndt, O. Nairz, J. Voss-Andreae, C. Keller, G. van der Zouw, and A. Zeilinger, “Wave-particle duality of $C_{60}$ molecules,” [*Nature*]{}, vol. 401, pp. 680–682, 1999.
D. Aerts, “Quantum particles as conceptual entities. A Possible Explanatory Framework for Quantum Theory,” [*Found. Sci.*]{}, vol. 14, pp. 361–411, 2009.
D. Aerts, “Quantum interference and superposition in cognition: Development of a theory for the disjunction of concepts,” in [*Worldviews, Science and Us: Bridging Knowledge and Its Implications for Our Perspectives of the World*]{}, D. Aerts, B. D’Hooghe, and N. Note, Eds., Singapore: World Scientific, 2011, pp. 169–211.
D. Aerts, “General quantum modeling of combining concepts: A quantum field model in Fock space,” Archive reference and link: [*http://uk.arxiv.org/abs/0705.1740*]{}, 2007.
V. J. Wedeen, D. L. Rosene, R. Wang, G. Dai, F. Mortazavi, P. Hagmann, J. H. Kaas, and W. I. Tseng, “The geometric structure of the brain fiber pathways,” [*Science*]{}, vol. 335, pp. 1628–1634, 2012.
D. Aerts and B. D’Hooghe, “Classical logical versus quantum conceptual thought: Examples in economy, decision theory and concept theory,” [*Lecture Notes in Artificial Intelligence*]{}, vol. 5494, pp. 128–142, 2009.
J. L. M. McClelland, D. E. Rumelhart, and the PDP research group, Eds., [*Parallel Distributed Processing: Explorations in the Microstructure of Cognition*]{}, vols. 1 and 2, Cambridge, MA: The MIT Press, 1986.
D. Gabor, “Holographic model for temporal recall,” [*Nature*]{}, vol. 217, 1288–1289, 1968.
K. H. Pribram, [*Languages of the Brain: Experimental Paradoxes and Principles in Neuropsychology*]{}, New York, NY: Prentice Hall, 1971.
P. Kanerva, “Large patterns make great symbols: An example of learning from example,” [*Hybrid Neural Systems*]{}, pp. 194–203, 1998.
T. Plate, [*Holographic Reduced Representation: Distributed Representation for Cognitive Structures*]{}, Stanford, CA: CSLI Publications, 2003.

R. Penrose, [*The Emperor’s New Mind*]{}, Oxford, UK: Oxford University Press, 1990.
P. Benioff, “Quantum robots and environments,” [*Phys. Rev. A*]{}, vol. 58, no. 2, pp. 893–904, 1998.
D. Dong, C. Chen, C. Zhang, and Z. Chen, “Quantum robots: Structure, algorithms and applications,” [*Robotica*]{}, vol. 24, pp. 513–521, 2006.
---
abstract: 'We present spatially-resolved echelle spectroscopy of an intervening Mg II--Fe II absorption-line system detected at $z_{\rm abs}={0.73379}$ toward the giant gravitational arc [PSZ1 G311.65–18.48]{}. The absorbing gas is associated with an inclined disk-like star-forming galaxy, whose major axis is aligned with the two arc-segments reported here. We probe in absorption the galaxy’s extended disk continuously, at $\approx 3$kpc sampling, from its inner region out to $15\times$ the optical radius. We detect strong ($W_0^{2796}>0.3$Å) coherent absorption along $13$ independent positions at impact parameters $D=0$–$29$kpc on one side of the galaxy, and no absorption at $D=28$–$57$kpc on the opposite side (all de-lensed distances at $z_{\rm abs}$). We show that: (1) the gas distribution is anisotropic; (2) $W_0^{2796}$, $W_0^{2600}$, $W_0^{2852}$, and the ratio $W_0^{2600}\!/W_0^{2796}$, all anti-correlate with $D$; (3) the $W_0^{2796}$-$D$ relation is not cuspy and exhibits significantly less scatter than the quasar-absorber statistics; (4) the absorbing gas is co-rotating with the galaxy out to $D \lesssim 20$kpc, resembling a ‘flat’ rotation curve, but at $D\gtrsim 20$kpc velocities [*decline*]{} below the expectations from a 3D disk model extrapolated from the nebular \[O II\] emission. These signatures constitute unambiguous evidence for rotating extra-planar diffuse gas, possibly also undergoing enriched accretion at its edge. Arguably, we are witnessing some of the long-sought processes of the baryon cycle in a single distant galaxy expected to be representative of such phenomena.'
author:
- |
S. Lopez,$^{1}$ N. Tejos,$^{2}$ L. F. Barrientos,$^{3}$ C. Ledoux,$^{4}$ K. Sharon,$^{5}$ A. Katsianis,$^{1,6,7}$ M. K. Florian,$^{8}$ E. Rivera-Thorsen,$^{9}$ M. B. Bayliss,$^{10,11}$ H. Dahle,$^{12}$ A. Fernandez-Figueroa,$^{1}$ M. D. Gladders,$^{13}$ M. Gronke,$^{14}$ M. Hamel,$^{1}$ I. Pessa$^{15}$ and J. R. Rigby$^{16}$\
$^{1}$ Departamento de Astronomía, Universidad de Chile, Casilla 36-D, Santiago, Chile. E-mail: slopez@das.uchile.cl\
$^{2}$ Instituto de Física, Pontificia Universidad Católica de Valparaíso, Casilla 4059, Valparaíso, Chile. E-mail: nicolas.tejos@pucv.cl\
$^{3}$ Instituto de Astrofísica, Pontificia Universidad Católica de Chile, Casilla 306, Santiago, Chile\
$^{4}$ European Southern Observatory, Alonso de Córdova 3107, Vitacura, Casilla 19001, Santiago, Chile\
$^{5}$ Department of Astronomy, University of Michigan, Ann Arbor, MI 48109, USA\
$^{6}$ Tsung-Dao Lee Institute, Shanghai Jiao Tong University, Shanghai 200240, China\
$^{7}$ Department of Astronomy, Shanghai Key Laboratory for Particle Physics and Cosmology, Shanghai Jiao Tong University,\
Shanghai 200240, China\
$^{8}$ Observational Cosmology Lab, Goddard Space Flight Center, Code 665, Greenbelt, MD 20771, USA\
$^{9}$ Institute of Theoretical Astrophysics, University of Oslo, Postboks 1029, 0315 Oslo, Norway\
$^{10}$ Kavli Institute for Astrophysics & Space Research, Massachusetts Institute of Technology, 77 Massachusetts Avenue,\
Cambridge, MA 02139, USA\
$^{11}$ Department of Physics, University of Cincinnati, Cincinnati, OH 45221, USA\
$^{12}$ Institute of Theoretical Astrophysics, University of Oslo, P.O. Box 1029, Blindern, NO-0315 Oslo, Norway\
$^{13}$ Department of Astronomy & Astrophysics and Kavli Institute for Cosmological Physics, University of Chicago, 5640 South Ellis Avenue,\
Chicago, IL 60637, USA\
$^{14}$ Department of Physics, University of California, Santa Barbara, CA 93106, USA\
$^{15}$ Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg, Germany\
$^{16}$ Observational Cosmology Lab, NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA
bibliography:
- 'Lopez\_lit.bib'
title: 'Slicing the cool circumgalactic medium along the major-axis of a star-forming galaxy at $z=0.7$'
---
\[firstpage\]
galaxies: evolution — galaxies: formation — galaxies: intergalactic medium — galaxies: clusters: individual ([PSZ1 G311.65–18.48]{})
Introduction
============
Models and simulations that describe the various components and scales of the baryon cycle around galaxies remain to be tested observationally. Such a task poses a serious challenge, though, as most of the ‘action’ occurs in the diffuse circum-galactic medium (CGM), i.e., at several optical radii from the host galaxy [e.g., @Tumlinson2017]. Traditionally, observations of the CGM at $10$–$100$kpc scales have been based on the absorption it imprints on background sources, primarily quasars [e.g., @Nielsen2013cat; @Prochaska2017; @Tumlinson2017; @Chen2017 and references therein] but also galaxies [@Steidel2010; @Diamond-Stanic2016; @Rubin2018], including the absorbing galaxy itself [@Martin2005; @Martin2012; @Kornei2012]. Such techniques have yielded a plethora of observational constraints and evidence for a connection between a galaxy’s properties and its CGM.
Galaxies studied through these methods, nevertheless, are probed by single pencil beams; therefore, to draw any conclusions that involve the spatial dependence of an observable requires averaging absorber properties [@Chen2010; @Nielsen2013] or stacking spectra of the background sources [@Steidel2010; @Bordoloi2011; @Rubin2018a; @Rubin2018b]. A complementary workaround is to use multiple sight-lines through individual galaxies. Depending on the scales, the background sources can be binary or chance quasar groups [@Martin2010; @Bowen2016] or else lensed quasars [@Smette1992; @Lopez1999; @Lopez2005; @Lopez2007; @Rauch2001; @Ellison2004; @Chen2014; @Zahedy2016]. Despite the paucity of the latter, lensed sources are able to resolve the CGM of intervening galaxies on kpc scales, albeit at a sparse sampling. More recently, @Lopez2018 have shown that the spatial sampling can be greatly enhanced by using giant gravitational arcs. Comparatively, these giant arcs are very extended [e.g., @Sharon2019] and thus can probe the gaseous halo of [*individual*]{} galaxies on scales of $1$–$100$kpc at a [*continuous*]{} sampling, nicely matching typical CGM scales. Such an experimental setup, therefore, removes potential biases introduced by averaging a variety of absorbing galaxies.
Following on our first tomographic study of the cool CGM around a star-forming group of galaxies at $z\approx 1$ [@Lopez2018 hereafter ‘Paper I’], we here present spatially-resolved spectroscopy of a second giant gravitational arc. We pool together echelle and integral-field (IFU) spectroscopy of the brightest known gravitational arc to date, found around the cluster [PSZ1 G311.65–18.48]{} [a.k.a. the ‘Sunburst Arc’; @Dahle2016; @Rivera-Thorsen2017; @Rivera-Thorsen2019; @Chisholm2019]. We apply our technique to study the spatial extent and kinematics of an intervening Mg II--Fe II absorption-line system at $z={0.73379}$. Due to a serendipitous arc/absorber geometrical projection on the sky, we are able to spatially resolve the system all along the major axis of a host galaxy that may be exemplary of the absorber population at these intermediate redshifts.
The paper is structured as follows. In Section \[sec:data\], we present the observations and describe the different datasets. In Section \[sec\_lens\_model\], we describe the reconstructed absorber plane and assess the meaning of the absorption signal. In Section \[sec:G1\], we present the emission properties of the identified absorbing galaxy. In Section \[sec:abs\], we provide the main analysis and results on the line strength and kinematics of the absorbing gas. We discuss our results in Section \[sec:discussion\] and present our summary and conclusions in Section \[sec:summary\]. Details on data reduction and models are provided in an Appendix. Throughout the paper, we use a $\Lambda$CDM cosmology with the following cosmological parameters: $H_0=70$kms$^{-1}$Mpc$^{-1}$, $\Omega_m=0.3$, and $\Omega_{\Lambda} =0.7$.
Observations and data reduction {#sec:data}
===============================
Experimental setup
------------------
![[*HST*]{}/ACS F814W-band image of the northern arc segments around [PSZ1 G311.65–18.48]{}. The $3$ MagE slits (‘NE’, ‘SKY’, and ‘SW’) are indicated in red, along with our definition of ‘pseudo-spaxels’, and their numbering (for clarity only shown for the NE slit; see § \[sec:mage\_data\]). The slit widths are $1\arcsec$, and their lengths are $10\arcsec$; we have divided each of them into $11$ pseudo-spaxels of $1\farcs0\times0\farcs9$ each. The position of the absorbing galaxy (G1) is encircled in blue. The ground-based observations were performed under a seeing of $0\farcs7$ (represented by the beam-size symbol in the top-right corner). \[fig\_FOV\]](fig_FOV_alt.pdf){width="\columnwidth"}
![Zoom-in into the SW segment showing a MUSE image centered at the continuum around absorption at $\sim 4848$ Å. The MagE ‘SW’ and ‘SKY’ slits with their corresponding pseudo-spaxels (§ \[sec:mage\_data\]) are shown in red. The blue circle indicates the seeing FWHM. The green contours indicate a flux level of $5\sigma$ above the sky level. Since the observing conditions during the MUSE and the MagE observations were quite similar (e.g., dark nights, seeing $\approx 0\farcs7$), such contours show that SW pseudo-spaxels \#2 to \#11 and SKY pseudo-spaxels \#10 to \#11 were fully illuminated by the source, while SW \#1 was only partially illuminated by the source. Using the same method, all NE pseudo-spaxels appear to be illuminated by the source (not shown here). \[fig\_slit\_sky\]](fig_slit_sky.pdf){width="\columnwidth"}
[PSZ1 G311.65–18.48]{} extends over $\approx 60\arcsec$ on the sky (Fig. \[fig\_FOV\]) and results from the lensing of a $z=2.369$ star-forming galaxy by a cluster at $z=0.443$ [@Dahle2016]. According to archival VLT/MUSE data, an intervening absorption-line system at $z = {0.73379}$ appears in the spectra of one of the northernmost segments of the arc. The same data reveal nebular \[O II\] emission at the same redshift from a nearby galaxy, which we consider to be the absorbing galaxy (hereafter referred to as ‘G1’). To thoroughly study this system, in this paper we exploit three independent datasets: (1) medium-resolution IFU data obtained with VLT/MUSE, which we use to constrain the emission-line properties of G1; (2) [*Hubble Space Telescope*]{} ([*HST*]{}) imaging, which we largely use to (a) build the lens model needed to reconstruct the absorber plane, and (b) constrain the overall properties of G1 based on its continuum emission; and (3) medium-resolution echelle spectra obtained with Magellan/MagE, which we use to constrain the absorption-line properties of the gas.
VLT/MUSE {#sec:muse_data}
--------
We retrieved MUSE observations of [PSZ1 G311.65–18.48]{} from the ESO archive (ESO program 297.A-5012(A); PI Aghanim). The field comprising the arc segments shown in Fig. \[fig\_FOV\] was observed in wide-field mode for a total of $2966$s on the night of May 13th, 2016 under good seeing conditions ($0\farcs7$). We reduced the raw data using the MUSE pipeline v1.6.4 available in [Esoreflex]{}. The sky subtraction was improved using the Zurich Atmospheric Purge (ZAP v1.0) algorithm. We applied a small offset to the [*HST*]{} and MUSE fields to take them to a common astrometric system using as a reference a single star near G1. The spectra cover the wavelength range $4\,750$–$9\,300$ Å at a resolving power $R\approx 2\,100$. The exposure time resulted in a S/N that is adequate to constrain the emission-line properties of G1, but not enough for the absorption-line analysis, given the MUSE spectral resolution.
HST/ACS {#sec:hst_data}
-------
[*HST*]{} observations of [PSZ1 G311.65–18.48]{} were conducted on February 21st to 22nd, 2018, and September 2nd, 2018 using the F814W filter of ACS (GO15101; PI Dahle) and the F160W filter of the IR channel of WFC3 (GO15337; PI Bayliss) respectively. F814W observations consist of 8 dithered exposures acquired over two orbits, totaling $5280$s. F160W observations were conducted in one orbit, using three dithered pointings totaling $1359$s.
These data were reduced using the [[Drizzlepac]{}]{} software package.[^1] Images were drizzled to a $0.03$per pixel grid using the routine [astrodrizzle]{} with a “drop size" (final$\_$pixfrac) of 0.8 using a Gaussian kernel. Where necessary, images were aligned using the routine tweakreg, before ultimately being drizzled onto a common reference grid with north up.
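A minimal sketch of these steps, assuming the standard Python interface of the [[Drizzlepac]{}]{} package (file names and any parameters beyond those quoted above are placeholders):

```python
from drizzlepac import tweakreg, astrodrizzle

# Align the calibrated frames onto a common WCS (only where necessary)
tweakreg.TweakReg('*_flc.fits', updatehdr=True, interactive=False)

# Drizzle the dithered exposures onto a 0.03"/pix grid, north up,
# with a Gaussian kernel and a drop size (final_pixfrac) of 0.8
astrodrizzle.AstroDrizzle('*_flc.fits', output='f814w_drz',
                          final_scale=0.03, final_pixfrac=0.8,
                          final_kernel='gaussian', final_rot=0.0)
```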
Magellan/MagE {#sec:mage_data}
-------------
Spectroscopically, Magellan/MagE greatly outperforms MUSE in terms of blue coverage and resolving power; hence, [*these observations are central to the present study*]{}. Here we provide a concise description of the observations (see Table \[tab:obs\] for a summary). More details on the observations and data reduction are presented in the Appendix \[sec:mage\_appendix\].
We observed the two northernmost segments in [PSZ1 G311.65–18.48]{} during dark-time on the first half-nights of July 20th and 21st, 2017 (program CN2017B-57, PI Tejos). The weather conditions varied but the seeing was good ($0\farcs6-0\farcs7$) and steady.
With the idea of mimicking integral-field observations, we placed three $1\arcsec\times 10\arcsec$ slits (referred to as ‘NE’, ‘SKY’ and ‘SW’) along the two arc segments (see Fig. \[fig\_FOV\]) using blind offsets. The ‘SKY’ slit was placed in a way that the northernmost/southernmost extreme of the slit has light contribution from the North-East/South-West arc segments, respectively, while the inner part is dominated by the actual background sky signal. Thus, the ‘SKY’ slit provides not only a reference sky spectrum for the ‘NE’ and ‘SW’ slits (both completely covered by the extended emission of the arc at seeing $0\farcs7$; see Fig. \[fig\_slit\_sky\]), but it also provides independent arc signal at the closest impact parameters to G1 in each arc segment.
The data were reduced using a custom pipeline (see details in § \[sec:mage\_reduction\]). The spectra cover the wavelength range $3\,300$–$9\,250$ Å at a resolving power $R=4\,500$. For each slit, 11 calibrated spectra were generated using a $3$-pixel spatial binning, corresponding to $0\farcs9$ on the sky (see Fig. \[fig\_2D\]). Such binning oversamples the seeing, making the spectra spatially independent. These spectra define 11 ‘pseudo-spaxels’ in each slit. The spectra were recorded into three data-cubes of a rectangular shape of $1 \times 11$ ‘spaxels’ of $1\farcs0\times0\farcs9$ each. Throughout the paper, we use the convention that the northernmost spaxel in a given slit is its ‘position 1’ (e.g., SW \#1) and position numbers increase toward the South in consecutive order (see Figs. \[fig\_FOV\] to \[fig\_2D\]).
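Schematically, and leaving aside the order-by-order extraction, sky subtraction, and flux calibration done by the pipeline, the binning into pseudo-spaxels amounts to the following (array shapes and variable names are illustrative only):

```python
import numpy as np

def extract_pseudo_spaxels(frame2d, n_spaxels=11, bin_pix=3):
    """Bin a rectified 2D spectrum along the slit into pseudo-spaxels.

    frame2d   : array (spatial x spectral); the spatial axis runs along the slit.
    n_spaxels : pseudo-spaxels per slit (11 in this work).
    bin_pix   : spatial pixels per pseudo-spaxel (3 pixels ~ 0.9 arcsec).
    Returns an array of shape (n_spaxels, n_lambda), position 1 first.
    """
    rows = frame2d[: n_spaxels * bin_pix]
    return rows.reshape(n_spaxels, bin_pix, -1).sum(axis=1)

# Synthetic example: 33 spatial rows x 4000 wavelength pixels
spectra = extract_pseudo_spaxels(np.random.rand(33, 4000))
print(spectra.shape)  # (11, 4000)
```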
![Raw MagE 2D spectra obtained through the SW (upper panel) and NE (bottom panel) slits. Each exposure is $3\,600$s long. Wavelength increases to the right and each spectral pixel corresponds to $\approx 22$[km s$^{-1}$]{}. Both spectra are centered at $\lambda\approx 4850$Å, the expected position of Mg II $\lambda\lambda$2796,2803 at $z={0.73379}$ (indicated by the arrows in the upper panel). Mg II absorption is clearly seen all along the SW slit, but not in the NE slit. Moreover, the velocity shift and kinematical complexity of the absorption seem to be a function of the spatial position with respect to G1, which is located around SW position \# 2 (see also Fig. \[fig\_slits\]). The grid tracing the echelle orders corresponds to the eleven spatial positions (pseudo-spaxels) described in the text, with numbers (indicated on the right margin) increasing from North to South. Each position is $0.9\arcsec$ along the slit, and the slit width used was $1.0\arcsec$. A sky line at $4861.32$Å partially blocks the $2803$Å transition, unfortunately, but it otherwise aids the eye to follow the spatial direction on the CCD. \[fig\_2D\]](fig_2D.pdf){width="\columnwidth"}
![Magnification map at $z={0.73379}$ (displayed in the image plane). The contours correspond to the [*HST*]{} F814W image. We caution that this figure does not show the magnification of the giant arc itself, which is at a different source redshift. \[fig\_magnification\]](fig_magnification.pdf){width="\columnwidth"}
Lens model and absorber-plane geometry {#sec_lens_model}
======================================
In this section we describe the lens model used to reconstruct the absorber plane and to properly define impact parameters.
Lens model {#sec:lensing}
----------
The lens model is computed using the public software [[Lenstool]{}]{} [@Jullo2007]. Our model includes cluster-scale, group-scale, and galaxy-scale halos. The positions, ellipticities, and position angles of galaxy-scale halos are fixed to the observed properties of the cluster-member galaxies, which are selected from a color-magnitude diagram using the red sequence technique [@Gladders2000]. The other parameters are determined through scaling relations, with the exception of the brightest cluster galaxy that is not assumed to follow the same scaling. Some parameters of galaxies that are near lensed sources are left free to increase the model flexibility. The parameters of the cluster and group scale halos are set as free parameters. The model used in this work solves for six distinct halos, and overall uses 100 halos.
We constrain the lens model with positions and spectroscopic redshifts of multiple images of lensed background sources, selected from our *HST* imaging. The full lensing analysis of this field will be presented in Sharon et al., in prep.
From the resulting model of the mass distribution of the foreground lens, we derive the lensing magnification and deflection maps that are used in this work. The deflection map $\vec{\alpha}$ is used to ray-trace the observed positions to a background (source) plane, using the lensing equation: $$\vec{\beta} = \vec{\theta} - \frac{d_{ls}}{d_s} \vec{\alpha} (\vec{\theta}),$$ where $\vec{\beta}$ is the position at the background plane, $\vec{\theta}$ is the position in the image plane, and $d_{ls}$ and $d_s$ are the angular diameter distances from the lens to the source and from the observer to the source, respectively. In this work, we ray-trace the pixels and spaxels of both the arc and G1 to the absorber plane at $z={0.73379}$.
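For illustration, this ray-tracing step reduces to a direct application of the equation above; the sketch below assumes the deflection maps have already been interpolated to the positions of interest (the arrays are placeholders for the [[Lenstool]{}]{} products):

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
z_lens, z_abs = 0.443, 0.73379

# Distance ratio d_ls / d_s entering the lens equation for the absorber plane
ratio = (cosmo.angular_diameter_distance_z1z2(z_lens, z_abs)
         / cosmo.angular_diameter_distance(z_abs)).value

def ray_trace(theta_x, theta_y, alpha_x, alpha_y):
    """Map image-plane coordinates (arcsec) to the z_abs source plane.

    alpha_x, alpha_y are the deflection angles (arcsec) at (theta_x, theta_y),
    taken from the lens-model deflection maps (placeholders here).
    """
    return theta_x - ratio * alpha_x, theta_y - ratio * alpha_y
```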
The arc segments are highly magnified and appear at regions close to the critical curves, where the lensing uncertainties are significant. However, for the redshift of G1 this region is far enough from the strong lensing regime, so that the lensing potential and its derivatives are smooth (as can be seen in Fig. \[fig\_magnification\]) and the uncertainties are reduced.
Absorber-plane geometry {#sec_geometry}
-----------------------
Fig. \[fig\_slits\] shows a zoom-in region of the field around G1 in the image plane (top panel) and in the reconstructed absorber plane at $z={0.73379}$ (bottom panel). For clarity, only the SW spaxels are shown. In the absorber plane, each spaxel is $\approx 3\times6$ kpc$^2$ in size.
![Impact parameters to G1, probed by the MagE spaxels in the image plane (horizontal scale) and in the absorber plane (vertical scales). Positions to the North-East of the G1 semi-minor axis are assigned arbitrarily with negative values and are shown with open symbols. Note that the transformation from the image to the absorber plane is well approximated by a constant scale factor (the straight line in the figure). To convert angular distances into physical distances in the absorber plane a scale of 7.28kpc/$\arcsec$ was used. \[fig\_ip\]](impact_parameter_revised.pdf){width="\columnwidth"}
Impact parameters, $D$, are defined as the projected distance between the center of a spaxel and the center of G1. Impact parameters in arc-seconds are defined in the reconstructed image. They are then converted to physical distances by using the cosmological scale at $z={0.73379}$ ($1 \arcsec =
7.28$kpc). For the sake of clarity, we arbitrarily assign negative or positive values depending on whether the spaxel is to the North-East or to the South-West of G1’s minor axis, respectively. Due to the particular alignment of galaxy and arc segments, the conversion between impact parameters in the image and in the reconstructed planes is almost linear (Fig. \[fig\_ip\]).
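The adopted angular-to-physical scale can be checked directly from the stated cosmology, for instance with astropy:

```python
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

# Cosmology as adopted in this paper (H0 = 70 km/s/Mpc, Om0 = 0.3)
cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
scale = cosmo.kpc_proper_per_arcmin(0.73379).to(u.kpc / u.arcsec)
print(scale)  # ~7.3 kpc / arcsec, matching the 7.28 kpc/arcsec used above
```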
Our definition of impact parameter carries three sources of uncertainty. The first one comes from the lens model systematics and cosmology; we estimate this error to be $\approx 5$%, and therefore to dominate at large impact parameters. A second source of error comes from the astrometry, which introduces an error that dominates at low impact parameters. For instance, spaxel SW \#1 in Fig. \[fig\_slits\] does not apparently match any arc signal in the [*HST*]{} image. However, we do measure flux on that spaxel (Fig. \[fig\_2D\]), which we regard as independent of SW \#2, judging from the different absorption kinematics (Fig. \[fig\_stack\_mage\]). The astrometry is further discussed in Appendix \[sect\_astrometry\]. These two can be considered [*measurement*]{} errors associated with our particular definition of impact parameter.
A third source of uncertainty comes from the extended nature of the background source, which is relevant for comparisons with the well defined ‘pencil-beam’ quasar sight-lines. Our absorbing signal results from a light-weighted profile, which in turn is modulated by both the source deflection and the lens magnification. Thus, our experimental setup faces an inherent source of systematic uncertainty in the impact parameters (suffered by any observations using extended background sources).
To account for the last two uncertainties we arbitrarily assign a systematic error on $D$ of half the spaxel size [*along the slit*]{}, i.e., $\approx1.5$kpc in the absorber plane.
Emission properties of G1 at $z={0.73379}$ {#sec:G1}
==========================================
We use the [*HST*]{} and MUSE datasets to characterize G1. In the following subsections we present the details of these analyses, and Table \[table\_G1\] summarizes G1’s inferred properties.
Geometry and environment
------------------------
From the source plane reconstruction of the [*HST*]{} image (see bottom panel of Fig. \[fig\_slits\]), G1 is a spiral galaxy with well defined spiral arms. The position angle of the major axis is PA$=55\degree$ N to E. The axial ratio is about 0.7, which implies an inclination angle of $i=45\degree$.
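Under a thin-disk approximation (i.e., neglecting the intrinsic thickness of the disk), this follows directly from $$i\simeq\arccos(b/a)=\arccos(0.7)\approx 45.6\degree,$$ consistent with the value adopted above.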
G1 seems to have no companions nearby. We have run an automatic search for emission line sources and found no other galaxy at this redshift in the MUSE field. According to our lens model, G1 is magnified by a factor of $\mu \approx 2.9$. The model does not identify regions with much lower magnification around G1 (Fig. \[fig\_magnification\]) implying that no other non-magnified galaxies have been missed by our automatic search, down to a $1\sigma$ surface brightness limit of $\approx
5\times10^{-19}$ erg s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$.
![[*Right panel:*]{} [\[\]]{} nebular emission around G1 in the MUSE cube. Stars and foreground objects have been removed. The inset shows G1’s stellar emission as seen in the [*HST*]{} F814W band. Both images are displayed in the image plane. Yellow boxes are $0\farcs8$ on each side, corresponding to $4\times 4$ MUSE spaxels. The blue circle indicates the seeing FWHM. [*Left panels:* ]{} Gaussian fits to [\[\]]{}$\lambda\lambda3727,3729$ at each of the 12 selected regions indicated by the numbered boxes.[]{data-label="fig_oii"}](fig_oii_vel.pdf){width="\columnwidth"}
[*HST*]{} photometry {#sec_photometry}
--------------------
G1 is located in projection close to the bright arc (see Fig. \[fig\_slits\]); thus, its photometry is expected to be contaminated. To measure the galaxy flux we use two different techniques. We first apply a symmetrization approach in which we rotate the galaxy image, subtract it from the original, and clip any $2\sigma$ positive deviations; this residual image is finally subtracted from the original, thereby removing the emission unrelated to G1 [@Schade1995]. The second approach is to obtain the flux from a masked image that excludes the arc. From both methods we obtain an average $m_{F814W} = 21.76 \pm 0.20$ and $m_{F160W} = 22.04 \pm 0.17$, corrected for a Galactic extinction of E(B-V)=0.094mag using the dust maps of @Schlegel1998. The absolute magnitude is computed from the F814W band, which is close to rest-frame B band; the small offset is corrected using a local SBc galaxy template [@Coleman1980]. The absolute magnitude is $M_B=-20.49$. Using the luminosity function from DEEP2 [@Willmer2006] we obtain a de-magnified luminosity of $L/L^*_B=0.14$.
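As a rough cross-check (our sketch, not part of the original analysis), the quoted luminosity can be approximately recovered from the absolute magnitude by removing the lensing magnification and comparing with a characteristic magnitude; the value $M^*_B \approx -21.5$ used below is an assumed, DEEP2-like characteristic magnitude at this redshift, not a number taken from this paper:

```python
import math

# Quoted in the text: M_B and the magnification mu; M_star_B is our assumption.
M_B = -20.49        # absolute B-band magnitude (lensed)
mu = 2.9            # magnification from the lens model
M_star_B = -21.5    # assumed characteristic magnitude (Willmer et al. 2006-like)

# De-magnify: the source is fainter by 2.5*log10(mu) once magnification is removed
M_B_demag = M_B + 2.5 * math.log10(mu)

# Luminosity relative to L*
L_over_Lstar = 10 ** (-0.4 * (M_B_demag - M_star_B))
print(round(L_over_Lstar, 2))   # ~0.14, close to the quoted de-magnified value
```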
Using a standard SED fitting code [@Moustakas2017] we constrain the (de-magnified) median stellar mass to be M$_*= 4.8 \times 10^{9}$ M$_{\astrosun}$. Using the stellar-to-halo mass relation in [@Moster2010] we infer a halo mass of M$_h=4.8 \times 10^{11}$ M$_{\astrosun}$, which corresponds to a virial radius of $R_{\rm vir}\approx135$kpc.
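As an illustrative check of the quoted virial radius (a sketch under assumed conventions, not the authors' calculation; the result depends on the overdensity definition and cosmology), one can compute the radius enclosing 200 times the mean matter density at the galaxy redshift:

```python
import numpy as np
import astropy.units as u
from astropy.constants import M_sun
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # assumed cosmology
z = 0.73379
M_h = 4.8e11 * M_sun                    # halo mass quoted in the text

# Mean matter density at z
rho_m = cosmo.Om(z) * cosmo.critical_density(z)

# Radius enclosing an average density of 200 * rho_m
volume = (3 * M_h / (4 * np.pi * 200 * rho_m)).to(u.kpc**3)
R_200m = np.cbrt(volume.value) * u.kpc
print(R_200m)   # ~140 kpc; using 200 x critical density instead gives ~125 kpc,
                # bracketing the quoted R_vir ~ 135 kpc
```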
\[O II\] emission {#oii_emission}
-----------------
Fig. \[fig\_oii\] shows the nebular \[O II\] emission around G1 as obtained from the MUSE datacube (i.e., in the image plane), from which we define the systemic redshift. We fit the \[O II\]$\lambda\lambda 3727,3729$ doublet with double Gaussians in 19 $4\times4$ binned spaxels (of which the brightest 12 are shown in Fig. \[fig\_oii\]) and obtain a total (de-magnified) \[O II\] flux of $f_{\rm OII}=2.1\times10^{-17}$ erg s$^{-1}$ cm$^{-2}$. Considering the luminosity distance to $z={0.73379}$ we infer an (obscured) star-formation rate [@Kennicut1998] of SFR$=1.1$[M$_{\astrosun}$]{}yr$^{-1}$. Considering its redshift and specific star-formation rate, G1 is a typical star-forming galaxy [@Lang2014; @Oliva-Altamirano2014; @Matthee2019].
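For orientation (our sketch; the exact number depends on the adopted cosmology, dust treatment, and calibration constant), the quoted SFR can be approximately reproduced by converting the de-magnified flux into a luminosity and applying the \[O II\] calibration of @Kennicut1998:

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)            # assumed cosmology
z = 0.73379
f_oii = 2.1e-17 * u.erg / u.s / u.cm**2          # de-magnified [O II] flux

d_L = cosmo.luminosity_distance(z).to(u.cm)
L_oii = (4 * np.pi * d_L**2 * f_oii).to(u.erg / u.s)

# Kennicutt (1998): SFR [Msun/yr] ~ 1.4e-41 * L([O II]) [erg/s]
sfr = 1.4e-41 * L_oii.value
print(L_oii, sfr)   # L ~ 5e40 erg/s; SFR of order 1 Msun/yr, i.e., of the same
                    # order as the quoted 1.1 Msun/yr (differences at this level
                    # are expected from calibration and dust-correction choices)
```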
To compare \[O II\] emission with [*absorption*]{} velocities, we map the MUSE spaxels onto the MagE spaxels. In this fashion we make sure we are sampling roughly the same volumes both in emission and absorption (although, for the reasons outlined in §\[sec\_lens\_model\], the physical regions are not constrained within a spaxel, and therefore we cannot establish whether \[O II\] emission and Mg II absorption occur in [*exactly*]{} the same volumes). We set $v=0$ [km s$^{-1}$]{} at $z=0.73379$. The re-mapped cube shows significant \[O II\] emission in MagE spaxels SW \#1 through \#4 (Fig. \[fig\_stack\_mage\]). The fit results are listed in Table \[table\_emission\].
We also perform a morpho-kinematical analysis of G1’s \[O II\] emission using the [[Galpak]{}]{} software [@Bouche2015]. The input is a reconstructed version of the MUSE cube in the absorber plane (see Appendix \[sec:galpak\] for details). From the model we obtain an independent assessment of the geometry and halo mass of the galaxy (see Table \[table\_G1\]). We find a total halo mass that is somewhat larger than that obtained from the SED fitting, but consistent within uncertainties. We also find consistency for G1’s inclination. However, the inferred PAs of the major axis differ by $\sim
15\degree$, which should not be a surprise if gas and stars have somewhat different geometries. We come back to the [[Galpak]{}]{} model in § \[sec\_velocities\] when we assess the kinematics of the absorbing gas.
Absorption properties of G1 at $z={0.73379}$ {#sec:abs}
============================================
This section encompasses the core of the present study. We analyze the absorption-line properties of G1 in the MagE data in terms of both absorption strengths and kinematics. We emphasize that the blue coverage and resolving power of MagE should lead to robust equivalent-width ($W_0$) and redshift measurements.
MagE absorption profiles
------------------------
Mg II is detected in all 11 SW positions and in 2 of the SKY positions. All but 3 (4) of these detections also have Fe II (Mg I) detections. In the NE arc-segment, we find no Mg II absorption in any of the 11 positions down to sensitive limits.
![Mg II detections in the SW slit. Position numbers of the MagE spaxels are indicated, with numbers increasing to the South-West. The center of G1 lies close to SW \#2 (see Figs. \[fig\_FOV\] and \[fig\_slits\]). The blue shaded spectrum corresponds to the \[O II\] coverage (scaled to fit in the y-axis) as measured with MUSE over the MagE spaxels. Only the four MagE spaxels that lie closest to G1 (SW positions \#1 to \#4) show noticeable \[O II\] emission (see Table \[table\_emission\]). The yellow shaded region indicates the position of a sky emission line at $4861.32$Å (see Fig. \[fig\_2D\]). \[fig\_stack\_mage\]](fig_stack_mage.pdf){width="1.\columnwidth"}
To obtain $W_0$ and redshifts, we fit single-component Voigt profiles in each continuum-normalized spectrum. The spectral resolution of MagE is not high enough to resolve individual velocity components and therefore the fits are not unique; however, using Voigt profiles (instead of Gaussian profiles) allows us to obtain equivalent widths and accurate velocities via simultaneous fitting of multiple transition lines. We use the VPFIT package [@Carswell2014] to fit the following lines: Mg II $\lambda\lambda2796,2803$, Mg I $\lambda2852$, and Fe II $\lambda\lambda\lambda\lambda2600,2585,2382,2374$. Fe II $\lambda 2344$ was excluded from the analysis because it is in the source’s Ly$\alpha$ forest. Lines of other possible species are heavily blended with sky lines in the red part of the spectrum and were not considered either. In each fit, the redshift, column densities ($N$), and Doppler parameters ($b$) were left free to vary, while keeping all transitions tied to a common redshift and Doppler parameter, and all lines of the same species to a common column density. We calculate equivalent widths and their errors from the fitted $N$ and $b$ values using the approximation provided in @Draine2011. $W_0$ upper limits for non-detections are obtained using the formula $W_0(2\sigma) = 2\times{\rm FWHM}/\langle S/N \rangle/(1+z)$, where $\langle S/N\rangle$ is the average signal-to-noise ratio per pixel at the position of the expected line. The full velocity spread of the system, $\Delta v_{\rm FWHM}$, is estimated from the deconvolved synthetic profile of Mg II $\lambda 2796$.
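A minimal sketch of this upper-limit formula is given below; the FWHM and $\langle S/N\rangle$ values are illustrative placeholders, not measurements quoted in this paper:

```python
def w0_upper_limit(fwhm_ang, snr_per_pixel, z, nsigma=2.0):
    """Rest-frame EW limit: W0(nsigma) = nsigma * FWHM / <S/N> / (1 + z)."""
    return nsigma * fwhm_ang / snr_per_pixel / (1.0 + z)

# Illustrative: a ~1.2 A resolution element at the observed wavelength of
# Mg II 2796 (z = 0.73379) with <S/N> ~ 7 per pixel gives ~0.2 A,
# comparable to the 2-sigma limits quoted in the text.
print(w0_upper_limit(fwhm_ang=1.2, snr_per_pixel=7.0, z=0.73379))
```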
The complete set of synthetic profiles and non-absorbed spectral regions is shown in the Appendix. The fitted parameters are presented in Table \[table\_abslines\]. Aided by the fitted profiles, we do not see evidence of anomalous multiplet ratios, and therefore assume no partial covering effects [e.g., @Ganguly1999; @Bergeron2017].
In Fig. \[fig\_stack\_mage\] we present the Mg II absorption profiles and their fits in the SW slit (the fits are also constrained by the Fe II and Mg I lines, which are not shown here but in the Appendix). The fitted profiles feature a clear transition from stronger (kinematically more complex) to weaker (simpler) systems as one probes outwards of G1, i.e., with increasing position number along the slit. The errors in velocity, of just a few [km s$^{-1}$]{}, are small enough to also reveal clear shifts in the centroid velocities (red tick-marks in the Figure), which change with position in a non-random fashion. We come back to these kinematical aspects in §\[sec\_velocities\] and §\[sec\_kinematics\].
![Mg II $W_0(2796)$ map in the absorber (de-lensed) plane. Each spaxel is $3\times 6$ kpc$^2$. SKY slit positions with no source illumination are shown transparent. Upper limits ($2\sigma$) in the NE slit are indicated with blue triangles. The dashed line indicates the projection of G1’s semi-major axis at PA=$55\degree$ N to E. The inset shows an image from the [*HST*]{} F814W band in the absorber plane. \[fig\_rEW\_map\]](fig_rEW_map.pdf){width="\columnwidth"}
Fig. \[fig\_rEW\_map\] shows the corresponding map of $W_0(2796)$ in the (reconstructed) absorber plane. The color of each spaxel is tied to the rest-frame equivalent width where Mg II is detected. The blue arrows indicate $2\sigma$ upper limits, while the dashed line indicates G1’s position angle. This map provides an overall picture of the present scenario: coherent absorption in a highly inclined disk along its major axis toward the South-West, with two detections on the North-East side of G1. Conversely, the NE slit, further away from G1, shows no detections.
In the following analysis we consider separately the equivalent widths and the velocities, both as a function of $D$.
Equivalent widths versus impact parameter
-----------------------------------------
![Mg II $\lambda 2796$ rest-frame equivalent width as a function of impact parameter $D$ (in the absorber plane) for SW, NE, and SKY slits. Non-detections are reported as $2\sigma$ upper limits. Positions to the North-East of the G1 minor axis are depicted with open symbols. Measurement uncertainties in $D$ (§ \[sec\_lens\_model\]) come from the astrometry (horizontal error bars) and from the lens model (represented by symbol sizes). For comparison with the quasar statistics, data points from @Nielsen2013 are displayed (grey symbols). The dashed curve is a scaled version of the isothermal density profile from @Chen2010 using $L=0.14~L^*$; the shaded region is the RMS of the differences between model and data (see § \[sec\_isothermal\] for details). \[fig\_rEW\_D\]](fig_mgii_rEW_VPFIT.pdf){width="\columnwidth"}
Fig. \[fig\_rEW\_D\] summarizes the first of our main results. It shows an anti-correlation between $\lambda 2796$ equivalent width and impact parameter [e.g., @Chen2010; @Nielsen2013] along the three slit directions used in this work. Thanks to the serendipitous alignment of G1 and the arc segments, this is the first time such a relation can be observed in an individual absorbing galaxy along its major axis.
Notably, there appears to be more coherence toward [PSZ1 G311.65–18.48]{} along the SW slit than in the system studied toward [RCS2032727$-$132623]{} (Paper I), in the sense that all SW positions have positive detections, with no non-detections down to $\approx 0.2$Å, our $2\sigma$ detection limit. Since we are probing here (1) along the major axis of a disk galaxy and (2) smaller impact parameters, the observed coherence probably indicates that the gas in the disk (this arc) is less clumpy than further away in the halo ([RCS2032727$-$132623]{}).
We compare these arc data with the statistics of quasar absorbers in § \[sec:discussion\].
![image](fig_11.pdf){width="101.00000%"}
Gas velocity versus impact parameter {#sec_velocities}
------------------------------------
Fig. \[fig\_vel\_D\] displays our second main result. The left panel shows Mg II-Fe II absorption velocities in the SW and SKY slits (green and olive colors, respectively) and \[O II\] emission velocities (orange colors) as a function of impact parameter, $D$. The emission velocities come from \[O II\] fits in apertures that match SW spaxels \#1 to \#4 (only the four spaxels closest to G1 show significant \[O II\]; Fig. \[fig\_stack\_mage\]). Error bars indicate the uncertainty in the velocity centroid, while the shaded region indicates the projected velocity spread. Note that no spaxel coincides with $D=0$ kpc. In this and the following figures we treat impact parameters on the NE side of G1’s minor axis as negative quantities (and hence dispense with the open symbols). This choice highlights the apparent rotation around G1, which we discuss below. Given the alignment between the arc and G1’s major axis, such a plot can be considered a rotation curve. This is the first rotation curve of absorbing gas measured in such a distant galaxy.
Perhaps the most striking feature in the left panel of Fig. \[fig\_vel\_D\] is the [*decline*]{} in velocity at SW spaxels \#10 and \#11. To explore possible gas rotation, we use our 3D model of \[O II\] emission (§ \[oii\_emission\]) and obtain a line-of-sight velocity map at any position near G1 (right panel of Fig. \[fig\_vel\_D\]). This model might not be unique, but it does serve our purpose of extending it to larger distances for comparison with the absorbing gas. The line-of-sight velocities allowed by the model within an aperture that matches the SW slit are represented in the left panel by the dashed curves. Most velocities are well encompassed by the model velocities, indicating co-rotation of the absorbing gas out to $D\approx 23$kpc. The exceptions are the velocities at SW spaxel \#1 (discussed in § \[sec\_kinematics\]) and at \#10 and \#11 (§ \[sec\_accretion\]).
Summary of absorption properties
--------------------------------
Before proceeding to the discussion, it is useful to consider an overview of the observables that includes the other two absorption species detected and their equivalent-width ratios. Such an absorption-line summary is shown in Fig. \[fig\_ratios\], where the upper panel is a simpler version of the left panel in Fig. \[fig\_vel\_D\], the middle panel combines the equivalent widths of the three species studied in this work, and the bottom panel shows the equivalent-width ratios. We concentrate on the standard ratios [$\cal{R}^{\rm FeII}_{\rm MgII}$]{}$\equiv W_0(2600)/W_0(2796)$ and [$\cal{R}^{\rm MgI}_{\rm MgII}$]{}$\equiv W_0(2852)/W_0(2796)$, bearing in mind that Mg is an $\alpha$ element and therefore chemical enrichment could affect those ratios.
From the middle panel it can be seen that, as for Mg II, the Fe II and Mg I equivalent widths also anti-correlate with $D$. This is expected, since these species have similar ionization potentials and are most likely co-spatial [@Werk2014].
From the bottom panel of Fig. \[fig\_ratios\], both [$\cal{R}^{\rm FeII}_{\rm MgII}$]{} and [$\cal{R}^{\rm MgI}_{\rm MgII}$]{} exhibit a general decrease as we probe further out from G1. This is more evident in [$\cal{R}^{\rm FeII}_{\rm MgII}$]{}, which is above $0.5$ out to SW\#6 and below that threshold beyond. The trend seems real even when excluding position SW\#1, which is the only measurement above unity (see § \[sec\_kinematics\]). At the large-distance end, the two outermost positions have comparatively low [$\cal{R}^{\rm FeII}_{\rm MgII}$]{} values.
Discussion {#sec:discussion}
==========
In this section, we synthesize the various observables of G1’s CGM. The discussion revolves around what the observed equivalent widths, kinematics, and equivalent-width ratios tell us about the origin of the Mg II-Fe II gas. It also highlights the complementarity between our technique and other CGM probes.
Evolutionary context
--------------------
G1 seems to be an isolated, sub-luminous ($0.1L^*_B$), star-forming ($>1.1$[M$_{\astrosun}$]{}yr$^{-1}$) disk-like galaxy. Fig. \[fig\_oii\] shows that the \[O II\] emission is confined to the optical surroundings, while absorption is detected much further out, at least in the direction of the SW slit. This suggests that G1 has recently experienced a burst of star formation, which is detached from the older (and more ordered) cool gas. This is analogous to local galaxies, where H$\alpha$ (also a proxy for star formation) is not necessarily associated with H I (as detected via 21-cm observations, and here considered to be traced by Mg II), which is usually more extended [@Bigiel2012; @Rao2013]. Therefore, the offset seen toward [PSZ1 G311.65–18.48]{} should not be surprising for a formed disk still experiencing starbursts, much like Mg II-selected galaxies detected in emission [@Noterdaeme2010; @Bouche2007].
For comparison with the local Universe, our $W_0(2796)$ measurements are $\approx 3$ times higher than those found in M31 (similar halo mass, similar inclination, major axis quasar sightlines) by @Rao2013 at similar impact parameters. Such differences might have an evolutionary or environmental origin, with G1 bearing a larger gaseous content.
Spatial structure of the CGM
----------------------------
### Direct comparison with quasar and galaxy surveys
The grey points in Fig. \[fig\_rEW\_D\] are drawn from the sample of 182 quasar absorbers in @Nielsen2013. Note that our data provide seven independent measurements in the sparsely populated interval $D<10$kpc.
In general, our data fall within the quasar scatter, but that scatter is much larger than what we see across the arc. The smaller arc scatter cannot be due solely to our particular experimental design: even though the arc data result from a light-weighted average (over a spaxel area), the spaxels are independent of each other, and therefore the spatial smoothness seen on the scales shown in Fig. \[fig\_rEW\_D\] cannot be an artifact of that averaging.
In Paper I, we found a similar situation toward [RCS2032727$-$132623]{}. These cases strongly suggest that the scatter in $W_0^{\rm quasar}$ is not intrinsic to the CGM but rather dominated by the heterogeneous halo population, in which gas extent and smoothness are functions of host-galaxy intrinsic properties [@Chen2008; @Chen2010; @Nielsen2013; @Nielsen2015; @Rubin2018b] and orientation [@Nielsen2015]. It should therefore not be a surprise that quasar-galaxy samples exhibit more scatter than the present case. Furthermore, the same should be true for other extended probes of the CGM like background galaxies [@Steidel2010; @Bordoloi2011; @Rubin2018a; @Rubin2018b], which also provide single lines-of-sight [the exception being the handful of cases where background galaxies resolve foreground halos; @Diamond-Stanic2016; @Peroux2018].
### Isothermal-profile model {#sec_isothermal}
We also compare our data with a physically-motivated model. The dashed line in Fig. \[fig\_rEW\_D\] shows a 4-parameter isothermal profile with finite extent, $R_{\rm gas}$, developed by @Tinker2008 to describe $W_0^{\rm
quasar}(D)$. The isothermal profile was first motivated to model the observed distribution of dynamical mass within $\approx 30$kpc of nearby galaxies [@Burkert1995]. @Chen2010 fitted such a profile to a sample of 47 galaxy- pairs and 24 galaxies showing no absorption at $10<D<120 h^{-1}$kpc and obtained the scaling relation $R_{\rm gas} =
74\times(L/L_*)^{0.35}$kpc.
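For the luminosity adopted here, this scaling relation gives
$$R_{\rm gas} = 74\times(0.14)^{0.35}\,{\rm kpc} \approx 37\,{\rm kpc},$$
i.e., a few tens of kpc, comparable to the largest distances at which we detect absorption along the SW slit.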
We test this model on our arc data by imposing the profile to pass through the $W_0$ value of the closest spaxel to G1 (SW \#2). We use $L/L_*=0.14$ (see § \[sec\_photometry\]) and set the model amplitude to fit $W_0(2796)=
2.27\pm 0.15$Å at $D=1.4$kpc, leaving the 3 other model parameters in @Chen2010 unchanged. The dashed line in Fig. \[fig\_rEW\_D\] shows that the isothermal model nicely fits our arc data (RMS$=0.19$ Å); moreover, it fits the data not only at the closest spaxel (by construction), but also at almost all impact parameters (excepting the two measurements to the “opposite” side of G1; see next subsection). This is remarkable, since we are fitting a single halo with an isothermal profile that fits the quasar statistics at $D>10$kpc, extrapolated to smaller impact parameters.
The fit has important consequences for our understanding of gaseous halos. First, it favors an isothermal gas distribution over the popular Navarro-Frenk-White [NFW; @Navarro1997] profile, which does not predict a flat $W_0$-$D$ relation at small $D$. This is the first time we can firmly rule out an NFW model for the cool CGM, thanks to our several detections at $D<10$kpc in a single system. Incidentally, the fit also lends support to CGM models that adopt a single density profile [e.g., @Stern2016]. Secondly, it suggests that G1’s CGM is representative of the Mg II-selected absorber population, since it can be modeled with parameters that result from quasar-absorber averages over a wide redshift range. And third, it reveals that the scatter seen in the overall population includes an [*intrinsic*]{} component, likely due to CGM structure on scales of tens of kpc. It seems timely to verify these fundamental points with more measurements at small $D$, including single detections toward unresolved background sources.
### kpc scales
The overlap of the SKY and SW slits (Fig. \[fig\_rEW\_map\]) helps us to qualitatively assess variations in $W_0(2796)$ around G1 on kpc scales. Firstly, SKY positions \#10 and \#11 partially overlap with SW positions \#1 and \#2, respectively. The corresponding equivalent widths, though, show no significant differences (see Fig. \[fig\_rEW\_D\]), suggesting that close to G1 (within a few kpc) the gas is smooth on scales of $\approx
1$kpc, which is roughly the offset between the aforementioned SKY and SW spaxels. This could be due to a covering factor [@Steidel1997; @Tripp2005; @Chen2008; @Kacprzak2008; @Stern2016] close to unity at small impact parameters ($D\lesssim 10$kpc).
### Isotropy
The two measurements on the “opposite” side of G1 (i.e., to the North-East of G1’s minor axis; open symbols in Fig. \[fig\_rEW\_D\]) depart by 2-3$\sigma$ from the trend shown by the positions to the South-West of G1 at the same impact parameter (although the difference is within the typical scatter reported toward quasar sightlines at larger distances). This indicates that the gas is not homogeneously distributed around G1, even at these small distances.
Unfortunately, we are not able to test the isotropy of the gas on scales $4<D<29$kpc, due to the lack of arc signal directly to the North-East of G1. However, NE position \#11 is located $29.3$ kpc away from G1, just as far as SW position \#11 on the other side, and yet it shows no Mg II down to a stringent $2\sigma$ limit of $0.16$ Å ($\log N/{\rm cm}^{-2}=12.7$), while the SW position has a significant detection at twice that value. This situation is remarkable, since NE \#11 appears in projection on top of the major axis (Fig. \[fig\_rEW\_map\]), while SW \#11 lies around 7kpc away in projection from the same axis. The NE non-detection is thus even more unexpected under the assumption of isotropy. We conclude that the gas traced by Mg II, to the extent that we can measure it, is either (1) not isotropically distributed, (2) distributed in a disk which is not aligned with the optical disk, or (3) confined to a (spherical?) volume $\lesssim30$kpc in size along G1’s major axis. This latter option implies that the SW\#11 absorption might have an external origin, a possibility we address below.
![Summary of MagE absorption-line properties at $z=0.73379$ toward [PSZ1 G311.65–18.48]{}, as a function of impact parameter $D$ from G1. Only SW detections are shown. The only impact parameter to the North-East of G1’s minor axis has had its sign flipped. [*Upper panel:*]{} Velocity of Mg II+Fe II line centroids (same as in Fig. \[fig\_vel\_D\], left panel). [*Middle panel:*]{} Rest-frame equivalent width of Mg II $\lambda$2796, Fe II $\lambda$2600, and Mg I $\lambda$2852. [*Bottom panel:*]{} Equivalent-width ratios. The vertical dashed lines indicate the transitions between the absorption regimes proposed in § \[sec\_kinematics\], i.e., from left to right: disk, disk+inner-halo, and outer-halo absorption. \[fig\_ratios\]](fig_ratio.pdf){width="1.\columnwidth"}
Kinematics of the absorbing gas {#sec_kinematics}
-------------------------------
To the South-West of G1 the absorption signal extends out to $\approx 8$ optical radii along the major axis. Detecting extraplanar gas at $z=0.7$ has important consequences for our understanding of disk formation and gas accretion [e.g., @Bregman2018; @Stewart2011a; @Stewart2011]. The gas traced by Mg II shows clear signs of co-rotation (Fig. \[fig\_vel\_D\]), suggesting that its kinematics are not necessarily governed by outflows, as may be the case in less massive halos; here we see instead a more ordered, rotating disk. Our data also confirm the rotation scenario unveiled by simulations [e.g., @Stewart2011] and proposed for observations of disk-selected quasar absorbers at $z\sim 1$ [e.g., @Steidel2002; @Ho2017; @Zabl2019].
Based on the line-centroid velocities $v$ (left panel in Fig. \[fig\_vel\_D\]), and excluding the kinematically detached position SW\#1 (discussed below), we identify three distinct absorption regimes: (1) disk absorption at $D \lesssim 10$kpc, where velocities rise to $\approx 110$[km s$^{-1}$]{}; (2) disk+inner-halo absorption at $10 \lesssim D\lesssim 20$kpc, where velocities remain flat; and (3) outer-halo absorption at $D\gtrsim 20$kpc, where velocities fall ‘back’ to $v=0$ [km s$^{-1}$]{}.
Interestingly enough, the three proposed regimes correlate with the kinematical complexity of the absorption profiles. In fact, based on the absorption profiles in Fig. \[fig\_stack\_mage\], the disk absorption corresponds to SW positions \#2 to \#4, in which $\Delta v_{\rm
FWHM}\approx200$[km s$^{-1}$]{}, suggesting several velocity components (also note that position \#4 corresponds to the first spaxel beyond the stellar radius; Fig. \[fig\_slits\]). Then, the disk+halo absorption corresponds to positions \#5 to \#9, with somewhat simpler absorption kinematics and smaller $\Delta
v_{\rm FWHM}$ values, suggesting fewer velocity components. We emphasize that we presently cannot resolve individual velocity components and thus $v$ and $\Delta v_{\rm FWHM}$ must be considered spectroscopic (and spatial; see § \[sec\_geometry\]) averages.
The dashed lines in the left panel of Fig. \[fig\_vel\_D\] show that the first two regimes are explained, to some extent, by our rotation model. Conversely, SW positions \#10 and \#11 have the lowest velocity offsets and spreads, and cannot be explained with rotation, even in the Keplerian limit (green dashed line in Fig. \[fig\_vel\_D\]). Such ‘outer-halo’ absorption is one of the most striking signatures in the present data, which we discuss in § \[sec\_accretion\].
Finally, SW \#1 also stands out. This position shows a significantly higher velocity offset ($\sim 90$[km s$^{-1}$]{}) than the \[O II\] emission, suggesting that the dominant absorbing clouds are not tracking the rotation (the same may also be true for part of the SW \#2 absorption). The overlapping spaxel SKY \#10 shows a consistent velocity, meaning that the measurement is robust. Offsets of this kind are rarely observed in SDSS stacked spectra [@Noterdaeme2010], suggesting their covering factor is low. These positions also show the highest [$\cal{R}^{\rm FeII}_{\rm MgII}$]{} values in our sample, which can be explained if the gas is more enriched and processed. These two features argue in favor of a galactic-scale outflow [@Steidel2010; @Kacprzak2012; @Shen2012; @Fielding2017] in one of the velocity components, which is escaping G1 in the line-of-sight direction. Moreover, these spaxels show significant \[O II\] flux, and therefore might be co-spatial with star-forming regions, from which supernova-driven winds are expected to be launched [e.g., @Fielding2017; @Nelson2019].
Gradient in chemical enrichment?
--------------------------------
Some of the [$\cal{R}^{\rm FeII}_{\rm MgII}$]{} values in Fig. \[fig\_ratios\] are exceptionally high compared with the literature [@Joshi2018; @Rodriguez-Hidalgo2012]. Systems selected in the SDSS by having [$\cal{R}^{\rm FeII}_{\rm MgII}$]{}$>0.5$ are found to probe lower impact parameters; moreover, there seems to be a distinction between absorbers associated with high or low SFR depending on whether this ratio is above or below 0.5, respectively [@Noterdaeme2010; @Joshi2018]. Our particular experimental setup confirms this trend in the present host galaxy: the four closest positions to G1 show simultaneously the strongest [\[\]]{} emission (Fig. \[fig\_stack\_mage\]) and the highest [$\cal{R}^{\rm FeII}_{\rm MgII}$]{} values (all above 0.5; Fig. \[fig\_ratios\]). Furthermore, [$\cal{R}^{\rm FeII}_{\rm MgII}$]{} seems to show a negative gradient outwards of G1.
Equivalent widths of saturated lines are known to be a function of the number of velocity components [@Charlton1998; @Churchill2000], rather than of column density, $N$. The present spectra do not allow us to resolve such clouds nor to get at their $N$-ratios, making it hard to unambiguously assess the physical origin of the [$\cal{R}^{\rm FeII}_{\rm MgII}$]{} gradient. Nevertheless, $N$-ratios must have an effect on [$\cal{R}^{\rm FeII}_{\rm MgII}$]{}. Speculating that both kinematics and line saturation affect $\lambda 2796$ and $\lambda 2600$ similarly at a fixed impact parameter, a gradient in [$\cal{R}^{\rm FeII}_{\rm MgII}$]{}($D$) should globally reflect the same trend in $N($Fe II$)/N($Mg II$)$.
$N($Fe II$)/N($Mg II$)$ is driven by three factors: (a) ionization: however, assuming $\log N($H I$)/{\rm cm}^{-2}\gtrsim 19$ at $D\lesssim20$kpc $\approx 0.1\,R_{\rm vir}$ [@Werk2014], ionization is seemingly the least important factor [@Giavalisco2011; @Dey2015]; (b) dust: Mg is less depleted than Fe [@Vladilo2011; @DeCia2016], so one expects $N($Fe II$)/N($Mg II$)$ (or [$\cal{R}^{\rm FeII}_{\rm MgII}$]{}) to [*increase*]{} outwards of G1, which we do not observe; and (c) chemical enrichment: $\alpha$/Fe decreases as $Z$ increases, so $N($Fe II$)/N($Mg II$)$ ([$\cal{R}^{\rm FeII}_{\rm MgII}$]{}) should decrease outwards of G1, which we do observe.
We conclude that we are likely facing the effect of a negative gradient in chemical enrichment, with the outermost positions being less chemically evolved than those more internal to G1. Using high-resolution quasar spectra of a sample of star-forming galaxies, @Zahedy2017 also find evidence for a negative gradient in $N($Fe II$)/N($Mg II$)$; however, their ratios fall off (statistically) at larger distances ($\sim 100$ kpc) than probed here around a single galaxy. Since the @Zahedy2017 galaxy sample is a few to ten times more luminous than G1, the different scales are likely explained by the luminosity dependence of $R_{\rm gas}$ [e.g., @Chen2010].
Damped Ly$\alpha$ systems {#sec:dlas}
-------------------------
Mg II systems having [$\cal{R}^{\rm FeII}_{\rm MgII}$]{}$>0.5$ and $W_0(2852)>0.1$ Å have been proposed [@Rao2006; @Rao2017] as a way to select damped Ly$\alpha$ systems [DLAs; mostly neutral absorption systems having $\log N($H I$)>20.3$cm$^{-2}$; e.g., @Wolfe2005] at $z<1.65$. According to those criteria, positions SW\#1 through \#7 classify as DLA candidates. This lends support to the idea that DLAs occur (at least in part) in regions internal to galaxies and, furthermore, that some of them are associated with disks both at high and low redshift, as predicted by state-of-the-art simulations [@Rhodin2019]. Moreover, the arc positions classified as DLA candidates also have the widest velocity dispersions (most of them fall within our ‘disk’ kinematical classification), suggesting we are probing a prototypical DLA host [e.g., @Ledoux2006; @Neeleman2013].
Finding DLAs out to $15$kpc ($> 0.1~R_{\rm vir}$) may be somewhat surprising. Halo models predict columns in excess of the DLA threshold only at very low impact parameters, about three times smaller than those probed here (@Qu2018; but see @Mackenzie2019). The larger extent observed here might be due to the geometrical effect of probing along the major axis of an inclined disk [but see @Rao2013].
Assuming G1 hosts DLA clouds with unity covering factor within a projected disk of radius 15kpc, we estimate the total mass in neutral gas to be roughly $\log M_{\rm HI}/M_{\astrosun} \approx
9.5$. This is of the order of magnitude of what is found in 21-cm observations at low redshift [e.g., @Kanekar2018], suggesting that G1 represents a high-redshift analog of a nearby DLA host.
G1’s star-formation efficiency, defined as SFR/$M_{\rm HI}$, is relatively high compared with the bulk of star-forming galaxies [@Popping2015], SFE$=3.5\times10^{-10}$yr$^{-1}$. On the other hand, the cool gas fraction, defined as $M_{\rm HI}/(M_{\rm HI}+M_*)$, falls just below the average for $z=0.7$: $f_{\rm gas}\approx0.4$ [e.g., @Popping2015]. This indicates that G1 is still efficiently forming stars, but will enter a quenching phase, running out of gas in (SFE)$^{-1}\approx 3$Gyr, if not provided with an extra gas supply [@Genzel2010; @Leroy2013; @Sanchez2014].
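These numbers follow directly from quantities derived earlier in the paper (a quick consistency check using the quoted values):

```python
SFR = 1.1           # Msun/yr, from the [O II]-based estimate
M_HI = 10 ** 9.5    # Msun, neutral-gas mass from the DLA argument above
M_star = 4.8e9      # Msun, stellar mass from the SED fit

SFE = SFR / M_HI                      # ~3.5e-10 yr^-1
f_gas = M_HI / (M_HI + M_star)        # ~0.40
t_dep_Gyr = 1.0 / SFE / 1e9           # ~2.9 Gyr depletion time
print(SFE, f_gas, t_dep_Gyr)
```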
Cold accretion {#sec_accretion}
--------------
![Cartoon model for the inner CGM of the $z=0.7$ galaxy studied in this work (G1). The red polygons represent the MagE spaxels, reconstructed in the absorber plane and shown here in the same scale as in Fig. \[fig\_rEW\_map\]. The green rotating disk represents the volume where we detect absorption with $W_0>0.12$ Å. The disk is centered on the stellar light of G1, has a position angle of $55\degree$ N to E, and has an inclination angle of $i=45\degree$, i.e., same parameters as for the stellar disk (see also Fig. \[fig\_rEW\_map\]). The disk is assumed to produce absorption with unity covering factor and to be embedded in a spherical volume producing much less covering at our detection limit. The extensions of disk and spherical envelope are set arbitrarily such that no absorption is detected on spaxel NE \#11 (right-most position in the NE slit). The yellow arrow symbolizes in-flowing enriched gas which, if co-planar and aligned with the major axis, would reproduce the observed kinematics at SW \#10 and SW \#11 (left panel in Fig. \[fig\_vel\_D\]). See § \[sec\_accretion\] for further discussion. \[fig\_cartoon\]](dibujo5.pdf){width="\columnwidth"}
The gas detected at SW positions \#10 and \#11 stands out in many respects (Fig. \[fig\_ratios\]): it is kinematically detached from the rotation curve; it has a larger $W_0$ than the extrapolation of the trend followed by the more internal positions; and it has the lowest [$\cal{R}^{\rm FeII}_{\rm MgII}$]{} values, likely indicating less processed gas. In addition, spaxel SW \#11 lies $7$kpc away in projection from the major axis; depending on the (unknown) disk thickness, the gas detected in these directions could be co-planar and lie at distances of $\approx 0.2\,R_{\rm vir}$ from G1. These signatures suggest an ‘external’ origin. The absorption profiles at some other SW positions allow for an unresolved velocity component at the velocity of SW \#11 (Figures \[fig\_stack\_mage\] and \[fig\_vel\_D\]), which could be explained by extended non-rotating gas surrounding the disk. However, such a velocity component would not fit SW \#5 through SW \#9, nor any of the NE spaxels. We therefore dismiss the surrounding-gas scenario for SW \#10 and \#11 and instead consider in-falling gas. Cosmological simulations predict that galaxies hosted by M$\lesssim 10^{12}$[M$_{\astrosun}$]{} halos should undergo “cold-mode” accretion [e.g., @Stewart2011a]. In the following we consider the possibility that we have detected enriched cold accretion at intermediate redshift [@Kacprzak2014; @Stewart2011; @Bouche2013; @Bouche2016; @Danovich2015; @Qu2019].
Fig. \[fig\_cartoon\] shows a cartoon representation of G1’s inner CGM. The green rotating disk represents the volume where we detect Mg II-Fe II-Mg I absorption. The disk is assumed to produce absorption with unity covering factor and to be embedded in a spherical volume likely producing much less covering at our detection limit, $W_0>0.12$ Å. This distinction is a possible explanation for the good match with an isothermal model at the SW slit (Fig. \[fig\_rEW\_D\]) and the lack of detections at the NE slit (Fig. \[fig\_rEW\_map\]). In the cartoon model, the extents of the disk and the spherical envelope are set arbitrarily such that no absorption is detected to the North-East of spaxel NE \#11 (right-most position in the NE slit). Such a choice implies that SW \#10 and SW \#11 (right-most positions on the SW slit) would not receive signal from the disk, but from an external medium, which is consistent with our low-velocity detections. The proposed accreting gas enters the galactic disk radially and roughly transversely to the line of sight (producing the low line-of-sight velocities) while in the process of acquiring enough angular momentum to start co-rotating.
From the kinematics alone, though, it is hard to disentangle extraplanar inflow (radial or tangential) from a warped disk [@Diamond-Stanic2016], a scenario that seems to reproduce some observations of quasar absorbers having low line-of-sight velocities [@Rahmani2018; @Martin2019]. Indeed, most of the disks in the local Universe exhibit warps [@Sancisi2008; @Putman2009], their extended disks do show anomalies [@Koribalski2018], and in a few cases rotation curves start declining where H I becomes patchy in the extended disks of dwarf galaxies [@Das2019; @Oikawa2014]. Authors explain such cases via warped and tilted disks [@Sofue2016].
This being said, our data offer several indications [*against*]{} the warped-disk scenario. First, we do not see interacting galaxies [@Diamond-Stanic2016]. Secondly, we do not detect absorption at the same distance on the opposite side of G1 (i.e., NE positions \#10 and \#11). Third, velocities in simulated dwarfs fall by only 20% at 20 kpc [@Kyle2019], while here we see a decline of about $80\%$. Indeed, SW positions \#10 and \#11 have much less specific angular momentum than the rest; for instance, SW \#11 has only 60% of the specific angular momentum of SW \#10 (i.e., $(Rv)_{\#11}=0.6\times (Rv)_{\#10}$), suggesting the gas is not (yet) rotating. And lastly, the gas shows the lowest [$\cal{R}^{\rm FeII}_{\rm MgII}$]{} values, i.e., it is consistent with less processed gas, which is expected in cold accretion [e.g., @Oppenheimer2012; @Kacprzak2016]. Detecting accretion via Mg II at the level of $W_0\sim0.2$–$0.3$ Å, although incompatible with pristine gas [@Fumagalli2011; @MartinDC2019], agrees well with quasar observations of disk-selected absorbers [@Rubin2012; @Zabl2019].
By spatially averaging the absorption in SW\#10 and \#11 within a circular aperture of radius 30kpc, we find that the covering factor is low, $f_{\rm accretion} \approx 1$%. This is consistent with simulations at higher redshifts [@Faucher2011; @Fumagalli2011] and lends support to the cold-accretion scenario.
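One way to read this number (our interpretation of the estimate, not an explicit formula from the paper) is as the ratio of the area of the two detecting spaxels to that of the adopted aperture:
$$f_{\rm accretion} \approx \frac{2\times(3\times6)\,{\rm kpc}^2}{\pi\,(30\,{\rm kpc})^2} \approx 1.3\%.$$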
Altogether, cold, recycled accretion [@Rubin2012; @Danovich2015] at $\approx0.2\,R_{\rm vir}$ seems the most favoured scenario to explain the present data. It might be radial accretion at the disk edge [@Stewart2011; @Putman2012] originating from the cool CGM [@Werk2014] in the form of recycled winds [@Oppenheimer2010; @Angles-Alcazar2017], i.e., gas left over from past starbursts.
This is not the first time that absorption kinematics have been seen to be decoupled from the emission [@Steidel2002; @Martin2019; @Ho2017]. Velocities below the Keplerian expectation have also been detected in quasar sightlines, although at slightly larger distances [@Martin2019; @Ho2017; @Kacprzak2017]. Such signatures seem to be frequent in highly inclined disks, and authors have argued that they might probe inflows. However, with quasar sightlines probing only one position in the intersected halo, it is challenging to confirm this hypothesis. Thanks to the present tomographic data, we see for the first time a [*smooth transition*]{} to disk co-rotation, providing the first unambiguous evidence for enriched-gas accretion beyond the local Universe.
Summary and conclusions {#sec:summary}
=======================
We have studied the cool and enriched CGM of a $z=0.7$ star-forming galaxy (G1) via the gravitational arc-tomography technique [@Lopez2018], i.e., using a bright giant gravitational arc as background source. G1 appears to be an isolated and sub-luminous disky galaxy, seen at an inclination angle $i\approx 45\degree$.
We have measured Mg II, Fe II, and Mg I equivalent widths ($W_0$) in 25 $3\times6$ kpc$^2$ independent positions (including 13 velocity measurements) along G1’s major axis, at impact parameters $D=0$–$60$kpc (0–0.4$R_{\rm vir}$). This unique configuration has allowed us to probe distinct signatures of the CGM in an individual galactic environment. Our findings can be summarized as follows:
1. Enriched gas is detected out to $D\approx 30$kpc ($\approx 0.2~R_{\rm vir}$) in one radial direction from G1. The absorption profiles (Fig. \[fig\_stack\_mage\]) show kinematic variations as a function of $D$, becoming less complex outwards of G1. We suggest that the arc positions probe different regions in the halo and extended disk of G1. Within $\sim 3$ kpc, the smallest scales permitted by our ground-based observations, the gas distribution appears smooth in the central regions (unity covering factor). By comparing $W_0$ measured on both sides of G1, we find evidence that the gas is not distributed isotropically (Fig. \[fig\_rEW\_map\]).
2. We observe a $W_0$–$D$ anti-correlation in all three studied metal species. The $W_0(2796)$ scatter in the arc data (Fig. \[fig\_rEW\_D\]) is significantly smaller than that of the quasar statistics, suggesting biases in the latter, likely due to a variety of host properties and orientations. Our data populate the sparse $D<10$kpc interval, revealing that $W_0(D)$ flattens at low impact parameters. An isothermal density profile fits the arc data remarkably well at almost all impact parameters. Since most of the model parameters are tied to the quasar statistics, this suggests that the present halo is prototypical of the Mg II-selected CGM population. In particular, at $D<10$kpc the good fit rules out cuspy gas distributions, like those described by NFW or power-law models.
3. For most of the detections, the absorption velocities (Fig. \[fig\_vel\_D\], left panel) resemble a flat rotation curve, which appears to be kinematically coupled to G1’s [\[\]]{} emission. There are two exceptions to this trend. (a) One position, lying only $4$kpc in projection from G1 and measured independently in two slits, departs from rotation with a velocity of $\sim +90$[km s$^{-1}$]{}. This suggests that the gas, also exhibiting the highest [$\cal{R}^{\rm FeII}_{\rm MgII}$]{} value of the sample, might be out-flowing from G1. And (b), the two outer-most detections (at $\approx 30$ kpc $\approx0.2~R_{\rm vir}$) also seem decoupled from the disk kinematics, falling too short in velocity. We do not detect absorption at the same distance on the opposite side of G1. We interpret the low-velocity signal as occurring in less-enriched gas having a co-planar trajectory, which will eventually flow into the galaxy’s rotating disk (e.g., an enriched cold-accretion inflow).
4. The equivalent-width ratio [$\cal{R}^{\rm FeII}_{\rm MgII}$]{}$(D)$ (Fig. \[fig\_ratios\]) exhibits a negative gradient, which could partly be due to a negative gradient in metallicity. This ratio also suggests that G1’s central regions ($D<15$kpc) may host DLAs. We estimate the total reservoir of neutral gas and find it to be comparable with the mass locked into stars, suggesting that the galaxy has little fuel left to keep up with its current star-formation efficiency.
Outlook {#sec:outlook}
=======
We have highlighted the exquisite advantages of gravitational arc-tomography: (1) the background sources extend over hundreds of kpc$^2$ on the sky, permitting a true ‘slicing’ of the CGM of [*individual*]{} intervening galaxies; (2) comparison with the statistics of quasar-galaxy pairs offers a great opportunity to assess the gas patchiness and its covering factor around individual systems, something beyond the capabilities of present-day quasar observations; (3) the individual systems can be used as test laboratories in future simulations. These are key aspects that nicely [*complement*]{} quasar studies. The challenges are manifold as well: sensitive spatially-resolved spectroscopy is needed (not available until recently); absorber-plane reconstruction is required via ad-hoc modeling of the lensing configuration (usually non-trivial); and bright giant gravitational arcs are rare on the sky. We expect that new surveys will soon provide targets for future extremely-large observing facilities. In the meantime, a comparison scheme between the arc and quasar statistics can and must be developed. Furthermore, with higher spectral resolution one shall be able to resolve individual velocity components and assess the chemical state of the gas in a spatial/kinematical context. Undoubtedly, such tools shall enable a more profound understanding of the baryon cycle across galaxy evolution.
Acknowledgements {#acknowledgements .unnumbered}
================
We thank the anonymous referee for comments that improved the manuscript. This work has benefited from discussions with Nikki Nielsen, Kate Rubin, Glenn Kacprzak, Umberto Rescigno and Nicolas Bouché. This paper includes data gathered with the $6.5$ meter Magellan Telescopes located at Las Campanas Observatory, Chile: the Magellan/MagE observations were carried out as part of program CN2017B-57 (PI Tejos). The VLT/MUSE data were obtained from the ESO public archive (program 297.A-5012(A), PI Aghanim). This work was supported in part by NASA through a grant (HST-GO-15377.01, PI Bayliss) awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract NAS 5–26555. SL was partially funded by UCh/VID project ENL18/18 and by FONDECYT grant number 1191232. NT acknowledges support from PUCV/VRIEA projects $039.333/2018$ and $039.395/2019$, and FONDECYT grant number 1191232. LFB was partially supported by CONICYT Project BASAL AFB-170002. MG was supported by NASA through the NASA Hubble Fellowship grant \#HST-HF2-51409 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555.
[^1]: [drizzlepac.stsci.edu](drizzlepac.stsci.edu)
---
abstract: |
The results of new spectroscopic analyses of 20 recently reported extrasolar planet parent stars are presented. The companion of one of these stars, HD10697, has recently been shown to have a mass in the brown dwarf regime; we find \[Fe/H\] $= +0.16$ for it. For the remaining sample, we derive \[Fe/H\] estimates ranging from $-0.41$ to $+0.37$, with an average value of $+0.18
\pm 0.19$. If we add the 13 stars included in the previous papers of this series and 6 other stars with companions below the 11 M$_{\rm Jup}$ limit from the recent studies of Santos et al., we derive $\langle$\[Fe/H\]$\rangle =
+0.17 \pm 0.20$.
Among the youngest stars with planets of F or G0 spectral type, \[Fe/H\] is systematically larger than that of young field stars at the same Galactocentric distance by 0.15 to 0.20 dex. This confirms the recent finding of Laughlin that the most massive stars with planets are systematically more metal-rich than field stars of the same mass. We interpret these trends as supporting a scenario in which these stars accreted high-Z material after their convective envelopes shrank to near their present masses. Correcting these young-star metallicities by 0.15 dex still does not fully account for the difference in mean metallicity between the field stars and the full parent-star sample.
The stars with planets appear to have smaller \[Na/Fe\], \[Mg/Fe\], and \[Al/Fe\] values than field dwarfs of the same \[Fe/H\]. They do not appear to have significantly different values of \[O/Fe\], \[Si/Fe\], \[Ca/Fe\], or \[Ti/Fe\], though. The claim made in Paper V that stars with planets have low \[C/Fe\] is found to be spurious, due to unrecognized systematic differences among published studies. When corrected for these differences, they instead display slightly enhanced \[C/Fe\] (but not significantly so). If these abundance anomalies are due to the accretion of high-Z matter, it must have a composition different from that of the Earth.
author:
- 'Guillermo Gonzalez, Chris Laws, Sudhi Tyagi, and B. E. Reddy'
title: 'Parent Stars of Extrasolar Planets VI: Abundance Analyses of 20 New Systems'
---
INTRODUCTION {#intro}
============
In our continuing series on stars-with-planets (hereafter, SWPs), we have reported on the results of our spectroscopic analyses of these stars (Gonzalez 1997, Paper I; Gonzalez 1998, Paper II; Gonzalez & Vanture 1998, Paper III; and Gonzalez et al. 1999, Paper IV; Gonzalez & Laws 2000, Paper V). Other similar studies include Fuhrmann et al. (1997, 1998) and Santos et al. (2000b,c). The most significant finding so far has been the high mean metallicity of SWPs, as a group, compared to the metallicity distribution of nearby solar-type stars (Gonzalez 2000; Santos et al. 2000b,c).
Additional extrasolar planet candidates continue to be announced by planet-hunting groups using the Doppler method. We follow up these announcements with high-resolution spectroscopic observations as time and resources permit. Herein, we report on the results of our abundance analyses of 20 new candidate SWPs. We compare our findings with those of other recent similar studies, look for trends in the data suggested in previous studies, and evaluate proposed mechanisms in light of the new dataset.
SAMPLE AND OBSERVATIONS {#obs}
=======================
High-resolution, high S/N ratio spectra of 14 stars were obtained with the 2dcoude echelle spectrograph at the McDonald Observatory 2.7 m telescope using the same setup as described in Paper V. Two stars that are difficult or impossible to observe from the northern hemisphere, HR 810 and HD 1237, were observed on three nights with the CTIO 1.5 m telescope and its fiber-fed echelle spectrograph. Observing them on multiple nights permits us to test for possible variations in their temperatures over one stellar rotation period, given their youth. Additional details of the spectra obtained at CTIO and McDonald, including a list of the discovery papers, are presented in Table 1. Although it does not have a known planet, we include HD75332 in the program, since its physical parameters are similar to those of the hotter SWPs. HD75332 is also included in the field star abundance survey of Chen et al. (2000), which we will be comparing to our results in Section 4.2.7. We also include HD217014 (51 Peg), even though it was already analyzed in Paper II, because: 1) the new spectra are of much higher quality, and 2) it was included in the field star abundance surveys of Edvardsson et al. (1993) and Tomkin et al. (1997).
High resolution spectra of nine stars (HD12661, HD16141, HD37124, HD38529, HD46375, HD52265, HD92788, HD177830, and BD-10 3166) [^1] obtained with the HIRES spectrograph on the Keck I were supplied to us by Geoff Marcy (see Paper IV for more details on the instrument). The Keck spectra have the advantage of higher resolving power and much weaker water vapor telluric lines, due to the altitude of the site. However, the much smaller wavelength coverage of the Keck spectra results in a much shorter linelist for us to work with.
The data reduction methods are the same as those employed in Paper V. Spectra of hot stars with high $v \sin i$ values were also obtained in order to divide out the telluric lines in the McDonald and CTIO spectra.
ANALYSIS
========
Spectroscopic Analysis
----------------------
The present method of analysis is the same as that employed in Paper V and, therefore, will not be described herein. We have added more Fe I and Fe II lines to our linelist (Table 2). Their $gf$-values were calculated from an inverted solar analysis using the Kurucz et al. (1984) Solar Flux Atlas or our spectrum of Vesta (obtained with the McDonald 2.7 m). We also added a new synthesized region: 9250 - 9270 Å. This region contains one Mg I, two Fe I, and three O I lines; only one line of the O I triplet, at 9266 Å, is unblended in all our stars, but the other two are usable in the warmer stars. The addition of a second Mg line to our linelist helps greatly, because the 5711 Å line was the only one we had employed until now, and it is not measurable in the cooler stars. We also added the O I triplet near 7770 Å. Since these lines are known to suffer from non-LTE effects, we have corrected the O abundances derived from them using Takeda’s (1994) calculations. We list the individual EW values in Tables 3 - 6 and present the adopted atmosphere parameters in Table 7. We list the \[X/H\] values in Tables 8 - 12. We list in Table 13 the Mg and O abundances (derived from the 9250 Å region) for several stars studied in previous papers in our series.
Since we have not previously used the CTIO 1.5 m telescope for spectroscopic studies of SWPs, we need an independent check on the zero point of the derived abundances for HR810 and HD1237. To accomplish this, we also obtained a spectrum of $\alpha$ Cen A with this instrument. We derive the following values for T$_{\rm eff}$, $\log g$, $\xi_{\rm t}$, and \[Fe/H\]: $5774 \pm 61$ K, $4.22 \pm 0.08$, $0.90 \pm 0.10$, and $+0.35 \pm 0.05$. This value of \[Fe/H\] is 0.10 dex larger than the value derived by Neuforge-Verheecke & Magain (1997).
Derived Parameters
------------------
We have determined the masses and ages in the same way as in Paper V. Using the [*Hipparcos*]{} parallaxes (ESA 1997) and the stellar evolutionary isochrones of Schaller et al. (1992) and Schaerer et al. (1993), along with our spectroscopic $T_{\rm eff}$ estimates, we have derived masses, ages, and theoretical $\log g$ values (Table 14).[^2] BD-10 3166 is too distant for a reliable parallax determination, so it is not included in the table.
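The placement of each star on the HR diagram rests on the standard parallax-to-luminosity conversion; a minimal sketch with purely illustrative inputs (not values from Table 14) is:

```python
import math

def absolute_magnitude(V, parallax_mas):
    """M_V = V + 5 + 5*log10(parallax in arcsec)."""
    return V + 5.0 + 5.0 * math.log10(parallax_mas / 1000.0)

def luminosity_solar(M_V, BC=0.0, M_bol_sun=4.74):
    """L/Lsun from M_V and an assumed bolometric correction."""
    return 10.0 ** (-0.4 * (M_V + BC - M_bol_sun))

# Illustrative only: a Sun-like star at ~18 pc
M_V = absolute_magnitude(V=6.0, parallax_mas=55.0)   # ~4.7
print(M_V, luminosity_solar(M_V, BC=-0.07))          # ~1.1 Lsun
```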
Two stars, HD37124 and HD46375, give inconsistent results: they are located in a region of the HR diagram where no ordinary stars are expected (they are too luminous and/or too cool relative to even the oldest isochrones). One possible solution is to invoke an unresolved companion of comparable luminosity. It is highly unlikely that the companion is responsible for the observed radial velocity variations in each star, as that would require them to be viewed very nearly pole-on, which is extremely improbable (Geoff Marcy, private communication). It is more likely that the companions are sufficiently separated such that they do not significantly affect the Doppler measurements on short timescales. Therefore, we encourage that these two systems be searched for close stellar companions.
Several other stars, HD1237, HD130322, and HD192263, are of too low a luminosity to derive reliable ages, due to the convergence of the stellar evolutionary tracks at low luminosities (see Figure 1). However, it is still possible to derive useful mass and $\log g$ estimates for them. For those stars with theoretical $\log g$ estimates in Table 14, there is generally good agreement with the spectroscopic values listed in Table 7.
DISCUSSION
==========
Comparison with Other Studies
-----------------------------
Several stars in the present study have been included in other recent spectroscopic studies. Santos et al. (2000b,c) analyzed a total of 13 SWPs using a method patterned after that of Paper V. Two of their stars, HD1237 and HD52265, overlap with our present sample (and one other, HD75289, from Paper V). Their results for HD75289 are nearly identical to ours. Our McDonald spectrum of HD52265 yields similar results to those of Santos et al. (2000b), but our Keck spectrum yields a T$_{\rm eff}$ value 100 K larger than theirs. Our T$_{\rm eff}$ estimates for HD1237 are very similar to theirs, and the other parameters agree less well but are still consistent with our results.[^3] Their \[Fe/H\] estimate for HD1237 is 0.06 dex smaller than ours. Combining this with the results of our analysis of $\alpha$ Cen, we tentatively suggest that our abundance determinations for HR810 and HD1237 (as listed in Tables 10 and 11) be reduced by 0.05 dex.
When comparing our results to those of Santos et al. (2000b,c), it should be noted that our quoted uncertainties are smaller than theirs. This cannot be due to differences in the way we calculate uncertainties, since they adopt the same method we employ. Also, our EW measurements for HD52265 and HD75289 agree with theirs to within 1-2 mÅ. We suggest that a contributing factor is the small number of low-excitation Fe I lines they employ. Clean Fe I lines with $\chi_{\rm l}$ values near 1 eV are far less numerous than the high-excitation Fe I lines. Adding even 2 or 3 more Fe I lines with small $\chi_{\rm l}$ values significantly increases the leverage one has in constraining T$_{\rm eff}$.
Another issue of possible concern with the Santos et al. (2000b,c) studies is the systematically large values of $\log g$ that they derive. Several of their estimates are near 4.8. This is 0.2 to 0.3 dex larger than is expected from theoretical stellar isochrones.
Abundance ratios, expressed as \[X/Fe\], have also been derived by Santos et al. (2000b,c). Comparing \[Si/Fe\], \[Ca/Fe\], and \[Ti/Fe\] values for the three stars in common between Santos et al. (2000b) and the present work, we find the results to be consistent and well within the quoted uncertainties.
Feltzing & Gustafsson (1998) derived \[Fe/H\] $= +0.36$ for HD134987, only 0.04 dex greater than our estimate. Randich et al. (1999) derived \[Fe/H\] $= +0.30$ and Sadakane et al. (1999) derived \[Fe/H\] $= +0.31$ for HD217107, both consistent with our estimate of \[Fe/H\] $= +0.36$. Fuhrmann (1998) derived \[Fe/H\] $= +0.02$ for HD16141, smaller than our estimate of \[Fe/H\] $= +0.15$. Edvardsson et al. (1993) derived \[Fe/H\] $= +0.18$ for HD89744, smaller than our estimate of \[Fe/H\] $= +0.30$. Mazeh et al. (2000) derived \[Fe/H\] $= 0.00$ for HD 209458, very close to our estimate of \[Fe/H\] $=
+0.04$. Castro et al. (1997), using a spectrum with a S/N ratio of 75, derived \[Fe/H\] $= +0.50$ for BD-10 3166, 0.17 larger than our estimate; given the relatively low quality of their spectrum compared to ours, we are inclined to consider our estimate as more reliable for this star.
Gimenez (2000) derived T$_{\rm eff}$ and \[Fe/H\] values for 25 SWPs from Strömgren photometry. Nine stars are in common between the two studies, the results being in substantial agreement.[^4] In summary, then, our results are consistent with those of other recent studies.
Looking for Trends
------------------
The present total sample of SWPs with spectroscopic analyses is more than twice as large as that available in Paper V. Therefore, we will make a more concerted effort than in our previous papers to search for trends among the various parameters of SWPs. The first step is the preparation of the SWPs sample.
We will restrict our focus to extrasolar planets with minimum masses less than 11 M$_{\rm J}$. This excludes HD10697 (see Zucker & Mazeh 2000) and HD114762. We must also exclude BD-10 3166, as it was added to Doppler search programs (Butler et al. 2000) as a result of our suggestion (in Paper IV), based on its similarity to 14Her and $\rho^{1}$Cnc. The planet around HD89744 was also predicted prior to its announcement in January 2000 (see Gonzalez 2000), but it was already being monitored for radial velocity variations (Robert Noyes, private communication), so we will retain it in the sample. The remaining stars are drawn from the previous papers in our series as well as the studies of Santos et al. (2000b,c), which are patterned after Paper V. The total number of SWPs in the sample is 38.
We will compare the parameters of SWPs to those of field stars without known giant planets. Of course, the comparison is not perfect given: 1) the possible presence of giant planets not yet discovered in the field star sample, 2) possible systematic differences between our results and those of the field star surveys, and 3) the possibility that some stars without known planets have lost them through dynamical interactions with other stars in their birth clusters (Laughlin & Adams 1998).
### Young SWPs
The observed metallicity distribution among nearby dwarfs is due to a combination of several factors: 1) the spread in age (combined with the disk age-metallicity relation), 2) radial mixing of stars born at different locations in the disk (combined with the Galactic disk radial metallicity gradient), and 3) intrinsic (or “cosmic”) scatter in the initial metallicity. These have the effect of blurring any additional metallicity trends that we may be interested in studying. The effects of the first two factors can be greatly mitigated if we restrict our attention to young stars (i.e., age less than $\sim$ 2 Gyrs), since they have approximately the same age and their orbits in the Milky Way have not changed very much.
Gonzalez (2000) presented a preliminary analysis of this kind, comparing the \[Fe/H\] values of four young stars, HR810, $\tau$ Boo, HD75289, and HD192263, to those of a young field star sample and finding that all four are metal-rich relative to the mean trend in the field. We repeat the comparison here with HR810, HD1237, HD13445, HD52265, HD75289, HD82943, HD89744, HD108147, HD121504, HD130322, HD169830, HD192263, and $\tau$ Boo (Figure 2); HD13445, HD82943, and HD169830 are from Santos et al. (2000b), and HD108147 and HD121504 are from Santos et al. (2000c). The age estimates for HD82943 and HD169830 quoted by Santos et al. (2000b), 5 and 4 Gyrs, respectively, are based on the Ca II emission measure, which is not as reliable for F stars as ages derived from stellar isochrones. Ng & Bertelli (1998) derive an age and mass of $2.1 \pm 0.2$ Gyrs and $1.39 \pm 0.01$ M$_{\odot}$, respectively, for HD169830; we estimate an age of $2.4 \pm 0.3$ Gyrs, based on the T$_{\rm eff}$ and \[Fe/H\] estimates of Santos et al. (2000b). For HD82943 we derive age and mass estimates of $2 \pm 1$ Gyrs and $1.16 \pm 0.02$ M$_{\odot}$, respectively, from Santos et al.'s (2000b) results. Therefore, the age of HD169830 is sufficiently close to our 2 Gyr cutoff to justify its inclusion in the young star subsample. Our age and mass estimates for HD108147 and HD121504 are, respectively: $1 \pm 1$ Gyr, $1.23 \pm 0.02$ M$_{\odot}$ and $2 \pm 1$ Gyr, $1.18 \pm 0.02$ M$_{\odot}$.
These results support the previously reported trend of higher mean \[Fe/H\] for young SWPs, with the exceptions of HD13445 and, to a lesser degree, HD121504; both have mean Galactocentric distances inside the Sun's orbit. One possible resolution of this discrepancy is that the age of HD13445 has been underestimated. If independent evidence of a greater age is found for HD13445, then it should be removed from the young star sample.
### Metallicity and Stellar Mass
The detection of a correlation between metallicity and stellar mass has been suggested as a possible confirmation of the “self-pollution” scenario (Papers I, II). This is due to the dependence of stellar convective envelope mass on stellar mass for luminosity class V stars. Hence, the accretion of a given mass of high-Z material by an F dwarf will have a greater effect on the surface abundances than the accretion of the same amount of material by a G dwarf. Laughlin (2000), using \[Me/H\] and mass estimates of 34 SWPs, finds a significantly greater correlation between \[Me/H\] and stellar mass among the SWPs compared to a field star control sample.
Santos et al. (2000b,c) also address this question. Their analysis differs from Laughlin’s in that they compare \[Fe/H\], corrected for stellar age, to the convective envelope mass at two ages, $10^{7}$ and $10^{8}$ years (note, Laughlin’s comparison to a control sample eliminates the need to correct for stellar age). Santos et al. (2000b,c) find a correlation that supports our findings, but they are not convinced it is significant. They are concerned with an observational bias that reduces detection efficiency among F dwarfs relative to G dwarfs, due to the higher average rotation velocities among F dwarfs. However, such a bias should only affect the relative number of detected F dwarfs with planets, not the mean \[Fe/H\] of the selected F dwarfs.
Among the young star subsample discussed in the previous section, there is a weak correlation between stellar mass and excess \[Fe/H\] (Figure 2b), which we define as the offset in \[Fe/H\] for a given star from the mean trend line in Figure 2a. The nine young SWPs with mass $> 1.0$ M$_{\odot}$ are, on average, 0.15 dex more metal-rich than the three low-mass stars (excluding HD13445). A least-squares fit to the full sample of young SWPs yields a slope of $+0.63 \pm 0.26$ dex M$_{\odot}^{\rm -1}$, a correlation coefficient $R$ of 0.60, and an RMS scatter of 0.20 dex; leaving out HD13445, we find a slope of $+0.32 \pm 0.15$ dex M$_{\odot}^{\rm -1}$, $R$ of 0.56, and an RMS scatter of 0.11 dex. Laughlin finds a slope of 0.548 dex M$_{\odot}^{\rm -1}$ from his dataset.
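For concreteness, the sketch below shows the kind of least-squares fit quoted above, using `scipy.stats.linregress`; the `mass` and `excess_feh` arrays are illustrative placeholders rather than the actual values behind Figure 2b, and this is a minimal sketch, not necessarily the exact code used for the fit.

```python
import numpy as np
from scipy import stats

# Placeholder arrays standing in for the young-SWP stellar masses (M_sun) and
# "excess [Fe/H]" values (offsets from the mean young-field-star trend).
mass = np.array([1.11, 1.23, 0.95, 1.32, 1.05, 0.80, 1.18, 1.02])
excess_feh = np.array([0.20, 0.25, 0.05, 0.30, 0.10, 0.02, 0.22, 0.08])

# Ordinary least-squares fit of excess [Fe/H] against stellar mass.
fit = stats.linregress(mass, excess_feh)
residuals = excess_feh - (fit.intercept + fit.slope * mass)

print(f"slope = {fit.slope:+.2f} +/- {fit.stderr:.2f} dex/Msun")
print(f"R = {fit.rvalue:.2f}, RMS scatter = {residuals.std(ddof=2):.2f} dex")
```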
### Metallicity and Stellar Temperature
The two most metal-rich SWPs, $\rho^{\rm 1}$ Cnc and 14Her, have similar temperatures, $\sim$5250 K. Is it just a coincidence that two similar stars with the highest known \[Fe/H\] values in the solar neighborhood have planets? Another interesting pair is 51Peg and HD187123, which not only have similar atmospheric parameters but also similar planets. On the other hand, HD52265 and HD75289 are virtually identical, but their planets have very different properties. At the present time such comparisons are not very useful given the small sample size, but they will eventually help in isolating environmental factors not directly related to the stellar parameters. For example, to account for the different parameter values of the planets orbiting HD52265 and HD75289, one could invoke stochastic planet formation mechanisms, dynamical interactions among giant planets, or perturbations by other stars in the birth cluster.
### Metallicity Distribution
Gonzalez (2000) and Santos et al. (2000b,c) have shown that the \[Fe/H\] distribution of SWPs peaks at higher \[Fe/H\] than that of field dwarfs. In Figure 3a we show the \[Fe/H\] distribution of the present sample of 38 SWPs with spectroscopic \[Fe/H\] values and compare it to the field star spectroscopic survey of Favata et al. (1997).[^5] The mean \[Fe/H\] of the SWPs sample is $+0.17 \pm 0.20$, while the mean of our subsample from Favata et al. is $-0.12 \pm 0.25$.
Can the difference in mean \[Fe/H\] between the field and SWP samples be accounted for entirely by the anomalously high \[Fe/H\] values of the more massive SWPs? To address this question, we can correct the \[Fe/H\] values of the SWPs for the apparent correlation with stellar mass noted above. We have applied the following correction: 0.15 dex is subtracted from \[Fe/H\] for SWPs with mass $> 1$ M$_{\odot}$. We present a histogram with the corrected \[Fe/H\] values in Figure 3b. The mean for the corrected sample is $0.07 \pm 0.19$.
Apart from the differences in their mean \[Fe/H\] values, the field star and SWPs samples differ in their shapes (see Figure 3a). The uncorrected SWPs sample is strongly asymmetric with a peak at very high \[Fe/H\]. The corrected distribution, however, looks much more symmetric (Figure 3b). Even after the correction is applied, however, there remains a small peak at extremely high \[Fe/H\].
We show the corresponding cumulative distributions in Figure 4. A Kolmogorov-Smirnov test applied to the distributions in Figure 4a indicates a probability less than $2.8 \times 10^{-6}$ that they are drawn from the same population. The same test applied to Figure 4b indicates a probability less than $4.9 \times 10^{-4}$. Therefore, assuming there are no significant selection biases or systematic differences, both SWP distributions are drawn from significantly more metal-rich populations than the field stars.
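A minimal sketch of this comparison is given below; the \[Fe/H\] and mass arrays are illustrative placeholders rather than the actual values from our sample and the Favata et al. (1997) subsample, the 0.15 dex mass-based correction described above is applied before the second test, and `scipy.stats.ks_2samp` stands in for whatever implementation of the two-sample test is actually used.

```python
import numpy as np
from scipy import stats

# Placeholder [Fe/H] values for the SWP sample (with stellar masses) and for
# the field-star comparison sample; the real values come from the tables above
# and from Favata et al. (1997).
feh_swp = np.array([0.36, 0.21, 0.05, 0.30, -0.03, 0.27, 0.16, 0.35])
mass_swp = np.array([1.05, 1.06, 0.89, 1.34, 0.81, 1.18, 1.10, 1.07])
feh_field = np.array([-0.20, 0.05, -0.35, 0.10, -0.12, -0.40, 0.00])

# Mass-based correction: subtract 0.15 dex for SWPs more massive than 1 M_sun.
feh_swp_corr = np.where(mass_swp > 1.0, feh_swp - 0.15, feh_swp)

# Two-sample Kolmogorov-Smirnov tests against the field-star distribution.
for label, sample in (("uncorrected", feh_swp), ("corrected", feh_swp_corr)):
    result = stats.ks_2samp(sample, feh_field)
    print(f"{label}: D = {result.statistic:.2f}, p = {result.pvalue:.2e}")
```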
### Lithium Abundances
In Paper V we presented a simple comparison of Li abundances among SWPs to field stars and suggested a possible correlation in the sense that the SWPs have less Li, all else being equal. Ryan (2000) presents a more careful comparison of Li abundances in SWPs and field stars, and concludes that the two groups are indistinguishable in this regard. The present results do not change this conclusion.
### Carbon and Oxygen Abundances
In Paper V we presented preliminary evidence for a systematic difference in the \[C/Fe\] values for SWPs relative to the Gustafsson et al. (1999) plus Tomkin et al. (1997) field star samples. Most SWPs appeared to have \[C/Fe\] values lower than those of field stars of the same \[Fe/H\]. The particularly low value of \[C/Fe\] for $\tau$Boo, a metal-rich F dwarf, led us to select HD89744 as a possible SWP that should be monitored for Doppler variations based on its low \[C/Fe\].[^6]
In Paper V we did not consider possible systematic offsets among the various C abundance studies. To properly compare our results to other studies, it is essential to determine the relative offsets. Gustafsson et al. noted a systematic difference between their C abundances and those of Tomkin et al. (1995). Comparing the \[C/Fe\] estimates of the 28 stars in common between these two studies, we find a significant systematic trend with T$_{\rm eff}$; the two sets of \[C/Fe\] values are equal at T$_{\rm eff} = 5826$ K, and the slope is 0.00041 dex K$^{\rm -1}$ (in the sense that the Gustafsson et al. values are larger than those of Tomkin et al. 1995). There are 9 stars in common between Tomkin et al. (1995) and Tomkin et al. (1997), with the former study having \[C/Fe\] values 0.05 dex larger on average. We only have one star in common with Tomkin et al. (1997), HD217014, where our \[C/Fe\] estimate is smaller by 0.11 dex; we assume half this difference is due to random error, and, hence, adopt a systematic offset of 0.05 dex. We have applied all these offsets to the various sources of \[C/Fe\] estimates and placed them on the zero-point scale of our results. We present the results in Figure 5a. Left out of the plot are Gustafsson et al. stars with T$_{\rm eff} > 6400$ K, since: 1) all the SWPs in our sample are cooler than this limit, and 2) the hottest stars in the Gustafsson et al. study display the largest deviations relative to those of Tomkin et al. (1995). Given the large systematic offset between Gustafsson et al. and the other studies, we decided not to apply a correction for location in the Milky Way, as we did in Paper V.
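Written as a formula, our reading of the Gustafsson et al. versus Tomkin et al. (1995) trend quoted above is
$$\mathrm{[C/Fe]}_{\rm Gus99} - \mathrm{[C/Fe]}_{\rm Tom95} \simeq 0.00041\ {\rm dex\,K^{-1}} \times \left(T_{\rm eff} - 5826\ {\rm K}\right),$$
which vanishes at T$_{\rm eff} = 5826$ K and amounts to roughly $+0.24$ dex at the 6400 K cutoff adopted for Figure 5a.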
We cannot determine from our present analysis alone the source of the systematic trend in the Gustafsson et al. data relative to other studies. However, the facts that they employed a single weak \[C I\] line and that their results agree with other studies for stars of solar T$_{\rm eff}$ suggest that a weak, unrecognized high-excitation line is blending with it. This is always the danger when basing the abundance of a given element on only one weak line.
Our new comparison of \[C/Fe\] values between SWPs and field stars does not confirm the claim we made in Paper V. Instead, the SWPs appear to have slightly larger values of \[C/Fe\], but not significantly so. The four SWPs with the largest \[C/Fe\] values are HD13445, HD37124, HD168746, and HD192263.
This new result for the \[C/Fe\] values of SWPs relative to field stars compels us to revisit our successful prediction of the planet orbiting HD89744. That prediction was based on: 1) its high \[Fe/H\], and 2) its low \[C/Fe\]. With the elimination of the second criterion, we are left with only one reason for its selection. However, HD89744 is also a young F dwarf, and as shown in Section 4.2.1, it is much more metal-rich than the trend among young field stars. Therefore, our original success for this star was partly accidental, but in light of the results presented in this work we can look back and understand why the prediction nonetheless succeeded.
In Figure 5b we present \[O/Fe\] values for field stars from Edvardsson et al. (1993) and Tomkin et al. (1997) and for the present sample of SWPs (using our average O abundances from Tables 8 - 11, 13). As we did in Paper V for C, we corrected the observed \[O/Fe\] values for a weak trend with Galactocentric distance (amounting to $-0.032$ dex kpc$^{\rm -1}$). Apart from one star (HD192263, for which we did not measure the O I triplet near 9250 Å), it appears that the SWPs follow the same trend as the field stars. The smaller scatter among the \[O/Fe\] values for field stars compared to \[C/Fe\] may be indicative of the more varied sources of C in the Milky Way.
In Figure 6 we present the \[C/O\] values for the same stars plotted in Figures 4a,b. The field stars display a positive trend with \[Fe/H\]. The SWPs appear to follow the same trend.
### Other Light Element Abundances
Several other light elements have well-determined abundances in solar type stars: Na, Mg, Al, Si, Ca, and Ti. Three recent spectroscopic surveys of nearby solar type stars have produced high quality abundance datasets for these elements: Edvardsson et al. (1993) and Tomkin et al. (1997), Feltzing & Gustafsson (1998), and Chen et al. (2000).[^7] They are all differential fine-analysis abundance studies that use the Sun as the standard for the $gf$-values. We will use the results of these studies, with some modifications noted below, to search for possible deviations among SWPs from trends in the field star population.
For the ED93 sample we are: 1) retaining only the higher quality ESO results (and excluding their McDonald-based spectra), 2) retaining only single stars, 3) excluding known SWPs, and 4) including the results of Tomkin et al. (1997). Note that although Tomkin et al. (1997) only reanalyzed nine stars from Edvardsson et al., all of them are metal-rich; therefore, inclusion of their results greatly helps the present comparison. For the other two surveys we are also retaining only single stars and excluding known SWPs. We are also excluding K dwarfs from the FG98 sample. Our final adopted three samples, ED93, FG98, and CH00, contain 62, 37, and 89 stars, respectively. Not every star in these samples contains a full set of light element abundance determinations, so the actual number of stars available for comparison of abundances of a given element will be less than these totals. In order to reduce systematic errors, we only employ abundances derived from neutral lines in forming \[X/Fe\] values; in the following, \[Ti/Fe\] is shorthand for \[Ti I/Fe I\].
Although \[X/Fe\] values from different studies are less likely to suffer from systematic differences than are \[Fe/H\] values (due, for instance, to cancellation of systematic errors in EW measurement), we must confirm that these various studies are consistent with each other. We can do this by comparing stars in common among them. We discuss each element in turn below:
Na – In all the studies considered here, abundances are based on the Na I pair near 6160 Å. With ED93 we share 47UMa, $\rho$CrB, and 51Peg. Our \[Na/Fe\] estimates are larger by 0.1 dex for 47UMa and $\rho$CrB, and smaller by 0.1 dex for 51Peg.[^8] FG98 derive a \[Na/Fe\] value for HD134987 that is 0.17 dex larger than our estimate. CH00 have in common with us 47UMa and HD75332, for which their \[Na/Fe\] estimates are 0.13 and 0.00 dex smaller than ours, respectively. We compare all the datasets in Figure 7 (note, the CH00 and ED93 samples are plotted on separate diagrams for clarity). Both ED93 and FG98 show an upturn in \[Na/Fe\] above solar \[Fe/H\]. The data from CH00 do not include metal-rich stars, but are similar to ED93 at smaller \[Fe/H\]. The SWPs sample deviates towards smaller \[Na/Fe\] by about 0.2 dex relative to ED93 and FG98 at high \[Fe/H\]. This difference appears to be real, but additional study of metal-rich field stars would be very helpful. The star with the most deviant negative \[Na/Fe\] is HD192263.
Mg – This is a more difficult element to measure, given the relatively small number of clean weak lines available. Also, the neutral lines employed by different studies are very heterogeneous. The large scatter evident among the FG98 sample in Figure 8 is perhaps evidence of the difficulty in deriving accurate values of \[Mg/Fe\] for metal-rich stars. For the stars in common, our \[Mg/Fe\] values are consistently smaller by about 0.08 dex, on average. The difference between the SWPs and the metal-rich field stars in the figure is about twice this value. Thus, we conclude that there is evidence for a real difference in \[Mg/Fe\], but it is tentative.
Al – There is considerable overlap in the lines employed by different studies, but no two adopt exactly the same linelist. Determinations of \[Al/Fe\] should be about as reliable as those of \[Na/Fe\]. Our \[Al/Fe\] estimates are smaller by about 0.04 dex than ED93, about the same as FG98, and 0.23 dex smaller than CH00. As can be seen in Figure 9, the SWPs are not as cleanly separated from the field stars as they are for \[Na/Fe\] or \[Mg/Fe\], but there is a clump of SWPs with \[Al/Fe\] values significantly smaller than the field stars.
Si – Abundances of Si should be considered highly reliable. It is represented by several high quality neutral lines, and they display weak sensitivity to uncertainties in $T_{\rm eff}$ and $\log g$. Our \[Si/Fe\] estimates are about the same as ED93 and FG98, and 0.05 dex smaller than CH00. The FG98 sample displays a very large scatter in \[Si/Fe\] values compared with the other studies. Otherwise, the SWPs sample appears as a continuation of the field star trends to higher \[Fe/H\] (Figure 10).
Ca – The Ca abundances should also be considered reliable. Most studies employ at least 3 or 4 neutral Ca lines, with one or two of them in common. Our \[Ca/Fe\] estimates are about the same as ED93, and 0.06 dex smaller than CH00. We find no evidence of a significant difference between the SWPs and the field stars (Figure 11).
Ti – Like Si and Ca, the Ti abundances should be reliable. Most studies employ at least three neutral lines, some many more. The overlap is usually only one line, though. The scatter among the FG98 stars is larger than the other samples. Our \[Ti/Fe\] estimates are larger by about 0.04 dex than ED93, smaller by 0.04 dex than FG98, and larger by 0.04 dex than CH00. We find no evidence of a significant difference between the SWPs and the field stars (Figure 12).
In summary, there is some evidence of a real difference in \[Na/Fe\], \[Mg/Fe\], and \[Al/Fe\] between SWPs and the general field dwarf star population. There do not appear to be differences in \[Si/Fe\], \[Ca/Fe\], and \[Ti/Fe\]. Among the F type SWPs included in the present study, the \[Na/Fe\], \[Mg/Fe\], and \[Al/Fe\] values are near $-0.05$, $0.00$, and $-0.13$, respectively. Additional high quality abundance analyses of metal-rich field stars are required to test these findings.
Sources of Trends
-----------------
In Papers I and II, we proposed two hypotheses to account for the correlation between metallicity and the presence of giant planets: 1) accretion of high-Z material after the outer convection zone of the host star has thinned to a certain minimum mass elevates the apparent metallicity above its primordial value, and/or 2) higher primordial metallicity in its birth cloud makes it more likely that a star will be accompanied by planets. Laughlin presents evidence for the first hypothesis in the form of a weak positive correlation between \[Me/H\] and stellar mass, while Santos et al. (2000b,c) find a similar, though less convincing, trend with stellar envelope mass.
Our finding of a trend between \[Fe/H\] and stellar mass among the young SWPs also supports the first hypothesis. The mean difference in \[Fe/H\] between the high mass young SWPs and those of low mass is 0.15 to 0.20 dex. The low mass young SWPs have a mean \[Fe/H\] similar to that of the general field young star population, implying that self-pollution leads to less than a $\sim 0.05$ dex increase for these stars. However, given the very small sample size of young G dwarfs with planets, this statement is not yet conclusive. Laws & Gonzalez (2000) find a difference in \[Fe/H\] of $0.025 \pm 0.009$ dex between the solar analogs 16 Cyg A and B (16 Cyg A having the larger \[Fe/H\]). This small difference is consistent with our lack of a detection of a metallicity anomaly for the low mass young solar analogs (though the number statistics are still small).
Subgiants allow us another way of distinguishing the two hypotheses. After a solar type star leaves the main sequence and enters the subgiant branch on the HR diagram, the depth of its outer convection zone increases. Sackmann et al. (1993) find that the mass of the Sun's outer convection zone increases by about 0.4 M$_{\odot}$ as it traverses horizontally across the HR diagram along the subgiant branch (see their Figure 2 and Table 2). Therefore, a star that has experienced significant pollution of its outer convection zone will undergo a roughly 20-fold dilution of any pollution-induced enhancement in its surface metallicity. In this regard, HD38529 and HD177830 are particularly interesting. Both stars are extremely metal-rich, with \[Fe/H\] $\sim$ 0.37. They tend to argue against the first hypothesis. However, the great age of HD177830 makes its high \[Fe/H\] value difficult to understand within either hypothesis.
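The 20-fold figure follows from simple mass bookkeeping. Assuming, for illustration, that the polluting material is mixed through a main-sequence convective envelope of roughly $0.02$ M$_{\odot}$ (approximately the solar value), deepening of the envelope by $\sim 0.4$ M$_{\odot}$ dilutes any surface enhancement by a factor of order
$$\frac{M_{\rm cz}({\rm subgiant})}{M_{\rm cz}({\rm main\ sequence})} \approx \frac{0.02\,{\rm M}_{\odot} + 0.4\,{\rm M}_{\odot}}{0.02\,{\rm M}_{\odot}} \approx 20.$$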
As shown in the previous section, the F type SWPs appear to be deficient in Na and Al. The lack of significant deviations in the C, O, Si, Ti, and Ca abundances among the F type SWPs is a useful clue as to the composition of the material these stars may have accreted. We can compare all the abundance anomalies to the composition of various candidate bodies. One such candidate is the Earth, for which the abundances of many elements are relatively well known. Using the bulk abundance estimates for the Earth of Kargel & Lewis (1993), we have prepared a list of logarithmic number abundances relative to Fe and relative to the corresponding solar photospheric ratios (Table 15). According to these estimates, C is only a trace element, O is moderately depleted, with Na the next most depleted, and the other light elements are present in roughly solar proportions. These numbers are not consistent with the anomalous abundance ratios we found among the F type SWPs.
Our finding of normal C/O ratios among the SWPs is inconsistent with the suggestion of Gaidos (2000) that a low C/O ratio is required to build giant planets. Perhaps the mechanisms Gaidos discusses operate at a level undetectable with the present level of measurement precision.
If the anomalous F dwarfs are removed from the SWPs sample, there is still a significant difference relative to the field stars. The stars HD12661, HD83443, HD134987, HD177830, HD217107, BD-10 3166, $\rho^{1}$Cnc, and 14Her have \[Fe/H\] values greater than 0.30 dex and T$_{\rm eff}$ values less than 6000 K. Therefore, their high \[Fe/H\] values are more easily explained by the second hypothesis. The most anomalous stars are still $\rho^{1}$Cnc and 14Her, which do not fit easily within either hypothesis.
Implications of Findings
------------------------
Assuming that a significant amount of self-pollution has indeed occurred in the atmospheres of the more massive SWPs, what are the implications? There are several. First, as noted in Paper II, a star with a metal-enriched envelope relative to its interior cannot be compared to stellar isochrones based on homogeneous models. Ford et al. (1999) find that decreasing the interior metallicity from the observed surface value leads to a decrease in the derived mass but to increases in the derived age and the size of the convective envelope. For 51Peg, Ford et al. find a decrease of 0.11 M$_{\odot}$ in mass and an increase in age of 3.2 Gyr, if they assume an interior metallicity 0.2 dex less than its observed surface metallicity. We encourage additional research like that of Ford et al., applied specifically to the F type SWPs discussed in the present work.
A possibly fruitful direction of research involves comparing the ages of the SWPs derived from stellar evolutionary isochrones to those obtained by other methods (e.g., Ca II emission measures, cluster membership, kinematics). If the F type SWPs do have metal-poor interiors, then it might be possible to determine the systematic errors in the age estimates using such observations.
It may also be possible to learn something of the composition of the accreted material by comparing the deviations of the abundances of the SWPs from the field star population. As was argued in the previous section, the present results are not consistent with accretion of material with the same composition as the Earth.
In Papers I and II we suggested that if the self-pollution mechanism is operating in SWPs, then Galactic chemical evolution models will have to be adjusted accordingly, since the observed surface abundances of these stars are not reliable indicators of the composition of the ISM from which they formed. However, the effect is likely very small since it appears that only F dwarfs are affected, while most Galactic chemical evolution studies employ abundance data from G dwarf samples.
Brown Dwarfs
------------
So far, only four stars with companions in the brown dwarf mass range, 11 M$_{\rm J} <$ Mass $<$ 80 M$_{\rm J}$, have been studied spectroscopically. These are HD10697 (present study), HD114762 (Paper II), HD162020 (Santos et al. 2000c), and HD202206 (Santos et al. 2000b), with \[Fe/H\] values of $+0.16$, $-0.60$, $+0.01$, and $+0.36$, respectively. This is a large range, and, given the very small sample size, it is too early to make a meaningful comparison with the field stars. Nevertheless, it is notable that two of these stars are quite metal-rich. HD202206 is particularly interesting with its exceptionally high \[Fe/H\].
CONCLUSIONS
===========
Employing a sample of 38 SWPs with high quality spectroscopic abundance analyses, we find the following anomalies:
- The present results confirm the high average \[Fe/H\] of SWPs compared to nearby field stars. The average \[Fe/H\] of the 38 SWPs with high quality spectroscopic \[Fe/H\] estimates is $+0.17 \pm 0.20$.
- Among the youngest SWPs, which presumably have suffered the least amount of migration in the Milky Way’s disk, most have \[Fe/H\] values about 0.15 dex larger than young field stars at the same mean Galactocentric distance. Young SWPs more massive than $\sim$1 M$_{\odot}$, in particular, have the largest positive “excess \[Fe/H\]” relative to the field star trend.
- There do not appear to be significant differences in the \[C/Fe\] and \[O/Fe\] values between SWPs and field stars, though \[C/Fe\] may be somewhat high among the SWPs.
- We found evidence for smaller values of \[Na/Fe\], \[Mg/Fe\], and \[Al/Fe\] among SWPs compared to field stars of the same \[Fe/H\]. They do not appear to differ in \[Si/Fe\], \[Ca/Fe\], or \[Ti/Fe\].
We have suggested the “self-pollution” scenario as an explanation for the anomalous trends among the F type SWPs. However, it is not likely that this mechanism can account for the extremely high \[Fe/H\] values of 14Her and $\rho^{1}$ Cnc or of the highly evolved subgiant, HD177830.
Given the trends uncovered in the present work and by Laughlin, one is virtually guaranteed to discover a giant planet orbiting a young F dwarf with a \[Fe/H\] value $\sim$0.25 dex greater than that of field stars at the same Galactocentric distance. In this regard, we expect that a planet will be found orbiting HD75332.[^9] The following four super-metal-rich F or G0 dwarfs from Feltzing & Gustafsson (1998) should also be searched for planets: HD71479, HD87646, HD110010, and HD130087. We believe the chances are high that these stars harbor giant planets. It would be even more interesting if any of these stars do not have planets!
We are grateful to David Lambert for obtaining spectra of HD12661 and HD75332, George Wallerstein for obtaining spectra of HR810 and HD1237, and Geoff Marcy for sharing with us his Keck template spectra. We thank Eric Gaidos for bringing to our attention the systematic offsets among the various C abundances studies. Greg Laughlin and the anonymous referee also provided helpful comments. This research has made use of the Simbad database, operated at CDS, Strasbourg, France, as well as Jean Schneider’s and Geoff Marcy’s extrasolar planets web pages. This research has been supported by the Kennilworth Fund of the New York Community Trust. Sudhi Tyagi was supported by the Space Grant Program at the University of Washington.
Butler, R. P., Vogt, S. S., Marcy, G. W., Fischer, D. A., Henry, G. W., & Apps, K. 2000, , in press
Castro, S., Rich, R. M., Grenon, M., Barbuy, B., & McCarthy, J. K. 1997, , 114, 376
Chen, Y. Q., Nissen, P. E., Zhao, G., Zhang, H. W., & Benoti, T. 2000, , 141, 491
Donahue, R. A. 2000, Ph.D. thesis, New Mexico State Univ.
Edvardsson, B., Andersen, J., Gustafsson, B., Lambert, D. L., Nissen, P. E., et al. 1993, , 275, 101
ESA 1997, The [*Hipparcos*]{} and [*Tycho*]{} Catalogue, ESA SP-1200
Favata, F., Micela, G., & Sciortino, S. 1997, , 322, 131
Feltzing, S. & Gustafsson, B. 1998, , 129, 237
Fischer, D. A., Marcy, G. W., Butler, R. P., Vogt, S. S., & Apps, K. 1999, , 111, 50
Fischer, D. A., Marcy, G. W., Butler, R. P., Vogt, S. S., Frink, S., & Apps, K. 2000, , in press
Ford, E. B., Rasio, F. A., & Sills, A. 1999, , 514, 411
Fuhrmann, K. 1998, , 338, 161
Fuhrmann, K., Pfeiffer, M. J., & Bernkopf, J. 1997, , 326, 1081
Fuhrmann, K., Pfeiffer, M. J., & Bernkopf, J. 1998, , 336, 942
Gaidos, E. 2000, Icarus, 145, 637
Gonzalez, G. 1997, , 285, 403 (Paper I)
Gonzalez, G. 1998, , 334, 221 (Paper II)
Gonzalez, G. 1999, , 308, 447
Gonzalez, G. 2000, Disks, Planetesimals, and Planets, ed. F. Garzon, C. Eiroo, D. de Winter, & T. J. Mahoney, ASP Conf. Ser (San Francisco: ASP), in press
Gonzalez, G. & Laws, C. 2000, AJ, 119, 390 (Paper V)
Gonzalez, G. & Vanture, A. D. 1998, , 339, L29 (Paper III)
Gonzalez, G., Wallerstein, G, & Saar, S. H. 1999, , 511, L111 (Paper IV)
Grevesse, N. & Sauval, A. J. 1998, Space Science Reviews, 85, 161
Gustafsson, B., Karlsson, T., Olsson, E., Edvardsson, B., & Ryde, N. 1999, , 342, 426
Henry, T. J., Soderblom, D. R., Donahue, R. A., & Baliunas, S. L. 1996, , 111, 439
Henry, G. W., Butler, R. P., & Vogt, S. S. 2000, , 529, L41
Kargel, J. S. & Lewis, J. S. 1993, Icarus, 105, 1
Korzennik, S. G., Brown, T. M., Fischer, D. A., Nisenson, P., Noyes, R. W. 2000, , 533, L147
Kurster et al. 2000, , 53, L33
Kurucz, R. L., Furenlid, I., Brault, J., Testerman, L. 1984, Solar Flux Atlas from 296 to 1300 nm, National Solar Observatory
Laughlin, G. 2000, , in press
Laughlin, G. & Adams, F. C. 1998, , 508, L171
Laws, C. & Gonzalez, G. 2000, ApJL, submitted
Marcy, G., Butler, R. P., Vogt, S. S., Fischer, D., & Liu, M. C. 1999, , 520, 239
Marcy, G. W., Butler, R. P., & Vogt, S. S. 2000, , 536, L43
Mayor, M. & Queloz, D. 1995, , 378, 355
Mazeh, T., Naef, D., Torres, G., Latham, D. W., Mayor, M. et al. 2000, , 532, L55
Naef, D., Mayor, M., Pepe, F., Queloz, D., Udry, S., & Burnet, M. 2000a, Disks, Planetesimals, and Planets, ed. F. Garzon, C. Eiroo, D. de Winter, & T. J. Mahoney, ASP Conf. Ser (San Francisco: ASP), in press
Naef, D. et al. 2000b, , submitted
Neuforge-Verheecke, C. & Magain, P. 1997, , 328, 261
Randich, S., Gratton, R., Pallavicini, R., Pasquini, L., & Carretta, E. 1999, , 348, 487
Ryan, S. G. 2000, , 316, L35
Saar, S. H., & Brandenburg, A. 1999, , 524, 295
Sackmann, I.-J., Boothroyd, A. I., & Kraemer, K. E. 1993, , 418, 457
Sadakane K., Honda, S., Kawanomoto, S., Takeda, Y., & Takada-Hidai, M. 1999, , 51, 505
Santos, N. C., Mayor, M., Naef, D., Pepe, F., Queloz, D., et al. 2000a, , 356, 599
Santos, N. C., Israelian, G., & Mayor, M. 2000b, , in press
Santos, N. C., Israelian, G., & Mayor, M. 2000c, poster presented at IAU Symposium 202
Schaerer, D., Charbonnel, C., Meynet, G., Maeder, A., & Schaller, G. 1993, , 102, 339
Schaller, G., Schaerer, D., Meynet, G., & Maeder, A. 1992, , 96, 269
Sivan, J. P. et al. 2000, IAU Symposium 202, in press
Smith, M. A. & Giampapa, M. S. 1987, in Cool Stars, Stellar Systems, and the Sun, eds. J. Linsky & R. Stencel (Berlin: Springer), 477
Takeda, Y. 1994, , 46, 53
Tomkin, J., Edvardsson, B., Lambert, D. L., & Gustafsson, B. 1997, , 327, 587
Tomkin, J., Woolf, V. M., Lambert, D. L., & Lemke, M. 1995, , 109, 2204
Udry, S., Mayor, M., Naef, D., Pepe, F., Queloz, D., Santos, N. C. et al. 2000, , 356, 590
Vogt, S. S., Marcy, G. W., Butler, R. P., & Apps, K. 2000, , 536, 902
Zucker, S. & Mazeh, T. 2000, , 531, L67
FIGURE CAPTIONS {#figure-captions .unnumbered}
===============
[lcccccccc]{}
HR810 & 00/02/16 & CTIO 1.5-m & 5200-9000 & 40,000 & 75 & $+16.5$ & GW & 1
HR810 & 00/02/17 & CTIO 1.5-m & 5200-9000 & 40,000 & 110 & $+16.7$ & GW & 1
HR810 & 00/02/18 & CTIO 1.5-m & 5200-9000 & 40,000 & 130 & $+16.2$ & GW & 1
HD1237 & 00/02/17 & CTIO 1.5-m & 5200-9000 & 40,000 & 60 & $-6.7$ & GW & 2
HD1237 & 00/02/18 & CTIO 1.5-m & 5200-9000 & 40,000 & 75 & $-6.5$ & GW & 2
HD1237 & 00/02/19 & CTIO 1.5-m & 5200-9000 & 40,000 & 60 & $-6.2$ & GW & 2
HD12661 & 99/02/07 & McD 2.7-m & 3700-10000 & 64,000 & 445 & $-47.1$ & DLL & 3
HD75332 & 99/02/06 & McD 2.7-m & 3700-10000 & 64,000 & 690 & $+5.1$ & DLL &
HD10697 & 99/12/21 & McD 2.7-m & 3700-10000 & 58,000 & 250 & $-46.4$ & CL,GG & 4
HD52265 & 99/12/22 & McD 2.7-m & 3700-10000 & 58,000 & 390 & $+54.0$ & CL,GG & 5, 13
HD177830 & 99/12/22 & McD 2.7-m & 3700-10000 & 58,000 & 280 & $-72.5$ & CL,GG & 4
HD192263 & 99/12/21 & McD 2.7-m & 3700-10000 & 58,000 & 155 & $-10.7$ & CL,GG & 4, 12
HD209458 & 99/12/22 & McD 2.7-m & 3700-10000 & 58,000 & 280 & $-14.5$ & CL,GG & 6,7
HD217014 & 99/12/22 & McD 2.7-m & 3700-10000 & 58,000 & 750 & $-32.8$ & CL,GG & 14
HD217107 & 99/12/21 & McD 2.7-m & 3700-10000 & 58,000 & 340 & $-13.2$ & CL,GG & 8
HD222582 & 99/12/22 & McD 2.7-m & 3700-10000 & 58,000 & 200 & $+12.4$ & CL,GG & 4
HD89744 & 00/03/28 & McD 2.7-m & 3700-10000 & 63,000 & 700 & $-5.3$ & BER & 9
HD130322 & 00/03/28 & McD 2.7-m & 3700-10000 & 63,000 & 300 & $-12.9$ & BER & 10
HD134987 & 00/03/28 & McD 2.7-m & 3700-10000 & 63,000 & 520 & $+4.5$ & BER & 4
HD168443 & 00/03/28 & McD 2.7-m & 3700-10000 & 63,000 & 390 & $-48.8$ & BER & 11
[lccccc]{}
Fe I & 6024.07 & 4.55 & -0.12 & 118.0
Fe I & 6093.65 & 4.61 & -1.34 & 31.0
Fe I & 6096.67 & 3.98 & -1.81 & 37.2
Fe I & 6098.25 & 4.56 & -1.74 & 17.1
Fe I & 6213.44 & 2.22 & -2.66 & 83.5
Fe I & 6820.37 & 4.64 & -1.17 & 42.0
Fe I & 7583.80 & 3.02 & -1.90 & 85.7
Fe I & 7586.03 & 4.31 & -0.18 & 135.3
Fe II & 5991.38 & 3.15 & -3.48 & 30.3
Fe II & 6442.95 & 5.55 & -2.38 & 5.2
Fe II & 6446.40 & 6.22 & -1.92 & 4.0
[lcccccccc]{} C I & 5380.32 & 24.9 & 33.4 & 39.2 & 35.9 & 49.3 & 14.6 & 34.8C I & 6587.62 & 18.3 & 22.3 & 30.8 & 29.5 & 42.6 & 7.9 & 25.4C I & 7108.94 & 15.6 & & & & & & C I & 7115.19 & 37.3 & & 40.6 & & 50.8 & & C I & 7116.99 & 25.4 & & 35.4 & & 45.7 & 11.5 & 30.1N I & 7468.31 & 6.0 & 9.5 & 9.9 & 9.4 & 13.3 & & 10.6O I & 7771.95 & 76.9 & 78.4 & 112.1 & & 144.3 & 43.3 & 86.9O I & 7774.17 & 67.3 & 72.9 & 100.1 & 111.3 & 129.3 & 39.0 & 81.6O I & 7775.39 & 54.9 & 57.8 & 82.9 & 90.4 & 105.2 & 30.2 & 64.9Na I & 6154.23 & 49.7 & 73.0 & 43.9 & 32.5 & 40.1 & 55.2 & 68.9Na I & 6160.75 & 75.4 & 93.0 & 62.1 & 48.5 & 55.1 & 70.8 & 86.1Mg I & 5711.10 & 115.7 & 133.5 & 103.4 & 96.1 & 102.9 & 135.0 & 129.0Al I & 7835.32 & 54.9 & 90.5 & 52.6 & 39.5 & 50.0 & 58.4 & 67.6Al I & 7836.13 & 73.0 & 123.2 & 67.2 & 51.8 & 66.1 & 79.2 & 95.7Si I & 5793.08 & 55.4 & & 54.6 & & & 44.3 & 63.4Si I & 6125.03 & 42.5 & 54.6 & 43.7 & 37.0 & 42.3 & 31.7 & 52.2Si I & 6145.02 & 49.6 & 61.1 & 49.1 & 42.2 & 49.1 & 39.9 & 59.5Si I & 6721.84 & 55.8 & 77.5 & 57.2 & 47.6 & 57.2 & 49.8 & 65.1S I & 6046.11 & 27.3 & 30.0 & 30.2 & 24.5 & 33.6 & 17.4 & 32.1S I & 6052.68 & 17.8 & 22.9 & 22.0 & 23.0 & & 9.0 & 23.7S I & 7686.13 & 6.5 & & 4.6 & & 11.9 & & 8.3Ca I & 5867.57 & 32.1 & 40.2 & 23.4 & 21.1 & 24.3 & 37.7 & 38.1Ca I & 6166.44 & 81.0 & 91.1 & 69.5 & 65.4 & 70.4 & 91.6 & 87.9Sc II & 5526.79 & 93.3 & 95.9 & 92.6 & 86.1 & 108.4 & 73.2 & 95.2Sc II & 6604.60 & 51.7 & 54.9 & 47.7 & 38.0 & 56.1 & 38.6 & 53.3Ti I & 5965.83 & 39.0 & 50.3 & 28.2 & 24.5 & 25.0 & 49.1 & 51.3Ti I & 6126.22 & 33.9 & 41.3 & 19.7 & 13.7 & 15.0 & 44.8 & 37.4Ti I & 6261.10 & 65.9 & 75.7 & 43.6 & 35.8 & 40.5 & 77.6 & 69.8Ti II & 5336.77 & 86.4 & 87.3 & 87.9 & & 101.3 & 69.5 & 85.9Ti II & 5418.75 & 64.8 & 63.8 & 62.7 & & 74.7 & 48.6 & 60.7Cr I & 5787.97 & 56.6 & 65.9 & 49.3 & & 44.2 & & 62.2Fe I & 5044.21 & & 91.3 & 71.6 & & & 93.2 & 91.0Fe I & 5247.05 & & 85.2 & 64.4 & 51.8 & & 86.2 & 80.3Fe I & 5322.04 & & & 60.1 & 52.8 & & 76.3 & 77.7Fe I & 5651.47 & & & & 17.1 & & & Fe I & 5652.32 & & & & 23.4 & & & Fe I & 5775.09 & & & & 57.5 & & & Fe I & 5806.73 & 64.3 & 77.1 & 59.2 & 54.0 & 59.3 & 67.5 & 75.0Fe I & 5814.80 & & & & 19.4 & & & Fe I & 5827.89 & & & & 9.5 & & & Fe I & 5852.19 & 54.0 & 60.5 & 44.9 & 38.0 & 41.4 & 53.1 & 58.0Fe I & 5853.18 & 15.7 & 16.9 & & & & 19.1 & 19.8Fe I & 5855.13 & 34.4 & 39.5 & 25.4 & 20.2 & 22.9 & 31.3 & 38.0Fe I & 5856.08 & 44.3 & 51.5 & 36.6 & 30.8 & 34.8 & 44.0 & 49.0Fe I & 5956.70 & 68.3 & & 43.9 & 33.3 & 38.4 & 70.1 & 65.5Fe I & 6024.07 & & & 114.8 & & & & Fe I & 6027.06 & 75.6 & 83.0 & 69.9 & 61.4 & 70.0 & 75.3 & 80.5Fe I & 6034.04 & 13.2 & 19.2 & 10.1 & 9.6 & 10.4 & 14.9 & 18.0Fe I & 6054.10 & 17.2 & 23.4 & 9.6 & & & 16.8 & 21.0Fe I & 6055.99 & & & & & & 88.3 & 91.0Fe I & 6079.02 & & 69.0 & & 45.0 & & & Fe I & 6120.25 & 10.8 & 11.9 & 4.4 & & & 11.8 & 11.0Fe I & 6151.62 & 63.8 & 69.6 & 47.6 & 36.8 & 42.9 & 66.1 & 66.0Fe I & 6157.73 & & 80.1 & 67.2 & 61.1 & 67.9 & 74.5 & 86.6Fe I & 6159.41 & 20.2 & 25.3 & 13.4 & 12.7 & 14.3 & 19.5 & 22.5Fe I & 6165.37 & 57.1 & 64.2 & 49.8 & 40.8 & 44.7 & 55.2 & 61.9Fe I & 6180.22 & & & & & & & 82.4Fe I & 6188.04 & & 70.1 & & & & 62.7 & Fe I & 6226.77 & 42.3 & 48.8 & 31.6 & 25.6 & 30.2 & 41.1 & 43.7Fe I & 6229.23 & 52.6 & 63.5 & 43.0 & 31.7 & 39.4 & 54.4 & 61.5Fe I & 6240.66 & 65.2 & 69.5 & 47.0 & 37.5 & 45.9 & 64.9 & 67.9Fe I & 6265.13 & & & & & & & 106.5Fe I & 6270.24 & 67.1 & 71.6 & 53.2 & 44.8 & 50.8 & 66.9 & 68.8Fe I & 6303.46 & & 10.2 & & & & & Fe I & 6380.75 & 65.1 & 74.9 & 57.3 & 50.3 & 56.0 & 64.1 
& 68.8Fe I & 6385.74 & 17.5 & 22.6 & 12.9 & & & 17.4 & 19.8Fe I & 6392.55 & 29.2 & 33.7 & 14.4 & 9.7 & 11.1 & 32.2 & 29.9Fe I & 6498.95 & 63.5 & 74.5 & 43.7 & 30.0 & 35.9 & 67.7 & 64.0Fe I & 6581.22 & & 40.0 & 19.7 & & 14.5 & 35.6 & 46.0Fe I & 6591.32 & 14.6 & 20.4 & 14.2 & 10.6 & 10.9 & 16.2 & 18.3Fe I & 6608.03 & 30.8 & 35.3 & 14.3 & 12.9 & 12.8 & 32.9 & 33.8Fe I & 6627.56 & 38.6 & 48.6 & 32.2 & 24.4 & 30.3 & 39.2 & 49.2Fe I & 6646.98 & 20.9 & 23.6 & 10.0 & & & 21.1 & 22.2Fe I & 6653.88 & 15.4 & 21.2 & 13.6 & 10.8 & 10.2 & 16.6 & 22.2Fe I & 6703.57 & 51.5 & & 35.9 & 27.4 & 34.0 & 53.3 & 54.0Fe I & 6710.31 & 27.7 & 34.0 & 13.8 & 10.2 & 9.3 & 32.6 & 28.4Fe I & 6725.39 & 27.8 & 33.9 & 20.2 & 15.8 & 17.5 & 27.4 & 30.5Fe I & 6726.67 & 58.2 & 68.0 & 52.5 & 44.6 & 49.4 & 61.0 & 64.1Fe I & 6733.16 & 37.8 & 45.8 & 28.6 & 23.4 & 28.3 & 37.0 & 43.8Fe I & 6739.54 & 21.1 & 24.7 & 11.8 & & 6.6 & 23.9 & 23.2Fe I & 6745.11 & & 19.2 & & & & 12.9 & Fe I & 6745.96 & & 16.6 & 7.8 & & & 12.2 & Fe I & 6746.96 & & 10.1 & 3.9 & & & 8.8 & 8.5Fe I & 6750.15 & 91.1 & 97.3 & 77.2 & 65.0 & 73.9 & 93.6 & 94.4Fe I & 6752.72 & 50.8 & 54.9 & 39.4 & 34.2 & 38.1 & 43.5 & 54.9Fe I & 6786.88 & & 44.2 & & & & & 41.6Fe I & 6820.43 & & & 45.8 & & & & Fe I & 6839.83 & & & & & & & 46.5Fe I & 6855.74 & & 38.1 & 23.2 & & & 26.3 & 31.5Fe I & 6861.93 & & 39.7 & 19.5 & & & 34.1 & 34.2Fe I & 6862.48 & 43.0 & 51.2 & 32.8 & & & 40.9 & 47.4Fe I & 6864.31 & & 14.0 & & & & 10.0 & 12.2Fe I & 7498.56 & 30.1 & 35.5 & 21.7 & & 18.7 & 25.9 & 32.0Fe I & 7507.27 & 74.6 & 90.0 & 64.1 & & 61.4 & 80.1 & 83.1Fe I & 7583.80 & & & 85.5 & & 83.7 & & Fe I & 7586.03 & & & 128.2 & & 129.3 & & Fe II & 5234.62 & & 96.5 & & & & 74.2 & 97.0Fe II & 5425.25 & & & & 58.2 & & & Fe II & 5991.37 & & & 48.6 & & 59.7 & & 48.9Fe II & 6084.10 & & 33.0 & & 31.7 & & 17.4 & Fe II & 6149.25 & 47.3 & 47.1 & 56.6 & 54.3 & 68.0 & 29.3 & 48.3Fe II & 6247.55 & 65.4 & 63.7 & 74.5 & & 90.3 & 42.1 & 65.4Fe II & 6369.46 & 28.5 & 29.6 & 35.0 & 27.3 & 42.5 & 15.6 & 30.3Fe II & 6416.92 & & 54.1 & & & & 38.6 & Fe II & 6442.95 & & & & & 14.8 & & 8.3Fe II & 6446.40 & & & & & 15.5 & & 7.3Ni I & 6767.77 & 92.9 & 103.5 & 80.5 & 69.5 & 80.9 & 95.7 & 100.4Eu II & 6645.13 & 12.3 & 10.8 & 8.1 & 6.3 & 12.2 & 7.9 & 7.9
[lccccccc]{} C I & 5380.32 & 27.3 & 13.6 & 29.6 & 30.2 & 27.6 & 20.1C I & 6587.62 & 18.2 & & 18.9 & 21.2 & 17.5 & 13.3C I & 7108.94 & & 5.3 & 7.5 & & 15.7 & 6.7C I & 7115.19 & & 14.6 & 29.1 & 39.0 & & 21.9C I & 7116.99 & 24.5 & 10.5 & 23.0 & 24.6 & 28.1 & 20.1N I & 7468.31 & 4.8 & 3.0 & 5.2 & 9.0 & 3.9 & 4.9O I & 7771.95 & 76.9 & 26.8 & 100.1 & 83.5 & 72.5 & 72.3O I & 7774.17 & 69.2 & 23.6 & 89.0 & 74.0 & 66.0 & 65.7O I & 7775.39 & 53.9 & 16.7 & 68.8 & 59.0 & 52.3 & 51.0Na I & 6154.23 & 49.9 & 68.5 & 27.0 & 52.1 & 63.1 & 35.7Na I & 6160.75 & 67.3 & 87.1 & 45.4 & 74.1 & 82.8 & 57.7Mg I & 5711.10 & 121.2 & 138.2 & 95.0 & 119.5 & 133.0 & 106.3Al I & 7835.32 & 59.5 & 64.9 & 37.2 & 59.3 & 72.9 & 46.3Al I & 7836.13 & 81.5 & 80.2 & 51.8 & 81.1 & 93.8 & 59.4Si I & 5793.08 & 55.7 & 38.7 & 45.5 & & 64.8 & 49.1Si I & 6125.03 & 42.4 & 29.3 & 28.5 & 46.5 & 52.6 & 33.8Si I & 6145.02 & 47.4 & 29.9 & 38.3 & 51.2 & 57.2 & 40.8Si I & 6721.84 & 57.1 & 40.1 & 44.1 & 59.5 & 72.6 & 59.0S I & 6046.11 & 25.3 & 10.2 & 17.8 & 23.2 & 34.0 & 18.0S I & 6052.68 & 14.3 & 28.1 & 15.0 & 18.4 & 17.3 & 15.0S I & 7686.13 & & & 4.3 & & 9.2 & 5.8Ca I & 5867.57 & 34.4 & 57.5 & 31.1 & 33.1 & 42.7 & 25.3Ca I & 6166.44 & 83.3 & 107.7 & 63.4 & 82.5 & 92.2 & 70.4Sc II & 5526.79 & 93.4 & 68.7 & 82.7 & 89.4 & 75.9 & 82.1Sc II & 6604.60 & 53.9 & 36.6 & 38.6 & 49.5 & 56.4 & 41.5Ti I & 5965.83 & 51.2 & 74.2 & 20.6 & 38.1 & 52.2 & 33.3Ti I & 6126.22 & 39.4 & 70.6 & 11.9 & 30.9 & 46.8 & 23.1Ti I & 6261.10 & 71.6 & 104.1 & 37.9 & 59.1 & 80.6 & 49.9Ti II & 5336.77 & 85.6 & 62.2 & 80.0 & & 75.7 & Ti II & 5418.75 & 64.2 & 42.5 & 54.9 & & 63.3 & 56.1Cr I & 5787.97 & & 78.7 & 40.5 & 55.2 & 70.5 & 49.4Fe I & 5044.21 & 86.3 & & 67.3 & & & 72.7Fe I & 5247.05 & 81.7 & 108.5 & 51.0 & 75.8 & & 69.6Fe I & 5322.04 & & 89.0 & 54.6 & 70.8 & & Fe I & 5806.73 & 65.2 & 71.6 & 48.9 & 66.8 & 77.8 & 54.9Fe I & 5852.19 & 50.9 & 63.1 & 35.3 & 51.2 & 63.3 & 44.0Fe I & 5853.18 & 14.5 & 26.7 & & & 20.5 & Fe I & 5855.13 & 29.3 & 33.2 & 17.9 & 31.0 & 40.1 & 25.3Fe I & 5856.08 & 41.5 & 48.6 & 27.8 & 44.0 & 55.0 & 37.4Fe I & 5956.70 & 68.4 & 88.5 & 36.5 & 61.5 & 77.3 & 52.8Fe I & 6024.07 & 119.0 & & 99.5 & & & Fe I & 6027.06 & 73.7 & & 61.3 & 76.0 & & 67.1Fe I & 6034.04 & 13.4 & 16.5 & 9.3 & 13.4 & 20.5 & 9.9Fe I & 6054.10 & 15.1 & 19.1 & 10.3 & 14.6 & 22.2 & Fe I & 6055.99 & 80.3 & & 69.7 & & & Fe I & 6089.57 & & & 30.6 & & & Fe I & 6093.66 & & & 26.4 & & & Fe I & 6096.69 & & & 31.4 & & & Fe I & 6098.28 & & & 14.4 & & & Fe I & 6120.25 & 12.4 & 21.5 & & 7.6 & 13.8 & 6.6Fe I & 6151.62 & 62.8 & 78.9 & 38.5 & 60.2 & 73.1 & 51.4Fe I & 6157.73 & 72.7 & 76.2 & 57.4 & 71.7 & & 64.9Fe I & 6159.41 & 17.9 & 22.9 & 10.6 & 19.7 & 26.6 & 12.9Fe I & 6165.37 & 53.5 & 55.9 & 37.0 & 55.7 & 64.4 & 46.7Fe I & 6180.22 & & & 45.7 & & & Fe I & 6188.04 & 59.4 & & & & & Fe I & 6200.32 & & & 64.3 & & & Fe I & 6226.77 & 38.5 & 45.0 & 23.8 & 39.9 & 54.3 & 32.1Fe I & 6229.23 & 53.5 & 58.3 & 30.9 & 50.7 & 63.9 & 42.1Fe I & 6240.66 & 62.6 & 79.6 & 37.9 & 58.6 & 72.4 & 51.5Fe I & 6265.13 & & & 79.0 & 98.8 & & Fe I & 6270.24 & 65.4 & 76.3 & 40.2 & 62.7 & 74.3 & 54.2Fe I & 6380.75 & 65.0 & 62.0 & 43.5 & 65.7 & 75.0 & 56.7Fe I & 6385.74 & 15.0 & 18.7 & 8.6 & 15.4 & 24.6 & Fe I & 6392.55 & 27.9 & 40.2 & 10.8 & 24.9 & 38.0 & 20.3Fe I & 6498.95 & 63.2 & 88.4 & 31.4 & 55.6 & 74.4 & 50.0Fe I & 6581.22 & 28.9 & 52.0 & 13.1 & 27.0 & & Fe I & 6591.32 & 15.7 & 15.4 & 6.8 & 16.0 & 24.7 & 10.3Fe I & 6608.03 & 28.8 & 42.9 & 12.1 & 25.2 & & 18.3Fe I & 6627.56 & 37.0 & 42.1 & 22.8 & 38.7 & 50.4 & 29.6Fe I & 6646.98 & 
18.1 & 25.0 & 7.3 & 15.7 & 25.6 & Fe I & 6653.88 & 15.9 & 21.0 & 8.7 & 15.1 & 23.0 & 12.0Fe I & 6703.57 & 47.9 & 62.0 & 27.3 & 48.6 & 62.0 & 38.4Fe I & 6710.31 & 26.9 & 46.0 & 9.4 & 24.0 & 36.7 & 19.8Fe I & 6725.39 & 26.0 & 30.3 & 15.0 & 25.8 & 37.4 & 20.7Fe I & 6726.67 & 57.3 & 64.0 & 41.4 & 58.7 & 70.6 & 48.1Fe I & 6733.16 & 36.4 & 37.2 & 21.2 & 37.0 & 48.3 & 27.8Fe I & 6739.54 & 20.4 & 33.5 & 7.8 & 17.0 & 28.3 & 12.9Fe I & 6745.11 & 11.6 & & & & 18.9 & Fe I & 6745.96 & 9.9 & 11.8 & & & 17.9 & Fe I & 6746.96 & 7.1 & 12.4 & 2.7 & 6.3 & 12.1 & Fe I & 6750.15 & 89.3 & 106.4 & 66.6 & 87.4 & 101.1 & 77.5Fe I & 6752.72 & 46.0 & 56.3 & 29.8 & & & 35.9Fe I & 6786.88 & 36.5 & & 18.0 & & & Fe I & 6820.43 & & & 36.8 & & & Fe I & 6839.83 & & & 20.7 & 39.9 & & Fe I & 6855.74 & 27.8 & 29.8 & 17.3 & & & 20.0Fe I & 6861.93 & 31.6 & 47.3 & 12.3 & 30.2 & & Fe I & 6862.48 & 39.7 & 43.6 & 24.3 & 41.5 & 53.6 & 32.6Fe I & 6864.31 & 12.4 & & 4.1 & & 16.3 & Fe I & 7498.56 & 28.2 & 31.0 & 13.2 & 26.2 & 38.2 & 21.8Fe I & 7507.27 & 74.0 & 89.2 & 53.3 & 74.2 & & 61.5Fe I & 7583.80 & & 124.5 & 74.3 & & & Fe I & 7586.03 & & 160.1 & 108.1 & & & Fe II & 5234.62 & 89.7 & & 96.4 & & & Fe II & 5991.37 & & 19.2 & 38.0 & 41.4 & & Fe II & 6084.10 & 25.9 & & & & & Fe II & 6149.25 & 40.5 & 17.2 & 43.4 & 44.9 & 44.9 & 40.2Fe II & 6247.55 & 57.4 & 26.3 & 66.3 & & 60.2 & 57.0Fe II & 6369.46 & 23.7 & & 23.0 & 27.2 & 29.0 & 22.1Fe II & 6416.92 & 45.5 & & & & & Fe II & 6432.68 & & 25.1 & 52.1 & & & Fe II & 6442.95 & & & 5.9 & & & Fe II & 6446.40 & & & 8.1 & & & Fe II & 7515.79 & & & 18.2 & & & Ni I & 6767.77 & 94.9 & 98.9 & 70.1 & 92.0 & 103.3 & 81.1Eu II & 6645.13 & 12.8 & 8.7 & 6.5 & 8.0 & 11.2 & 7.0
[lccccccc]{} C I & 6587.62 & 26.3 & 27.1 & 34.4 & & & C I & 7115.19 & 39.5 & 35.6 & 36.2 & & & C I & 7116.99 & 30.3 & 31.0 & 28.4 & & & O I & 7771.95 & 132.5 & & 117.3 & 67.3 & 58.0 & 58.3 O I & 7774.17 & 108.4 & 104.0 & 97.8 & 55.4 & 56.9 & 56.5 O I & 7775.39 & 86.8 & 85.3 & 80.6 & 37.4 & 48.9 & Na I & 6154.23 & 32.3 & 42.6 & 39.9 & 60.6 & 55.7 & 56.0 Na I & 6160.75 & 54.7 & 56.4 & 57.9 & 87.9 & 80.6 & 81.2 Mg I & 5711.10 & 96.1 & 104.0 & 106.5 & 134.9 & 139.9 & 129.7 Al I & 7835.32 & 48.9 & 49.5 & 50.8 & 70.9 & 64.4 & 71.7 Al I & 7836.13 & 61.6 & 64.1 & 66.3 & 86.6 & 77.8 & 85.0 Si I & 5793.08 & 55.2 & 57.5 & 54.5 & 61.8 & 54.2 & 54.7 Si I & 6125.03 & 37.8 & 40.5 & 39.0 & 44.8 & 40.6 & 40.9 Si I & 6145.02 & 53.8 & 44.6 & 46.9 & 52.3 & 50.3 & 51.2 Si I & 6721.84 & 60.6 & 56.8 & 59.1 & 54.3 & 58.3 & 64.2 S I & 6046.11 & 28.7 & 28.1 & 25.1 & 23.6 & 22.3 & 21.6 S I & 6052.68 & 25.5 & 20.0 & 25.3 & & & Ca I & 5867.57 & 24.6 & 23.9 & 22.4 & 45.2 & 42.1 & 47.4 Ca I & 6166.44 & 74.5 & 71.0 & 68.7 & 97.8 & 99.3 & 95.6 Sc II & 5526.79 & 83.8 & 90.9 & & 88.3 & 84.0 & 80.5Sc II & 6604.60 & 43.5 & 42.9 & 43.4 & 45.1 & 42.3 & 43.4 Ti I & 5965.83 & 28.8 & 28.5 & 30.8 & 62.8 & 52.1 & 56.5 Ti I & 6126.22 & 19.0 & 17.2 & 18.6 & 44.9 & 44.3 & 41.2 Ti I & 6261.10 & 39.6 & 41.0 & 41.6 & 81.5 & 76.9 & 76.5Ti II & 5336.77 & 96.0 & 80.4 & 90.9 & & 79.1 & 78.2 Ti II & 5418.75 & 59.3 & 41.6 & 56.0 & & 57.3 & 58.5 Cr I & 5787.97 & 45.5 & 47.4 & 49.0 & 69.6 & 70.9 & 67.9 Fe I & 5806.73 & 57.1 & 55.7 & 57.7 & 77.5 & 73.5 & 75.7Fe I & 5852.19 & 43.3 & 40.7 & 41.8 & 63.2 & 59.6 & 57.3Fe I & 5855.13 & 25.6 & 21.4 & 22.3 & 36.1 & 32.7 & 36.6Fe I & 5856.08 & 31.0 & 32.9 & 33.1 & 52.5 & 47.0 & 48.6Fe I & 5956.70 & 45.1 & 41.6 & 42.7 & 82.4 & 77.2 & 84.6Fe I & 6027.06 & 64.6 & 66.5 & 67.4 & 86.2 & 80.5 & Fe I & 6034.04 & & & & 15.7 & 17.0 & 16.3Fe I & 6054.10 & & & & 17.7 & 18.3 & 20.2Fe I & 6079.02 & 48.2 & 51.5 & 50.8 & 66.8 & 63.3 & 71.3Fe I & 6120.25 & & & & 12.8 & 12.6 & Fe I & 6151.62 & 40.2 & 41.7 & 41.7 & 69.5 & 69.2 & 73.8Fe I & 6157.73 & 63.0 & 65.5 & 67.0 & 80.6 & 81.9 & 84.8Fe I & 6159.41 & 10.5 & 13.3 & 13.7 & 19.3 & 20.5 & 21.5Fe I & 6165.37 & 43.3 & 45.2 & 43.0 & 61.0 & 59.6 & 63.0Fe I & 6188.04 & 41.5 & 42.3 & 43.8 & 68.5 & 65.7 & 65.9Fe I & 6226.77 & 28.0 & 29.1 & 32.7 & 55.0 & 49.9 & 43.3Fe I & 6229.23 & 42.0 & 39.4 & 39.5 & 59.2 & 54.6 & 59.3Fe I & 6240.66 & 45.2 & 42.4 & 44.1 & 69.4 & 67.7 & 71.4Fe I & 6270.24 & 55.8 & & 55.3 & 73.5 & 72.8 & 74.4Fe I & 6380.75 & 55.4 & 52.5 & 56.4 & 67.9 & 71.9 & 71.2Fe I & 6385.74 & & & & 23.0 & 19.9 & 18.0Fe I & 6392.55 & 11.7 & 14.2 & 12.5 & 28.8 & 32.0 & 33.7Fe I & 6498.95 & 38.3 & 38.7 & 40.7 & 70.0 & 74.4 & 70.4Fe I & 6581.22 & 18.1 & 17.2 & 16.2 & 40.2 & 44.2 & 46.7Fe I & 6591.32 & & & & 21.9 & 19.5 & 23.6Fe I & 6608.03 & 14.6 & 14.1 & 11.6 & 31.4 & 30.8 & 30.7Fe I & 6627.56 & 28.3 & 31.1 & 29.7 & 45.4 & 42.4 & 40.0Fe I & 6653.88 & 12.3 & & 10.6 & 21.4 & 18.7 & 21.7Fe I & 6710.31 & 12.9 & 9.9 & 11.6 & 38.6 & 32.9 & 39.4Fe I & 6725.39 & 17.0 & & 20.0 & & 27.6 & 26.8Fe I & 6726.67 & 48.2 & & 49.9 & & & Fe I & 6733.16 & 27.0 & 24.1 & 27.7 & & & Fe I & 6739.54 & & & & 27.5 & 26.8 & 27.7Fe I & 6750.15 & 71.2 & 69.6 & 68.5 & 102.8 & 100.5 & 100.6Fe I & 6752.72 & 37.1 & 42.7 & 38.5 & 59.7 & 60.6 & 63.0Fe I & 6855.74 & 19.7 & & 20.4 & & & Fe I & 6861.93 & 13.9 & & & 36.1 & 38.9 & 40.9Fe I & 6862.48 & 30.0 & 28.8 & & 45.0 & 42.1 & 43.9Fe I & 7498.56 & 20.9 & 19.4 & 17.8 & 31.8 & 31.0 & 30.0Fe I & 7507.27 & 63.5 & 59.1 & 62.1 & 90.1 & 85.4 & Fe II & 6084.10 & 29.2 & 34.1 & 
35.3 & 23.5 & 20.7 & 26.0Fe II & 6149.25 & 48.3 & 52.7 & 54.3 & 39.3 & 34.5 & 47.7Fe II & 6369.46 & 30.6 & 31.4 & 28.4 & 15.5 & 17.5 & 23.7Fe II & 6416.92 & 53.2 & 50.8 & 52.1 & 49.5 & 43.4 & 46.2Ni I & 6767.77 & 79.6 & 75.1 & 75.4 & 110.7 & 103.6 & 98.0 Eu II & 6645.13 & 8.9 & & 11.8 & & &
[lcccccccccc]{} C I & 5380.32 & 31.9 & 27.4 & 11.9 & 35.2 & 19.7 & 41.6 & 31.1 & & Na I & 6154.23 & 64.9 & 43.4 & & 66.1 & 83.1 & 44.5 & 58.9 & & 91.0Na I & 6160.75 & & 60.4 & 44.7 & & 103.8 & 61.1 & & & 110.1Mg I & 5711.10 & 113.3 & 114.0 & 112.5 & 131.5 & & 109.5 & 120.5 & & Si I & 6125.03 & 53.6 & 42.5 & 25.7 & 57.9 & 44.4 & 44.3 & 48.8 & 54.1 & 48.8Si I & 6145.02 & 61.0 & 47.1 & 30.0 & 62.0 & 51.4 & 50.2 & 56.8 & 56.8 & 57.7S I & 6046.11 & 32.5 & 24.8 & 10.4 & 43.6 & & 31.8 & 26.3 & & S I & 6052.68 & 16.6 & 15.1 & 5.5 & 22.1 & & 24.5 & 14.8 & & Ca I & 5867.57 & 40.8 & 30.3 & 20.4 & 42.8 & 54.7 & 26.5 & 37.5 & 72.6 & 54.8Ca I & 6166.44 & 93.4 & 79.8 & 68.5 & 94.0 & 110.1 & 75.0 & 87.7 & 132.7 & 110.8Ti I & 5965.83 & 46.0 & 36.7 & 30.2 & 52.9 & 65.5 & 29.6 & 43.2 & 93.7 & 62.4Ti I & 6126.22 & 41.8 & 31.0 & 26.3 & 48.5 & 63.0 & 19.2 & 35.5 & 100.0 & 62.1Ti II & 5336.77 & 85.1 & 86.8 & 66.8 & 98.6 & & 88.1 & 82.7 & & Ti II & 5418.75 & 61.8 & 64.3 & 43.4 & 73.7 & & 62.1 & 59.3 & & Fe I & 4961.91 & 46.9 & 35.2 & 18.7 & 53.9 & 53.2 & 28.1 & 41.8 & 73.0 & 56.6Fe I & 5247.05 & 85.2 & 77.3 & 61.4 & 97.3 & 99.0 & 62.2 & 81.8 & 137.8 & 99.6Fe I & 5295.32 & 47.0 & 37.0 & 17.8 & 50.2 & 48.1 & 33.0 & 41.1 & 57.0 & 48.0Fe I & 5373.71 & 79.6 & 72.0 & 49.3 & 81.1 & 83.9 & 68.0 & 74.2 & & 82.0Fe I & 5651.47 & 31.7 & 24.8 & 11.2 & 35.8 & 35.2 & 20.9 & 30.7 & 48.8 & Fe I & 5652.32 & 43.7 & 33.8 & 16.5 & 48.1 & 46.7 & 29.1 & 39.9 & 62.1 & 50.8Fe I & 5677.68 & 14.8 & 9.9 & 4.3 & 15.8 & 17.3 & 7.1 & 12.1 & 31.0 & 20.4Fe I & 5724.45 & 13.2 & 9.2 & 2.9 & 15.2 & 15.8 & 7.0 & 12.3 & 27.9 & 18.7Fe I & 5775.09 & 77.8 & 68.7 & 45.8 & 81.7 & 84.2 & 62.6 & 74.2 & 94.7 & 86.2Fe I & 5814.80 & 38.8 & 29.2 & & 43.8 & 41.9 & 24.3 & 34.7 & 58.0 & 45.7Fe I & 5827.89 & 22.4 & 13.4 & 7.0 & 28.6 & 30.2 & 12.2 & 21.2 & 52.1 & 29.7Fe I & 5852.19 & 61.1 & 49.1 & 28.8 & 64.7 & 65.4 & 44.1 & 56.1 & & 69.5Fe I & 5853.18 & & 9.9 & 5.1 & & 25.7 & & & 55.2 & 26.0Fe I & 5855.13 & 38.4 & 28.4 & 13.4 & 41.0 & 38.0 & 26.3 & 34.8 & 50.3 & 41.8Fe I & 5856.08 & 52.3 & 41.4 & 21.8 & 55.9 & 53.4 & 37.6 & 48.5 & 70.2 & 57.7Fe I & 5956.70 & 69.5 & 62.1 & 43.6 & 80.3 & 82.8 & 46.2 & 66.6 & 116.3 & 85.7Fe I & 6034.04 & 19.0 & 12.0 & 6.2 & 21.8 & 20.2 & 11.0 & 15.6 & 40.9 & 21.0Fe I & 6054.10 & 20.0 & 13.2 & 6.7 & 24.6 & 23.9 & 11.7 & 16.6 & 37.8 & 27.6Fe I & 6079.02 & 64.6 & 54.8 & 32.6 & 71.7 & 71.6 & 52.5 & 63.9 & 76.7 & Fe I & 6105.15 & 24.0 & 15.9 & 6.8 & 25.8 & 25.6 & 12.4 & 20.8 & 41.4 & Fe I & 6120.25 & 11.3 & 8.2 & 4.6 & 16.7 & 21.6 & 4.0 & 8.8 & 50.2 & 21.7Fe I & 6151.62 & 69.0 & 59.2 & 41.5 & 78.1 & 77.1 & 49.3 & 65.1 & 104.1 & 79.2Fe I & 6157.73 & 79.7 & 72.4 & 50.0 & 88.8 & 81.9 & 67.0 & 76.1 & 105.0 & 81.4Fe I & 6159.41 & 26.3 & 17.4 & 6.8 & 28.7 & 29.8 & 14.6 & 22.5 & 48.8 & 34.1Fe I & 6165.37 & 64.7 & 55.3 & 32.6 & 69.4 & 66.9 & 48.9 & 60.5 & 76.3 & 69.3Fe II & 5325.56 & 51.0 & 54.6 & 25.3 & & 33.9 & 61.7 & 50.8 & & Fe II & 5414.05 & 38.7 & 38.2 & 14.6 & 52.3 & 24.3 & 44.2 & 37.4 & 34.5 & 27.2Fe II & 5425.25 & 53.0 & 53.5 & 25.1 & 64.9 & 39.3 & 57.8 & 51.4 & 49.5 & Fe II & 6149.25 & 47.1 & 47.9 & 20.9 & 57.3 & 30.0 & 58.6 & 46.9 & 41.3 & 33.6Cr I & 5787.97 & 65.3 & 53.2 & 33.5 & 70.3 & 76.9 & & 63.3 & & 85.0
[lccccc]{} HR810(n1) & $6084 \pm 61$ & $4.44 \pm 0.06$ & $1.20 \pm 0.15$ & $+0.16 \pm 0.05$ & 34, 4HR810(n2) & $6145 \pm 64$ & $4.44 \pm 0.10$ & $1.20 \pm 0.17$ & $+0.19 \pm 0.05$ & 32, 4HR810(n3) & $6172 \pm 55$ & $4.57 \pm 0.10$ & $1.30 \pm 0.17$ & $+0.21 \pm 0.05$ & 28, 4HR810(avg) & $6136 \pm 34$ & $4.47 \pm 0.05$ & $1.23 \pm 0.09$ & $+0.19 \pm 0.03$ & HD1237(n1) & $5565 \pm 55$ & $4.54 \pm 0.15$ & $1.30 \pm 0.10$ & $+0.19 \pm 0.05$ & 36, 4HD1237(n2) & $5502 \pm 48$ & $4.47 \pm 0.08$ & $1.20 \pm 0.09$ & $+0.14 \pm 0.04$ & 37, 4HD1237(n3) & $5493 \pm 61$ & $4.13 \pm 0.09$ & $1.25 \pm 0.11$ & $+0.16 \pm 0.05$ & 34, 4HD1237(avg) & $5520 \pm 31$ & $4.35 \pm 0.06$ & $1.25 \pm 0.06$ & $+0.16 \pm 0.03$ & HD10697 & $5605 \pm 36$ & $3.96 \pm 0.07$ & $0.95 \pm 0.07$ & $+0.16 \pm 0.03$ & 37, 3HD12661(McD) & $5690 \pm 35$ & $4.41 \pm 0.05$ & $1.05 \pm 0.06$ & $+0.35 \pm 0.03$ & 49, 6HD12661(Keck) & $5736 \pm 33$ & $4.47 \pm 0.04$ & $0.95 \pm 0.05$ & $+0.35 \pm 0.02$ & 24, 4HD12661(avg) & $5714 \pm 24$ & $4.45 \pm 0.03$ & $0.99 \pm 0.04$ & $+0.35 \pm 0.02$ & HD16141 & $5777 \pm 31$ & $4.21 \pm 0.04$ & $1.12 \pm 0.06$ & $+0.15 \pm 0.02$ & 25, 4HD37124 & $5532 \pm 39$ & $4.56 \pm 0.04$ & $0.85 \pm 0.11$ & $-0.41 \pm 0.03$ & 24, 4HD38529 & $5646 \pm 48$ & $3.92 \pm 0.07$ & $1.20 \pm 0.08$ & $+0.37 \pm 0.04$ & 24, 3HD46375 & $5250 \pm 55$ & $4.44 \pm 0.08$ & $0.80 \pm 0.07$ & $+0.21 \pm 0.04$ & 25, 4HD52265(McD) & $6122 \pm 35$ & $4.24 \pm 0.05$ & $1.15 \pm 0.08$ & $+0.26 \pm 0.03$ & 49, 4HD52265(Keck) & $6189 \pm 29$ & $4.40 \pm 0.07$ & $1.30 \pm 0.08$ & $+0.28 \pm 0.02$ & 24, 4HD52265(avg) & $6162 \pm 22$ & $4.29 \pm 0.04$ & $1.23 \pm 0.06$ & $+0.27 \pm 0.02$ & HD75332 & $6305 \pm 47$ & $4.49 \pm 0.07$ & $1.05 \pm 0.11$ & $+0.24 \pm 0.03$ & 39, 4HD89744 & $6338 \pm 39$ & $4.17 \pm 0.05$ & $1.55 \pm 0.09$ & $+0.30 \pm 0.03$ & 35, 6HD92788 & $5775 \pm 39$ & $4.45 \pm 0.05$ & $1.00 \pm 0.06$ & $+0.31 \pm 0.03$ & 24, 4HD130322 & $5410 \pm 35$ & $4.47 \pm 0.08$ & $0.95 \pm 0.06$ & $+0.05 \pm 0.03$ & 50, 6HD134987 & $5715 \pm 50$ & $4.33 \pm 0.08$ & $1.00 \pm 0.07$ & $+0.32 \pm 0.04$ & 51, 7HD168443 & $5555 \pm 40$ & $4.10 \pm 0.12$ & $0.90 \pm 0.06$ & $+0.10 \pm 0.03$ & 51, 6HD177830 & $4818 \pm 83$ & $3.32 \pm 0.16$ & $0.97 \pm 0.09$ & $+0.36 \pm 0.05$ & 23, 3HD192263 & $4964 \pm 59$ & $4.49 \pm 0.13$ & $0.95 \pm 0.13$ & $-0.03 \pm 0.04$ & 46, 4HD209458 & $6063 \pm 43$ & $4.38 \pm 0.10$ & $1.02 \pm 0.09$ & $+0.04 \pm 0.03$ & 58, 9HD217014 & $5795 \pm 30$ & $4.41 \pm 0.03$ & $1.05 \pm 0.06$ & $+0.21 \pm 0.03$ & 43, 3HD217107 & $5600 \pm 38$ & $4.40 \pm 0.05$ & $0.95 \pm 0.06$ & $+0.36 \pm 0.03$ & 37, 3HD222582 & $5735 \pm 32$ & $4.26 \pm 0.03$ & $0.95 \pm 0.06$ & $+0.02 \pm 0.03$ & 37, 3BD-10 3166 & $5320 \pm 74$ & $4.38 \pm 0.10$ & $0.85 \pm 0.09$ & $+0.33 \pm 0.05$ & 22, 2
[lrcccccc]{} Li & 1.06 & $+0.88 \pm 0.05$ & $<-0.07 \pm 0.05$ & $+1.67 \pm 0.05$ & $+2.08 \pm 0.05$ & $+1.01 \pm 0.07$ & $<-0.49 \pm 0.07$ C & 8.56 & $+0.09 \pm 0.04$ & $+0.26 \pm 0.06$ & $+0.03 \pm 0.05$ & $+0.05 \pm 0.04$ & $+0.07 \pm 0.06$ & $-0.07 \pm 0.06$ N & 8.05 & $+0.04 \pm 0.06$ & $+0.32 \pm 0.08$ & $-0.05 \pm 0.07$ & $-0.11 \pm 0.06$ & $-0.06 \pm 0.05$ & O(synth) & 8.83 & $+0.20 \pm 0.06$ & $+0.09 \pm 0.07$ & $+0.07 \pm 0.06$ & $+0.03 \pm 0.05$ & $+0.12 \pm 0.05$ & $-0.01 \pm 0.08$ O(trip) & 8.83 & $+0.15 \pm 0.04$ & $+0.22 \pm 0.04$ & $+0.20 \pm 0.03$ & $+0.21 \pm 0.04$ & $+0.31 \pm 0.04$ & $-0.05 \pm 0.05$ O(corr) & 8.83 & $+0.15 \pm 0.04$ & $+0.20 \pm 0.04$ & $+0.11 \pm 0.03$ & $+0.10 \pm 0.04$ & $+0.16 \pm 0.04$ & $+0.01 \pm 0.05$ O(avg) & 8.83 & $+0.17 \pm 0.03$ & $+0.17 \pm 0.03$ & $+0.10 \pm 0.03$ & $+0.07 \pm 0.03$ & $+0.14 \pm 0.03$ & $+0.00 \pm 0.04$ C/O & $-0.27$ & $-0.08 \pm 0.03$ & $+0.09 \pm 0.05$ & $-0.07 \pm 0.05$ & $-0.02 \pm 0.03$ & $-0.07 \pm 0.06$ & $-0.07 \pm 0.04$ Na & 6.33 & $+0.12 \pm 0.03$ & $+0.36 \pm 0.04$ & $+0.23 \pm 0.02$ & $+0.12 \pm 0.03$ & $+0.24 \pm 0.05$ & $-0.06 \pm 0.06$ Mg(57) & 7.58 & $+0.09 \pm 0.06$ & $+0.22 \pm 0.08$ & $+0.14 \pm 0.07$ & $+0.11 \pm 0.06$ & $+0.19 \pm 0.05$ & $+0.03 \pm 0.07$ Mg(synth) & 7.58 & $+0.19 \pm 0.05$ & $+0.32 \pm 0.05$ & $+0.23 \pm 0.05$ & $+0.09 \pm 0.05$ & $+0.32 \pm 0.05$ & $+0.05 \pm 0.05$ Mg(avg) & 7.58 & $+0.15 \pm 0.04$ & $+0.29 \pm 0.04$ & $+0.20 \pm 0.04$ & $+0.10 \pm 0.04$ & $+0.26 \pm 0.04$ & $+0.04 \pm 0.04$ Al(78) & 6.47 & $+0.05 \pm 0.02$ & $+0.43 \pm 0.03$ & $+0.14 \pm 0.02$ & $-0.01 \pm 0.03$ & $+0.19 \pm 0.02$ & $-0.05 \pm 0.02$ Al(synth) & 6.47 & $+0.20 \pm 0.05$ & & $+0.12 \pm 0.04$ & & $+0.14 \pm 0.04$ & $+0.03 \pm 0.05$ Al(avg) & 6.47 & $+0.07 \pm 0.02$ & $+0.43 \pm 0.03$ & $+0.14 \pm 0.02$ & $-0.01 \pm 0.03$ & $+0.18 \pm 0.02$ & $-0.04 \pm 0.02$ Si & 7.55 & $+0.14 \pm 0.01$ & $+0.37 \pm 0.03$ & $+0.25 \pm 0.02$ & $+0.18 \pm 0.02$ & $+0.29 \pm 0.02$ & $+0.02 \pm 0.03$ S & 7.21 & $+0.14 \pm 0.04$ & $+0.34 \pm 0.03$ & $+0.06 \pm 0.02$ & $+0.00 \pm 0.08$ & $+0.03 \pm 0.02$ & $+0.13 \pm 0.08$ Ca & 6.36 & $+0.12 \pm 0.03$ & $+0.26 \pm 0.03$ & $+0.18 \pm 0.03$ & $+0.22 \pm 0.03$ & $+0.28 \pm 0.03$ & $+0.05 \pm 0.04$ Sc & 3.10 & $+0.23 \pm 0.04$ & $+0.47 \pm 0.03$ & $+0.27 \pm 0.05$ & $+0.22 \pm 0.09$ & $+0.35 \pm 0.07$ & $+0.08 \pm 0.04$ Ti I & 4.99 & $+0.07 \pm 0.05$ & $+0.31 \pm 0.04$ & $+0.20 \pm 0.04$ & $+0.22 \pm 0.05$ & $+0.27 \pm 0.04$ & $+0.06 \pm 0.06$ Ti II & 4.99 & $+0.19 \pm 0.04$ & $+0.36 \pm 0.04$ & $+0.25 \pm 0.06$ & & $+0.33 \pm 0.04$ & $+0.02 \pm 0.04$ Ti(avg) & 4.99 & $+0.14 \pm 0.03$ & $+0.34 \pm 0.03$ & $+0.22 \pm 0.03$ & $+0.22 \pm 0.05$ & $+0.30 \pm 0.03$ & $+0.03 \pm 0.03$ Cr & 5.67 & $+0.11 \pm 0.06$ & $+0.27 \pm 0.08$ & $+0.25 \pm 0.07$ & & $+0.24 \pm 0.05$ & Fe & 7.47 & $+0.16 \pm 0.03$ & $+0.35 \pm 0.03$ & $+0.26 \pm 0.03$ & $+0.24 \pm 0.03$ & $+0.30 \pm 0.03$ & $+0.05 \pm 0.03$ Ni & 6.25 & $+0.13 \pm 0.07$ & $+0.34 \pm 0.08$ & $+0.22 \pm 0.08$ & $+0.20 \pm 0.08$ & $+0.25 \pm 0.06$ & $+0.02 \pm 0.07$ Eu & 0.60 & $+0.14 \pm 0.06$ & $+0.32 \pm 0.07$ & $+0.11 \pm 0.06$ & $+0.08 \pm 0.06$ & $+0.29 \pm 0.05$ & $+0.13 \pm 0.07$
[lccccccc]{} Li & $<-0.37 \pm 0.07$ & $<-0.35 \pm 0.07$ & $<-0.60 \pm 0.07$ & $+1.59 \pm
0.05$ & $+0.24 \pm 0.06$ & $<-0.20 \pm 0.07$ & $<+0.22 \pm 0.08$ C & $+0.25 \pm 0.05$ & $+0.14 \pm 0.05$ & $+0.33 \pm 0.07$ & $-0.18 \pm
0.08$ & $+0.12 \pm 0.04$ & $+0.22 \pm 0.05$ & $-0.16 \pm 0.05$N & $+0.34 \pm 0.09$ & $+0.01 \pm 0.08$ & $+0.56 \pm 0.12$ & $-0.24 \pm
0.07$ & $+0.22 \pm 0.05$ & $-0.08 \pm 0.06$ & $-0.06 \pm 0.05$O(synth) & $+0.31 \pm 0.07$ & $-0.03 \pm 0.06$ & & $-0.07 \pm 0.06$ & $+0.23 \pm 0.06$ & $+0.13 \pm 0.06$ & $+0.00 \pm 0.06$O(trip) & $+0.30 \pm 0.05$ & $+0.24 \pm 0.05$ & $+0.09 \pm 0.09$ & $+0.11 \pm
0.05$ & $+0.15 \pm 0.04$ & $+0.22 \pm 0.04$ & $+0.03 \pm 0.03$O(corr) & $+0.26 \pm 0.05$ & $+0.23 \pm 0.05$ & $+0.20 \pm 0.09$ & $+0.05 \pm
0.05$ & $+0.13 \pm 0.04$ & $+0.22 \pm 0.04$ & $+0.03 \pm 0.03$O(avg) & $+0.28 \pm 0.04$ & $+0.12 \pm 0.04$ & $+0.20 \pm 0.09$ & $+0.00 \pm
0.04$ & $+0.16 \pm 0.03$ & $+0.19 \pm 0.03$ & $+0.02 \pm 0.03$C/O & $-0.03 \pm 0.03$ & $+0.02 \pm 0.04$ & $+0.13 \pm 0.02$ & $-0.18 \pm
0.09$ & $-0.04 \pm 0.03$ & $+0.03 \pm 0.04$ & $-0.18 \pm 0.05$Na & $+0.33 \pm 0.06$ & $+0.02 \pm 0.04$ & $-0.22 \pm 0.07$ & $-0.08 \pm
0.03$ & $+0.18 \pm 0.03$ & $+0.20 \pm 0.04$ & $-0.07 \pm 0.02$Mg(57) & $+0.22 \pm 0.08$ & $+0.09 \pm 0.08$ & $-0.23 \pm 0.10$ & $-0.03 \pm
0.06$ & $+0.12 \pm 0.05$ & $+0.18 \pm 0.06$ & $-0.04 \pm 0.05$Mg(synth) & $+0.28 \pm 0.06$ & $+0.23 \pm 0.06$ & $-0.08 \pm 0.06$ & $+0.02
\pm 0.05$ & $+0.22 \pm 0.05$ & $+0.38 \pm 0.06$ & $-0.05 \pm 0.06$Mg(avg) & $+0.26 \pm 0.05$ & $+0.18 \pm 0.05$ & $-0.12 \pm 0.05$ & $+0.00 \pm
0.04$ & $+0.17 \pm 0.04$ & $+0.28 \pm 0.04$ & $-0.04 \pm 0.04$Al(78) & $+0.23 \pm 0.04$ & $+0.09 \pm 0.03$ & $-0.23 \pm 0.04$ & $-0.10 \pm
0.02$ & $+0.12 \pm 0.02$ & $+0.20 \pm 0.02$ & $-0.08 \pm 0.02$Al(synth) & $+0.32 \pm 0.05$ & $+0.21 \pm 0.04$ & $+0.07 \pm 0.05$ & $-0.04
\pm 0.04$ & $+0.17 \pm 0.04$ & $+0.34 \pm 0.04$ & $+0.07 \pm 0.05$Al(avg) & $+0.27 \pm 0.03$ & $+0.13 \pm 0.02$ & $-0.11 \pm 0.03$ & $-0.09 \pm
0.02$ & $+0.13 \pm 0.02$ & $+0.23 \pm 0.02$ & $-0.06 \pm 0.02$Si & $+0.31 \pm 0.01$ & $+0.14 \pm 0.01$ & $-0.01 \pm 0.04$ & $+0.04 \pm
0.03$ & $+0.21 \pm 0.02$ & $+0.33 \pm 0.02$ & $+0.06 \pm 0.04$S & $+0.29 \pm 0.05$ & $+0.16 \pm 0.07$ & & $-0.17 \pm 0.07$ & $+0.13 \pm 0.04$ & $+0.36 \pm 0.07$ & $+0.01 \pm 0.03$Ca & $+0.26 \pm 0.04$ & $+0.12 \pm 0.04$ & $-0.06 \pm 0.07$ & $+0.19 \pm
0.12$ & $+0.20 \pm 0.03$ & $+0.25 \pm 0.03$ & $+0.00 \pm 0.02$Sc & $+0.43 \pm 0.05$ & $+0.32 \pm 0.05$ & $+0.02 \pm 0.07$ & $+0.09 \pm
0.07$ & $+0.32 \pm 0.02$ & $+0.31 \pm 0.18$ & $+0.08 \pm 0.03$Ti I & $+0.29 \pm 0.06$ & $+0.19 \pm 0.05$ & $+0.01 \pm 0.09$ & $-0.03 \pm
0.05$ & $+0.16 \pm 0.04$ & $+0.31 \pm 0.06$ & $-0.03 \pm 0.03$Ti II & $+0.30 \pm 0.06$ & $+0.24 \pm 0.05$ & $-0.11 \pm 0.06$ & $+0.11 \pm
0.07$ & & $+0.27 \pm 0.08$ & $+0.05 \pm 0.04$Ti(avg) & $+0.30 \pm 0.04$ & $+0.22 \pm 0.04$ & $-0.07 \pm 0.05$ & $+0.02 \pm
0.04$ & $+0.16 \pm 0.04$ & $+0.30 \pm 0.05$ & $+0.00 \pm 0.02$Cr & $+0.24 \pm 0.08$ & & $-0.12 \pm 0.10$ & $+0.05 \pm 0.06$ & $+0.15 \pm 0.05$ & $+0.31 \pm 0.06$ & $+0.03 \pm 0.05$Fe & $+0.32 \pm 0.04$ & $+0.10 \pm 0.03$ & $+0.05 \pm 0.03$ & $+0.04 \pm
0.03$ & $+0.21 \pm 0.03$ & $+0.36 \pm 0.03$ & $+0.02 \pm 0.03$Ni & $+0.33 \pm 0.09$ & $+0.14 \pm 0.08$ & $-0.19 \pm 0.09$ & $+0.01 \pm
0.07$ & $+0.20 \pm 0.06$ & $+0.33 \pm 0.07$ & $-0.02 \pm 0.06$Eu & $+0.13 \pm 0.08$ & $+0.22 \pm 0.08$ & $+0.18 \pm 0.10$ & $-0.03 \pm
0.06$ & $+0.13 \pm 0.04$ & $+0.34 \pm 0.05$ & $-0.06 \pm 0.04$
[lcccc]{} Li & $+1.28 \pm 0.05$ & $+1.32 \pm 0.05$ & $+1.23 \pm 0.05$ & $+1.28 \pm 0.03$\
C & $+0.01 \pm 0.05$ & $-0.04 \pm 0.07$ & $+0.02 \pm 0.11$ & $+0.00 \pm 0.04$\
O(trip) & $+0.36 \pm 0.07$ & $+0.22 \pm 0.05$ & $+0.20 \pm 0.06$ & $+0.25 \pm 0.03$\
O(corr) & $+0.27 \pm 0.07$ & $+0.13 \pm 0.05$ & $+0.11 \pm 0.06$ & $+0.16 \pm 0.03$\
C/O & $-0.26 \pm 0.07$ & $-0.17 \pm 0.05$ & $-0.09 \pm 0.11$ & $-0.19 \pm 0.04$\
Na & $+0.04 \pm 0.04$ & $+0.17 \pm 0.07$ & $+0.16 \pm 0.04$ & $+0.11 \pm 0.03$\
Mg & $-0.03 \pm 0.09$ & $+0.10 \pm 0.08$ & $+0.11 \pm 0.08$ & $+0.07 \pm 0.05$\
Al(78) & $+0.04 \pm 0.03$ & $+0.09 \pm 0.03$ & $+0.09 \pm 0.03$ & $+0.07 \pm 0.02$\
Al(synth) & $+0.04 \pm 0.05$ & $+0.02 \pm 0.05$ & $+0.05 \pm 0.05$ & $+0.04 \pm 0.03$\
Al(avg) & $+0.04 \pm 0.03$ & $+0.07 \pm 0.03$ & $+0.08 \pm 0.03$ & $+0.06 \pm 0.02$\
Si & $+0.23 \pm 0.03$ & $+0.23 \pm 0.04$ & $+0.22 \pm 0.03$ & $+0.23 \pm 0.02$\
S & $+0.16 \pm 0.07$ & $+0.05 \pm 0.04$ & $+0.10 \pm 0.11$ & $+0.08 \pm 0.03$\
Ca & $+0.20 \pm 0.05$ & $+0.20 \pm 0.05$ & $+0.15 \pm 0.04$ & $+0.18 \pm 0.03$\
Sc & $+0.15 \pm 0.05$ & $+0.22 \pm 0.09$ & $+0.20 \pm 0.08$ & $+0.17 \pm 0.04$\
Ti I & $+0.13 \pm 0.08$ & $+0.18 \pm 0.07$ & $+0.22 \pm 0.07$ & $+0.18 \pm 0.04$\
Ti II & $+0.31 \pm 0.17$ & $-0.03 \pm 0.18$ & $+0.24 \pm 0.12$ & $+0.20 \pm 0.09$\
Ti(avg) & $+0.16 \pm 0.07$ & $+0.15 \pm 0.07$ & $+0.23 \pm 0.06$ & $+0.19 \pm 0.04$\
Cr & $+0.14 \pm 0.09$ & $+0.21 \pm 0.08$ & $+0.23 \pm 0.07$ & $+0.20 \pm 0.05$\
Fe & $+0.16 \pm 0.05$ & $+0.19 \pm 0.05$ & $+0.21 \pm 0.05$ & $+0.19 \pm 0.03$\
Ni & $+0.15 \pm 0.12$ & $+0.12 \pm 0.11$ & $+0.11 \pm 0.10$ & $+0.12 \pm 0.06$\
Eu & $+0.17 \pm 0.08$ & & $+0.37 \pm 0.07$ & $+0.28 \pm 0.05$
[lcccc]{} Li & $+1.05 \pm 0.05$ & $+0.94 \pm 0.05$ & $+0.98 \pm 0.05$ & $+0.99 \pm 0.03$\
O(trip) & $+0.09 \pm 0.10$ & $+0.18 \pm 0.06$ & $+0.06 \pm 0.07$ & $+0.12 \pm 0.04$\
O(corr) & $+0.11 \pm 0.10$ & $+0.20 \pm 0.06$ & $+0.08 \pm 0.07$ & $+0.14 \pm 0.04$\
Na & $+0.14 \pm 0.04$ & $+0.05 \pm 0.03$ & $+0.09 \pm 0.04$ & $+0.08 \pm 0.02$\
Mg & $+0.07 \pm 0.09$ & $+0.11 \pm 0.07$ & $+0.09 \pm 0.09$ & $+0.09 \pm 0.05$\
Al(78) & $+0.09 \pm 0.04$ & $+0.00 \pm 0.04$ & $+0.13 \pm 0.05$ & $+0.07 \pm 0.02$\
Al(synth) & $+0.12 \pm 0.05$ & $+0.01 \pm 0.05$ & $+0.06 \pm 0.05$ & $+0.06 \pm 0.03$\
Al(avg) & $+0.10 \pm 0.03$ & $+0.00 \pm 0.03$ & $+0.15 \pm 0.04$ & $+0.07 \pm 0.02$\
Si & $+0.18 \pm 0.02$ & $+0.14 \pm 0.02$ & $+0.15 \pm 0.03$ & $+0.16 \pm 0.01$\
S & $+0.28 \pm 0.09$ & $+0.27 \pm 0.07$ & $+0.17 \pm 0.08$ & $+0.24 \pm 0.05$\
Ca & $+0.20 \pm 0.05$ & $+0.17 \pm 0.04$ & $+0.22 \pm 0.07$ & $+0.19 \pm 0.03$\
Sc & $+0.25 \pm 0.06$ & $+0.19 \pm 0.04$ & $+0.02 \pm 0.06$ & $+0.16 \pm 0.03$\
Ti I & $+0.24 \pm 0.08$ & $+0.11 \pm 0.06$ & $+0.11 \pm 0.07$ & $+0.14 \pm 0.04$\
Ti II & & $+0.16 \pm 0.05$ & $+0.01 \pm 0.05$ & $+0.09 \pm 0.04$\
Ti(avg) & $+0.24 \pm 0.08$ & $+0.14 \pm 0.04$ & $+0.04 \pm 0.04$ & $+0.11 \pm 0.03$\
Cr & $+0.16 \pm 0.09$ & $+0.16 \pm 0.07$ & $+0.14 \pm 0.09$ & $+0.15 \pm 0.05$\
Fe & $+0.19 \pm 0.05$ & $+0.14 \pm 0.04$ & $+0.16 \pm 0.05$ & $+0.16 \pm 0.03$\
Ni & $+0.22 \pm 0.10$ & $+0.11 \pm 0.09$ & $-0.03 \pm 0.10$ & $+0.10 \pm 0.06$
[lccccccccc]{} C & $+0.29 \pm 0.05$ & $+0.08 \pm 0.05$ & $-0.10 \pm 0.05$ & $+0.24 \pm
0.06$ & $+0.31 \pm 0.07$ & $+0.20 \pm 0.04$ & $+0.24 \pm 0.05$ & & Na & $+0.33 \pm 0.04$ & $+0.03 \pm 0.04$ & $-0.39 \pm 0.05$ & $+0.34 \pm
0.06$ & $+0.20 \pm 0.06$ & $+0.24 \pm 0.04$ & $+0.28 \pm 0.05$ & & $+0.37 \pm 0.07$Mg & $+0.02 \pm 0.04$ & $+0.07 \pm 0.05$ & $-0.25 \pm 0.05$ & $+0.28 \pm
0.07$ & & $+0.20 \pm 0.04$ & $+0.13 \pm 0.05$ & & Si & $+0.34 \pm 0.01$ & $+0.13 \pm 0.03$ & $-0.22 \pm 0.02$ & $+0.35 \pm
0.03$ & $+0.24 \pm 0.01$ & $+0.27 \pm 0.02$ & $+0.28 \pm 0.01$ & $+0.40 \pm
0.06$ & $+0.36 \pm 0.02$S & $+0.26 \pm 0.10$ & $+0.06 \pm 0.05$ & $-0.16 \pm 0.06$ & $+0.38 \pm
0.15$ & & $+0.11 \pm 0.03$ & $+0.13 \pm 0.08$ & & Ca & $+0.33 \pm 0.03$ & $+0.14 \pm 0.02$ & $-0.24 \pm 0.03$ & $+0.31 \pm
0.04$ & $+0.23 \pm 0.05$ & $+0.27 \pm 0.02$ & $+0.27 \pm 0.03$ & $+0.38 \pm
0.09$ & $+0.32 \pm 0.07$Ti I & $+0.31 \pm 0.05$ & $+0.14 \pm 0.05$ & $-0.20 \pm 0.06$ & $+0.32
\pm 0.07$ & $+0.22 \pm 0.07$ & $+0.28 \pm 0.03$ & $+0.25 \pm 0.04$ & $+0.34
\pm 0.13$ & $+0.24 \pm 0.10$Ti II & $+0.37 \pm 0.04$ & $+0.20 \pm 0.03$ & $-0.20 \pm 0.04$ & $+0.35 \pm
0.06$ & & $+0.25 \pm 0.05$ & $+0.29 \pm 0.04$ & & Ti(avg) & $+0.35 \pm 0.03$ & $+0.18 \pm 0.03$ & $-0.20 \pm 0.03$ & $+0.34 \pm 0.05$ & $+0.22 \pm 0.07$ & $+0.27 \pm 0.03$ & $+0.27 \pm 0.03$ & $+0.34 \pm 0.13$ & $+0.24 \pm 0.10$Cr & $+0.30 \pm 0.04$ & $+0.10 \pm 0.06$ & $-0.44 \pm 0.05$ & $+0.34 \pm
0.07$ & $+0.17 \pm 0.08$ & & $+0.29 \pm 0.05$ & & $+0.39 \pm
0.10$Fe & $+0.35 \pm 0.02$ & $+0.15 \pm 0.02$ & $-0.41 \pm 0.03$ & $+0.37 \pm
0.04$ & $+0.21 \pm 0.04$ & $+0.28 \pm 0.02$ & $+0.31 \pm 0.03$ & $+0.36 \pm
0.05$ & $+0.33 \pm 0.05$
[lccccc]{} Mg & $+0.16 \pm 0.06$ & $+0.09 \pm 0.06$ & $+0.25 \pm 0.05$ & $+0.44 \pm 0.06$ & \
O(synth) & $-0.01 \pm 0.07$ & $+0.04 \pm 0.07$ & $+0.23 \pm 0.06$ & $+0.10 \pm 0.07$ & \
EW(7771) & 81.4 & 75.4 & 120.0 & 153.0 & \
EW(7774) & 70.1 & 63.3 & 105.6 & 135.3 & 95.3\
EW(7775) & 56.9 & 50.7 & 87.4 & 113.3 & 79.2\
O(trip) & $+0.10 \pm 0.06$ & $+0.06 \pm 0.06$ & $+0.22 \pm 0.05$ & $+0.42 \pm 0.06$ & $+0.14 \pm 0.05$\
O(corr) & $+0.10 \pm 0.06$ & $+0.05 \pm 0.06$ & $+0.12 \pm 0.05$ & $+0.25 \pm 0.06$ & $+0.06 \pm 0.05$\
O(avg) & $+0.05 \pm 0.05$ & $+0.05 \pm 0.05$ & $+0.17 \pm 0.04$ & $+0.19 \pm 0.05$ & $+0.06 \pm 0.05$\
C/O & $+0.09 \pm 0.05$ & $+0.10 \pm 0.05$ & $-0.11 \pm 0.05$ & $-0.13 \pm 0.05$ & $+0.10 \pm 0.04$
[lccccccc]{} HR810(avg) & $4.22 \pm 0.02$ & $1 \pm 1$ & $1.19 \pm 0.01$ & $4.39 \pm 0.01$ & $-21.2$,$-10.6$,$-1.6$ & $-4.65$ & 1.6\
HD1237(avg) & $5.36 \pm 0.02$ & & $0.92 \pm 0.03$ & $4.48 \pm 0.02$ & $-23.4$,$-10.0$,$+8.8$ & $-4.44$ & 0.6\
HD10697 & $3.73 \pm 0.06$ & $7.8 \pm 0.5$ & $1.10 \pm 0.01$ & $3.95 \pm 0.04$ & $+46.9$,$-22.4$,$+23.2$ & $-5.02$ & 6.0\
HD12661(avg) & $4.58 \pm 0.07$ & $8.0 \pm 1.0$ & $1.01 \pm 0.02$ & $4.31 \pm 0.02$ & $+61.6$,$-23.4$,$+3.5$ & $-5.12$ & 8.4\
HD16141 & $4.05 \pm 0.11$ & $8.5 \pm 0.5$ & $1.05 \pm 0.03$ & $4.15 \pm 0.05$ & $+95.6$,$-35.4$,$+9.2$ & $-5.05$ & 6.7\
HD37124 & $5.07 \pm 0.08$ & see text & see text & see text & $+31.7$,$-41.2$,$-38.5$ & $-4.90$ & 3.9\
HD38529 & $2.80 \pm 0.08$ & $3 \pm 0.5$ & $1.49 \pm 0.07$ & $3.70 \pm 0.05$ & $-2.0$,$-18.6$,$-28.0$ & $-4.89$ & 3.7\
HD46375 & $5.29 \pm 0.08$ & see text & see text & see text & $+18.5$,$-14.3$,$+14.9$ & $-4.94$ & 4.5\
HD52265(avg) & $4.05 \pm 0.05$ & $2.1 \pm 0.3$ & $1.22 \pm 0.02$ & $4.34 \pm 0.02$ & $-42.5$,$-14.8$,$-3.2$ & $-4.91$ & 4.0\
HD75332 & $3.93 \pm 0.05$ & $1 \pm 1$ & $1.27 \pm 0.01$ & $4.34 \pm 0.04$ & $+1.5$,$-5.6$,$+1.0$ & $-4.46$ & 0.7\
HD89744 & $2.79 \pm 0.06$ & $1.8 \pm 0.1$ & $1.55 \pm 0.03$ & $4.00 \pm 0.04$ & $-1.2$,$-23.6$,$-7.3$ & $-5.12$ & 8.4\
HD92788 & $4.76 \pm 0.07$ & $4.2^{+1.5}_{-2.0}$ & $1.05 \pm 0.02$ & $4.42 \pm 0.04$ & $+26.0$,$-16.6$,$-14.0$ & $-5.04$ & 6.4\
HD130322 & $5.67 \pm 0.10$ & & $\sim 0.89$ & $\sim 4.55$ & $+0.3$,$-20.0$,$-5.3$ & $-4.39$ & 0.3\
HD134987 & $4.42 \pm 0.06$ & $9^{\rm +1}_{\rm -3.5}$ & $1.02^{\rm +0.07}_{\rm -0.03}$ & $4.24 \pm 0.05$ & $-11.1$,$-34.0$,$+26.4$ & $-5.01$ & 5.8\
HD168443 & $4.03 \pm 0.07$ & $10.5 \pm 1.5$ & $1.01 \pm 0.02$ & $4.05 \pm 0.04$ & $-20.0$,$-51.7$,$-0.7$ & $-5.08$ & 7.4\
HD177830 & $3.33 \pm 0.10$ & $11^{\rm +4}_{\rm -3.5}$ & $1.03 \pm 0.11$ & $3.43 \pm 0.13$ & $-13.4$,$-64.7$,$-1.2$ & $-5.28$ & 13.5\
HD192263 & $6.30 \pm 0.05$ & & $\sim 0.80$ & $\sim 4.58$ & $-6.0$,$+16.9$,$+25.6$ & $-4.37$ & 0.3\
HD209458 & $4.29 \pm 0.10$ & $3^{\rm +1}_{\rm -1}$ & $1.12 \pm 0.02$ & $4.36 \pm 0.05$ & $+4.4$,$-9.4$,$+6.4$ & $-4.93$ & 4.3\
HD217014 & $4.54 \pm 0.04$ & $5.5 \pm 0.5$ & $1.07 \pm 0.01$ & $4.33 \pm 0.02$ & $-5.2$,$-22.6$,$+20.9$ & $-5.07$ & 7.1\
HD217107 & $4.70 \pm 0.03$ & $12 \pm 1.5$ & $0.98 \pm 0.01$ & $4.24 \pm 0.02$ & $+8.6$,$-2.4$,$+16.2$ & $-5.00$ & 5.6\
HD222582 & $4.59 \pm 0.10$ & $11 \pm 1$ & $0.95 \pm 0.01$ & $4.30 \pm 0.02$ & $+46.7$,$+5.6$,$-5.4$ & $-5.00$ & 5.6
[lc]{} C/Fe & $-4.21$\
O/Fe & $-0.79$\
Na/Fe & $-0.56$\
Mg/Fe & $-0.05$\
Al/Fe & $+0.00$\
Si/Fe & $-0.09$\
Ca/Fe & $+0.00$\
Ti/Fe & $-0.05$
[^1]: The discovery papers corresponding to these stars are: Butler et al. 2000; Fischer et al. 2000; Marcy et al. 2000; Sivan et al. 2000; Vogt et al. 2000.
[^2]: Note, the theoretical $\log g$ values are derived from theoretical stellar evolutionary isochrones at the age which agrees with the observed T$_{\rm eff}$, M$_{\rm v}$, and \[Fe/H\] values.
[^3]: Note, the youth and activity level of HD1237 make it likely that this star is variable.
[^4]: For HD168443 Gimenez lists T$_{\rm eff} = 6490$ K. This is a typo; it should have been listed as 5490 K in his paper (Alvaro Gimenez, private communication).
[^5]: Note, we retain 62 single G dwarfs (with T$_{\rm eff} > 5250$ K) not known to have planets from Favata et al.’s original sample of 91 G dwarfs. Only one star is in common between our studies, HD210277, for which the \[Fe/H\] estimates differ by only 0.02 dex.
[^6]: For additional details concerning this prediction and the announcement of a planet orbiting HD89744, see Gonzalez (2000).
[^7]: These will be referred to as ED93, FG98, and CH00, respectively.
[^8]: We are comparing the abundance results from Tomkin et al. (1997) for 51Peg, not Edvardsson et al., who had employed spectra of lesser quality for this star.
[^9]: Unfortunately, this star displays Doppler variations with an amplitude near 50 m s$^{\rm -1}$, which appear to be due to chromospheric activity (Geoff Marcy, private communication). A high level of chromospheric activity for this star is consistent with our age estimate for it.
|
---
abstract: 'In this short note we prove that, given two (not necessarily binary) rooted phylogenetic trees $T_1, T_2$ on the same set of taxa $X$, where $|X|=n$, the hybridization number of $T_1$ and $T_2$ can be computed in time $O^{*}(2^n)$ i.e. $O( 2^{n} \cdot poly(n) )$. The result also means that a Maximum Acyclic Agreement Forest (MAAF) can be computed within the same time bound.'
author:
- 'Leo van Iersel, Steven Kelk, Nela Leki[ć]{}, Leen Stougie'
bibliography:
- 'MAAFexptime.bib'
title: 'A short note on exponential-time algorithms for hybridization number'
---
Introduction
============
Let $X$ be a finite set. A *rooted phylogenetic* $X$-*tree*, henceforth abbreviated to *tree*, is a rooted tree with no vertices with indegree 1 and outdegree 1, a root with indegree 0 and outdegree at least 2, and leaves bijectively labelled by the elements of $X$. A *rooted phylogenetic network*, henceforth abbreviated to *network*, is a directed acyclic graph with no vertices with indegree 1 and outdegree 1 and leaves bijectively labelled by the elements of $X$.
A tree $T$ is *displayed* by a network $N$ if $T$ can be obtained from a subgraph of $N$ by contracting edges. Note that, when $T$ is not binary, this means that the image of $T$ inside $N$ can be more “resolved” than $T$ itself. Using $d^-(v)$ to denote the indegree of a vertex $v$, a *reticulation* is a vertex $v$ with $d^-(v)\geq 2$. The *reticulation number* of a network $N$ with vertex set $V$ is given by
$$r(N)=\sum_{v\in V : d^-(v)\geq 2}(d^-(v)-1).$$ Given two (not necessarily binary) trees $T_1$, $T_2$, the *hybridization number* problem (originally introduced in [@baroni05]) asks us to minimize $r(N)$ ranging over all networks that display $T_1$ and $T_2$.
There has been extensive work on fixed-parameter tractable (FPT) algorithms for the hybridization number problem. The fastest such algorithm currently works only on binary trees and has a running time of $O( 3.18^{r} \cdot poly(n) )$, where $r$ is the hybridization number and $n=|X|$ [@whidden2013fixed]. Given that $n$ is a trivial upper bound on the hybridization number of two trees, this immediately yields an exponential-time algorithm with running time $O^{*}( 3.18^{n} )$ for the binary case. In [@firststeps] an $O^{*}( 3^n )$ algorithm was presented (again restricted to the binary case). In [@elusiveness] an $O^{*}( 2^n )$ algorithm was implied, but this relied on the claimed equivalence between the softwired cluster model and the model described in [@bafnabansal2006], which was not formally proven. Here we explicitly describe an $O^{*}(2^n)$ algorithm that does not rely on this equivalence. This also means that a Maximum Acyclic Agreement Forest (MAAF) can be computed within the same time bound (see e.g. [@nonbinCK] for related discussions).
For further background and definitions on hybridization number and phylogenetic networks we refer the reader to recent articles such as [@terminusest]. For background and definitions on softwired clusters (which the proof below uses heavily) see [@elusiveness].
Results
=======
Let $T_1$ and $T_2$ be two (not necessarily binary) rooted phylogenetic trees on the same set of taxa $X$, where $|X|=n$. Then the hybridization number $h(T_1, T_2)$ can be computed in time $O^{*}(2^{n})$.
Let ${{\mathcal C}}= Cl(T_1) \cup Cl(T_2)$ be the union of the sets of clusters induced by the edges of the trees $T_1$ and $T_2$. It has been shown that $r({{\mathcal C}})$, the minimum reticulation number of a phylogenetic network representing all the clusters in ${{\mathcal C}}$, is exactly equal to $h(T_1, T_2)$ [@elusiveness Lemma 12] and that optimal solutions for one problem can be transformed in polynomial time into optimal solutions for the other [@terminusest]. We hence focus on the computation of $r({{\mathcal C}})$. Recall that an ST-set $S$ of a set of clusters ${{\mathcal C}}$ is a subset of $X$ such that $S$ is compatible with every cluster in ${{\mathcal C}}$, and such that all clusters in ${{\mathcal C}}|S$ are pairwise compatible, where ${{\mathcal C}}|S = \{ C \cap S : C \in {{\mathcal C}}\}$. (The non-empty ST-sets are in one-to-one correspondence with common pendant subtrees of $T_1$ and $T_2$ [@terminusest]). For $X' \subseteq X$, we write ${{\mathcal C}}\setminus X'$ to denote $\{ C \setminus X' : C \in {{\mathcal C}}\}$. An ST-set sequence of length $k$ is a sequence $S_1, S_2, \ldots, S_k$ such that each $S_i$ is an ST-set of ${{\mathcal C}}_{i-1}$, where ${{\mathcal C}}_{0} = {{\mathcal C}}$ and for $1 \leq i \leq k$, ${{\mathcal C}}_i = {{\mathcal C}}_{i-1} \setminus S_{i}$. Such a sequence is a *tree* sequence if ${{\mathcal C}}_k$ is compatible. Note that if ${{\mathcal C}}$ is compatible then this is characterized by the empty tree sequence and we say that $k=0$. The value $r({{\mathcal C}})$ is equal to the minimum possible length ranging over all ST-set tree sequences [@elusiveness Corollary 9]. Without loss of generality we can assume that the sequence is a *maximal* ST-set sequence, i.e. one where each $S_i$ is a maximal ST-set of ${{\mathcal C}}_{i-1}$. For a given set of clusters on $n$ taxa there are at most $n$ maximal ST-sets; they partition the set of taxa and they can be computed in polynomial time [@elusiveness]. Clearly, $r({{\mathcal C}}) = 0$ if ${{\mathcal C}}$ is compatible, which can be checked in polynomial time. Otherwise the above observations yield the following expression, where $ST({{\mathcal C}})$ is the set of maximal ST-sets of ${{\mathcal C}}$: $$r({{\mathcal C}}) = \min_{S \in ST({{\mathcal C}})} \bigg ( 1 + r({{\mathcal C}}\setminus S) \bigg )$$ This can be computed in time $O^{*}(2^n)$ by standard exponential-time dynamic programming. That is, compute $r({{\mathcal C}})$ by computing $r({{\mathcal C}}| X')$ for all possible $\emptyset \subset X' \subset X$, increasing the cardinality of $X'$ from small to large. Each $r({{\mathcal C}}| X')$ can then be computed by consulting at most $n$ smaller subproblems. This yields an overall running time of $O(2^n \cdot poly(n))$.
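The recursion above translates directly into a memoized computation over subsets of $X$. The following is a minimal sketch in Python; the helpers `is_compatible` and `maximal_st_sets` stand in for the polynomial-time subroutines mentioned in the proof and are assumptions of this sketch, not part of the cited papers.

```python
from functools import lru_cache

def hybridization_number(clusters, taxa, is_compatible, maximal_st_sets):
    """Compute r(C) = h(T1, T2) by the subset dynamic program sketched above.

    clusters        -- frozenset of clusters, each cluster a frozenset of taxa
    taxa            -- frozenset containing all taxa X
    is_compatible   -- placeholder: set of clusters -> bool (polynomial time)
    maximal_st_sets -- placeholder: set of clusters -> iterable of maximal
                       ST-sets (frozensets of taxa), polynomial time
    """
    def restrict(xs):
        # C | X' = { C ∩ X' : C in C }; empty intersections are irrelevant
        return frozenset(c & xs for c in clusters) - {frozenset()}

    @lru_cache(maxsize=None)
    def r(xs):
        cs = restrict(xs)
        if is_compatible(cs):
            return 0
        # remove one maximal ST-set and recurse; at most n branches per call,
        # at most 2^n distinct values of xs, hence O*(2^n) overall
        return 1 + min(r(xs - s) for s in maximal_st_sets(cs))

    return r(taxa)
```

Memoization over subsets of $X$ is exactly the look-up table mentioned in the discussion below; the bottom-up variant used in the proof (increasing $|X'|$) has the same complexity.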
Discussion
==========
A consequence of the above analysis is that, when solving hybridization number, there are at most $2^{n}$ relevant subproblems and each such subproblem can be characterized by a subset of $X$. Any algorithm that attempts to compute the hybridization number by iteratively pruning maximal common pendant subtrees (equivalently, maximal ST-sets) until the input trees are compatible can thus easily attain an $O^{*}(2^n)$ upper bound on its running time, at the expense of potentially consuming exponential space: it suffices to store the solutions to subproblems in a look-up table (i.e. a hashtable), indexed by the subset of $X$ that characterizes the subproblem.
Finally, an obvious open question that remains is whether the hybridization number of two trees can be computed in time $O^{*}( c^n )$ for any constant $c < 2$.
|
---
abstract: |
The first stage of every knowledge base question answering approach is to link entities in the input question. We investigate entity linking in the context of a question answering task and present a jointly optimized neural architecture for entity mention detection and entity disambiguation that models the surrounding context on different levels of granularity.
We use the Wikidata knowledge base and available question answering datasets to create benchmarks for entity linking on question answering data. Our approach outperforms the previous state-of-the-art system on this data, resulting in an average 8% improvement of the final score. We further demonstrate that our model delivers a strong performance across different entity categories.
author:
- Daniil Sorokin
- |
Iryna Gurevych\
Ubiquitous Knowledge Processing Lab (UKP)
- |
Research Training Group AIPHES\
Department of Computer Science, Technische Universität Darmstadt\
[[www.ukp.tu-darmstadt.de](www.ukp.tu-darmstadt.de)]{}\
bibliography:
- 'entity-linking.bib'
title: |
Mixing Context Granularities for Improved Entity Linking\
on Question Answering Data across Entity Categories
---
Introduction
============
Knowledge base question answering (QA) requires a precise modeling of the question semantics through the entities and relations available in the knowledge base (KB) in order to retrieve the correct answer. The first stage for every QA approach is entity linking (EL), that is the identification of entity mentions in the question and linking them to entities in KB. In Figure \[fig:example-question\], two entity mentions are detected and linked to the knowledge base referents. This step is crucial for QA since the correct answer must be connected via some path over KB to the entities mentioned in the question.
The state-of-the-art QA systems usually rely on off-the-shelf EL systems to extract entities from the question [@Yih2015]. Multiple EL systems are freely available and can be readily applied for question answering (e.g. DBPedia Spotlight[^1], AIDA[^2]). However, these systems have certain drawbacks in the QA setting: they are targeted at long well-formed documents, such as news texts, and are less suited for typically short and noisy question data. Other EL systems focus on noisy data (e.g. S-MART, [@Yang2015a]), but are not openly available and hence limited in their usage and application. Multiple error analyses of QA systems point to entity linking as a major external source of error [@Berant2014; @Reddy2014; @Yih2015].
(Figure \[fig:example-question\]: an example question with two entity mentions that are linked to the Wikidata entities Taylor Swift (Q462) and album (Q24951125); the answer, “Red, 1989, etc.”, is connected to these entities in the knowledge base via the relations “performer” and “instance of”.)
The QA datasets are normally collected from the web and contain very noisy and diverse data [@Berant2013], which poses a number of challenges for EL. First, many common features used in EL systems, such as capitalization, are not meaningful on noisy data. Moreover, a question is a short text snippet that does not contain broader context that is helpful for entity disambiguation. The QA data also features many entities of various categories and differs in this respect from the Twitter datasets that are often used to evaluate EL systems.
In this paper, we present an approach that tackles the challenges listed above: we perform entity mention detection and entity disambiguation jointly in a single neural model that makes the whole process end-to-end differentiable. This ensures that any token n-gram can be considered as a potential entity mention, which is important to be able to link entities of different categories, such as movie titles and organization names.
To overcome the noise in the data, we automatically learn features over a set of contexts of different granularity levels. Each level of granularity is handled by a separate component of the model. A token-level component extracts higher-level features from the whole question context, whereas a character-level component builds lower-level features for the candidate n-gram. Simultaneously, we extract features from the knowledge base context of the candidate entity: character-level features are extracted for the entity label and higher-level features are produced based on the entities surrounding the candidate entity in the knowledge graph. This information is aggregated and used to predict whether the n-gram is an entity mention and to what entity it should be linked.
**Contributions** The two main contributions of our work are:
1. We construct two datasets to evaluate EL for QA and present a set of strong baselines: the existing EL systems that were used as a building block for QA before and a model that uses manual features from the previous work on noisy data.
2. We design and implement an entity linking system that models contexts of variable granularity to detect and disambiguate entity mentions. To the best of our knowledge, we are the first to present a unified end-to-end neural model for entity linking for noisy data that operates on different context levels and does not rely on manual features. Our architecture addresses the challenges of entity linking on question answering data and outperforms state-of-the-art EL systems.
**Code and datasets** Our system can be applied on any QA dataset. The complete code as well as the scripts that produce the evaluation data can be found here: <https://github.com/UKPLab/starsem2018-entity-linking>.
Motivation and Related Work
===========================
Several benchmarks exist for EL on Wikipedia texts and news articles, such as ACE [@Bentivogli2010] and CoNLL-YAGO [@Hoffart2011a]. These datasets contain multi-sentence documents and largely cover three types of entities: Location, Person and Organization. These types are commonly recognized by named entity recognition systems, such as Stanford NER Tool [@Manning2014]. Therefore in this scenario, an EL system can solely focus on entity disambiguation.
(Figure \[fig:types-dist\]: the distribution of entity categories (fictional character, event, location, organization, professions etc., person, product, thing) in the NEEL 2014 training set and in the WebQuestions and GraphQuestions datasets.)
In the recent years, EL on Twitter data has emerged as a branch of entity linking research. In particular, EL on tweets was the central task of the NEEL shared task from 2014 to 2016 [@Hotho2016]. Tweets share some of the challenges with QA data: in both cases the input data is short and noisy. On the other hand, it significantly differs with respect to the entity types covered. The data for the NEEL shared task was annotated with 7 broad entity categories, that besides Location, Organization and Person include Fictional Characters, Events, Products (such as electronic devices or works of art) and Things (abstract objects). Figure \[fig:types-dist\] shows the distribution of entity categories in the training set from the NEEL 2014 competition. One can see on the diagram that the distribution is mainly skewed towards 3 categories: Location, Person and Organization.
Figure \[fig:types-dist\] also shows the entity categories present in two QA datasets. The distribution over the categories is more diverse in this case. The WebQuestions dataset includes the Fictional Character and Thing categories which are almost absent from the NEEL dataset. A more even distribution can be observed in the GraphQuestion dataset that features many Events, Fictional Characters and Professions. This means that a successful system for EL on question data needs to be able to recognize and to link all categories of entities. Thus, we aim to show that comprehensive modeling of different context levels will result in a better generalization and performance across various entity categories.
**Existing Solutions** The early machine learning approaches to EL focused on long well-formed documents [@Bunescu2006; @Cucerzan2007; @Han2012; @Francis-Landau2016]. These systems usually rely on an off-the-shelf named entity recognizer to extract entity mentions in the input. As a consequence, such approaches can not handle entities of types other than those that are supplied by the named entity recognizer. Named entity recognizers are normally trained to detect mentions of Locations, Organizations and Person names, whereas in the context of QA, the system also needs to cover movie titles, songs, common nouns such as ‘president’ etc.
To mitigate this, @Cucerzan2012 has introduced the idea to perform mention detection and entity linking jointly using a linear combination of manually defined features. @Luo2015a have adopted the same idea and suggested a probabilistic graphical model for the joint prediction. This is essential for linking entities in questions. For example in “*who does maggie grace play in taken?*”, it is hard to distinguish between the usage of the word ‘taken’ and the title of a movie ‘Taken’ without consulting a knowledge base.
@Sun2015a were among the first to use neural networks to embed the mention and the entity for a better prediction quality. Later, @Francis-Landau2016 have employed convolutional neural networks to extract features from the document context and mixed them with manually defined features, though they did not integrate it with mention detection. @Sil2017 continued the work in this direction recently and applied convolutional neural networks to cross-lingual EL.
The approaches that were developed for Twitter data present the most relevant work for EL on QA data. @Guo2013a have created a new dataset of around 1500 tweets and suggested a Structured SVM approach that handled mention detection and entity disambiguation together. @Chang2014 describe the winning system of the NEEL 2014 competition on EL for short texts: The system adapts a joint approach similar to @Guo2013a, but uses the MART gradient boosting algorithm instead of the SVM and extends the feature set. The current state-of-the-art system for EL on noisy data is S-MART [@Yang2015a] which extends the approach from @Chang2014 to make structured predictions. The same group has subsequently applied S-MART to extract entities for a QA system [@Yih2015].
Unfortunately, the described EL systems for short texts are not available as stand-alone tools. Consequently, the modern QA approaches mostly rely on off-the-shelf entity linkers that were designed for other domains. @Reddy2016 have employed the Freebase online API that was since deprecated. A number of question answering systems have relied on DBPedia Spotlight to extract entities [@Lopez2016; @Chen2016]. DBPedia Spotlight [@Mendes2011] uses document similarity vectors, word embeddings and manually defined features such as entity frequency. We are addressing this problem in our work by presenting an architecture specifically targeted at EL for QA data.
**The Knowledge Base** Throughout the experiments, we use the Wikidata[^3] open-domain KB [@Vrandecic2014]. Among the previous work, the common choices of a KB include Wikipedia, DBPedia and Freebase. The entities in Wikidata directly correspond to the Wikipedia articles, which enables us to work with data that was previously annotated with DBPedia. Freebase was discontinued and is no longer up-to-date. However, most entities in Wikidata have been annotated with identifiers from other knowledge sources and databases, including Freebase, which establishes a link between the two KBs.
Entity Linking Architecture {#sec:architecture}
===========================
The overall architecture of our entity linking system is depicted in Figure \[fig:system-diagram\]. From the input question $\mathbf{x}$ we extract all possible token n-grams $N$ up to a certain length as entity mention candidates (Step 1). For each n-gram $n$, we look it up in the knowledge base using a full text search over entity labels (Step 2). That ensures that we find all entities that contain the given n-gram in the label. For example for a unigram ‘obama’, we retrieve ‘Barack Obama’, ‘Michelle Obama’ etc. This step produces a set of entity disambiguation candidates $C$ for the given n-gram $n$. We sort the retrieved candidates by length and cut off after the first $1000$. That ensures that the top candidates in the list would be those that exactly match the target n-gram $n$.
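As an illustration of Steps 1 and 2, a minimal candidate-generation routine could look as follows; `label_index` is a placeholder for a full text index over Wikidata entity labels, and the maximum n-gram length is an illustrative choice rather than a value taken from this paper.

```python
def generate_candidates(tokens, label_index, max_len=4, cutoff=1000):
    """Enumerate token n-grams and retrieve entity disambiguation candidates.

    label_index.search(text) is assumed to return (entity_id, label) pairs
    whose label contains the query text (a full text search over labels).
    """
    candidates = {}
    for length in range(1, max_len + 1):
        for start in range(len(tokens) - length + 1):
            ngram = " ".join(tokens[start:start + length])
            hits = label_index.search(ngram)
            # sort by label length so exact matches of the n-gram come first,
            # then keep only the first `cutoff` candidates
            hits = sorted(hits, key=lambda pair: len(pair[1]))[:cutoff]
            candidates[(start, start + length)] = hits
    return candidates
```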
In the next step, the list of n-grams $N$ and the corresponding list of entity disambiguation candidates are sent to the entity linking model (Step 3). The model jointly performs the detection of correct mentions and the disambiguation of entities.
Variable Context Granularity Network
------------------------------------
[ ![image](global_architecture.pdf){width="0.99\linewidth"}]{}
The neural architecture (Variable Context Granularity, VCG) aggregates and mixes contexts of different granularities to perform a joint mention detection and entity disambiguation. Figure \[fig:vcg-diagram\] shows the layout of the network and its main components. The input to the model is a list of question tokens $\mathbf{x}$, a token n-gram $n$ and a list of candidate entities $C$. Then the model is a function $\mathrm{M}(\mathbf{x},n,C)$ that produces a mention detection score $p_n$ for each n-gram and a ranking score $p_c$ for each of the candidates $c \in C$: $p_n, \mathbf{p_c} = \mathrm{M}(\mathbf{x},n,C)$.
**Dilated Convolutions** To process sequential input, we use dilated convolutional networks (DCNN). @Strubell2017 have recently shown that DCNNs are faster and as effective as recurrent models on the task of named entity recognition. We define two modules: $\mathbf{DCNN}_w$ and $\mathbf{DCNN}_c$ for processing token-level and character-level input respectively. Both modules consist of a series of convolutions applied with an increasing dilation, as described in @Strubell2017. The output of the convolutions is averaged and transformed by a fully-connected layer.
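A sketch of such a module in Python/PyTorch is given below; the kernel size, the number of layers and the ReLU nonlinearity are illustrative assumptions, while the output size of 64 matches the value reported for $\mathbf{DCNN}_w$ and $\mathbf{DCNN}_c$ in Table \[table:hyper-params\].

```python
import torch.nn as nn

class DCNN(nn.Module):
    """Stacked 1-D convolutions with increasing dilation, averaged over the
    sequence and projected by a fully-connected layer (a sketch of DCNN_w/DCNN_c)."""

    def __init__(self, in_dim, hidden_dim=64, out_dim=64, n_layers=3, kernel=3):
        super().__init__()
        blocks = []
        for i in range(n_layers):
            dilation = 2 ** i                       # dilation grows: 1, 2, 4, ...
            blocks += [nn.Conv1d(in_dim if i == 0 else hidden_dim, hidden_dim,
                                 kernel, dilation=dilation,
                                 padding=dilation * (kernel - 1) // 2),
                       nn.ReLU()]
        self.convs = nn.Sequential(*blocks)
        self.project = nn.Linear(hidden_dim, out_dim)

    def forward(self, x):                           # x: (batch, seq_len, in_dim)
        h = self.convs(x.transpose(1, 2))           # convolve along the sequence
        h = h.mean(dim=2)                           # average the convolution outputs
        return self.project(h)                      # fixed-size feature vector
```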
**Context components** The *token component* corresponds to sentence-level features normally defined for EL and encodes the list of question tokens $\mathbf{x}$ into a fixed size vector. It maps the tokens in $\mathbf{x}$ to $d_w$-dimensional pre-trained word embeddings, using a matrix $\mathbf{W} \in \mathbb{R}^{|V_w| \times d_w}$, where $|V_w|$ is the size of the vocabulary. We use 50-dimensional GloVe embeddings pre-trained on a 6 billion tokens corpus [@Pennington2014]. The word embeddings are concatenated with $d_p$-dimensional position embeddings $\mathbf{P_w} \in \mathbb{R}^{3 \times d_p}$ that are used to denote the tokens that are part of the target n-gram. The concatenated embeddings are processed by $\mathbf{DCNN}_w$ to get a vector $\mathbf{o_s}$.
*The character component* processes the target token n-gram $n$ on the basis of individual characters. We add one token on the left and on the right to the target mention and map the string of characters to $d_z$-dimensional character embeddings, $\mathbf{Z} \in \mathbb{R}^{|V_z| \times d_z}$. We concatenate the character embeddings with $d_p$-dimensional position embeddings $\mathbf{P_z} \in \mathbb{R}^{|x| \times d_p}$ and process them with $\mathbf{DCNN}_c$ to get a feature vector $\mathbf{o_n}$.
We use *the character component* with the same learned parameters to encode the label of a candidate entity from the KB as a vector $\mathbf{o_l}$. The parameter sharing between mention encoding and entity label encoding ensures that the representation of a mention is similar to the entity label.
The KB structure is the highest context level included in the model. *The knowledge base structure component* models the entities and relations that are connected to the candidate entity $c$. First, we map a list of relations $\mathbf{r}$ of the candidate entity to $d_r$-dimensional pre-trained relations embeddings, using a matrix $\mathbf{R} \in \mathbb{R}^{|V_r| \times d_r}$, where $|V_r|$ is the number of relation types in the KB. We transform the relations embeddings with a single fully-connected layer $f_r$ and then apply a max pooling operation to get a single relation vector $\mathbf{o_r}$ per entity. Similarly, we map a list of entities that are immediately connected to the candidate entity $\mathbf{e}$ to $d_e$-dimensional pre-trained entity embeddings, using a matrix $\mathbf{E} \in \mathbb{R}^{|V_e| \times d_e}$, where $|V_e|$ is the number of entities in the KB. The entity embeddings are transformed by a fully-connected layer $f_e$ and then also pooled to produce the output $\mathbf{o_e}$. The embedding of the candidate entity itself is also transformed with $f_e$ and is stored as $\mathbf{o_d}$. To train the knowledge base embeddings, we use the TransE algorithm [@Bordes2013].
Finally, *the knowledge base lexical component* takes the labels of the relations in $\mathbf{r}$ to compute lexical relation embeddings. For each $r \in \mathbf{r}$, we tokenize the label and map the tokens $\mathbf{x_r}$ to word embeddings, using the word embedding matrix $\mathbf{W}$. To get a single lexical embedding per relation, we apply max pooling and transform the output with a fully-connected layer $f_{rl}$. The lexical relation embeddings for the candidate entity are pooled into the vector $\mathbf{o_{rl}}$.
**Context Aggregation** The different levels of context are aggregated and are transformed by a sequence of fully-connected layers into a final vector $\mathbf{o_c}$ for the n-gram $n$ and the candidate entity $c$. The vectors for each candidate are aggregated into a matrix $O = [\mathbf{o_c}| c \in C]$. We apply element-wise max pooling on $O$ to get a single summary vector $\mathfrak{s}$ for all entity candidates for $n$.
To get the ranking score $p_c$ for each entity candidate $c$, we apply a single fully-connected layer $g_c$ on the concatenation of $\mathbf{o_c}$ and the summary vector $\mathfrak{s} $: $p_c = g_c(\mathbf{o_c} \| \mathfrak{s} )$. For the mention detection score for the n-gram, we separately concatenate the vectors for the token context $\mathbf{o_s}$ and the character context $\mathbf{o_n}$ and transform them with an array of fully-connected layers into a vector $\mathbf{o_t}$. We concatenate $\mathbf{o_t}$ with the summary vector $\mathfrak{s}$ and apply another fully-connected layer to get the mention detection score $p_n = \sigma(g_n(\mathbf{o_t} \| \mathfrak{s}))$.
Global entity assignment
------------------------
The first step in our system is extracting all possible overlapping n-grams from the input texts. We assume that each span in the input text can only refer to a single entity and therefore resolve overlaps by computing a global assignment using the model scores for each n-gram (Step 4 in Figure \[fig:system-diagram\]).
If the mention detection score $p_n$ is above the $0.5$-threshold, the n-gram is predicted to be a correct entity mention and the ranking scores $\mathbf{p_c}$ are used to disambiguate it to a single entity candidate. N-grams that have $p_n$ lower than the threshold are filtered out.
We follow @Guo2013b in computing the global assignment and hence arrange all n-grams selected as mentions into non-overlapping combinations and use the individual scores $p_n$ to compute the probability of each combination. The combination with the highest probability is selected as the final set of entity mentions. We have observed in practice an effect similar to the one described by @Strubell2017, namely that DCNNs are able to capture dependencies between different entity mentions in the same context and do not tend to produce overlapping mentions.
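A brute-force version of this assignment step can be sketched as follows; scoring a combination by the product of $p_n$ for the included mentions and $1-p_n$ for the excluded ones is one plausible reading of “the probability of each combination”, since the exact formula is not spelled out above.

```python
from itertools import combinations

def overlaps(a, b):
    # spans are half-open token intervals (start, end)
    return a[0] < b[1] and b[0] < a[1]

def global_assignment(scores, threshold=0.5):
    """Pick the most probable non-overlapping set of mentions.

    scores -- dict mapping a span (start, end) to its mention score p_n
    """
    spans = [s for s, p in scores.items() if p > threshold]
    best, best_score = [], -1.0
    for k in range(len(spans) + 1):
        for combo in combinations(spans, k):
            if any(overlaps(a, b) for a, b in combinations(combo, 2)):
                continue
            prob = 1.0
            for s in spans:
                prob *= scores[s] if s in combo else 1.0 - scores[s]
            if prob > best_score:
                best, best_score = list(combo), prob
    return best
```

In practice the number of above-threshold spans per question is small, so this enumeration is cheap.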
Composite Loss Function {#sec:loss}
-----------------------
Our model jointly computes two scores for each n-gram: the mention detection score $p_n$ and the disambiguation score $p_c$. We optimize the parameters of the whole model jointly and use a loss function that combines penalties for both scores for all n-grams in the input question: $$\begin{gathered}
\mathcal{L} = \sum_{n\in N}\sum_{c\in C_n}\mathcal{M}(t_n, p_n) + t_n\mathcal{D}(t_c, p_c),\end{gathered}$$ where $t_n$ is the target for mention detection and is either $0$ or $1$, $t_c$ is the target for disambiguation and ranges from $0$ to the number of candidates $|C|$.
For the mention detection loss $\mathcal{M}$, we include a weighting parameter $\alpha$ for the negative class as the majority of the instances in the data are negative: $$\begin{gathered}
\mathcal{M}(t_n,p_n) = - t_n\log p_n - \alpha(1-t_n)\log(1-p_n)\end{gathered}$$
The disambiguation detection loss $\mathcal{D}$ is a maximum margin loss: $$\mathcal{D}(t_c, p_c) = \frac{\sum_{i=0}^{|C|} \max(0, (m - p_c[t_c] + p_c[i]))}{|C|},$$ where $m$ is the margin value. We set $m=0.5$, whereas the $\alpha$ weight is optimized with the other hyper-parameters.
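A direct transcription of this objective in Python/PyTorch reads as follows; the batching and the reduction over n-grams are simplified, and the inner sum over candidates in the first equation is collapsed, since it only rescales the per-n-gram term. The default $\alpha = 0.5$ is the value reported in the hyper-parameter table.

```python
import torch

def mention_loss(p_n, t_n, alpha=0.5):
    # weighted binary cross-entropy; alpha weights the negative class
    return -(t_n * torch.log(p_n) + alpha * (1 - t_n) * torch.log(1 - p_n))

def disambiguation_loss(p_c, t_c, margin=0.5):
    # max-margin ranking loss over the candidate scores p_c with gold index t_c
    return torch.clamp(margin - p_c[t_c] + p_c, min=0.0).mean()

def composite_loss(instances, alpha=0.5):
    """instances: iterable of (p_n, t_n, p_c, t_c) tuples, one per n-gram."""
    total = torch.tensor(0.0)
    for p_n, t_n, p_c, t_c in instances:
        total = total + mention_loss(p_n, t_n, alpha) \
                      + t_n * disambiguation_loss(p_c, t_c)
    return total
```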
Architecture comparison
-----------------------
Our model architecture follows some of the ideas presented in @Francis-Landau2016: they suggest computing a similarity score between an entity and the context for different context granularities. @Francis-Landau2016 experiment on entity linking for Wikipedia and news articles and consider the word-level and document-level contexts for entity disambiguation. As described above, we also incorporate different context granularities with a number of key differences:
1. we operate on sentence level, word level and character level, thus including a more fine-grained range of contexts;
2. the knowledge base contexts that @Francis-Landau2016 use are the Wikipedia title and the article texts — we, on the other hand, employ the structure of the knowledge base and encode relations and related entities;
3. @Francis-Landau2016 separately compute similarities for each type of context, whereas we mix them in a single end-to-end architecture;
4. we do not rely on manually defined features in our model.
Datasets
========
  Dataset               \#Questions   \#Entities
  --------------------- ------------- ------------
  WebQSP Train          3098          3794
  WebQSP Test           1639          2002
  GraphQuestions Test   2608          4680
We compile two new datasets for entity linking on questions that we derive from publicly available question answering data: WebQSP [@Yih2016] and GraphQuestions [@Su2016].
WebQSP contains questions that were originally collected for the WebQuestions dataset from web search logs [@Berant2013]. They were manually annotated with SPARQL queries that can be executed to retrieve the correct answer to each question. Additionally, the annotators have also selected the main entity in the question that is central to finding the answer. The annotations and the query use identifiers from the Freebase knowledge base.
We extract all entities that are mentioned in the question from the SPARQL query. For the main entity, we also store the correct span in the text, as annotated in the dataset. In order to be able to use Wikidata in our experiments, we translate the Freebase identifiers to Wikidata IDs.
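One simple way to perform this translation is to query the Wikidata SPARQL endpoint for the “Freebase ID” property (P646); the snippet below is a sketch of that approach and is not necessarily the mechanism used for our experiments.

```python
import requests

WDQS = "https://query.wikidata.org/sparql"

def freebase_to_wikidata(mid):
    """Map a Freebase MID (a string such as '/m/0abcde') to a Wikidata ID."""
    query = 'SELECT ?item WHERE { ?item wdt:P646 "%s" . } LIMIT 1' % mid
    resp = requests.get(WDQS, params={"query": query, "format": "json"})
    bindings = resp.json()["results"]["bindings"]
    if not bindings:
        return None
    # the binding is a full URI such as http://www.wikidata.org/entity/Q42
    return bindings[0]["item"]["value"].rsplit("/", 1)[-1]
```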
The second dataset, GraphQuestions, was created by collecting manual paraphrases for automatically generated questions [@Su2016]. The dataset is meant to test the ability of the system to understand different wordings of the same question. In particular, the paraphrases include various references to the same entity, which creates a challenge for an entity linking system. The following are three example questions from the dataset that contain a mention of the same entity:

1. what is the rank of marvel’s **iron man**?
2. **iron-man** has held what ranks?
3. **tony stark** has held what ranks?
GraphQuestions does not contain main entity annotations, but includes a SPARQL query structurally encoded in JSON format. The queries were constructed manually by identifying the entities in the question and selecting the relevant KB relations. We extract gold entities for each question from the SPARQL query and map them to Wikidata.
We split the WebQSP training set into train and development subsets to optimize the neural model. We use the GraphQuestions only in the evaluation phase to test the generalization power of our model. The sizes of the constructed datasets in terms of the number of questions and the number of entities are reported in Table \[table:dataset-stats\]. In both datasets, each question contains at least one correct entity mention.
Experiments
===========
[>p[0.4]{} >p[0.1]{} >p[0.1]{} >p[0.1]{}]{} & P & R & F1\
Heuristic baseline & & &\
Simplified VCG & & &\
**VCG** & & &\
Evaluation Methodology
----------------------
We use precision, recall and F1 scores to evaluate and compare the approaches. We follow @Carmel2014 and @Yang2015a and define the scores on a per-entity basis. Since there are no mention boundaries for the gold entities, an extracted entity is considered correct if it is present in the set of the gold entities for the given question. We compute the metrics in the micro and macro setting. The macro values are computed per entity class and averaged afterwards.
For the WebQSP dataset, we additionally perform a separate evaluation using only the information on the main entity. The main entity has the information on the boundary offsets of the correct mentions and therefore for this type of evaluation, we enforce that the extracted mention has to overlap with the correct mention. QA systems need at least one entity per question to attempt to find the correct answer. Thus, evaluating using the main entity shows how the entity linking system fulfills this minimum requirement.
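The per-entity micro scores can be computed as in the following sketch; the macro variant applies the same computation per entity category and averages the resulting scores.

```python
def micro_scores(predicted, gold):
    """predicted, gold: dicts mapping a question id to a set of entity ids.

    An extracted entity counts as correct if it occurs in the gold set of the
    same question (no mention boundaries are checked)."""
    tp = fp = fn = 0
    for qid, gold_set in gold.items():
        pred_set = predicted.get(qid, set())
        tp += len(pred_set & gold_set)
        fp += len(pred_set - gold_set)
        fn += len(gold_set - pred_set)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```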
Baselines
---------
**Existing systems** In our experiments, we compare to DBPedia Spotlight that was used in several QA systems and represents a strong baseline for entity linking[^4]. In addition, we are able to compare to the state-of-the-art S-MART system, since their output on the WebQSP datasets was publicly released[^5]. The S-MART system is not openly available, it was first trained on the NEEL 2014 Twitter dataset and later adapted to the QA data [@Yih2015].
We also include a heuristics baseline that ranks candidate entities according to their frequency in Wikipedia. This baseline represents a reasonable lower bound for a Wikidata based approach.
**Simplified VCG** To test the effect of the end-to-end context encoders of the VCG network, we define a model that instead uses a set of features commonly suggested in the literature for EL on noisy data. In particular, we employ features that cover
1. frequency of the entity in Wikipedia,
2. edit distance between the label of the entity and the token n-gram,
3. number of entities and relations immediately connected to the entity in the KB,
4. word overlap between the input question and the labels of the connected entities and relations,
5. length of the n-gram.
We also add an average of the word embeddings of the question tokens and, separately, an average of the embeddings of tokens of entities and relations connected to the entity candidate. We train the simplified VCG model by optimizing the same loss function in Section \[sec:loss\] on the same data.
Practical considerations
------------------------
  $d_w$   $d_z$   $d_e$   $d_r$   $d_p$   $\mathbf{DCNN}_w$   $\mathbf{DCNN}_c$   $\alpha$
  ------- ------- ------- ------- ------- ------------------- ------------------- ----------
  $50$    $25$    $50$    $50$    $5$     $64$                $64$                $0.5$
[>p[0.18]{} >p[0.09]{} >p[0.05]{} >p[0.05]{} >p[0.09]{} >p[0.05]{} >p[0.05]{} >p[0.07]{} >p[0.05]{} >p[0.05]{}]{} & &\
& P & R & F1 & P & R & F1 & `m`P & `m`R & `m`F1\
DBPedia Spotlight & & & & & & & & &\
S-MART & & **** & & & **** & & & **** &\
Heuristic baseline & & & & & & & & &\
Simplified VCG & **** & & & **** & & & & &\
**VCG** & & & & & & & **** & &\
[>p[0.4]{} >p[0.18]{} >p[0.1]{} >p[0.1]{}]{} & P & R & F1\
DBPedia Spotlight & & &\
**VCG** & **** & &\
The hyper-parameters of the model, such as the dimensionality of the layers and the size of embeddings, are optimized with random search on the development set. The model was particularly sensitive to tuning of the negative class weight $\alpha$ (see Section \[sec:loss\]). Table \[table:hyper-params\] lists the main selected hyper-parameters for the VCG model[^6] and we also report the results for each model’s best configuration on the development set in Table \[table:eval-webqsp-dev\].
Results
-------
Table \[table:eval-webqsp-test\] lists results for the heuristics baseline, for the suggested Variable Context Granularity model (VCG) and for the simplified VCG baseline on the test set of WebQSP. The simplified VCG model outperforms DBPedia Spotlight and achieves a result very close to the S-MART model. Considering only the main entity, the simplified VCG model produces results better than both DBPedia Spotlight and S-MART. The VCG model delivers the best F-score across all setups. We observe that our model achieves the largest gains in precision compared to the baselines and the previous state-of-the-art for QA data.
VCG constantly outperforms the simplified VCG baseline that was trained by optimizing the same loss function but uses manually defined features. Thereby, we confirm the advantage of the mixing context granularities strategy that was suggested in this work. Most importantly, the VCG model achieves the best macro result which indicates that the model has a consistent performance on different entity classes.
We further evaluate the developed VCG architecture on the GraphQuestions dataset against the DBPedia Spotlight. We use this dataset to evaluate VCG in an out-of-domain setting: neither our system nor DBPedia Spotlight were trained on it. The results for each model are presented in Table \[table:eval-graph-test\]. We can see that GraphQuestions provides a much more difficult benchmark for EL. The VCG model shows the overall F-score result that is better than the DBPedia Spotlight baseline by a wide margin. It is notable that again our model achieves higher precision values as compared to other approaches and manages to keep a satisfactory level of recall.
(Figure \[fig:eval-webqsp-types\]: entity linking performance per entity category on the WebQSP test set for the compared systems.)
[>p[0.24]{} >p[0.07]{} >p[0.05]{} >p[0.05]{} >p[0.07]{} >p[0.05]{} >p[0.05]{} >p[0.06]{} >p[0.05]{} >p[0.05]{}]{} & &\
& P & R & F1 & P & R & F1 & `m`P & `m`R & `m`F1\
**VCG** & & **** & & **** & **** & & **** & **** &\
w/o token context & & & & & & & & &\
w/o character context & **** & & & & & & & &\
w/o KB structure context & & & & & & & & &\
w/o KB lexical context & & & & & & & & &\
**Analysis** In order to better understand the performance difference between the approaches and the gains of the VCG model, we analyze the results per entity class (see Figure \[fig:eval-webqsp-types\]). We see that the system is slightly better in the disambiguation of Locations, Person names and a similar category of Fictional Character names, while it has a considerable advantage in processing of Professions and Common Nouns. Our approach has an edge in such entity classes as Organization, Things and Products. The latter category includes movies, book titles and songs, which are particularly hard to identify and disambiguate since any sequence of words can be a title. VCG is also considerably better in recognizing Events. We conclude that the future development of the VCG architecture should focus on the improved identification and disambiguation of professions and common nouns.
To analyze the effect that mixing various context granularities has on the model performance, we include ablation experiment results for the VCG model (see Table \[table:ablation-webqsp-test\]). We report the same scores as in the main evaluation but without individual model components that were described in Section \[sec:architecture\].
We can see that the removal of the KB structure information encoded in entity and relation embeddings results in the biggest performance drop of almost 10 percentage points. The character-level information also proves to be highly important for the final state-of-the-art performance. These aspects of the model (the comprehensive representation of the KB structure and the character-level information) are two of the main differences of our approach to the previous work. Finally, we see that excluding the token-level input and the lexical information about the related KB relations also decrease the results, albeit less dramatically.
Conclusions
===========
We have described the task of entity linking on QA data and its challenges. The suggested new approach for this task is a unifying network that models contexts of variable granularity to extract features for mention detection and entity disambiguation. This system achieves state-of-the-art results on two datasets and outperforms the previous best system used for EL on QA data. The results further verify that modeling different types of context helps to achieve better performance across various entity classes (macro F-score).
Most recently, @Peng2017 and @Yu2017 have attempted to incorporate entity linking into a QA model. This offers an exciting future direction for the Variable Context Granularity model.
Acknowledgments {#acknowledgments .unnumbered}
===============
This work has been supported by the German Research Foundation as part of the Research Training Group Adaptive Preparation of Information from Heterogeneous Sources (AIPHES) under grant No. GRK 1994/1, and via the QA-EduInf project (grant GU 798/18-1 and grant RI 803/12-1).
We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan X Pascal GPU used for this research.
[^1]: <http://www.dbpedia-spotlight.org>
[^2]: <https://www.mpi-inf.mpg.de/yago-naga/aida/>
[^3]: At the moment, Wikidata contains more than 40 million entities and 350 million relation instances:\
<https://www.wikidata.org/wiki/Special:Statistics>
[^4]: We use the online end-point: <http://www.dbpedia-spotlight.org/api>
[^5]: <https://github.com/scottyih/STAGG>
[^6]: The complete list of hyper-parameters and model characteristics can be found in the accompanying code repository.
---
abstract: 'We introduce a new proposal for the onset of cosmic acceleration based on mass-varying neutrinos. When massive neutrinos become nonrelativistic, the $Z_2$ symmetry breaks, and the quintessence potential becomes positive from its initially zero value. This positive potential behaves like a cosmological constant at the present era and drives the Universe’s acceleration during the slow roll evolution of the quintessence. In contrast to the $\Lambda$CDM model, the dark energy in our model is dynamical, and the acceleration is not persistent. Contrary to some of the previous models of dark energy with mass-varying neutrinos, we do not use the adiabaticity condition, which leads to instability.'
author:
- |
H. Mohseni Sadjadi[^1] and V. Anari[^2]\
[Department of Physics, University of Tehran,]{}\
[P. O. B. 14395-547, Tehran 14399-55961, Iran]{}
title: 'Mass varying neutrinos, symmetry breaking, and cosmic acceleration'
---
Introduction
============
The origin of the present acceleration of the Universe [@acc1; @acc2; @acc3; @acc4; @acc5; @acc6; @acc7] is not yet known. One can attribute this acceleration to an exotic form of matter with negative pressure that permeates the Universe homogeneously, dubbed dark energy. A simple candidate for dark energy is a scalar field known as quintessence, which constitutes nearly 70% of our present Universe with density $\rho_{dark}\sim 10^{-10}eV^4$ [@quint01; @quint02; @quint1; @quint2; @quint3; @quint4; @quint5; @quint6; @quint7; @quint8; @quint9; @quint10; @quint11; @quint12; @quint13; @quint14]. Structure formation requires that the acceleration begin after the matter-dominated era. The equation of state (EoS) parameter of the quintessence is negative, and it dilutes less quickly than dark and ordinary matter and radiation. Today, the dark energy density has the same order of magnitude as the sum of the other Universe ingredients; hence in earlier eras its relative density was negligible. “Why do the dark energy and dark matter densities have the same order of magnitude nowadays?” is known as the coincidence problem [@coinc1; @coinc2; @coinc3; @coinc4; @coinc5; @coinc6; @coinc7; @coinc8; @coinc9; @coinc10]. This can be reexpressed as the question of why the dark energy density was negligible at early times. The present proportion of the dark sectors can be explained by considering the possible interaction of dark energy with other components [@inter1; @inter01; @inter2; @inter3; @inter4; @inter5; @inter6; @inter7; @inter8; @inter9; @inter10; @inter11; @inter12; @inter13; @inter14; @inter15; @inter16; @inter17]. Known physical properties of these components may give some clues to understand the behavior of dark energy. For example, the interaction between the quintessence (also dubbed the acceleron in neutrino dark energy models) and neutrinos may be employed to relate neutrino masses to the EoS and the density of dark energy [@far1; @far2; @far3]. This interaction makes the neutrino mass a function of the quintessence; hence the neutrino mass changes as the scalar field evolves. The transition of mass varying neutrinos from the relativistic to the non-relativistic phase deforms the effective potential such that the quintessence velocity decreases and the field follows the minimum of the convex effective potential, giving rise to the acceleration of the Universe [@far1; @far2; @far3; @far4; @far5; @far6; @far7; @far8; @far9]. In some papers, an adiabatic evolution for the quintessence is considered [@far1; @afsh], such that the quintessence effective mass becomes larger than the Hubble parameter. This scenario may suffer from instabilities which result in the formation of neutrino nuggets [@afsh; @mota]. These instabilities and the possibility of having stable neutrino lumps are also discussed in [@Wet1], where lumps are considered as non-relativistic particles with effective interactions, and also in [@Wet2] for a large neutrino mass.
In another class of models [@sym1; @sym2; @sym3], to describe the screening effect, a coupling between the quintessence and pressureless matter is considered. When the density of matter is greater than a critical value, the quintessence vacuum expectation value vanishes, leading to zero fifth force. But when the matter density becomes less than the critical value (e.g., through dilution by the cosmic expansion), the $Z_2$ symmetry is broken, and the quintessence evolves towards the minimum of its effective potential. This evolution may describe the present acceleration of the Universe. But in the symmetron model, the quintessence is too heavy to slow roll, and instead rolls rapidly toward the minimum of its effective potential and oscillates about it. To remedy this problem, in [@sad1; @sad2], the symmetron is considered in the teleparallel model of gravity, which has a de Sitter attractor solution at late times.
In this article, we try to introduce a new model to explain the onset of the positive acceleration of the Universe from the matter dominated era with zero dark energy density. Motivated by the mass varying neutrino and the symmetron, we introduce a coupled quintessence neutrino model in which the potential and the neutrino mass have $Z_2$ symmetry. By the evolution of mass varying neutrinos from the relativistic regime to the nonrelativistic one, the shape of the effective potential changes and the quintessence begins its evolution from a constant initial fixed point. This procedure may provide enough [*[positive]{}*]{} potential to drive the cosmic acceleration via a slow roll evolution from a decelerated epoch.
In our model the rise of dark energy and its dominance over other components depend on the neutrino mass which determines the time when the neutrinos become nonrelativistic. So the evolution of the quintessence from a zero density is postponed until the nonrelativistic era of neutrinos after which the equivalence of dark matter and dark energy densities may occur. In this way, one may relate the coincidence problem to the neutrino mass. The coincidence problem also depends on the other parameters of the model, especially those which determine the dark energy density.
As the adiabaticity condition (i.e. the quintessence adiabatically traces the minimum of the effective potential) is not used, the model is free from instabilities encountered in some of the growing neutrino quintessence models [@afsh]. Besides, in contrast to the symmetron model [@sym1], the Universe can experience an accelerated phase during a time greater than the Hubble time in the slow roll regime.
The scheme of the paper is as follows: In the second section, we study the possibility of the occurrence of cosmic acceleration triggered by massive neutrinos in a symmetronlike model, from an epoch with zero dark energy density. In the third section, the perturbation equations are obtained, and the stability of the model is discussed. We illustrate our results with some numerical examples. In the last section, we conclude the paper.
Throughout this paper we use units $\hbar=c=k_B=1$ and metric signature (-,+,+,+).
Cosmic acceleration triggered by massive neutrinos in quintessence models with $Z_2$ symmetry
=============================================================================================
We use the action [@fl] $$\label{1}
S=\int d^4x\sqrt{-g}\left[{1\over 2}M_P^2R-{1\over 2}\partial_\mu \phi\partial^\mu \phi-V(\phi)\right]+\sum_jS_j\left[A_j^2(\phi)g_{\mu \nu},\psi_j\right]$$ where $\phi(t)$ is the homogeneous quintessence with potential $V(\phi)$, and $\psi_j$ denotes the other species. The coupling between the quintessence and $\psi_j$ is given by the conformal coupling $A_j^2(\phi)g_{\mu \nu}$, where $A_j(\phi)>0$. We consider an interaction between the quintessence and the $i$th species only, so that $A_{(j)}(\phi)=\delta_{ij}A(\phi)$. $M_P=2.4\times 10^{18}GeV$ is the reduced Planck mass. The Universe is taken as a spatially flat Friedmann-Lemaitre-Robertson-Walker (FLRW) spacetime $$\label{1.1}
ds^2=-dt^2+a^2(t)(dx^2+dy^2+dz^2),$$ where $a(t)$ is the scale factor.
Variation of (\[1\]) with respect to $\phi$ gives $$\label{2}
\ddot{\phi}+3H\dot{\phi}+V_{,\phi}= -{A_{,\phi}\over A}(\rho_{(i)}-3P_{(i)}),$$ where $\rho_{(i)}$ and $P_{(i)}$ are the energy density and the pressure of the [*[i]{}*]{}th species respectively and $V_{,\phi}={dV\over d\phi}$. Variation of (\[1\]) with respect to the metric yields the Friedmann equation $$\label{3}
H^2={1\over 3M_P^2}\left({1\over 2}\dot{\phi}^2+V+\sum_j \rho_{(j)}\right),$$ and the evolution of the Hubble parameter is given by $$\label{4}
\dot{H}=-{1\over 2M_P^2}\left(\dot{\phi}^2+\sum_j (\rho_{(j)} +P_{(j)})\right).$$ The Universe is positively accelerated provided that ${\ddot{a}\over a}=\dot{H}+H^2>0$, which, upon combining (\[3\]) and (\[4\]), yields $$\label{14}
2V(\phi)-2\dot{\phi}^2-\sum_i(\rho_{(i)}+3P_{(i)})>0.$$ The continuity equations are given by $$\label{5}
\dot{\rho}_{(i)}+3H(P_{(i)}+\rho_{(i)})={A_{,\phi}\over A}\dot{\phi}(\rho_{(i)}-3P_{(i)}),$$ for interacting [*[i]{}*]{}th species, and $$\label{6}
\dot{\rho}_{(j)}+3H(P_{(j)}+\rho_{(j)})=0,$$ for other components. The neutrino-quintessence interaction resulting from (\[1\]) can also be considered in the context of the coupled quintessence model [@quint01],[@inter1],[@inter01].
By employing the Fermi-Dirac distribution for neutrinos whose masses $m_{(\nu)}(\phi)$ are $\phi$ dependent and are also in thermal equilibrium with temperature $T_{(\nu)}$, one obtains $$\begin{aligned}
\label{fd}
&&\rho_{(\nu)}={T_{(\nu)}^4\over \pi^2}\int_0^\infty {dx x^2 \sqrt{x^2+\xi^2}\over e^x+1}\nonumber \\
&&P_{(\nu)}={T_{(\nu)}^4\over 3\pi^2}\int_0^\infty {dx x^4\over \sqrt{x^2+\xi^2}(e^x+1)},\end{aligned}$$ where $\xi={m_{(\nu)}(\phi)\over T_{(\nu)}}$. By using (\[fd\]) one finds $$\label{7}
\dot{\rho}_{(\nu)}+3H(P_{(\nu)}+\rho_{(\nu)})={m_{(\nu),\phi}(\phi)\over m_{(\nu)}(\phi)}\dot{\phi}(\rho_{(\nu)}-3P_{(\nu)}).$$ Therefore (\[5\]) is the same as the mass varying neutrino continuity equation provided that $A(\phi)={m_{(\nu)}(\phi)\over M}$, where $M$ is a mass scale. For the quintessence we have $$\label{7.1}
\ddot{\phi}+3H\dot{\phi}+V_{eff.,\phi}= 0,$$ where the effective potential is given by $$\label{7.2}
V_{eff.,\phi}=V_{,\phi}+{m_{(\nu),\phi}(\phi)\over m_{(\nu)}(\phi)}(\rho_{(\nu)}-3P_{(\nu)}).$$ So we take the neutrinos interacting with the quintessence via (\[1\]) as mass varying neutrinos. For different kinds of neutrinos with the same ${m_{(\nu),\phi}(\phi)\over m_{(\nu)}(\phi)}$, we may still use (\[7.2\]) and (\[7\]), provided that we take $\rho_{(\nu)}=\sum_i \rho_{(\nu_i)}$ and $P_{(\nu)}=\sum_i P_{(\nu_i)}$.
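The Fermi-Dirac integrals in (\[fd\]) are straightforward to evaluate numerically. The following sketch (our own illustration, in units where $T_{(\nu)}=1$) checks the two limits used in the following: $\rho_{(\nu)}\simeq 3P_{(\nu)}$ for $\xi\ll 1$ and $P_{(\nu)}\ll\rho_{(\nu)}$ for $\xi\gg 1$.

```python
import numpy as np
from scipy.integrate import quad

def neutrino_rho_P(xi, T=1.0):
    """Energy density and pressure from Eq. (fd) for xi = m_nu(phi)/T_nu."""
    # The integrands are exponentially suppressed, so a finite upper cutoff suffices.
    f_rho = lambda x: x**2 * np.sqrt(x**2 + xi**2) / (np.exp(x) + 1.0)
    f_P = lambda x: x**4 / (np.sqrt(x**2 + xi**2) * (np.exp(x) + 1.0))
    rho = T**4 / np.pi**2 * quad(f_rho, 0.0, 200.0)[0]
    P = T**4 / (3.0 * np.pi**2) * quad(f_P, 0.0, 200.0)[0]
    return rho, P

for xi in (0.01, 1.0, 100.0):
    rho, P = neutrino_rho_P(xi)
    print(f"xi = {xi:6}: (rho - 3P)/rho = {(rho - 3*P)/rho:.3f}")
# xi << 1 gives rho ~ 3P (relativistic); xi >> 1 gives P << rho (nonrelativistic).
```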
When interacting neutrinos are relativistic, $m_{(\nu)}\ll T_{(\nu)}$, we have $$\begin{aligned}
\label{8}
\ddot{\phi}+3H\dot{\phi}+V_{,\phi}= 0 \nonumber \\
\dot{\rho}_{(\nu)}+4H\rho_{(\nu)}=0,\end{aligned}$$ and $V_{eff.}=V$, while for nonrelativistic ones, $m_{(\nu)}\gg T_{(\nu)}$, we have $$\begin{aligned}
\label{9}
\ddot{\phi}+3H\dot{\phi}+V_{,\phi}= -{m_{(\nu),\phi}\over m_{(\nu)}}\rho_{(\nu)} \nonumber \\
\dot{\rho}_{(\nu)}+3H\rho_{(\nu)}={m_{(\nu),\phi}\over m_{(\nu)}}\dot{\phi}\rho_{(\nu)}.\end{aligned}$$ We can define a rescaled energy density $\hat{\rho}_{(\nu)}$ as $$\label{10}
\rho_{(\nu)}=m_{(\nu)} \hat{\rho}_{(\nu)},$$ in terms of which (\[9\]) reduces to $$\begin{aligned}
\label{11}
\ddot{\phi}+3H\dot{\phi}+V_{,\phi}+m_{(\nu),\phi}\hat{\rho}_{(\nu)}=0 \nonumber \\
\dot{\hat{\rho}}_{(\nu)}+3H{\hat{\rho}}_{(\nu)}=0.\end{aligned}$$ In the nonrelativistic case, we can write $\rho_{(\nu)}=m_{(\nu)} n_{(\nu)}$, where $n_{(\nu)}$ is the neutrino number density. So we identify $n_{(\nu)}=\hat{\rho}_{(\nu)}$. The solution of the second equation in (\[11\]) is $$\label{12}
\hat{\rho}_{(\nu)}=\hat{\rho}_{(\nu)}^0a^{-3},$$ where “0” denotes the present time, at which we take $a_0=1$. Equivalently $n_{(\nu)}=n_{(\nu)}^0a^{-3}$, as expected. From (\[11\]), in the nonrelativistic limit we can define an effective quintessence potential $$\label{13}
V_{eff.,\phi}=V_{,\phi}+m_{(\nu),\phi}n_{(\nu)}.$$
Now we can construct our model. We require that:

i\) Initially, when massive neutrinos are relativistic, the quintessence energy density be negligible, and the Universe be in a decelerated phase;

ii\) The accelerated expansion of the Universe be caused by symmetry breaking triggered by the evolution of mass varying neutrinos from the relativistic regime toward the nonrelativistic one.
To choose appropriate $V(\phi)$ and $m_\nu(\phi)$ to fulfil (i) and (ii), we proceed as follows:
We assume that $V(\phi)$ and $m(\phi)$ have $Z_2$ symmetry and that initially the quintessence stays at the minimum of its potential, which we take to be $V_{min.}=V(\phi*)=0$. Thus the dark energy density is negligible in this era, $$\rho_\phi={1\over 2}\dot{\phi}^2+V(\phi)=V(\phi*)=0.$$ To have an initially stable solution, we require that the potential be convex at this point: $V_{,\phi \phi}{(\phi=\phi*)}>0$. As neutrinos are initially relativistic, $\rho_{(\nu)}\approx 3P_{(\nu)}$, we have $V=V_{eff.}$. From (\[14\]) we find that the Universe is in a decelerated phase. In this era, as $\phi$ is constant, neutrino masses are also constant and the interaction in (\[7\]) is inoperative. Due to the Universe’s expansion, neutrinos exit the relativistic phase such that $(\rho_{(\nu)}-3P_{(\nu)})$ becomes significant, $\rho_{(\nu)}-3P_{(\nu)}>0$. Hence the effective potential, given by (\[13\]), is no longer equal to the quintessence potential. If we choose $m_{(\nu),\phi\phi}{(\phi=\phi*)}<0$, then whenever $(\rho_{(\nu)}-3P_{(\nu)})>-{V_{,\phi \phi}{(\phi=\phi*)}\over m_{(\nu),\phi\phi}{(\phi=\phi*)}}$, the effective potential becomes concave and $\phi*$ becomes an unstable point. Therefore the quintessence rolls down the effective potential and the $Z_2$ symmetry breaks. Contrary to the effective potential, the potential is convex, and the quintessence climbs its own potential. This can be achieved only when $V_{eff.,\phi}$ and $V_{,\phi}$ have opposite signs. From (\[7.2\]) this implies that the signs of $m_{(\nu),\phi}$ and $V_{,\phi}$ are opposite too. This mechanism provides the positive potential required for cosmic acceleration (see (\[14\])).
This scenario is entirely different from the usual growing neutrino quintessence studied in the literature. In that scenario, the interaction of neutrinos and quintessence, after the neutrinos become nonrelativistic, acts as a barrier potential and stops the fast rolling of the quintessence, forcing it to follow the minimum of the effective potential and giving rise to cosmic acceleration. In some papers an adiabatic evolution for the quintessence is considered [@far1; @afsh]. This adiabaticity, which is absent in our model, gives rise to the growth of neutrino perturbations and to neutrino nugget formation [@afsh]. Our model is also different from the symmetron model, where $V_{eff.,\phi}=V_{,\phi}+A_{,\phi}\hat{\rho}$, and $\hat{\rho}$ is the rescaled pressureless matter density. In the symmetron model, by the dilution of the matter density, the quintessence becomes tachyonic and simultaneously rolls down both its own potential and its effective potential [@dark]. Therefore the potential decreases upon symmetry breaking, and if it is initially negligible, it will become negative after a while and cannot drive the acceleration [@dark]. So in the symmetron model, the initial dark energy density is assumed to be non-negligible, $\rho_{\phi*}=\Lambda>0$.
Based on astrophysical data, the EoS of the quintessence, $$w_\phi ={{1\over 2}\dot{\phi}^2-V\over {1\over 2}\dot{\phi}^2 +V},$$ is estimated to be $w_\phi =-1.006 \pm 0.045$ in the present epoch [@Planck]. So the kinetic energy of the quintessence must be much less than its potential energy. This is the slow roll condition $$\label{18}
{1\over 2}\dot{\phi}^2\ll V(\phi).$$ From (\[11\]) we have $$\label{19}
\dot{\phi}=-{V_{eff.,\phi}\over 3H(1+\chi)},$$ where $\chi={\ddot{\phi}\over 3H\dot{\phi}}$. The slow roll condition is satisfied when $$\label{20}
{1\over 2}\left({V_{eff.,\phi} \over 3H(1+\chi)}\right)^2\ll V.$$ If $\chi\sim \mathcal{O}(1)$ or $\chi \lesssim 1$, (\[20\]) becomes $$\label{21}
V_{eff.,\phi}^2\ll 9H^2 V.$$ If, in the slow roll epoch (like the present era), dark energy and the other components' densities have the same order of magnitude, then $3M_P^2H^2\sim V$, and (\[21\]) gives $$\label{22}
V_{eff.,\phi}^2\ll {3V^2\over M_P^2}.$$
In summary, in our formalism of cosmic acceleration, when the mass varying neutrinos become nonrelativistic, their interaction with the quintessence becomes operative and triggers the quintessence evolution, which raises the potential from its initial zero value. This positive potential is necessary to drive the cosmic acceleration. The mechanism yields a slow roll evolution provided that the effective potential is sufficiently flat (in the sense of (\[22\])).
To get more intuition about our model, let us give an example. We choose the potential as a combination of a cosmological constant and a Gaussian-type potential [@pot1; @pot2]. We assume that the neutrino mass also has a Gaussian form [@mass1; @mass2]: $$\begin{aligned}
\label{14.1}
V(\phi)&=&V_0(1-e^{-\alpha \phi^2})\nonumber \\
m_{(\nu)}(\phi)&=&m^*e^{-\beta \phi^2}\end{aligned}$$ where $\alpha>0$ and $\beta>0$ are constants with inverse mass squared dimensions and $V_0>0$. $m^*$ is the neutrino mass at $\phi=0$.
Initially, neutrinos are relativistic and $V_{eff.}=V$. The quintessence effective mass squared is assumed to be much less than the Hubble parameter squared, $2V_0\alpha \ll H^2$, such that (\[11\]) describes an overdamped oscillation equation [@sym2]. Therefore, in this epoch $\phi=0$ is a stable solution of the equations of motion (as the potential is convex), yielding a negligible dark energy density $\rho_\phi(\phi=0)=0$. When the temperature decreases, $\rho_{(\nu)}-3P_{(\nu)}$ is no longer negligible and $$\label{14.2}
V_{eff.,\phi}=2\alpha V_0 \phi e^{-\alpha \phi^2}-2\beta\phi (\rho_{(\nu)}-3P_{(\nu)}).$$ When ${\alpha V_0\over \beta}<\rho_{(\nu)}-3P_{(\nu)}$, the effective potential becomes concave at $\phi=0$; this point becomes unstable, and the quintessence, which acquires a negative mass squared, rolls down the effective potential while climbing its own potential (since $V_{eff.,\phi}$ and $V_{,\phi}$ have different signs). This holds whenever $$\label{14.31}
{\alpha \over \beta} V_0< e^{\alpha \phi^2}(\rho_{(\nu)}-3P_{(\nu)}).$$ This mechanism provides the positive potential needed for the acceleration.
In the nonrelativistic limit $m^*e^{-\beta \phi^2}\gg T_{(\nu)}$, we can ignore the pressure. The effective potential becomes $$\label{15}
V_{eff.}=V_0(1-e^{-\alpha \phi^2})+n_{(\nu)} m^*e^{-\beta \phi^2},$$ and we can write (\[14.31\]) as $$\label{14.32}
V_0< {\beta\over \alpha}m^*n_{(\nu)} e^{(\alpha-\beta) \phi^2}.$$
When $\phi^2$ increases, $e^{-\alpha \phi^2}\ll 1$ holds and the potential behaves as a cosmological constant at late times (see Fig.(\[fig1\])). We take this era as the present era and, since based on astrophysical data $\rho_\phi$ constitutes about 0.7 of the present Universe density, we can take $V_0\sim \left({7\over 10}\right)3M_P^2 H_0^2$. Because of the exponential factors in (\[14.1\]), the derivative of the effective potential satisfies (\[21\]) when $\alpha\phi^2\sim 1$ and $\beta\phi^2\sim 1$, implying a slow roll motion with $w_{\phi}\simeq -1$. Eventually, by the dilution of massive neutrinos, the effective potential becomes the same as the potential and the quintessence rolls back down towards its initial point and oscillates around it.
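The shape change of the effective potential can be checked directly. The following is a small numerical sketch of our own (units $H_0=M_P=1$), taking the parameter values quoted in the numerical example below and $n_{(\nu)}m^*=10\,H_0^2M_P^2$ (the neutrino density used for Fig.(\[fig1\])); it verifies that $\phi=0$ turns from a minimum of $V$ into a maximum of $V_{eff.}$ once condition (\[14.32\]) holds.

```python
# A small illustration (not taken from the paper's numerics), units H_0 = M_P = 1.
import numpy as np

alpha, beta, V0 = 15.0, 15.0, 0.691 * 3.0   # parameters quoted in the numerical example below
n_m = 10.0                                  # n_nu * m_star, the neutrino density used for Fig. [fig1]

V = lambda phi: V0 * (1.0 - np.exp(-alpha * phi**2))          # Eq. (14.1)
V_eff = lambda phi: V(phi) + n_m * np.exp(-beta * phi**2)     # Eq. (15)

# Symmetry-breaking condition (14.32) at phi = 0 and the curvature of V_eff there:
print("V0 < (beta/alpha) * n_m :", V0 < (beta / alpha) * n_m)  # True -> phi = 0 unstable
print("V_eff''(0) =", 2 * alpha * V0 - 2 * beta * n_m)         # negative (tachyonic)
print("V''(0)     =", 2 * alpha * V0)                          # positive (V itself convex)
```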
Obtaining analytic solutions for the equations of motion, even with a simple potential and mass function, is very complicated if not impossible. So let us illustrate our results via a numerical example by using eqs.(\[3\]),(\[4\]),(\[6\]), and (\[11\]). We assume that the Universe is composed mainly of massive neutrinos $(\nu)$, the quintessence $(\phi)$, the pressureless matter $(c)$ comprising cold dark matter and pressureless baryonic matter, and radiation $(r)$. We choose the parameters of the model and the initial conditions as $\{\alpha=15M_P^{-2},\,\,\,\beta=15M_P^{-2},\,\,\,\, V_0=0.691 \times 3H_0^2 M_P^2=2.74\times 10^{-47} GeV^4 \}$ and $$\begin{aligned}
\label{IC1}
&&{\phi\over M_P}=10^{-10},\,\, \dot{\phi}=10^{-6}M_P H_0,\,\,\,\rho_{(\nu)}= 3.4\times 10^{10}H_0^2M_P^2,\nonumber \\
&& \rho_{(c)}=1.54\times 10^{11} H_0^2 M_P^2,\,\,\, \rho_{(r)}=2.50\times 10^{11}H_0^2M_P^2.\end{aligned}$$ respectively. The initial conditions are set at $\tau=tH_0=0$ which in our model is equivalent to the redshift $z=5500$ corresponding to the radiation-dominated Universe. $H_0$ is the present Hubble parameter, i.e. the Hubble parameter at $a=1$. The relative densities defined by $\Omega_{(i)}={\rho_{(i)}\over 3 M_P^2H^2}$ are derived from (\[IC1\]) as $$\label{IC11}
\Omega_{(r)}=0.571,\,\,\, \Omega_{(\nu)}=0.077,\,\, \Omega_{(c)}=0.352,\,\,\Omega_{(\phi)}=1.14\times 10^{-24},$$ and the Hubble parameter is $H=3.82\times 10^5 H_0$. $\Omega_{(\phi)}=1.14\times 10^{-24}$ shows that the initial values chosen for $\phi$ and $\dot{\phi}$ give only a negligible dark energy contribution to the total density. In our numerical study we assume that neutrinos are completely nonrelativistic at $\tau=0$, i.e. $\rho_{(\nu)}-3P_{(\nu)}\simeq \rho_{(\nu)}$. So we can ignore the neutrino pressure. In order that $\rho_{(\nu)}-3P_{(\nu)}\simeq \rho_{(\nu)}$ holds, we must have $m^{*}\gg T_{(\nu)}$ at $\tau=0$. The mass varying neutrinos exit the relativistic regime when $m^*\simeq 3T_{(\nu)}^*$, corresponding to the redshift $z=z_{nr}$. Until this time we have [@Liddle] $$\label{T1}
T_{(\nu)}=\left({4\over 11}\right)^{1\over 3}T_{\gamma},$$ where $T_{\gamma}$ is the photon temperature. In addition, we have [@Liddle] $$\label{T2}
T_{\gamma}=T_{\gamma}^0(1+z),$$ where $T_{\gamma}^0$ is the photon temperature at the present time. Hence $$\label{T3}
T_{(\nu)}^*=\left({4\over 11}\right)^{1\over 3}T_{\gamma}^0(1+z_{nr})=0.085\times 10^{-3}(1+z_{nr}).$$ Therefore in our example we must have $m^*\gg 0.92 eV$.
It is worth noting that we have chosen our initial conditions at $\tau=0$, in the nonrelativistic regime, while the quintessence began its motion in the semirelativistic regime, where $P_{(\nu)}$ was not negligible; therefore the values in (\[IC1\]) are not the values of the variables just after the symmetry breaking. Our numerical results illustrate the evolution of the Universe from an epoch with $\Omega_{(\phi)}\simeq 0$ to the present dark-energy-dominated epoch. We also study the future behavior of the quintessence. A quantitative study beginning from the semirelativistic regime of neutrinos requires considering the pressure $P_{(\nu)}$ (see (\[fd\])), which makes the equations very complicated to solve. The initial conditions for the scalar field are due to the quantum fluctuations around $\phi=0$, against which the model is no longer stable after the symmetry breaking. Therefore, by a small deviation from $\phi=0$, the quintessence rolls down its steep effective potential [@sym1].
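For completeness, the background system formed by eqs.(\[3\]),(\[4\]),(\[6\]), and (\[11\]) with the potential and mass function (\[14.1\]) can be integrated with any standard ODE solver. The sketch below is our own illustration (not the code used for the figures), written in units $H_0=M_P=1$ with the parameters and initial conditions (\[IC1\]); the rescaled density $\tilde\rho_{(\nu)}\equiv m^*\hat{\rho}_{(\nu)}$ is evolved so that $m^*$ itself never needs to be specified.

```python
# A background-integration sketch (our illustration, not the authors' code); units H_0 = M_P = 1.
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, V0 = 15.0, 15.0, 0.691 * 3.0        # alpha, beta in M_P^-2; V0 in H_0^2 M_P^2

def V(phi):  return V0 * (1.0 - np.exp(-alpha * phi**2))
def dV(phi): return 2.0 * alpha * V0 * phi * np.exp(-alpha * phi**2)

def rhs(tau, y):
    phi, dphi, rho_nu_t, rho_c, rho_r, a = y
    rho_nu = rho_nu_t * np.exp(-beta * phi**2)    # rho_nu = m(phi) n_nu, with rho_nu_t = m* n_nu
    H = np.sqrt((0.5 * dphi**2 + V(phi) + rho_nu + rho_c + rho_r) / 3.0)   # Eq. (3)
    ddphi = -3.0 * H * dphi - dV(phi) + 2.0 * beta * phi * rho_nu          # Eq. (11)
    return [dphi, ddphi, -3.0 * H * rho_nu_t, -3.0 * H * rho_c, -4.0 * H * rho_r, a * H]

# Initial conditions (IC1) at tau = 0, i.e. z = 5500; rho_nu_t ~ rho_nu there since phi ~ 0.
y0 = [1e-10, 1e-6, 3.4e10, 1.54e11, 2.50e11, 1.0 / 5501.0]
sol = solve_ivp(rhs, (0.0, 1.5), y0, method='LSODA', rtol=1e-8, atol=1e-12)

phi, dphi, rho_nu_t, rho_c, rho_r, a = sol.y
rho_nu = rho_nu_t * np.exp(-beta * phi**2)
H = np.sqrt((0.5 * dphi**2 + V(phi) + rho_nu + rho_c + rho_r) / 3.0)
Hdot = -0.5 * (dphi**2 + rho_nu + rho_c + 4.0 * rho_r / 3.0)               # Eq. (4), P_nu ~ 0
q = -(1.0 + Hdot / H**2)                                                   # deceleration parameter
i0 = np.argmin(np.abs(a - 1.0))                                            # the present era, a = 1
w_phi = (0.5 * dphi[i0]**2 - V(phi[i0])) / (0.5 * dphi[i0]**2 + V(phi[i0]))
print("tau_0 =", sol.t[i0], " q_0 =", q[i0], " w_phi =", w_phi)
```

The deceleration parameter and the relative densities along the solution can then be compared with the values quoted below.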
In Fig.(\[fig1\]), we have depicted the potential and the effective potential for $\rho_{(\nu)}=10 H_0^2M_P^2$. The potential is the same as the effective one in the relativistic limit. For nonrelativistic mass varying neutrinos, the shape of the effective potential changes, and the previous minimum point becomes the new maximum.
In Fig.(\[fig2\]), the deceleration parameter, $q=-{\ddot{a}a\over \dot{a}^2}=-\left(1+{\dot{H}\over H^2}\right)$, is depicted, showing the transition of the Universe from a decelerated epoch to an accelerated one over a time of the order of the Hubble time.
This acceleration, which begins at the redshift $z\simeq 0.6$, can last for a few Hubble times, but it is not persistent. Gradually, as the neutrino density dilutes, the effective potential becomes the same as the potential and the quintessence rolls down to its initial position and oscillates about it via an underdamped oscillation. This is due to the fact that the quintessence effective mass becomes larger than the Hubble parameter at late times. This can also be seen from Fig.(\[fig3\]), which shows that the quintessence grows from $\phi=0$ and reaches an approximately constant value, consistent with our previous discussion that the slow roll evolution begins when the effective potential becomes nearly flat.
[Figure \[fig3\]: the evolution of the quintessence field.]
Before the symmetry breaking, (\[7.1\]) is an overdamped harmonic oscillator equation, and $\phi=0$ is a stable point. After the symmetry breaking, at $\tau<0$, this point becomes unstable against fluctuations, and $\phi$ commences its evolution. So just after the symmetry breaking, $\phi<\phi(\tau=0)$. Note that $V(\phi(\tau=0))\ll V_0$, so $\phi$ cannot climb the potential significantly at first, and it needs a time of the order of the present Hubble time to reach the maximum of its potential and drive the cosmic acceleration. In the future, by the dilution of neutrinos, the quintessence will come back to its initial position, but because the Hubble parameter will be much smaller than the effective mass, the quintessence will undergo an underdamped oscillation (see Fig.(\[fig3\])).
The effective potential becomes very steep after the symmetry breaking (see Fig.(\[fig1\])), so we expect $\dot{\phi}$ to increase initially. In our example $V_{eff.,\phi}\simeq -2\beta \rho_{(\nu)}\phi\sim -100H_0^2M_P$, which is much larger in magnitude than the friction term $3H\dot{\phi}\sim M_PH_0^2$. As $\dot{\phi}$ increases and $\rho_{(\nu)}$ decreases, the friction term becomes more relevant, providing the condition required for the slow roll (see Fig.(\[fig3\])).
The EoS parameter of the quintessence, $w_{\phi}$, is plotted in Fig.(\[fig4\]), showing that $w_\phi$ decreases and that $w_\phi\approx -1$ during the period when the effective potential is nearly flat. Finally, in the future, due to the quintessence oscillation, $w_{\phi}$ will oscillate between $-1$ and $1$.
In the present era, $\tau=0.94$ (corresponding to $a=1$), we find $w_\phi=-0.998$, which is within the range expected from the Planck 2015 data.
The relative densities, defined by $\Omega_{(i)}={\rho_{(i)}\over 3 M_P^2H^2}$, are depicted in Fig.(\[fig5\]), showing that the dark energy density grows while the relative densities of the other ingredients decrease. $\Omega_{(c)}$, $\Omega_{(\nu)}$, $\Omega_{(r)}$, and $\Omega_{(\phi)}$ are the relative densities of the pressureless matter, the mass varying neutrinos, the radiation, and the dark energy, respectively.
[Figure \[fig5\]: the evolution of the relative densities $\Omega_{(i)}$.]
In this example, the relative densities at the present era, $\tau=0.94$ (corresponding to $a=1$), are obtained as $\Omega_{(\phi)}=0.691,\,\,\, \Omega_{(c)}=0.308,\,\,\, \Omega_{(\nu)}=0.00003,\,\,\, \Omega_{(r)}=0.00009$, which lie in the region estimated from the Planck 2015 data.
Linear Perturbations
====================
In this section we consider the evolution equations of perturbations in the nonrelativistic era, $m_{(\nu)}\gg T_{(\nu)}$. We study the neutrino density contrast with the same method used in [@mota]. In mass varying models of dark energy based on adiabaticity, the linear perturbations grow and give rise to instability and to the formation of neutrino nuggets [@afsh]. We first gather the equations required for our problem, which are derived in [@pert1; @pert2; @pert3]. Then, based on these equations, we continue our discussion through an illustrative numerical example.
The line element of perturbed FLRW space-time can be written as $$\label{p1}
ds^2=-(1+2\varphi)dt^2+2a(t)B_{,i}dtdx^i+a^2(t)(\delta_{ij}+2(E_{,ij}-\psi \delta_{ij}))dx^idx^j,$$ where $\varphi$ (lapse function), $B$ (shift function), $E$, and $\psi$ are four scalar functions and a comma denotes a partial derivative. The stress tensor perturbations are given by $$\begin{aligned}
\label{p2}
&&\delta T_{00}=\sum_j \delta \rho_{(j)}-\varphi\dot{\bar{\phi}}^2+\delta \dot{\phi}\dot{\bar{\phi}}+V'(\bar{\phi})\delta \phi \nonumber \\
&&\delta T_{0i}=a\left(\dot{\bar{\phi}}(\dot{\bar{\phi}}B_{,i}+{1\over a}\delta\phi_{,i})-\sum_j(\bar{\rho}_{(j)}+\bar{P}_{(j)})v_{(j),i} \right)\nonumber \\
&&\delta T_{ij}=\delta_{ij}a^2\left( \sum_j\delta P_{(j)}-\varphi\dot{\bar{\phi}}^2+\delta \dot{\phi}\dot{\bar{\phi}}-V'(\bar{\phi})\delta \phi\right).\end{aligned}$$ A bar denotes the background value of a quantity, and a prime denotes a derivative with respect to the argument. The four-velocities of the fluids are given by $$\begin{aligned}
\label{p3}
u_{(j)0}=-(1+\varphi)\nonumber \\
u_{(j)i}=a(v_{(j)}+B)_{,i}
\end{aligned}$$ where $\bar{u}_{(j)0}=-1$ and $\bar{u}_{(j)i}=0$ have been used. Going to the Fourier space, the evolution equations for density fluctuations are derived as $$\begin{aligned}
\label{p4}
&&\delta \dot{\rho}_{(j)}-\left({k^2 v_{(j)}\over a}+k^2E+3\dot{\psi}\right)(\bar{\rho}_{(j)}+\bar{P}_{(j)})+3H(\delta \rho_{(j)}+\delta P_{(j)})=\nonumber \\
&&\beta_{(j)}(\phi)(\bar{\rho}_{(j)}-3\bar{P}_{(j)})\delta \dot{\phi}+\beta_{(j)}(\phi)(\delta \rho_{(j)}-3\delta P_{(j)})\dot{\bar{\phi}}+\nonumber \\
&&\beta'_{(j)}(\phi)(\bar{\rho}_{(j)}-3\bar{P}_{(j)})\dot{\bar{\phi}}\delta \phi,\end{aligned}$$ in which $\beta_{(\nu)}={m'_{(\nu)}(\phi)\over m_{(\nu)}(\phi)}$ and $\beta_{(j)}=0$ for $(j)\neq (\nu)$; this means that the interaction is considered only between the quintessence and massive neutrinos. From momentum conservation, the constraint $$\begin{aligned}
\label{p5}
&&\dot{v}_{(j)}=-{\beta_{(j)}(\bar{\phi})\over a}{\bar{\rho}_{(\nu)}-3\bar{P}_{(\nu)}\over \bar{\rho}_{(\nu)}+\bar{P}_{(\nu)}}\delta \phi+3H{\dot{\bar{P}}_{(j)}\over \dot{\bar{\rho}}_{(j)}}(v_{(j)}+B)-H(v_{(j)}+B)\nonumber \\
&&-{\varphi \over a}-{\delta P_{(j)}\over a(\bar{\rho}_{(j)}+\bar{P}_{(j)})}-\dot{B},\end{aligned}$$ is obtained. The evolution equation of the scalar field perturbation is $$\begin{aligned}
\label{p6}
&&\delta\ddot{\phi}+3H\delta\dot{\phi}+V''(\bar{\phi})\delta\phi+{k^2\over a^2}\delta \phi-(k^2\dot{E}+3\dot{\psi})\dot{\bar{\phi}}+{k^2\over a}B\dot{\bar{\phi}}-\dot{\bar{\phi}}\dot{\varphi}\nonumber \\
&&+2V'(\bar{\phi})\varphi+2\varphi \beta_{(\nu)}(\bar{\phi})(\bar{\rho}_{(\nu)}-3\bar{P}_{(\nu)})+\beta_{(\nu)}(\bar{\phi})(\delta \rho_{(\nu)}-3\delta P_{(\nu)})\nonumber \\
&&+\beta'_{(\nu)}(\bar{\phi})(\bar{\rho}_{(\nu)}-\bar{P}_{(\nu)})\delta \phi=0.\end{aligned}$$ By considering the Einstein equation, one derives $$\begin{aligned}
\label{p7}
&&3H(\dot{\psi}+\varphi H)+{k^2\over a^2}\left(\psi +H(a^2\dot{E}-a B)\right)=\nonumber \\
&&-{1\over 2M_P^2}\left(\sum_{j}\delta\rho_{(j)}-\varphi \dot{\bar{\phi}}^2+\delta\dot{\phi}\dot{\bar{\phi}}+
V'(\bar{\phi})\delta\phi \right)\end{aligned}$$ from the $0-0$ component, and $$\label{p8}
\dot{\psi}+\varphi H=-{1\over 2M_P^2}\left(\sum_j a(v_{(j)}+B)(\bar{\rho}_{(j)}+\bar{P}_{(j)})-\dot{\bar{\phi}}\delta \phi\right)$$ from the $0-i$ components, and $$\begin{aligned}
\label{p9}
\ddot{\psi}+3H\dot{\psi}+H\dot{\varphi}+(3H^2+2\dot{H})\varphi={1\over 2M_P^2}\left(\sum_j \delta P_{(j)}-\varphi \dot{\bar{\phi}}^2+\delta\dot{\phi}\dot{\bar{\phi}}-V'(\bar{\phi})\delta \phi\right)\end{aligned}$$ by taking the trace of the $i-j$ components. The trace-free part of $i-j$ gives $$\label{p10}
\dot{\sigma_s}+H\sigma_s-\varphi +\psi=0,$$ in which $\sigma_s=a^2\dot{E}-aB$ is the scalar shear.
In the following, we choose the flat gauge $\psi=E=0$. We assume that the Universe is constituted of the cold pressureless matter with $w_c=0$ (baryonic+dark matter), the radiation with $w_r={1\over 3}$, the nonrelativistic massive neutrino with $w_{(\nu)}=0$, and the quintessence. Only the interaction between the scalar field and the massive neutrinos is taken into account. For the background we have $$\begin{aligned}
\label{p11}
\dot{\bar{\rho}}_{(r)}+4H\bar{\rho}_{(r)}&=&0\nonumber \\
\dot{\bar{\rho}}_{(c)}+3H\bar{\rho}_{(c)}&=&0\nonumber \\
\dot{\bar{\rho}}_{(\nu)}+3H\bar{\rho}_{(\nu)}&=&\beta_{(\nu)}(\bar{\phi})\bar{\rho}_{(\nu)}\dot{\bar{\phi}}\nonumber\\
\ddot{\bar{\phi}}+3H\dot{\bar{\phi}}+V'(\bar{\phi})&=&-\beta_{(\nu)}(\bar{\phi})\bar{\rho}_{(\nu)}.\end{aligned}$$ For the densities, we obtain $$\begin{aligned}
\label{p12}
&&\delta \dot{\rho}_{(r)}=-4H\delta \rho_{(r)}+{4k^2\over 3a}(\hat{v}_{(r)}-B)\bar{\rho}_{(r)}\nonumber\\
&&\delta \dot{\rho}_{(c)}=-3H\delta \rho_{(c)}+{k^2\over a}(\hat{v}_{(c)}-B)\bar{\rho}_{(c)}\nonumber\\
&&\delta \dot{\rho}_{(\nu)}=-3H\delta \rho_{(\nu)}+{k^2\over a}(\hat{v}_{(\nu)}-B)\bar{\rho}_{(\nu)}+\beta_{(\nu)}(\bar{\phi})\bar{\rho}_{(\nu)}\delta \dot{\phi}+\beta_{(\nu)}(\bar{\phi})\delta \rho_{(\nu)}\dot{\bar{\phi}}\nonumber \\
&&+\beta'_{(\nu)}(\phi)\bar{\rho}_{(\nu)}\dot{\bar{\phi}}\delta \phi,\end{aligned}$$ and for the velocities, $\hat{v}_{(j)}=v_{(j)}+B$, we derive $$\begin{aligned}
\label{p13}
\dot{\hat{v_{(r)}}}&=&-{\varphi\over a}-{\delta \rho_{(r)}\over 4a\rho_{(r)}}\nonumber\\
\dot{\hat{v_{(c)}}}&=&-{\varphi\over a}-H\hat{v}_{(c)}\nonumber \\
\dot{\hat{v}}_{(\nu)}&=&-{\varphi\over a}-H\hat{v}_{(\nu)}-\beta_{(\nu)}(\bar{\phi}){\delta \phi \over a}.\end{aligned}$$ The scalar field perturbation satisfies $$\begin{aligned}
\label{p14}
&&\delta \ddot{\phi}=-3H\delta \dot{\phi}-V''(\bar{\phi})\delta \phi-{k^2\over a^2}\delta \phi-{k^2\over a}\dot{\bar{\phi}}B-2V'(\bar{\phi})\varphi-2\varphi \beta_{(\nu)}(\bar{\phi})\bar{\rho}_{(\nu)}\nonumber \\
&&+{\dot{\bar{\phi}}\over 2HM_P^2}\left(\sum_j \delta P_j-\varphi \dot{\bar{\phi}}^2+\delta\dot{\phi}\dot{\bar{\phi}}-V'(\bar{\phi})\delta \phi\right)
-{3H^2+2\dot{H}\over H}\varphi \dot{\bar{\phi}}\nonumber \\
&&-\beta_{(\nu)}(\bar{\phi})\delta \rho_{(\nu)}-\beta'_{(\nu)}(\bar{\phi})\bar{\rho}_{(\nu)}\delta \phi.\end{aligned}$$ By using the components of the Einstein equation we get $$\label{p15}
\varphi=-{1\over 2HM_P^2}\left( -\dot{\bar{\phi}}\delta \phi +a\sum_j\hat{v}_{(j)}(\bar{\rho}_{(j)}+\bar{P}_{(j)})\right),$$ and $$\begin{aligned}
\label{p16}
B&=&{3a\over 2k^2M_P^2}\left({1\over 3H}\left(\sum_j\delta \rho_{(j)}-\varphi \dot{\bar{\phi}}^2+\delta \dot{\phi} \dot{\bar{\phi}}+V'(\bar{\phi})\delta\phi\right)+\dot{\bar{\phi}}\delta\phi \right)\nonumber \\
&-&{3a^2\over 2k^2M_P^2}\sum_j\hat{v}_{(j)}(\bar{\rho}_{(j)}+\bar{P}_{(j)}).\end{aligned}$$
In models in which the coupling of neutrinos and quintessence acts as a potential barrier and forces the quintessence to trace the minimum of its effective potential, neutrino perturbations grow significantly in the nonrelativistic regime, where the adiabaticity condition is used. In our model the adiabaticity condition does not hold; instead, we use the slow roll condition. So we expect the model to remain stable against linear perturbations [@afsh]. Let us now demonstrate this numerically via the example (\[14.1\]) introduced in the previous section.
To numerically plot the perturbations, we also need to know the initial conditions for fluid velocities, energy density perturbations and perturbation of the scalar field. At $\tau=tH_0=0$, we take $$\label{IC2}
\delta_{(\nu)}=\delta_c={3\over4 }\delta_{(r)}= 10^{-7},\,\,\delta \phi=10^{-7}\phi,\,\, \delta \dot{\phi}=10^{-7}\dot{\phi},\,\, \hat{v}_{(j)}=10^{-7}H_0^{-1},$$ where $\delta_i={\delta \rho_{(i)}\over \rho_{(i)}}$ is the density contrast of the [*[i]{}*]{}th species. The same initial 3-velocity $\hat{v}_{(i)}$ is assumed for all fluids. We have employed adiabatic initial conditions, which imply $\delta_c={3\over4 }\delta_r$ initially [@pert3].
The parameters and initial conditions are taken to be the same as in the previous section, i.e., $\{\alpha=15M_P^{-2},\,\,\,\beta=15M_P^{-2},\,\,\,\, V_0=0.691 \times 3H_0^2 M_P^2=2.74\times 10^{-47} GeV^4 \}$, and (\[IC1\]).
In Fig.(\[fig6\]) and Fig.(\[fig7\]), using Eqs. (\[p11\])-(\[p16\]), we depict $\varphi$ and the massive neutrino density contrast, respectively.
As is evident from Fig.(\[fig7\]), the neutrino perturbation does not grow critically. This is in contrast to mass varying models of dark energy based on adiabaticity, where the linear perturbations in the nonrelativistic era grow and give rise to instability and to the formation of neutrino nuggets [@afsh].
Summary
=======
Inspired by the mass varying neutrino and symmetron models, we propose a new possible [*[dynamical model]{}*]{} of dark energy to describe the onset of the present cosmic acceleration. We assume that the quintessence is initially trapped in the minimum of its potential, which has a $Z_2$ symmetry. In this era, both the kinetic and potential energies of the quintessence are negligible. This initial zero density is in agreement with the present astrophysical data, which imply that, despite the slower dilution of the dark energy density with respect to the dark matter density, they have the same order of magnitude today (as pointed out in the coincidence problem). After their relativistic era, the mass varying neutrinos become nonrelativistic, the shape of the effective potential changes, and the initial stable point becomes unstable. Contrary to the symmetron model, the effective potential and the potential have opposite slopes; hence the quintessence climbs its potential while it rolls down the effective potential. This procedure provides enough energy to drive the cosmic acceleration via a slow roll evolution.
The quintessence-neutrino coupling modifies the evolution of the quintessence and consequently the dilution of dark energy. In the mass growing neutrino model, the scalar field dilutes like the dark matter during a significant period of its evolution, and therefore the coincidence problem may be alleviated [@Wet1]. In our model, the beginning of quintessence evolution depends on the initial neutrino mass. It is only after the nonrelativistic epoch that the quintessence can commence its evolution from zero density to gain the same order of magnitude as the dark matter in later times. In this way, one may relate the coincidence problem to the neutrino mass. The coincidence problem also depends on the other parameters of the model, especially those determining the dark energy density. To fix the parameters, we need to confront our model with observation data.
To illustrate how the model works, we used the example (\[14.1\]) and chose the parameters, e.g. the initial neutrino mass, such that the derived present relative densities are in agreement with the Planck 2015 data and the acceleration begins at $z\simeq 0.6$. In a time of the order of the Hubble time, the dark energy density is given by $V_0$, which plays the role of a cosmological constant. So we fixed it as the value of the present dark energy density. In this period, the model behaves like $\Lambda$CDM, but in contrast to the $\Lambda$CDM model, we have a dynamical dark energy with an initial zero density. Also, unlike the $\Lambda$CDM model, the acceleration is not persistent, and by the dilution of massive neutrinos the quintessence rolls back to its initial position and oscillates about that position. However, to construct our model, we have to fine-tune our parameters, like $V_0$, according to the astrophysical data. Note that it is also possible to consider other potentials, such as potentials that are unbounded from above, e.g. $V=V_0 (e^{\alpha\phi^2}-1),\,\, \alpha>0$. In these cases, like the example (\[14.1\]), the quintessence climbs its own potential after the symmetry breaking, but unlike (\[14.1\]) the potential does not have a maximum. The potential reaches $V=V_0 (e^{\alpha\phi_{present}^2}-1)$ in the present era, which during the slow roll evolution may be identified with the present dark energy density. Again, by the dilution of neutrinos, the quintessence will come back to its initial position and will oscillate about it via an underdamped oscillation.
In our scenario, as we do not employ the adiabaticity condition used in some of the previous models of neutrino dark energy, we do not encounter the instabilities that arise in those models. This issue was discussed and illustrated via numerical methods by using the example (\[14.1\]).
[99]{} S. Perlmutter et al., Nature (London) 391, 51 (1998) A. G. Riess et al. (Supernova Search Team Collaboration), Astron. J. 116, 1009 (1998) S. Perlmutter et al. (Supernova Cosmology Project Collaboration), Astrophys. J. 517, 565 (1999) L. Amendola and S. Tsujikawa, Dark Energy: Theory and Observations (Cambridge University Press, 2010) M. Sami, R. Myrzakulov, arXiv:1309.4188 \[hep-th\] K. Bamba, S. Capozziello, S. Nojiri, and S. D. Odintsov, Astrophys. Space Sci. 342, 155 (2012), arXiv:1205.3421 \[gr-qc\] Miao Li, Xiao-Dong Li, Shuang Wang, Yi Wang, Commun. Theor. Phys. 56, 525-604 (2011)
C. Wetterich, Nucl. Phys. B 302, 668 (1988) B. Ratra, P. J. E. Peebles, Phys. Rev. D 37, 3406 (1988) R. R. Caldwell, R. Dave and P. J. Steinhardt, Phys. Rev. Lett. 80, 1582 (1998) P. J. E. Peebles and A. Vilenkin, Phys. Rev. D 59, 063505 (1999) P. J. Steinhardt, L. M. Wang and I. Zlatev, Phys. Rev. D 59, 123504 (1999) M. Doran and J. Jaeckel, Phys. Rev. D 66, 043519 (2002) R. R. Caldwell, and E. V. Linder, Phys. Rev. Lett. 95, 141301 (2005) E. Elizalde, S. Nojiri, and S. D. Odintsov, Phys. Rev. D 70, 043539 (2004 ) H. M. Sadjadi, M. Alimohammadi, Phys. Rev. D 74, 043505 (2006) E. V. Linder, Gen. Rel. Grav. 40, 329 (2008) S. Casas, V. Pettorino, C. Wetterich, Phys. Rev. D 94, 103518 (2016) A. Leithes, K. A. Malik, D. J. Mulryne, and N. J. Nunes, arXiv:1608.00908 \[astro-ph.CO\] K. Bamba, G.G.L. Nashed, W. El Hanafy, and Sh.K. Ibraheem, Phys. Rev. D 94, 083513 (2016) S. Dutta, E. N. Saridakis, and R. J. Scherrer, Phys. Rev. D 79, 103005 (2009) Md. W. Hossain, R. Myrzakulov, M. Sami, and E. N. Saridakis, Phys. Rev. D 90, 023512 (2014); C. Geng, Md. W. Hossain, R. Myrzakulov, M. Sami, and E. N. Saridakis, Phys. Rev. D 92, 023522 (2015); Md. W. Hossain, R. Myrzakulov, M. Sami, and E. N. Saridakis, Int. J. Mod. Phys. D 24, 1530014 (2015) S. Nojiri, S. D. Odintsov, Phys. Lett. B 562, 147 (2003); S. Nojiri, S. D. Odintsov, Phys. Rev. D 70, 103522 (2004)
S. Weinberg, Rev. Mod. Phys. 61, 1 (1989) I. Zlatev, L.-M. Wang, and P. J. Steinhardt, Phys. Rev. Lett. 82, 896 (1999) D. Pavon, and W. Zimdahl, Phys. Lett. B 628, 206 (2005) L. P. Chimento, A. S. Jakubi, and D. Pavon, Phys. Rev. D62, 063508 (2000) H. Wei, and R. G. Cai, Phys. Rev. D 71, 043504 (2005) R. J. Scherrer, Phys. Rev. D 71, 063519 (2005) H. M. Sadjadi, and M. Alimohammadi, Phys. Rev. D 74, 103007 (2006) H. M. Sadjadi, Gen. Rel. Grav. 44, 2329 (2012) J. Dutta, W. Khyllep, and N. Tamanini, arXiv:1701.00744 \[gr-qc\] S. Nojiri, S. D. Odintsov, Phys. Lett. B 637, 139 (2006)
L. Amendola, Phys. Rev. D 62, 043511 (2000) C. Wetterich, Astron. Astrophys. 301, 321 (1995) W. Zimdahl, D. Pavon and L.P. Chimento, Phys. Lett. B 521, 133 (2001) H. M. Sadjadi, JCAP 0702, 026 (2007) H. M. Sadjadi, Eur. Phys. J. C 66, 445 (2010) H. M. Sadjadi, and N. Vadood, JCAP 08, 036 (2008) R. C. G. Landim, Eur. Phys. J. C 76, 31 (2016) A. Pasqua, and S. Chattopadhyay: arXiv:1607.03384 \[gr-qc\] F. Felegary, F. Darabi, and M. R. Setare, arXiv:1612.03406 \[gr-qc\] W. Yang, H. Li, Y. Wu, and J. Lu, JCAP 10, 007 (2016) G. Kofinas, E. Papantonopoulos, and E. N. Saridakis, arXiv:1602.02687 \[gr-qc\] R. Herrera, W. S. H. Ricaldi, and N. Videla, arXiv:1607.01806 \[gr-qc\] J. B. Jimenez, D. R. Garcia, D. S. Gomez, and V. Salzano, arXiv:1607.06389 \[gr-qc\] G. S. Sharov, S. Bhattacharya, S. Pan, R. C. Nunes, and S. Chakraborty, Mon. Not. Roy. Astron. Soc, 466, 3497 (2017) W. Yang, Lixin Xu, Phys. Rev. D 90, 083532 (2014) A. Pasqua, S. Chattopadhyay, A. K. Assaf, and I. G. Salako, Eur. Phys. J. Plus 131, 182 (2016) V. Pettorino, Phys. Rev. D 88, 063519 (2013) S. Nojiri, S. D. Odintsov, and S. Tsujikawa, Phys. Rev. D 71, 063004 (2005)
R. Fardon, A. E. Nelson, and N. Weiner, JCAP 0410, 005 (2004) A. W. Brookfield, C. van de Bruck, D.F. Mota, and D. T. Valentini, Phys. Rev. Lett. 96, 061301 (2006) R. D. Peccei, Phys. Rev. D 71, 023527 (2005) S. Antusch, S. Das, and K. Dutta, JCAP 0810, 016 (2008) C. Wetterich, Phys. Lett. B 655, 201 (2007) L. Amendola, M. Baldi, and C. Wetterich, Phys. Rev. D 78, 023015 (2008) R. Takahashi, M. Tanimoto, Phys. Lett. B 633, 675 (2006); R. Takahashi, M. Tanimoto, JHEP 0605, 021 (2006) C. Geng, C. Lee, R. Myrzakulov, M. Sami, and E. N. Saridakis, JCAP 01, 049 (2016) R. Onofrio, Phys. Rev. D 86, 087501 (2012)
N. Afshordi, M. Zaldarriaga and K. Kohri, Phys. Rev. D 72, 065024 (2005) O. E. Bjaelde, A. W. Brookfield, C. v. de Bruck, S. Hannestad, D. F. Mota, L. Schrempp, and D. T. Valentini, JCAP 0801, 026 (2008) Y. Ayaita, M. Weber, and C. Wetterich, Phys. Rev. D 87, 043519 (2013) S. Casas, V. Pettorino, and C. Wetterich, Phys. Rev. D 94, 103518 (2016) K. Hinterbichler and J. Khoury, Phys. Rev. Lett. 104, 231301 (2010) K. Hinterbichler, J. Khoury, A. Levy, and A. Matas, Phys. Rev. D 84, 103521 (2011) M. Honardoost, H. M. Sadjadi, and H. R. Sepangi, Gen. Rel. Grav. 48, 125 (2016),arXiv:1508.06022 \[gr-qc\].
H. M. Sadjadi, JCAP 01, 031 (2017), arXiv:1609.04292 \[gr-qc\]. H. M. Sadjadi, Phys. Rev. D 92, 123538 (2015), arXiv:1510.02085 \[gr-qc\]. R. Bean, E. E. Flanagan, and M. Trodden, Phys. Rev. D 78, 023009 (2008)
H. M. Sadjadi, M. Honardoost, and H. R. Sepangi, Phys. Dark Univ. 14, 40 (2016), arXiv:1504.05678 \[gr-qc\].
P. A. R. Ade et al. (Planck Collaboration), Planck 2015 results. XIII. Cosmological parameters, Astron. Astrophys. 594 (2016) A13 \[arXiv:1502.01589\].
Y. Muromachi, A. Okabayashi, D. Okada, T. Hara, and Y. Itoh, arXiv:1503.03678 \[astro-ph.CO\]. Dutta and R. J. Scherrer, Phys. Rev. D 78, 123525 (2008) A. W. Brookfield, C. van de Bruck, D. F. Mota, and D. Tocchini-Valentini, Phys. Rev. D 73, 083515 (2006) M. Pietroni, Phys. Rev. D 72, 043535 (2005) A. Liddle, “An Introduction to Modern Cosmology” (John Wiley $\&$ Sons, Ltd, 2015) K. A. Malik, and D. Wands, JCAP 02, 007 (2005) K. A. Malik, D. Wands, and C. Ungarelli, Phys. Rev. D 67, 063516 (2003) A. Leithes, K. A. Malik, D. J. Mulryne, and N. J. Nunes, arXiv 1608.00908
[^1]: mohsenisad@ut.ac.ir
[^2]: v.anari@ut.ac.ir
---
abstract: 'Recently, the existence and properties of unbounded cavity modes, resulting in extensive plastic deformation failure of two-dimensional sheets of amorphous media, were discussed in the context of the athermal Shear-Transformation-Zones (STZ) theory. These modes pertain to perfect circular symmetry of the cavity and the stress conditions. In this paper we study the shape stability of the expanding circular cavity against perturbations, in both the unbounded and the bounded growth regimes (for the latter the unperturbed theory predicts no catastrophic failure). Since the unperturbed reference state is time dependent, the linear stability theory cannot be cast into standard time-independent eigenvalue analysis. The main results of our study are: (i) sufficiently small perturbations are stable, (ii) larger perturbations within the formal linear decomposition may lead to an instability; this dependence on the magnitude of the perturbations in the linear analysis is a result of the non-stationarity of the growth, (iii) the stability of the circular cavity is particularly sensitive to perturbations in the effective disorder temperature; in this context we highlight the role of the rate sensitivity of the limiting value of this effective temperature. Finally we point to the consequences of the form of the stress-dependence of the rate of STZ transitions. The present analysis indicates the importance of nonlinear effects that were not taken into account yet. Furthermore, the analysis suggests that details of the constitutive relations appearing in the theory can be constrained by the modes of macroscopic failure in these amorphous systems.'
author:
- 'Eran Bouchbinder$^{1,2}$, Ting-Shek Lo$^{1,3}$, Itamar Procaccia$^1$ and Elad Shtilerman$^1$'
title: The Stability of an Expanding Circular Cavity and the Failure of Amorphous Solids
---
Introduction
============
Some of the theoretically most fascinating aspects of crack propagation in amorphous materials are the instabilities that are observed in well controlled laboratory experiments [@99FM]. Besides some exceptions (see for example [@93YS; @95ABP; @03BHP] and also [@07LBDF; @07BP]), it would be fair to say that the observed instabilities are still poorly understood. It is the opinion of the present authors that the reason for the relative lack of understanding is that the theory of crack propagation did not treat cracks as moving free boundaries whose instabilities stem from the dynamics of the free boundary itself. Instead, “crack tip dynamics” were replaced by energy balance within the theory of Linear Elastic Fracture Mechanics [@Freund], together with an ad-hoc “law” of one nature or another as to where a crack is supposed to move.
In principle this undesirable state of affairs can be greatly improved within the Shear-Transformation-Zones (STZ) theory of amorphous materials [@79Arg; @79AK; @98FL; @07BLanP]. This theory treats developing cracks or growing cavities as free boundaries of a material in which both elasticity and plasticity are taken into account, preserving all the symmetries and conservation laws that promise a possibly correct theory of amorphous materials driven out of mechanical equilibrium. This theory in its various appearances was compared to a number of experiments and simulations (see below), with a growing confidence that although not final, STZ theory is developing in the right direction. Indeed, the application of a highly simplified version of STZ theory to crack propagation resulted in physically interesting predictions, explaining how plasticity can intervene in blunting a crack tip and resulting in velocity selection [@06BPP]. The application of the full fledged theory of STZ to crack propagation is still daunting (although not impossible) due to the tensorial nature of the theory and the need to deal with an extremely stiff set of partial differential equations with a wide range of time-scales and length-scales involved. For that reason it seemed advantageous to apply the full theory to a situation in which the symmetries reduce the problem to an effectively scalar theory; this is the problem of a circular cavity developing under circular symmetric stress boundary conditions [@07BLLP; @07BLP]. While this problem does not reach the extreme conditions of stress concentration that characterizes a running slender crack, it still raises many physical issues that appear also in cracks, in particular the give-and-take between elasticity and plasticity, the way stresses are transmitted to moving boundaries (in apparent excess of the material yield stress) and most importantly for this paper, the possible existence of dynamical instabilities of the moving free boundary. This last issue might also be connected to the difference between ductile and brittle behaviors. In the former, a growing cavity is likely to remain rather smooth, whereas in the latter, one may expect an instability resulting in the growth of “fingers”, possibly ending up being cracks. It is one of the challenges of the present paper to examine whether the theory may predict a transition, as a function of material parameters or a constitutive relation, between these two types of behavior.
Note that we have chosen to study the problem in a purely 2-dimensional geometry; recently, quasi 2-dimensional systems have exhibited interesting failure dynamics in laboratory experiments in which the third dimension appears irrelevant for the observed phenomena [@07LBDF; @04SVC]. Our motivation here is, however, theoretical: to reduce the unnecessary analytic and numerical complications to a minimum and to gain insight as to the main physical effects, under the assumption that the thin third dimension in real systems does not induce a catastrophic change in behavior. When this assumption fails, as it does in some examples, cf. [@99FM], the analysis must be extended to include the third dimension. This is beyond the scope of this paper.
In Sec. \[EBC\] we present the equations that describe the problem at hand and specify their boundary conditions. Particular attention is paid to distinguish between the general Eulerian formulation which is model-independent (Subsec. \[general\]) and the constitutive relations involving plasticity where the STZ model is explained (Subsec. \[STZ\]). This section finishes with the presentation of the unperturbed problem, preparing the stage for the linear stability analysis which is discussed in Sec. \[LSA\]. In this section we present a general analysis where inertia and elastic compressibility effects are taken into account. In Appendix \[QS\] we complement the analysis by considering the “quasistatic" (when the velocity of the boundary is sufficiently small) and incompressible case (when the bulk modulus is sufficiently large) and show that both formulations agree with one another in the relevant range. The results of the stability analysis are described in detail in Sec. \[results\] and a few concluding remarks are offered in Sec. \[discussion\].
Equations and Boundary Conditions {#EBC}
=================================
General formulation {#general}
-------------------
We start by writing down the full set of equations for a general two-dimensional elasto-viscoplastic material. A basic assertion of this theory is that plastic strain tensors in such materials are not state variables since their values depend on the entire history of deformation. Thus, one begins by introducing the total rate of deformation tensor $${{\bm{D}}}^{\rm tot} \equiv \frac{1}{2}\Big[{{\bm{\nabla}}} {{\bm{v}}} + \left({{\bm{\nabla}}} {{\bm{v}}}\right)^T\Big] \ , \label{D_tot}$$ where ${{\bm{v}}}({{\bm{r}}}, t)$ is the material velocity at the location ${{\bm{r}}}$ at time $t$ and $T$ denotes here the transpose of a tensor. This type of Eulerian formulation has the enormous advantage that it disposes of any reference state, allowing free discussion of small or large deformations. As is required in an Eulerian frame we employ the full material time derivative for a tensor ${{\bm{T}}}$, $$\frac{{\cal D} {{\bm{T}}}}{{\cal D} t} = \frac{\partial {{\bm{T}}}}{\partial t}
+ {{\bm{v}}}\cdot {{\bm{\nabla}}} {{\bm{T}}} +{{\bm{T}}} \cdot {{\bm{\omega}}} - {{\bm{\omega}}}\cdot {{\bm{T}}} \ , \label{material}$$ where $ {{\bm{\omega}}}$ is the spin tensor $${{\bm{\omega}}} \equiv \frac{1}{2}\Big[{{\bm{\nabla}}} {{\bm{v}}} - \left({{\bm{\nabla}}} {{\bm{v}}}\right)^T\Big] \ . \label{omega}$$ For a scalar or vector quantity ${{\bm{V}}}$ the commutation with the spin tensor vanishes identically. The Eulerian approach allows a natural formulation of moving free boundary problems; this will be shown to lead to a significant advance compared to more conventional treatments.
The plastic rate of deformation tensor ${{\bm{D}}}^{pl}$ is introduced by assuming that the total rate of deformation tensor ${{\bm{D}}}^{\rm tot}$ can be written as a sum of a linear elastic and plastic contributions $${{\bm{D}}}^{\rm tot} = \frac{{\cal D} {{\bm{\epsilon}}}^{el}}{{\cal D} t}
+{{\bm{D}}}^{pl} \label{el_pl} \ .$$ We further assume that ${{\bm{D}}}^{pl}$ is a traceless tensor, corresponding to incompressible plasticity. All possible material compressibility effects in our theory are carried by the elastic component of the deformation. The components of the linear elastic strain tensor ${{\bm{\epsilon}}}^{el}$ are related to the components of stress tensor, whose general form is $$\sigma_{ij} = -p\delta_{ij} + s_{ij} \ , \quad p=-\frac{1}{2}
\sigma_{kk} \ , \label{sig}$$ according to $$\epsilon^{el}_{ij} = -\frac{p}{2K}\delta_{ij} + \frac{s_{ij}}{2\mu}
\ , \label{linear}$$ where $K$ and $\mu$ are the two dimensional bulk and shear moduli respectively. The tensor ${{{\bm{s}}}}$ is referred to hereafter as the “deviatoric stress tensor" and $p$ as the pressure. The equations of motion for the velocity and density are $$\begin{aligned}
\label{eqmot1} \rho \frac{{\cal D} {{\bm{v}}} }{{\cal D} t} &=& {{\bm{\nabla}}}\!\cdot\!{{\bm{\sigma}}} = -{{\bm{\nabla}}} p+ {{\bm{\nabla}}}\!\cdot\! {{\bm{s}}} \ , \\
\quad \frac{{\cal D} \rho}{{\cal D} t} &=& -\rho {{\bm{\nabla}}}\!
\cdot\! {{\bm{v}}} \ . \label{eqmot2}\end{aligned}$$
In order to prepare the general set of equations for the analysis of a circular cavity we rewrite them in polar coordinates. To that end we write $$\label{polarO}
{{\bm{\nabla}}} = {{\bm{e}}}_r \partial_r +\frac{{{\bm{e}}}_\theta}{r} \partial_\theta ,\quad {{\bm{v}}} = v_r {{\bm{e}}}_r + v_\theta {{\bm{e}}}_\theta \ ,$$ where ${{\bm{e}}}_r$ and ${{\bm{e}}}_\theta$ are unit vectors in the radial and azimuthal directions respectively. These expressions enable us to represent the divergence operator ${{\bm{\nabla}}} \cdot$ in the equations of motion and the covariant derivative ${{\bm{v}}} \!\cdot\! {{\bm{\nabla}}}$ in the material time derivative of vectors and tensors. Some care should be taken in evaluating these differential operators in polar coordinates since the unit vectors themselves vary under differentiation according to $$\label{unit_vectors}
\partial_r {{\bm{e}}}_r=0,\quad \partial_r{{\bm{e}}}_\theta=0,\quad \partial_\theta{{\bm{e}}}_r={{\bm{e}}}_\theta,\quad \partial_\theta {{\bm{e}}}_\theta=-{{\bm{e}}}_r \ .$$ We then denote $s_{rr}\equiv -s$, $s_{\theta \theta} \equiv s$, $s_{r\theta}=s_{\theta r} \equiv \tau$ and using Eqs. (\[sig\]) we obtain $$\begin{aligned}
\label{sig_p_s}
\sigma_{rr} &=& -s -p \ ,\nonumber\\
\sigma_{\theta \theta} &=& s-p \ ,\nonumber\\
\sigma_{r \theta} &=& \sigma_{ \theta r} =\tau \ .\end{aligned}$$ In this notation the equations of motion (\[eqmot1\]) can be rewritten explicitly as
$$\begin{aligned}
\rho \left(\frac{\partial v_r}{\partial t} \!+\! v_r \frac{\partial
v_r}{\partial r}\!+\! \frac{v_\theta}{r} \frac{\partial v_r}{\partial
\theta}-\frac{v_\theta^2}{r}\right)\!&=&\! \frac{1}{r}\frac{\partial \tau}{\partial \theta}
-\frac{1}{r^2} \frac{\partial }{\partial r} \left ( r^2 s \right)\! -\!
\frac{\partial
p}{\partial r} \nonumber \ , \\
\rho \left(\frac{\partial v_\theta}{\partial t} \!+\! v_r
\frac{\partial v_\theta}{\partial r}\!+\! \frac{v_\theta}{r}
\frac{\partial v_\theta}{\partial \theta} +\frac{v_\theta v_r}{r}\right)\!&=&\!\frac{\partial
\tau}{\partial r}\! +\! \frac{1}{r} \frac{
\partial s}{\partial \theta}\! -\! \frac{1}{r}
\frac{ \partial p} {\partial \theta} \!+\!\frac{2 \tau}{r} \ , \nonumber\\
\label{EOM}\end{aligned}$$
where ${{\bm{\nabla}}}\!\cdot\!{{\bm{\sigma}}}$ is calculated explicitly in Appendix \[polar\].
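As a quick consistency check on this algebra (an illustration added here, not part of the original derivation), one can substitute the representation of Eqs. (\[sig\_p\_s\]) into the polar-coordinate divergence formulas of Appendix \[polar\] and verify symbolically that the right-hand sides of Eqs. (\[EOM\]) are recovered; a minimal sympy sketch:

```python
# Check that the s, p, tau representation of Eqs. (sig_p_s), inserted into the
# polar divergence formulas of Appendix [polar], reproduces the RHS of Eqs. (EOM).
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
s = sp.Function('s')(r, theta)
p = sp.Function('p')(r, theta)
tau = sp.Function('tau')(r, theta)

# stress components, Eqs. (sig_p_s)
T_rr, T_tt, T_rt, T_tr = -s - p, s - p, tau, tau

# divergence of the tensor in polar coordinates, Appendix [polar]
div_r = sp.diff(T_rr, r) + T_rr/r + sp.diff(T_tr, theta)/r - T_tt/r
div_t = sp.diff(T_rt, r) + T_rt/r + T_tr/r + sp.diff(T_tt, theta)/r

# right-hand sides of Eqs. (EOM)
rhs_r = sp.diff(tau, theta)/r - sp.diff(r**2*s, r)/r**2 - sp.diff(p, r)
rhs_t = sp.diff(tau, r) + sp.diff(s, theta)/r - sp.diff(p, theta)/r + 2*tau/r

print(sp.simplify(div_r - rhs_r), sp.simplify(div_t - rhs_t))  # both print 0
```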
Equations (\[el\_pl\]) can be rewritten in components form as $$\begin{aligned}
\label{eq:DA_ij}
D^{\rm tot}_{ij}&=&
\frac{\partial \epsilon^{el}_{ij}}{\partial t} + \left({{\bm{v}}} \cdot {{\bm{\nabla}}} {{\bm{\epsilon}}}^{el} \right)_{ij}\\
&+&\epsilon^{el}_{ir}\omega_{rj}+\epsilon^{el}_{i\theta} \omega_{\theta j}-
\omega_{ir}\epsilon^{el}_{rj}-\omega_{i \theta}\epsilon^{el}_{\theta j}+ D^{pl}_{ij}\ .\nonumber\end{aligned}$$ Here the components of the total rate of deformation tensor are related to the velocity according to Eqs. (\[D\_tot\]) as $$\begin{aligned}
D_{rr}^{\rm tot} &\equiv& \frac{\partial v_r}{\partial r},\quad
D_{\theta\theta}^{\rm tot} \equiv \frac{\partial_\theta v_\theta +
v_r}{r} \ ,\nonumber\\
D_{r \theta}^{\rm tot} &\equiv& \frac{1}{2} \left[ \partial_r
v_\theta + \frac{\partial_\theta v_r - v_\theta}{r} \right] \ ,
\label{eq:totalrate}\end{aligned}$$ where the components of the spin tensor ${{\bm{\omega}}}$ in Eq. (\[omega\]) are given by $$\begin{aligned}
\omega_{rr}&=&\omega_{\theta \theta}=0 \ ,\nonumber\\
\omega_{r \theta}&=& - \omega_{\theta r} = \frac{1}{2} \left[
\frac{\partial_{\theta}v_r -v_\theta}{r} - \partial_r v_\theta
\right] \ .\end{aligned}$$ The calculation of the tensor ${{\bm{v}}}\! \cdot\! {{\bm{\nabla}}} {{\bm{\epsilon}}}^{el}$ is presented in Appendix \[polar\]; the linear elastic strain components of Eqs. (\[linear\]) are given by $$\begin{aligned}
\epsilon_{rr}^{el} &=& - \frac{p}{2K} -
\frac{s}{2\mu}\ , \nonumber\\
\epsilon_{\theta \theta}^{el} &=& - \frac{p}{2K}
+ \frac{s}{2\mu}\ , \nonumber\\
\epsilon_{r \theta}^{el} &=& \epsilon_{\theta r}^{el}=\frac{\tau}{2 \mu} \ .
\label{eq:stress-strain}\end{aligned}$$
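For reference, the kinematic relations above translate directly into code; the following sketch (an added illustration, with the grid handling an assumption rather than the scheme actually used in our computations) evaluates the components of ${{\bm{D}}}^{\rm tot}$ of Eqs. (\[eq:totalrate\]) and of the spin tensor ${{\bm{\omega}}}$ from a velocity field sampled on a polar grid:

```python
# Total rate of deformation and spin in polar coordinates (illustrative helper).
import numpy as np

def rate_and_spin(vr, vt, r, theta):
    """vr, vt: arrays of shape (len(r), len(theta)); returns D_rr, D_tt, D_rt, omega_rt."""
    dvr_dr = np.gradient(vr, r, axis=0)
    dvt_dr = np.gradient(vt, r, axis=0)
    dvr_dth = np.gradient(vr, theta, axis=1)
    dvt_dth = np.gradient(vt, theta, axis=1)
    R = r[:, None]                                   # broadcast radial coordinate
    D_rr = dvr_dr                                    # Eq. (eq:totalrate)
    D_tt = (dvt_dth + vr)/R
    D_rt = 0.5*(dvt_dr + (dvr_dth - vt)/R)
    w_rt = 0.5*((dvr_dth - vt)/R - dvt_dr)           # spin component, Eq. (omega)
    return D_rr, D_tt, D_rt, w_rt
```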
Since most of the materials of interest have a large bulk modulus $K$, i.e. they are almost incompressible, we assume that the density is constant in space and time $$\rho({{\bm{r}}},t) \simeq \rho \label{density} \ .$$ Therefore, Eq. (\[eqmot2\]) is omitted. Finally, the existence of a free boundary is introduced through the boundary condition $$\sigma_{ij}n_j=0 \ , \label{stressBC}$$ where ${{\bm{n}}}$ is the unit normal vector at the free boundary.
Viscoplastic constitutive equations:\
The athermal STZ theory {#STZ}
-------------------------------------
Up to now we have considered mainly symmetries and conservation laws. A general theoretical framework for the elasto-viscoplastic deformation dynamics of amorphous solids should be supplemented with constitutive equations relating the plastic rate of deformation tensor ${{\bm{D}}}^{pl}$ to the stress and possibly to other internal state fields. We use the constitutive equations of the recently proposed athermal Shear Transformation Zones (STZ) theory [@07BLanP]. This theory is based on identifying the internal state fields that control plastic deformation. The basic observation is that stressing a disordered solid results in localized reorganizations of groups of particles. These reorganizations occur upon surpassing a local shear threshold, and when they involve a finite irreversible shear in a given direction, we refer to them as an “STZ transition". Once transformed, due to a local redistribution of stresses, the same local region resists further deformation in that direction, but is particularly susceptible to a shearing transformation if the local applied stress reverses its direction. Thus an STZ is conceived of as a deformation unit that can undergo configurational rearrangements in response to driving forces. Furthermore, the stress redistribution that accompanies an STZ transition can induce the creation and annihilation of other local particle arrangements that can undergo further localized transitions; these arrangements are formed or annihilated at a rate proportional to the local energy dissipation (recall that thermal fluctuations are assumed to be absent or negligible). In this sense the interesting localized events need not depend on “pre-existing" defects in the material, but can appear and disappear dynamically in a manner that we describe mathematically next.
This picture is cast into a mathematical form in terms of a scalar field $\Lambda$ that represents the normalized density of regions that can undergo STZ transitions, a tensor ${{\bm{m}}}$ that represents the difference between the density of regions that can undergo a transition under a given stress and the reversed one, and an effective disorder temperature $\chi$ that characterizes the state of configurational disorder of the solid [@04Lan]. The present state of the theory relates these internal state fields, along with the deviatoric stress tensor ${{{\bm{s}}}}$, to the plastic rate of deformation tensor ${{\bm{D}}}^{pl}$ according to $$\begin{aligned}
\label{eq:Dpl}
\tau_0 D^{pl}_{ij} \!=\! \epsilon_0 \Lambda {{\mathcal{C}}}(\bar{s})\left(\frac{s_{ij}}{\bar{s}}-m_{ij}\right),\quad\bar{s} \equiv \sqrt{\frac{s_{ij}s_{ij}}{2}}\ .\end{aligned}$$ This equation represents the dependence of the plastic rate of deformation on the current stress $s_{ij}$ and the recent history encoded by the internal state tensorial field ${{\bm{m}}}$. This field acts as a back-stress, effectively reducing the local driving force for STZ transitions, up to the possible state of jamming, reached when the term in parentheses vanishes. The term in parentheses carries the information about the orientation of the plastic deformation. The function ${{\mathcal{C}}}(\bar{s})$ determines the magnitude of the effect, and is discussed further below. The field $\Lambda$ appears multiplicatively since the rate of plastic deformation must be proportional to the density of STZs. The second equation describes the dynamics of the internal back-stress field $$\begin{aligned}
\label{eq:m}&&\tau_0\frac{{\cal D} m_{ij} }{{\cal D} t} =
2\frac{\tau_0 D^{pl}_{ij}}{\epsilon_0 \Lambda
}- \Gamma(s_{ij}, m_{ij})m_{ij}\frac{e^{-1/\chi}}{\Lambda} \ ,\nonumber\\
&&\hbox{with}\quad\Gamma(s_{ij}, m_{ij}) = \frac{\tau_0
s_{ij}D^{pl}_{ij}}{\epsilon_0 \Lambda} \ .\label{Gamma}\end{aligned}$$ This equation captures the dynamical exchange of stability when the material yields to the applied stress. The equation has a jammed fixed point when the plastic deformation vanishes, in agreement with the STZs all being aligned in one orientation, without the production of a sufficient number of new ones in the opposite orientation. The jammed state is realized when the applied stress is below the yield stress. When the stress exceeds the threshold value, the stable fixed point of this equation corresponds to a solution with a non-vanishing plastic rate of deformation. This state corresponds to a situation where enough STZs are being created per unit time to allow a persistent plastic flow. The quantity $\Gamma$ represents the rate of STZ production in response to the flow ${{\bm{D}}}^{pl}$. The next equation, for the STZ density $\Lambda$, is an elementary fixed-point equation reading $$\label{eq:Lambda} \tau_0 \frac{{\cal D} \Lambda }{{\cal D} t} =
\Gamma(s_{ij}, m_{ij})\left(e^{-1/\chi}- \Lambda\right) \ .$$ The unique fixed point of this equation is the equilibrium solution $\Lambda=e^{-1/\chi}$ where $\chi$ is a normalized temperature-like field which is not necessarily the bath temperature when the system is out of thermal and/or mechanical equilibrium. The last equation is for this variable, reading $$\begin{aligned}
\label{eq:chi} \tau_0 c_0 \frac{{\cal D} \chi }{{\cal D} t}
&=& \epsilon_0 \Lambda \Gamma(s_{ij},
m_{ij})\left[\chi_\infty\left(\tau_0\bar{D}^{pl}\right)-\chi\right],\nonumber\\
\hbox{with}\quad \bar{D}^{pl}&\equiv & \sqrt{\frac{D^{pl}_{ij}D^{pl}_{ij}}{2}} \ .\end{aligned}$$ This is a heat-like equation for the configurational degrees of freedom; it is discussed in detail below. Here and elsewhere we assume that quantities of stress dimension are always normalized by the yield stress $s_y$; this is justified as the STZ equations exhibit an exchange of dynamic stability from jamming to flow at $s\!=\!1$, i.e. at a stress that equals $s_y$ [@07BLanP]. The set of Eqs. (\[eq:Dpl\])-(\[eq:chi\]) is a tensorial generalization of the effectively scalar equations derived in [@07BLanP]; such a generalization can be obtained by following the procedure described in Ref. [@05Pech]. In these equations, $\tau_0$ is the elementary time scale of plasticity, $\epsilon_0$ is a dimensionless constant and $c_0$ is a specific heat in units of $k_B$ per particle.
A weak point of the theory is the lack of a first-principles derivation that determines the function ${\cal C}(s)$ in Eq. (\[eq:Dpl\]), which lumps together much of the microscopic physics that controls the stress-dependent rate of STZ transitions. Our theory constrains it to be a symmetric function of $s$ that vanishes with vanishing derivatives at $s\!=\!0$, due to the athermal condition that states that no transitions can occur in a direction opposite to the direction of $s$ [@07BLanP]. This constraint is not sufficient, however, to determine ${\cal C}(s)$. To appreciate the uncertainties, recall that STZ transitions are relaxation events, where energy and stress are expected to re-distribute. Even without external mechanical forcing, aging in glassy systems involves relaxation events that are poorly understood [@01LN]. The situation is even more uncertain when we deal with dynamics far from mechanical equilibrium. The best one can do at present is to choose the function ${{{\mathcal{C}}}}(s)$ by examining its influence on the resulting macroscopic behaviors [@07BL]. Thus in this paper we will examine the sensitivity of the stability of the expanding cavity to two different choices of ${\cal C}(s)$. For now we use the one-parameter family of functions ${\cal C}(\bar{s})={{\mathcal{F}}}(\bar{s}; \zeta)$ proposed in [@07BLanP] $$\label{C_s} {{\mathcal{F}}}(\bar{s};\zeta)\equiv
\,\frac{\zeta^{\zeta+1}}{\zeta!}\int_0^{|\bar
s|}(|\bar s|-s_{\alpha})\,s_{\alpha}^{\zeta}\,\exp (-\zeta \,
s_{\alpha})\,d s_{\alpha}\ .$$ The integral is over a distribution of transition thresholds whose width is controlled by a parameter $\zeta$ (see [@07BLanP] for details). For finite values of $\zeta$ there can be nonzero sub-yield plastic deformation for $|s|\!<\!1$. This behavior is well documented in the literature, cf. [@Lubliner], in the context of experimental stress-strain relations and plastic deformations. We note that for $s$ very small or very large, $$\begin{aligned}
\label{limits}
{\cal C}(s) &\sim& s^{\zeta+2} \quad\hbox{for}\quad s \to 0^+ \ ,\nonumber\\
{\cal C}(s) &\simeq& s-1 \quad\hbox{for}\quad s \gg 1
\ .\end{aligned}$$ In Sec. \[rate\_function\] we propose a different one-parameter family of functions ${{\mathcal{G}}}(\bar{s}; \lambda)$ and study in detail the implications of this different choice on the stability of the expanding cavity.
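For concreteness, the small- and large-$s$ behaviors quoted in Eqs. (\[limits\]) are easy to check numerically; the following sketch (an added illustration) evaluates ${{\mathcal{F}}}(\bar{s};\zeta)$ of Eq. (\[C\_s\]) by quadrature for $\zeta\!=\!7$:

```python
# Evaluate F(s; zeta) of Eq. (C_s) by quadrature and check the limits of Eqs. (limits).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def F(sbar, zeta=7.0):
    pref = zeta**(zeta + 1)/gamma(zeta + 1)          # zeta^(zeta+1)/zeta!
    val, _ = quad(lambda x: (sbar - x)*x**zeta*np.exp(-zeta*x), 0.0, sbar)
    return pref*val

for s in (0.01, 0.02, 0.05):     # small-s regime: F ~ s^(zeta+2); the ratio approaches a constant as s -> 0
    print(s, F(s), F(s)/s**9)
for s in (5.0, 10.0, 20.0):      # large-s regime: F approaches s - 1, up to a small zeta-dependent offset
    print(s, F(s), s - 1.0)
```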
Eq. (\[eq:chi\]) deserves special attention. It is a heat-like equation for the effective disorder temperature $\chi$ with a fixed point $\chi_\infty$ that is attained under steady-state deformation. This reflects the observations of Ref. [@Ono], where the effective temperature $\chi$ was shown to attain a unique value in the limit $t_0\bar{D}^{pl}\!\to\! 0$, where $t_0$ is the particles’ vibrational time scale. Indeed, in most applications, realistically [*imposed*]{} inverse strain rates are much larger than the elementary time scale $t_0$, i.e. $t_0\bar{D}^{pl}\!\ll\! 1$. If we identify our $\tau_0$ with the vibrational time scale $t_0$ (see for example [@07BLanPb]), we conclude that $\chi_\infty$ can be taken as a constant, independent of the plastic rate of deformation. This assumption was adopted in all previous versions of STZ theory. Note also that a low plastic rate of deformation is associated with $s\!\to\! 1^+$, i.e. a deviatoric stress that approaches the yield stress from above. However, the situation might be very different in free boundary evolution problems, where high stresses concentrate near the boundary, reaching levels of a few times the yield stress. With $\chi$ estimated in the typical range $0.1-0.15$ [@07BLanPb; @07SKLF], $e^{-1/\chi}$ is in the range $10^{-4}\!-\!10^{-3}$. Therefore, estimating the other factors in Eq. (\[eq:Dpl\]), for the high stresses near the free boundary, in the range $1\!-\!10$, we conclude that $\tau_0\bar{D}^{pl}$ can reach values in the range $10^{-4}\!-\!10^{-2}$. Very recent simulations [@07HL] demonstrated convincingly that in this range of normalized plastic rates of deformation, $\chi_\infty$ shows a considerable dependence on this rate, see Fig. \[HL\]. Since $\chi$ affects plastic deformation through an exponential Boltzmann-like factor, even small changes of $\chi_\infty$ in Eq. (\[eq:chi\]) can generate significant effects [@comment0]. This issue is of particular importance for the question of stability (and localization) under study, since the strain rate sensitivity of $\chi_\infty$ might incorporate an instability mechanism; fluctuations in the plastic rate of deformation, caused for example by fluctuations in $\chi$, can induce, through $\chi_\infty$, a further localized increase in plastic deformation and so on. This intuitive idea will be studied in the analysis to follow.
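The order-of-magnitude estimates above amount to the following elementary arithmetic (an added illustration):

```python
# Reproduce the order-of-magnitude estimates quoted in the text.
import numpy as np

for chi in (0.10, 0.15):
    boltz = np.exp(-1.0/chi)          # the Boltzmann-like factor e^(-1/chi)
    # the remaining factors in Eq. (eq:Dpl) are estimated in the range 1-10 near the boundary
    print(f"chi = {chi}: exp(-1/chi) = {boltz:.1e}, "
          f"tau_0*Dpl in [{boltz*1:.0e}, {boltz*10:.0e}]")
```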
The set of Eqs. (\[eq:Dpl\])-(\[eq:chi\]) [@comment] (and slight variants) was shown to capture elasto-viscoplastic behavior in a variety of examples. These include viscoelastic response at small stresses and finite plasticity at intermediate stresses [@00FL], a transition to flow at the yield stress (as discussed above) [@07BLanP], the deformation dynamics of simulated amorphous silicon [@07BLanPb], the necking instability [@03ELP], the deformation dynamics near stress concentrations [@07BLLP], the cavitation instability [@07BLP] and strain localization [@07MLC]. In this work we focus on the implications of these constitutive equations for the stability of propagating free boundaries, in relation to the failure modes of amorphous solids.
The unperturbed problem {#zeroth}
-----------------------
In this subsection we adapt the general theory to the circular symmetry of the unperturbed expanding cavity problem. We consider an infinite medium with a circular cavity of radius $R^{(0)}(t)$, loaded by a radially symmetric stress $\sigma^{\infty}$ at infinity. The superscript $(0)$ on all the quantities indicates that they correspond to the perfectly symmetric case that is going to be perturbed later on. For the perfect circular symmetry the velocity field ${{\bm{v}}}^{(0)}({{\bm{r}}},t)$ is purely radial and independent of the azimuthal angle $\theta$, i.e. $$v_r^{(0)}({{\bm{r}}},t)=v_r^{(0)}(r,t),\quad v_\theta^{(0)}({{\bm{r}}},t) = 0 \
.$$ This symmetry also implies that $$\tau^{(0)}({{\bm{r}}},t) = 0, \quad {D^{pl}_{r \theta}}^{(0)}({{\bm{r}}},
t)=0,\quad m^{(0)}_{r \theta}({{\bm{r}}},t)=0$$ and all the diagonal components are independent of $\theta$. Eqs. (\[el\_pl\]), after a simple manipulation, can be rewritten as $$\begin{aligned}
\frac{v_r^{(0)}}{r}+\frac{\partial v_r^{(0)}}{\partial r}\! &=&\!
-\frac{1}{K}\left(\frac{\partial p^{(0)}}{\partial t}+v_r^{(0)}
\frac{\partial p^{(0)}}{\partial r}\right)\ , \label{kinematic1a} \\
\frac{v_r^{(0)}}{r}-\frac{\partial v_r^{(0)}}{\partial r}\!&=&\!
\frac{1}{\mu}\left(\frac{\partial s^{(0)}}{\partial t}+v_r^{(0)}
\frac{\partial s^{(0)}}{\partial r}\right)\!+\!2{D^{pl}}^{(0)} \ .
\nonumber\\ \label{kinematic2a}\end{aligned}$$ where we have defined $${D^{pl}_{\theta \theta}}^{(0)}=-{D^{pl}_{rr}}^{(0)} \equiv
{D^{pl}}^{(0)} \ . \label{D}$$ The equations of motion (\[EOM\]) reduce to $$\rho \left(\frac{\partial v_r^{(0)}}{\partial t} + v_r^{(0)}
\frac{\partial v_r^{(0)}}{\partial r} \right)= -\frac{1}{r^2}
\frac{\partial }{\partial r} \left (r^2 s^{(0)} \right) -
\frac{\partial p^{(0)}}{\partial r} \ .\label{EOM0}$$ The boundary conditions are given by $$\begin{aligned}
\sigma^{(0)}_{rr}(R^{(0)},t)=-p^{(0)}(R^{(0)},t)-s^{(0)}(R^{(0)},t)=0 \ ,\nonumber\\
\sigma^{(0)}_{rr}(\infty,t)=-p^{(0)}(\infty,t)-s^{(0)}(\infty,t)=\sigma^{\infty}
\label{BC0}.\end{aligned}$$ The initial conditions are chosen to agree with the solution of the static linear-elastic problem, i.e. $$\begin{aligned}
p^{(0)}(r,t=0)&=&-\sigma^{\infty}\ , \nonumber\\
s^{(0)}(r,t=0)&=& \sigma^{\infty}\frac{\left(R^{(0)}(t=0)\right)^2}{r^2} \ , \nonumber\\
v_r^{(0)}(r,t=0)&=& 0 \label{initial0} \ .\end{aligned}$$ This choice reflects the separation of time scales between elastic and plastic responses. This separation of time scales can be written explicitly in terms of the typical elastic wave speed, the radius of the cavity and the time scale of plasticity: $$R^{(0)}(t=0)\sqrt{\frac{\rho}{\mu}} \ll \tau_0e^{1/\chi} \ .$$ Finally, the rate of the cavity growth is simply determined by $$\dot{R}^{(0)}(t)=v_r^{(0)}(R^{(0)},t) \ . \label{edge_velocity0}$$
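As a concrete check of the time-scale inequality above (an added illustration), one can insert the parameter values quoted later in Sec. \[results\] ($\rho\!=\!1$, $\mu/s_y\!=\!50$, $\chi\!=\!0.11$) together with $R^{(0)}(t\!=\!0)\!=\!1$ and $\tau_0\!=\!1$ in the paper’s dimensionless units:

```python
# Elastic vs. plastic time scales, with the parameter values of Sec. [results].
import numpy as np

rho, mu, chi, R0, tau0 = 1.0, 50.0, 0.11, 1.0, 1.0
elastic_time = R0*np.sqrt(rho/mu)      # time for an elastic wave to cross the cavity
plastic_time = tau0*np.exp(1.0/chi)    # elementary plastic time scale
print(elastic_time, plastic_time)      # ~0.14 vs ~8.9e3, so the inequality holds easily
```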
For the circular symmetry, the STZ equations (\[eq:Dpl\])-(\[Gamma\]) reduce to $$\begin{aligned}
\label{eq:Dpl0}
&\tau_0&\!\!\! {D^{pl}}^{(0)} = \epsilon_0 \Lambda^{(0)} {{\mathcal{C}}}(s^{(0)})\left(\frac{s^{(0)}}{|s^{(0)}|}-m^{(0)}\right) \ , \\
\label{eq:m0} &\tau_0&\!\!\! \left(\frac{\partial m^{(0)}}{\partial
t}+v_r^{(0)}\frac{\partial m^{(0)}}{\partial r}\right)=\nonumber\\
&2&\!\!\!\frac{\tau_0 {D^{pl}}^{(0)}}{\epsilon_0 \Lambda^{(0)}
}- \Gamma^{(0)}(s^{(0)}, m^{(0)})m^{(0)}\frac{e^{-1/\chi^{(0)}}}{\Lambda^{(0)}} \ ,\\
\label{eq:Lambda0} &\tau_0&\!\!\! \left(\frac{\partial
\Lambda^{(0)}}{\partial t}+v_r^{(0)}\frac{\partial
\Lambda^{(0)}}{\partial r}\right) =\nonumber\\
&\Gamma&\!\!\!\!^{(0)}(s^{(0)}, m^{(0)})\left(e^{-1/\chi^{(0)}}- \Lambda^{(0)}\right) \ ,\\
\label{eq:chi0} &\tau_0&\!\!\! c_0 \left(\frac{\partial
\chi^{(0)}}{\partial t}+v_r^{(0)}\frac{\partial \chi^{(0)}}{\partial
r}\right) =\\
&\epsilon_0&\!\!\! \Lambda^{(0)} \Gamma^{(0)}(s^{(0)},
m^{(0)})\left[\chi_\infty\left(\tau_0{D^{pl}}^{(0)}\right)\! -\!
\chi^{(0)}\right] \ . \nonumber\end{aligned}$$
Note that the $\chi$ and $D^{pl}$ equations contain a factor of the small STZ density $\epsilon_0 \Lambda^{(0)}$, which implies that they evolve much more slowly than the $m$ and $\Lambda$ equations. Therefore, whenever the advection terms can be neglected this separation of time scales [@07BLLP] allows us to replace the equations for $m^{(0)}$ and $\Lambda^{(0)}$ by their stationary solutions $$\label{m0_fix} m^{(0)}=\cases{ \frac{s^{(0)}}{|s^{(0)}|} &if $|
s^{(0)}|\le 1$\cr \frac{1}{ s^{(0)}} & if $| s^{(0)}| >1$}$$ and $$\Lambda^{(0)} = e^{-1/\chi^{(0)}} \ . \label{Lam0_fix}$$ Note that Eq. (\[eq:m0\]) has two stable fixed-point solutions given by Eq. (\[m0\_fix\]), where we used Eq. (\[Lam0\_fix\]) and omitted the advection term. The transition between these two solutions corresponds to a transition between a jammed and a plastically flowing state for a deviatoric stress below and above the yield stress respectively [@07BLanP]. Eq. (\[eq:Dpl0\]) exhibits the corresponding solutions in terms of the plastic rate of deformation, zero and finite, below and above the yield stress respectively.
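The exchange of stability between the two branches of Eq. (\[m0\_fix\]) can be illustrated by integrating the homogeneous $m$-equation at constant stress; in the following sketch (added for illustration, not part of the paper’s numerics) the advection terms are dropped, $\chi$ is frozen at $0.11$, $\Lambda$ sits at its fixed point of Eq. (\[Lam0\_fix\]), and ${{\mathcal{C}}}(s^{(0)})$, which only rescales time in this reduced setting, is set to unity:

```python
# Homogeneous m-equation at constant stress: jammed vs. flowing fixed points.
# Illustration only: advection dropped, chi frozen, Lambda at its fixed point,
# C(s) set to 1 (it merely rescales time here); time is measured in units of tau_0.
import numpy as np
from scipy.integrate import solve_ivp

chi, eps0, C = 0.11, 1.0, 1.0
Lam = np.exp(-1.0/chi)                                  # Eq. (Lam0_fix)

def rhs(t, m, s):                                       # Eq. (eq:m0) for s > 0
    Dpl = eps0*Lam*C*(1.0 - m)                          # tau_0 * D^pl, Eq. (eq:Dpl0)
    Gamma = 2.0*s*Dpl/(eps0*Lam)                        # Gamma^(0), diagonal case
    return 2.0*Dpl/(eps0*Lam) - Gamma*m*np.exp(-1.0/chi)/Lam

for s in (0.8, 1.5):                                    # below and above the yield stress
    sol = solve_ivp(rhs, (0.0, 200.0), [0.0], args=(s,), rtol=1e-9, atol=1e-12)
    print(f"s = {s}: m -> {sol.y[0, -1]:.3f}, "
          f"Eq. (m0_fix) gives {min(1.0, 1.0/s):.3f}")
```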
The unperturbed problem was studied in detail in Ref. [@07BLP]. It was shown that for stresses $\sigma^\infty$ smaller than a threshold value $\sigma^{th}\!\simeq\!5$ the cavity exhibits transient dynamics in which its radius approaches a finite value in a finite time. When this happens the material is jammed. On the other hand, for $\sigma^\infty\!>\!\sigma^{th}$ the cavity grows without bound, leading to a catastrophic failure of the material, accompanied by large scale plastic deformations. We stress that, to our knowledge, this mode of failure, mediated by a propagating plastic solution, is new and apparently not related to other recently discovered failure fronts [@06GSW]. One major goal of the present study is to analyze the stability of the unbounded growth modes that result from this cavitation. However, we are also interested in the range $\sigma^\infty\!<\!\sigma^{th}$ where the unperturbed theory predicts no catastrophic failure. In this range, a failure can still occur if the cavity, prior to jamming, loses its perfect circular symmetry in favor of relatively slender propagating “fingers”. In that case, stress localization near the tips of the propagating “fingers” can lead to failure via fracture. Such a scenario is typical of brittle fracture where the stress localization due to the geometry of the defect drives crack propagation that might lead to macroscopic failure.
Linear Stability Analysis {#LSA}
=========================
We derive here a set of equations for the linear perturbations of the perfect circular symmetry where both inertia and elastic compressibility effects are taken into account. In Appendix \[QS\] we complement the analysis by considering the quasi-static and incompressible case. This case is mathematically more involved as it contains no explicit time evolution equation for the velocity and the pressure fields. By comparing the results of the two formulations we test for consistency and obtain some degree of confidence in the derivation and the numerical implementation of the equations presented in this section.
Equations of motion and kinematics {#inertial}
----------------------------------
The quantities involved in the problem are the tensors $${{\bm{s}}}=\left(\begin{array}{cc}-s&\tau\\\tau&s\end{array}\right)\
,\quad
{{{\bm{D}}}}^{pl}=\left(\begin{array}{cc}-D^{pl}&D^{pl}_{r\theta}\\D^{pl}_{r\theta}&D^{pl}\end{array}\right)
\ ,$$ as well as the pressure $p({{\bm{r}}},t)$, the velocity ${{\bm{v}}}({{\bm{r}}},t)$ and the location of the free boundary $R(\theta,t)$. We start by expanding all these quantities as follows $$\begin{aligned}
R(\theta, t)&=& R^{(0)}(t) + e^{i n \theta} R^{(1)}(t)\ ,\nonumber\\
s(r,\theta, t)&=&s^{(0)}(r,t) + e^{i n \theta} s^{(1)}(r,t)\ ,\nonumber\\
\tau (r,\theta, t)&=&i e^{i n \theta} \tau^{(1)}(r,t)\ ,\nonumber\\
p(r,\theta, t)&=&p^{(0)}(r,t) + e^{i n \theta} p^{(1)}(r,t)\ ,\nonumber\\
v_\theta(r,\theta, t)&=&i e^{i n \theta} v_\theta^{(1)}(r,t)\ ,\nonumber\\
v_r(r,\theta, t)&=&v_r^{(0)}(r,t) + e^{i n \theta} v_r^{(1)}(r,t)\ ,\nonumber\\
D^{pl}(r,\theta, t)&=& {D^{pl}}^{(0)}(r,t)+
e^{in\theta}{D^{pl}}^{(1)}(r,t)\ ,\nonumber\\
D^{pl}_{r\theta}(r,\theta, t) &=& i
e^{in\theta}{D^{pl}_{r\theta}}^{(1)}(r,t) \ .
\label{basic_perturbations}\end{aligned}$$ Here all the quantities with the superscript $(1)$ are assumed to be much smaller than their $(0)$ counterparts and $n$ is the discrete azimuthal wave-number of the perturbations. The small perturbation hypothesis results in a formal linear decomposition in which each linear mode of wave-number $n$ is decoupled from all the other modes. When nonlinear contributions are non-negligible, all the modes become coupled and the formal linear decomposition is invalid.
We expand then the equations of motion (\[EOM\]) to first order to obtain $$\begin{aligned}
\label{EOM1}
&&\rho \left(\frac{\partial v_r^{(1)}}{\partial t} +v_r^{(0)}
\frac{\partial v_r^{(1)}}{\partial r}+v_r^{(1)} \frac{\partial
v_r^{(0)}}{\partial r} \right)= \nonumber\\
&&-\frac{n\tau^{(1)}}{r} -\frac{1}{r^2} \frac{\partial }{\partial r}
\left ( r^2 s^{(1)} \right) - \frac{\partial p^{(1)}}{\partial r} \
, \\\label{EOM2} &&\rho \left(\frac{\partial
v_\theta^{(1)}}{\partial t} +v_r^{(0)}
\frac{\partial v_\theta^{(1)}}{\partial r}+
\frac{v_r^{(0)} v_\theta^{(1)}}{r}\right)=\nonumber\\
&&\frac{\partial \tau^{(1)}}{\partial r} + \frac{n s^{(1)}}{r}
- \frac{n p^{(1)}}{r}
+\frac{2 \tau^{(1)}}{r} \ .\end{aligned}$$ We proceed by expanding Eqs. (\[el\_pl\]) to first order, which after a simple manipulation yields $$\begin{aligned}
\label{first_eq}&&\frac{\partial v^{(1)}_r}{\partial r}+\frac{-n v^{(1)}_\theta +
v^{(1)}_r}{r}=\\
&& -\frac{1}{K}\left(\frac{\partial p^{(1)}}{\partial t}+v_r^{(0)}
\frac{\partial p^{(1)}}{\partial r}+v_r^{(1)}
\frac{\partial p^{(0)}}{\partial r}\right)\ ,\nonumber\\
&&\frac{-n v^{(1)}_\theta + v^{(1)}_r}{r}-\frac{\partial
v^{(1)}_r}{\partial r} =\\
&&\frac{1}{ \mu} \left[ \frac{\partial s^{(1)}}{\partial t} +
v_r^{(0)} \frac{\partial s^{(1)}}{\partial r} + v^{(1)}_r
\frac{\partial s^{(0)}}{\partial r}\right] + 2{D^{pl}}^{(1)} \ ,
\nonumber\\
&&\frac{1}{2} \left[\frac {\partial v^{(1)}_\theta}{\partial r} +
\frac{n v^{(1)}_r - v^{(1)}_\theta}{r} \right] = \\
&&\frac{1}{2 \mu} \left[ \frac{\partial \tau^{(1)} }{\partial t} +
v^{(0)}_r \frac{\partial \tau^{(1)}}{\partial r} -\frac{2 s^{(0)} v_\theta^{(1)}}{r}\right]
+ {D^{pl}_{r \theta}}^{(1)}\nonumber \ . \label{last_eq}\end{aligned}$$
At this point we derive an evolution equation for the dimensionless amplitude of the shape perturbation $R^{(1)}/R^{(0)}$. To that aim we note that $$\dot{R}=v_r(R)+{\cal O}\left[\left(\frac{R^{(1)}}{R^{(0)}} \right)^2
\right] \ .$$ Expanding this relation using Eqs. (\[basic\_perturbations\]), we obtain to zeroth order Eq. (\[edge\_velocity0\]) and to first order $$\dot{R}^{(1)}(t)=v_r^{(1)}(R^{(0)})+R^{(1)} \frac{\partial
v_r^{(0)}(R^{(0)})}{\partial r} \ . \label{edge_velocity1}$$ Therefore, we obtain $$\begin{aligned}
\label{smallness}
&&\frac{d}{dt}\left(\frac{R^{(1)}}{R^{(0)}}
\right)=\\
&&\frac{R^{(1)}}{R^{(0)}}\!\left[\!\frac{v_r^{(1)}(R^{(0)})}{R^{(1)}}\!+\!\frac{\partial
v_r^{(0)}(R^{(0)})}{\partial r}\!-\!
\frac{v_r^{(0)}(R^{(0)})}{R^{(0)}}\!\right] \ .\nonumber\end{aligned}$$ This is an important equation since a linear instability manifests itself as a significant increase in $R^{(1)}/R^{(0)}$ such that nonlinear terms become non-negligible. Note that the two last terms in the square brackets are always negative, therefore an instability can occur only if the first term in the square brackets is positive with absolute value larger than the sum of the two negative terms. Moreover, recall that the problem is non-stationary, implying that all the zeroth order quantities depend on time.
In order to derive the boundary conditions for the components of the stress tensor field we expand to linear order the normal unit vector ${{\bm{n}}}$ (not to be confused with the discrete wave-number $n$) and tangential unit vector ${{\bm{t}}}$ at the free boundary, obtaining $$\label{unit_n} {{\bm{n}}} = \left( 1, -i \frac{R^{(1)}}{R^{(0)}}ne^{in
\theta} \right)\ ,\quad \label{unit_t} {{\bm{t}}} = \left(i
\frac{R^{(1)}}{R^{(0)}}ne^{in \theta} , 1 \right) \ .$$ Eqs. (\[stressBC\]), expanded to first order, translate to $$\begin{aligned}
\label{spb}
s^{(1)}(R^{(0)})\!\!&+&\!\!p^{(1)}(R^{(0)})=\nonumber\\
\!\!&-&\!\!R^{(1)}\left[\frac{\partial
s^{(0)}(R^{(0)})}{\partial r}+ \frac{\partial p^{(0)}(R^{(0)})}{\partial r}\right]\ , \\
\label{taub} \tau^1(R^{(0)})\!\!&=&\!\! n
\left[s^{(0)}(R^{(0)})-p^{(0)}(R^{(0)})\right]
\frac{R^{(1)}}{R^{(0)}}.\end{aligned}$$ In addition, all the first order fields decay as $r\!\to\!\infty$. The initial conditions are determined by the perturbation scheme that is being studied.
To avoid dealing with an infinite and time-dependent domain we applied the following time-dependent coordinate transformation $$\xi=R(t)/r \ . \label{trans}$$ This transformation allows us to integrate the equations in the time-independent finite domain $\xi\! \in\! [0,1]$, at the price of introducing new terms in the equations. Controlling the equations at small distances required the introduction of an artificial viscosity on the right-hand-side (RHS) of Eq. (\[eqmot1\]). The term introduced is $\rho\eta\! \nabla^2\! {{\bm{v}}}$, with $\eta$ chosen to be of the order of the square of the space discretization divided by the time discretization. This introduces zeroth order contributions on the RHS of Eq. (\[EOM0\]) and first order contributions on the RHS of Eqs. (\[EOM1\])-(\[EOM2\]).
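In practice the transformation of Eq. (\[trans\]) amounts to the chain rules $\partial_r\!=\!-(\xi^2/R)\,\partial_\xi$ and $\partial_t|_r\!=\!\partial_t|_\xi+(\dot{R}\xi/R)\,\partial_\xi$; the following sketch (an added illustration, with the grid size, stencil and time step as assumptions rather than the actual discretization we used) shows how these operators and the artificial viscosity scale can be set up:

```python
# Sketch of the xi = R(t)/r mapping of Eq. (trans); illustrative values only.
import numpy as np

N = 400
xi = np.linspace(0.0, 1.0, N)          # xi = 0 is r -> infinity, xi = 1 is the cavity wall
dxi = xi[1] - xi[0]
dt = 1.0e-3                            # time step, in units of tau_0 (assumed)

def d_dxi(f):
    return np.gradient(f, dxi)

def d_dr(f, R):
    """Radial derivative on the xi grid: d/dr = -(xi**2/R) d/dxi."""
    return -(xi**2/R)*d_dxi(f)

def xi_advection(f, R, Rdot):
    """Extra term relating time derivatives at fixed r and fixed xi:
    d/dt|_r = d/dt|_xi + (Rdot*xi/R) d/dxi."""
    return (Rdot*xi/R)*d_dxi(f)

# artificial viscosity added as rho*eta*laplacian(v) in the momentum equation,
# with eta of the order of (grid spacing)^2 / (time step)
eta = dxi**2/dt
```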
Linear perturbation analysis of the STZ equations {#perturbSTZ}
-------------------------------------------------
The only missing piece in our formulation is the perturbation of the tensorial STZ equations. In addition to the fields considered up to now, the analysis of the STZ equations includes also the internal state fields $${\bf m}=\left(\begin{array}{cc}-m&m_{r\theta}\\m_{r\theta}&m
\end{array}\right),\quad\Lambda\quad\hbox{and}\quad\chi \ .$$ Therefore, in addition to Eqs. (\[basic\_perturbations\]) we have $$\begin{aligned}
m(r,\theta, t)&=& m^{(0)}(r,t)+
e^{in\theta}m^{(1)}(r,t)\ ,\nonumber\\
m_{r\theta}(r,\theta, t)&=&i e^{i n \theta} m_{r\theta}^{(1)}(r,t)\ ,\nonumber\\
\Lambda(r,\theta, t)&=& \Lambda^{(0)}(r,t)+e^{in\theta}\Lambda^{(1)}(r,t)\ ,\nonumber\\
\chi(r,\theta,t) &=& \chi^{(0)}(r,t) + e^{in\theta}\chi^{(1)}(r,t)
\ . \label{STZ_perturbations}\end{aligned}$$
We then expand systematically Eqs. (\[eq:Dpl\])-(\[C\_s\]). First, we have $$\begin{aligned}
\label{sbar_exp}
\bar{s}&=&\sqrt{\frac{2(s^{(0)}+e^{in\theta}s^{(1)})^2 +
2(\tau^{(1)} e^{in\theta})^2}{2}} \\
&\simeq& |s^{(0)} +
e^{in\theta}s^{(1)}|=|s^{(0)}|+e^{in\theta}s^{(1)}{\rm
sgn}\left(s^{(0)}\right) \ .\nonumber\end{aligned}$$ Accordingly we expand ${{\mathcal{C}}}(\bar s)$ (assuming $s^{(0)}>0$) in the form $${{\mathcal{C}}}(\bar s) = {{\mathcal{C}}}(s^{(0)} + e^{in\theta}s^{(1)}) \simeq {{\mathcal{C}}}(s^{(0)}) + \frac{d{{\mathcal{C}}}}{ds}\left(s^{(0)}\right)e^{in\theta}s^{(1)},$$ where $$\frac{d{{\mathcal{C}}}}{ds}\left(s^{(0)}\right) =
\frac{\zeta^{\zeta+1}}{\zeta!}\int_0^{|s^{(0)}|}s_\alpha^\zeta
\exp(-\zeta s_\alpha)\,ds_\alpha \ .$$ Substituting the last three equations into (\[eq:Dpl\]) and expanding to first order, we obtain
$$\begin{aligned}
\label{Dpl1} \tau_0{D^{pl}}^{(1)} &=&
\epsilon_0\Lambda^{(0)}\left[\left(\frac{\Lambda^{(1)}}{\Lambda^{(0)}}{{\mathcal{C}}}\left(s^{(0)}\right) + s^{(1)}\frac{d{{\mathcal{C}}}\left(s^{(0)}\right)}{ds}\right)\left(
{\rm
sgn}\left(s^{(0)}\right)-m^{(0)}\right)
- {{\mathcal{C}}}\left(s^{(0)}\right)m^{(1)}\right] \ , \\
\label{Dploff1} \tau_0 {D^{pl}_{r\theta}}^{(1)} &=& \epsilon_0
\Lambda^{(0)}{{\mathcal{C}}}\left(s^{(0)}\right)\left(\frac{\tau^{(1)}}{|s^{(0)}|}-m_{r\theta}^{(1)}\right)
\ .\end{aligned}$$
We then expand $\Gamma$ in the form $$\begin{aligned}
\Gamma \!&=&\! \Gamma^{(0)} +
e^{in\theta}\Gamma^{(1)}\quad\hbox{with}\quad \Gamma^{(0)} =
\frac{2 \tau_0
s^{(0)}{D^{pl}}^{(0)}}{\epsilon_0 \Lambda^{(0)}} \ ,\nonumber\\
\Gamma^{(1)}\! &=&\! \frac{2 \tau_0 }{\epsilon_0 \Lambda^{(0)}}
\left[s^{(0)}{D^{pl}}^{(1)}\! +\! s^{(1)}{D^{pl}}^{(0)}
\!-\!\frac{s^{(0)}{D^{pl}}^{(0)}\Lambda^{(1)}}{\Lambda^{(0)}}\right]
\
.\nonumber\\\end{aligned}$$ Eq. (\[eq:m\]) is now used to obtain $$\begin{aligned}
\label{m1} &&\tau_0\left(\frac{\partial m^{(1)}}{\partial t}
+v_r^{(0)} \frac{\partial m^{(1)}}{\partial r}+v_r^{(1)}
\frac{\partial
m^{(0)}}{\partial r} \right) = \nonumber\\
&&\frac{2\tau_0}{\epsilon_0 \Lambda^{(0)}}\left( {D^{pl}}^{(1)} -
{D^{pl}}^{(0)}\frac{\Lambda^{(1)}}{\Lambda^{(0)}}\right)-\frac{e^{-1/\chi^{(0)}}}{\Lambda^{(0)}}\times\\
&&\left[\Gamma^{(0)}
m^{(1)} +\Gamma^{(1)}
m^{(0)}+ \Gamma^{(0)}
m^{(0)}\left(\frac{\chi^{(1)}}{\left[\chi^{(0)}\right]^2}-
\frac{\Lambda^{(1)}}{\Lambda^{(0)}}\right) \right] \ ,\nonumber\end{aligned}$$ and $$\begin{aligned}
\label{moff1} &&\tau_0\left(\frac{\partial
m_{r\theta}^{(1)}}{\partial t} +v_r^{(0)} \frac{\partial
m_{r\theta}^{(1)}}{\partial r}-\frac{m^{(0)} v_\theta^{(1)}}{r} \right) =\nonumber\\
&& \frac{2\tau_0
{D^{pl}_{r\theta}}^{(1)}}{\epsilon_0 \Lambda^{(0)}} -\Gamma^{(0)}
m_{r\theta}^{(1)}\frac{e^{-1/\chi^{(0)}}}{\Lambda^{(0)}} \ .\end{aligned}$$ Using Eq. (\[eq:Lambda\]) we obtain $$\begin{aligned}
\label{Lambda1} &\tau_0&\!\!\!\!\! \left(\frac{\partial
\Lambda^{(1)}}{\partial t} +v_r^{(0)} \frac{\partial
\Lambda^{(1)}}{\partial r}+v_r^{(1)} \frac{\partial
\Lambda^{(0)}}{\partial r} \right)= \\
&\Gamma^{(0)}&\!\!\!\left(e^{-1/\chi^{(0)}}\frac{\chi^{(1)}}{\left[\chi^{(0)}\right]^2}\!-\!\Lambda^{(1)}
\right)\!+\!\Gamma^{(1)}\left(e^{-1/\chi^{(0)}}\!-\!\Lambda^{(0)}
\right) \ . \nonumber\end{aligned}$$ Expanding $\bar{D}^{pl}$, similarly to Eq. (\[sbar\_exp\]), we obtain $$\bar{D}^{pl} \simeq
|{D^{pl}}^{(0)}|+e^{in\theta}{D^{pl}}^{(1)}{\rm
sgn}\left({D^{pl}}^{(0)}\right) \ .$$ Accordingly we expand $\chi_\infty\left(\tau_0 \bar{D}^{pl}
\right)$ (with ${D^{pl}}^{(0)}\!>\!0$) in the form $$\begin{aligned}
&&\chi_\infty\left(\tau_0 \bar{D}^{pl} \right) =
\chi_\infty\left(\tau_0 {D^{pl}}^{(0)} + e^{in\theta}\tau_0
{D^{pl}}^{(1)}\right) \\
&&= \chi_\infty\left(\tau_0 {D^{pl}}^{(0)}\right) +
\frac{d\chi_\infty}{d\bar{D}^{pl}}\left(\tau_0
{D^{pl}}^{(0)}\right)e^{in\theta} {D^{pl}}^{(1)} \
.\nonumber\end{aligned}$$ Then, using Eq. (\[eq:chi\]) we obtain
$$\begin{aligned}
\label{chi1} &&\tau_0 c_0 \left(\frac{\partial
\chi^{(1)}}{\partial t} +v_r^{(0)} \frac{\partial
\chi^{(1)}}{\partial r}+v_r^{(1)} \frac{\partial
\chi^{(0)}}{\partial r} \right)= \epsilon_0\left(\Lambda^{(0)}\Gamma^{(1)}+\Gamma^{(0)}\Lambda^{(1)}\right)
\left(\chi_\infty\left(\tau_0{D^{pl}}^{(0)}\right)-\chi^{(0)}\right)+\nonumber\\
&&\epsilon_0\Lambda^{(0)}\Gamma^{(0)}\left(\frac{d\chi_\infty}{d\bar{D}^{pl}}\left(\tau_0
{D^{pl}}^{(0)}\right){D^{pl}}^{(1)}-\chi^{(1)}\right)\ .\end{aligned}$$
Thus, Eqs. (\[Dpl1\])-(\[Dploff1\]), (\[m1\])-(\[moff1\]), (\[Lambda1\]) and (\[chi1\]) constitute our equations for the dynamics of the first order STZ quantities.
These equations already reveal some interesting features. First note that the coupling between ${D^{pl}}^{(1)}$ (which is the quantity that is expected to be of major importance in determining $v_r^{(1)}$ in Eq. (\[smallness\]) through Eqs. (\[first\_eq\])-(\[last\_eq\])) and $\chi^{(1)}$, $m^{(1)}$ depends on ${{\mathcal{C}}}\left(s^{(0)}\right)$. This means that the strength of the coupling depends on $\zeta$. Similarly, the coupling between ${D^{pl}}^{(1)}$ and $s^{(1)}$ depends on $d{{\mathcal{C}}}\left(s^{(0)}\right)/ds$ which is also a function of $\zeta$. These observations demonstrate the importance of the precise form of the function ${{\mathcal{C}}}(s)$. This issue is further discussed in Sec. \[rate\_function\]. Finally, note that whenever the advection terms can be neglected, the known separation of time scales [@07BLLP] allows us to use Eqs. (\[m0\_fix\])-(\[Lam0\_fix\]) and to replace the equations for $m^{(1)}$, $m_{r\theta}^{(1)}$ and $\Lambda^{(1)}$ by their stationary solutions $$\label{m1_fix} m^{(1)}=\cases{ 0 &if $s^{(0)}\le 1$\cr
-\frac{s^{(1)}}{\left[s^{(0)}\right]^2} & if $ s^{(0)} >1$} \ ,$$ $$\label{m_rth_fix} m_{r\theta}^{(1)}=\cases{
\frac{\tau^{(1)}}{s^{(0)}} &if $s^{(0)}\le 1$\cr
\frac{\tau^{(1)}}{\left[s^{(0)}\right]^2} & if $s^{(0)} >1$}$$ and $$\Lambda^{(1)} =
\frac{\chi^{(1)}}{\left[\chi^{(0)}\right]^2}e^{-1/\chi^{(0)}} \ .
\label{Lam1_fix}$$
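The flowing-branch expressions in Eqs. (\[m1\_fix\])-(\[m\_rth\_fix\]) follow from the stationarity of Eqs. (\[m1\])-(\[moff1\]) (equivalently, from linearizing the flowing fixed point $m_{ij}\!=\!s_{ij}/\bar{s}^2$); a short symbolic check (added illustration):

```python
# Symbolic check of the flowing-branch (s0 > 1) stationary solutions.
import sympy as sp

s0, s1, tau1, C = sp.symbols('s0 s1 tau1 C', positive=True)
mrt1 = sp.symbols('mrt1')

m0 = 1/s0                                  # flowing branch of Eq. (m0_fix)
Gamma0 = 2*s0*C*(1 - m0)                   # Gamma^(0), with Lambda^(0) = exp(-1/chi^(0))

# stationarity of Eq. (moff1), advection neglected, Lambda^(0) at its fixed point:
eq_rt = sp.Eq(2*C*(tau1/s0 - mrt1) - Gamma0*mrt1, 0)
print(sp.solve(eq_rt, mrt1))               # -> [tau1/s0**2], i.e. Eq. (m_rth_fix)

# diagonal part: linearize the flowing fixed point m^(0) = 1/s^(0)
print(sp.diff(1/s0, s0)*s1)                # -> -s1/s0**2, i.e. Eq. (m1_fix)
```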
In the next section we summarize the results of our analysis of the equations derived in Sec. \[zeroth\], \[inertial\] and \[perturbSTZ\].
Results
=======
We are now ready to present and discuss the results of the stability analysis of the expanding circular cavity. The full set of equations was solved numerically as discussed above. Time and length are measured in units of $\tau_0$ and $R^{(0)}(t\!=\!0)$ respectively. $\Lambda$ and $m$ are set initially to their respective fixed-points. The material-specific parameters used are $\epsilon_0\!=\!1$, $c_0\!=\!1$, $\mu/s_y\!=\!50$, $K/s_y\!=\!100$, $\rho=1$, $\chi^{(0)}\!=\!0.11$, $\chi_\infty\!=\!0.13$ and $\zeta\!=\!7$, unless otherwise stated. In Subsec. \[pert\_shape\] we study perturbations of the shape of the cavity and of the effective temperature $\chi$. In Subsec. \[strain\_rate\] we study the effect of the rate dependence of $\chi_\infty$ on the stability analysis and in Subsec. \[rate\_function\] we analyze the effect of the stress-dependent rate function ${{\mathcal{C}}}(s)$.
Perturbing the shape and $\chi$ {#pert_shape}
-------------------------------
Studying the linear stability of the expanding cavity can be done by selecting which fields are perturbed and which are left alone. In practice each of the fields involved in the problem may experience simultaneous fluctuations, including the radius of the cavity itself. Therefore, one of our tasks is to determine which of the possible perturbations leads to a linear instability. To start, we perturb the radius of the expanding cavity at $t\!=\!0$ while all the other fields are left alone. In Fig. \[pertR\] we show the ratio $R^{(1)}/R^{(0)}$ as a function of time for various loading levels $\sigma^\infty$ (both below and above the cavitation threshold) and wave-numbers $n$. The initial amplitude of the perturbation was set to $R^{(1)}/R^{(0)}\!=\!10^{-3}$. The observation is that the ratio $R^{(1)}/R^{(0)}$ does not grow in time in any of the considered cases where the radius was perturbed, implying that here the circular cavity is stable against shape perturbations. Note that $R^{(1)}/R^{(0)}$ decays faster for larger $n$ and for larger $\sigma^\infty$. Also note that for $\sigma^\infty\!=\!6.1$, i.e. for unbounded zeroth order expansion, the ratio $R^{(1)}/R^{(0)}$ decays to zero, while below the cavitation threshold this ratio approaches a finite value. The latter observation means that when the material approaches jamming (with $R^{(0)}$ attaining a finite value in a finite time) the perturbations have not yet disappeared entirely.
We stress at this point the non-stationary nature of the problem in which $R^{(0)}(t)$ is an increasing function of time. Thus, even if the absolute magnitude of the amplitude of the shape perturbation $R^{(1)}(t)$ increases with time, an instability is not automatically implied; $R^{(1)}(t)$ should increase sufficiently faster than $R^{(0)}(t)$ in order to imply an instability. To exemplify this feature of the problem, we present in Fig. \[onlyR1\] $R^{(1)}(t)$ for $\sigma^\infty\!=\!2.1$ and $n\!=\!4$. It is observed that even though $R^{(1)}$ increases, the smallness parameter $R^{(1)}/R^{(0)}$ decreases, see Fig. \[pertR\]. Note also that $R^{(1)}$ does not increase exponentially as expected in stationary linear stability analysis, but rather tends to asymptote to a constant.
Next we have tested the stability of the expanding cavity against initial perturbations in the velocity field or in the stress field. The results were quantitatively similar to those for the shape perturbations summarized in Figs. \[pertR\] and \[onlyR1\], all implying linear stability.
In light of these results, we then concentrated on the effect of perturbations in the STZ internal state fields. Since the dynamics of the tensor ${{\bm{m}}}$ are mainly determined by the deviatoric stress field $s$, we focus on fluctuations in the effective disorder temperature $\chi$. This may be the field most likely to cause an instability. Indeed, in Ref. [@07MLC] it was shown that $\chi$ perturbations control strain localization in a shear banding instability. Qualitatively, an instability in the form of growing “fingers” involves strain localization as well; plastic deformations are localized near the leading edges of the propagating “fingers”. In Ref. [@07MLC], based on the data of Ref. [@07SKLF], it was suggested that the typical spatial fluctuations in $\chi$ have an amplitude reaching about $30\%$ of the homogeneous background $\chi$. Obviously we cannot treat such large perturbations in a linear analysis and must limit ourselves to smaller perturbations.
In Fig. \[pertChi\] we show the ratio $R^{(1)}/R^{(0)}$ as a function of time for a perturbation of size $\chi^{(1)}/\chi^{(0)}\!=\!0.03$, introduced at time $t\!=\!0$. The wave-number was set to $n\!=\!4$ and $\sigma^\infty$ was set both below and above the cavitation threshold. First, note that for both loading conditions $R^{(1)}/R^{(0)}$ increases on a short time scale of about $1000\tau_0$, a qualitatively different behavior compared to the system’s response to shape perturbations. Second, note the qualitatively different response below and above the cavitation threshold. In the former case, $R^{(1)}/R^{(0)}$ increases monotonically, approaching a constant value when $R^{(0)}$ attains a finite value (i.e. jamming). In the latter case, $R^{(1)}/R^{(0)}$ increases more rapidly initially, reaches a maximum and then decays to 0 in the large $t$ limit. Therefore, in spite of the initial growth of $R^{(1)}/R^{(0)}$, for this magnitude of $\chi$ perturbations, the expanding circular cavity is linearly stable; below the cavitation threshold the relative magnitude of the deviation from a perfect circular symmetry $R^{(1)}/R^{(0)}$ tends to a finite constant, i.e. a shape perturbation is “locked in” the material, while above the threshold the cavity retains its perfect circular symmetry in the large $t$ limit. Nevertheless, in light of the significant short time increase in $R^{(1)}/R^{(0)}$ (here up to $0.6\%$), we increased the initial ($t\!=\!0$) $\chi$ perturbation to the range $\chi^{(1)}/\chi^{(0)}\!=\!0.05-0.06$, in addition to shape perturbations of a typical size of $R^{(1)}/R^{(0)}\!=\!0.02-0.03$. In these cases $R^{(1)}/R^{(0)}$ grows above $5\%$; even more importantly, the field $\chi^{(1)}({{\bm{r}}}, \theta)$ (as well as other fields in the problem) becomes larger than $0.1\chi^{(0)}({{\bm{r}}},
\theta)$ near the boundary of the cavity, [*invalidating*]{} the small perturbation hypothesis behind the perturbative expansion and signaling a linear instability. Naturally, this breakdown of the linearity condition takes place firstly near a peak of the ratio $R^{(1)}/R^{(0)}$, similar to the one observed in Fig. \[pertChi\].
We thus propose that sufficiently large perturbations in the shape of the cavity and the effective disorder temperature $\chi$, but still of formal linear order, may lead to an instability. This dependence on the magnitude of the perturbations in a linear analysis is a result of the non-stationarity of the growth. Another manifestation of the non-stationarity is that even in cases where we detect an instability, it was not of the usual simple exponential type where an eigenvalue changes sign as a function of some parameter (or group of parameters). Combined with the evidence for the existence of large fluctuations in $\chi$ [@07SKLF; @07MLC], the present results indicate that it will be worthwhile to study the problem by direct boundary tracking techniques where the magnitude of the perturbation is not limited.
We conclude that the issue of the stability of the expanding cavity can be subtle. Sufficiently small perturbations are stable, though there is a qualitative difference between the response to perturbations in the effective disorder temperature $\chi$, for which the ratio $R^{(1)}/R^{(0)}$ increases (at least temporarily), and the response to other perturbations, for which $R^{(1)}/R^{(0)}$ decays. We have found that for large enough $\chi$ perturbations combined with initial shape perturbations, but still within the formal linear regime, the growth of $R^{(1)}/R^{(0)}$ takes the system beyond the linear regime, making nonlinear effects non-negligible and signaling an instability. This observation is further supported by the existence of large $\chi$ fluctuations discussed in [@07SKLF; @07MLC]. Note that none of these conclusions depend significantly on variations in $\epsilon_0$ and $c_0$. Moreover, perturbing the expanding cavity at times different from $t\!=\!0$ or introducing a pressure inside the cavity instead of a tension at infinity did not change any of the results.
The effect of the rate dependence of $\chi_\infty$ {#strain_rate}
--------------------------------------------------
The analysis of Sec. \[pert\_shape\] indicates the existence of a linear instability as a result of varying the magnitude of the perturbations, mainly in $\chi$, and not as a result of varying material parameters. Here, and in Sec. \[rate\_function\], we aim at studying the effect of material-specific properties on the stability of the expanding cavity. Up to now we considered $\chi_\infty$ as a constant parameter. However, as discussed in detail in Sec. \[STZ\], the plastic rate of deformation near the free boundary can reach values in the range where changes in $\chi_\infty$ were observed. Therefore, we repeated the calculations using the function $\chi_\infty(\tau_0 \bar{D}^{pl})$ plotted in Fig. \[HL\]. In Fig. \[StrainRate\] we compare $R^{(1)}/R^{(0)}$ as a function of time with and without a plastic rate of deformation dependence of $\chi_\infty$, both above and below the cavitation threshold. The initial perturbation has $\chi^{(1)}/\chi^{(0)}\!=\!0.03$ and $n\!=\!4$.
Both below and above the cavitation threshold the plastic rate of deformation dependent $\chi_\infty(\tau_0 \bar{D}^{pl})$ induces a stronger growth of $R^{(1)}/R^{(0)}$, though the effect is much more significant above the threshold. This is understood since a significantly higher rate of deformation develops above the cavitation threshold, where unbounded growth takes place [@07BLP], than below the threshold, where the rate of deformation vanishes at a finite time. We note that the dependence of $\chi_\infty$ on $\bar{D}^{pl}$ affects both the zeroth and first order solutions such that $R^{(0)}$ and $R^{(1)}$ increase. Our results show that $R^{(1)}$ is more sensitive to this effect than $R^{(0)}$, resulting in a tendency to lose stability at yet smaller perturbations. We conclude that the tendency of $\chi_\infty$ to increase with the rate of deformation plays an important role in the stability of the expanding cavity and might be crucial for other strain localization phenomena such as the shear banding instability [@07MLC]. Moreover, this material-specific dependence of $\chi_\infty$, which was absent in previous formulations of STZ theory, might distinguish between materials that experience catastrophic failure and those that do not, and between materials that fail through a cavitation instability [@07BLP] and those that fail via the propagation of “fingers” that may evolve into cracks. This new aspect of the theory certainly deserves more attention in future work. We note in passing that recently an alternative equation to Eq. (\[eq:chi\]) for the time evolution of the effective temperature $\chi$ was proposed in light of some available experimental and simulational data [@08Bouch]. Preliminary analysis of the new equation in relation to the stability analysis performed in this paper indicates that the circular cavity [*does*]{} become linearly unstable [@unpublished]. A more systematic study of this effect may be a promising line of future investigation.
The effect of changing the stress-dependent rate function ${{\mathcal{C}}}(s)$ {#rate_function}
------------------------------------------------------------------------------
Here we further study the possible effects of details of the constitutive behavior on the macroscopic behavior of the expanding cavity. In this subsection we focus on the material function ${{\mathcal{C}}}(s)$. This phenomenological function, as discussed in Sec. \[STZ\], describes the stress-dependent STZ transition rates. It is expected to be symmetric and to vanish smoothly at $s\!=\!0$ in athermal conditions [@07BLanP]. The plastic rate of deformation for $s\!>\!1$ can be measured in a steady state stress-controlled simple shear experiment. For such a configuration the deviatoric stress tensor is diagonal and the stable fixed-points of Eqs. (\[eq:m\])-(\[eq:chi\]) imply that the steady state plastic rate of deformation of Eq. (\[eq:Dpl\]) reads $$\label{steadyDpl} \tau_0 D^{pl} = \epsilon_0 e^{-1/\chi_\infty} {{\mathcal{C}}}(s)\left(1-\frac{1}{s}\right) \ .$$ Therefore, if the steady state relation $\chi_\infty(s)$ is known, ${{\mathcal{C}}}(s)$ can be determined from measuring the steady state value of $D^{pl}$ for various $s\!>\!1$, see for example [@07HL]. The idea then is to interpolate the $s\!\to\!0^+$ behavior to the $s\!>\!1$ behavior with a single parameter that controls the amount of sub-yield deformation in the intermediate range. In fact, a procedure to measure ${{\mathcal{C}}}(s)$ at intermediate stresses was proposed in Ref. [@07BL]. Up to now we used the one-parameter family of functions ${{\mathcal{F}}}(s;\zeta)$ of Eq. (\[C\_s\]), where $\zeta$ controls the sub-yield deformation.
We now aim at studying the effect of choosing another function ${\cal C}(s)$. Here we specialize to ${\cal C}(\bar{s})={{\mathcal{G}}}(\bar{s}; \lambda)$, with $$\label{C_s1} {{\mathcal{G}}}(\bar{s};\lambda)\equiv
\frac{|\bar{s}|^{1+\lambda}}{1+|\bar{s}|^\lambda} \ .$$ In Fig. \[changingC\] we show ${{\mathcal{C}}}(s)$ according to the previous choice of Eq. (\[C\_s\]) with $\zeta\!=\!7$ and also ${{\mathcal{C}}}(s)$ according to the present choice of Eq. (\[C\_s1\]) with $\lambda\!=\!30$. The different behaviors of ${{\mathcal{C}}}(s)$ and $d{{\mathcal{C}}}(s)/ds$ near $s\!=\!1$ might affect $R^{(0)}$ and $R^{(1)}$ differently, thus influencing the stability of the expanding cavity.
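To make the comparison of Fig. \[changingC\] concrete, the following sketch (an added illustration) evaluates both rate functions and their numerical derivatives near the yield stress:

```python
# Compare F(s; zeta=7) of Eq. (C_s) with G(s; lambda=30) of Eq. (C_s1) near s = 1.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

zeta, lam = 7.0, 30.0
pref = zeta**(zeta + 1)/gamma(zeta + 1)

def F(s):
    return pref*quad(lambda x: (s - x)*x**zeta*np.exp(-zeta*x), 0.0, s)[0]

def G(s):
    return s**(1.0 + lam)/(1.0 + s**lam)

h = 1e-4
for s in (0.9, 1.0, 1.1, 1.5):
    dF = (F(s + h) - F(s - h))/(2*h)       # centered finite differences
    dG = (G(s + h) - G(s - h))/(2*h)
    print(f"s = {s}: F = {F(s):.3f} (dF/ds = {dF:.2f}),  G = {G(s):.3f} (dG/ds = {dG:.2f})")
```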
In Fig. \[changingC1\] we compare $R^{(1)}/R^{(0)}$ as a function of time for ${{\mathcal{C}}}(s)$ of Eq. (\[C\_s\]) (previous choice) with $\zeta\!=\!7$ and ${{\mathcal{C}}}(s)$ of Eq. (\[C\_s1\]) (present choice) with $\lambda\!=\!30$, both for a constant $\chi_\infty$. An effective temperature perturbation with $\chi^{(1)}/\chi^{(0)}\!=\!0.03$ and $n\!=\!4$ was introduced at $t\!=\!0$ for $\sigma^\infty\!=\!4.1$. We observe that $R^{(1)}/R^{(0)}$ grows faster for the present choice compared to the previous one. For the sake of illustration we added the result of a calculation with the plastic rate of deformation dependent $\chi_\infty$ as discussed in Sec. \[strain\_rate\]. As expected, the effect is magnified. We conclude that the material-specific function of the stress dependence of the STZ transition rates ${{\mathcal{C}}}(s)$ can affect the stability of the expanding cavity, possibly making it unstable for smaller perturbations. Again, the relations between this constitutive property and the macroscopic behavior should be further explored in future work. Bringing into consideration explicit macroscopic measurements, one can constrain the various phenomenological features of the theory of amorphous plasticity. This philosophy provides a complementary approach to obtaining a better microscopic understanding of the physical processes involved.
Concluding Remarks {#discussion}
==================
We presented in this paper a detailed analysis of the linear stability of expanding cavity modes in amorphous elasto-viscoplastic solids. The stability analysis is somewhat delicate due to the non-stationarity of the problem: a perturbation may grow while the problem remains stable, provided this growth is slower than the growth of the radius of the cavity. The radial symmetry of the expanding cavity makes it surprisingly resilient to perturbations in shape, velocity, external strains and pressure. On the other hand, the radial symmetry may be lost due to perturbations in the internal state fields, especially $\chi$, and is also sensitive to details of the constitutive relations that are employed in the STZ theory. In this respect we highlight the role of the plastic rate of deformation dependent $\chi_\infty(\tau_0 \bar{D}^{pl})$ and of the stress-dependent rates of STZ transitions ${{\mathcal{C}}}(s)$. It is difficult to reach conclusive statements, since growth of perturbations beyond the linear order invalidates the approach taken here, calling for new algorithms involving surface tracking, where the size of perturbations is not limited. Nevertheless the results indicate that instabilities are likely, motivating further research into the nonlinear regime. Of particular interest is the possibility of selecting particular forms of constitutive relations by comparing the predictions of the theory with macroscopic experiments. This appears to be a promising approach for advancing the STZ theory towards a final form.
[**Acknowledgements**]{} We thank T. Haxton and A. Liu for generously sharing with us their numerical data, and Chris Rycroft for pointing out an error in an early version of the manuscript. This work was supported in part by the German Israeli Foundation and the Minerva Foundation, Munich, Germany. E. Bouchbinder acknowledges support from the Center for Complexity Science and the Lady Davis Trust.
Differential operators in polar coordinates {#polar}
===========================================
The aim of this Appendix is to derive some expressions in polar coordinates that were used earlier in the paper. Specifically, our goal is to calculate the divergence and covariant derivative of a tensor in polar coordinates. We represent a general second order tensor ${{\bm{T}}}$ in polar coordinates as $$\label{Tcomp}
{{\bm{T}}} = T_{ij} {{\bm{e}}}_i \otimes {{\bm{e}}}_j \ ,$$ where ${{\bm{e}}}_i$ and ${{\bm{e}}}_j$ are unit vectors in polar coordinates and $\otimes$ denotes a tensor product. Using Eqs. (\[polarO\])-(\[unit\_vectors\]) we obtain $$\begin{aligned}
&&{{\bm{e}}}_r \cdot \left({{\bm{\nabla}}} \cdot {{\bm{T}}} \right)= \partial_r T_{rr}+\frac{T_{rr}}{r}+\frac{\partial_\theta T_{\theta r}}{r} -\frac{T_{\theta\theta}}{r} \ ,\nonumber\\
&&{{\bm{e}}}_\theta \cdot \left({{\bm{\nabla}}} \cdot {{\bm{T}}}\right)= \partial_r T_{r\theta}+\frac{T_{r\theta}}{r}+\frac{T_{\theta r}}{r}+ \frac{\partial_\theta T_{\theta \theta}}{r}\ .\end{aligned}$$ Substituting Eqs. (\[sig\_p\_s\]) for ${{\bm{T}}}$, we obtain the right-hand-sides of Eqs. (\[EOM\]).
We proceed now to calculate the covariant derivative of a tensor ${{\bm{v}}} \cdot {{\bm{\nabla}}} {{\bm{T}}}$. Using Eqs. (\[polarO\]), (\[unit\_vectors\]) and (\[Tcomp\]) we obtain $$\begin{aligned}
\left({{\bm{v}}} \cdot {{\bm{\nabla}}} {{\bm{T}}}\right)_{rr}&=&v_r \partial_r T_{rr}+\frac{v_\theta}{r}\partial_\theta T_{rr}-\frac{v_\theta}{r}T_{r\theta}-\frac{v_\theta}{r}T_{\theta r} \ ,\nonumber\\
\left({{\bm{v}}} \cdot {{\bm{\nabla}}} {{\bm{T}}}\right)_{r\theta}&=&v_r \partial_r T_{r\theta}+\frac{v_\theta}{r}T_{rr}+\frac{v_\theta}{r}\partial_\theta T_{r\theta}-\frac{v_\theta}{r}T_{\theta\theta} \ , \nonumber\\
\left({{\bm{v}}} \cdot {{\bm{\nabla}}} {{\bm{T}}}\right)_{\theta r}&=& v_r \partial_r T_{\theta r}+ \frac{v_\theta}{r} T_{rr}+\frac{v_\theta}{r}\partial_\theta T_{\theta r}-\frac{v_\theta}{r}T_{\theta\theta} \ ,\nonumber\\
\left({{\bm{v}}} \cdot {{\bm{\nabla}}} {{\bm{T}}}\right)_{\theta\theta}&=&v_r\partial_r T_{\theta\theta}+\frac{v_\theta}{r}T_{r \theta}+\frac{v_\theta}{r}T_{\theta r}+\frac{v_\theta}{r}\partial_\theta T_{\theta \theta} \ .\nonumber\\\end{aligned}$$ Substituting Eqs. (\[eq:stress-strain\]) for ${{\bm{T}}}$ we obtain the needed expressions for $\left({{\bm{v}}} \cdot {{\bm{\nabla}}} {{\bm{\epsilon}}}^{el}\right)_{ij}$ in Eq. (\[eq:DA\_ij\]).
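These component expressions are easy to get wrong, so a quick numerical cross-check can be useful. The following Python sketch (an illustration added here, not part of the original derivation) evaluates the radial and angular components of ${{\bm{\nabla}}} \cdot {{\bm{T}}}$ from the polar formulas above for an arbitrary smooth test tensor and compares them with a finite-difference evaluation of the divergence in Cartesian coordinates; the test components, evaluation point and step size are placeholders chosen only for illustration.

```python
import numpy as np

def T_polar(r, th):
    # arbitrary smooth test components (purely illustrative)
    return np.array([[r**2 * np.cos(th), r * np.sin(2 * th)],
                     [np.exp(-r) * np.cos(th), r * np.sin(th)]])

def T_cart(x, y):
    r, th = np.hypot(x, y), np.arctan2(y, x)
    # rotation matrix whose columns are the polar unit vectors e_r, e_theta
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    return R @ T_polar(r, th) @ R.T

h = 1e-5
r0, th0 = 1.3, 0.7
x0, y0 = r0 * np.cos(th0), r0 * np.sin(th0)

# polar-coordinate formulas for (div T)_r and (div T)_theta
dr  = (T_polar(r0 + h, th0) - T_polar(r0 - h, th0)) / (2 * h)
dth = (T_polar(r0, th0 + h) - T_polar(r0, th0 - h)) / (2 * h)
T = T_polar(r0, th0)
div_r  = dr[0, 0] + T[0, 0] / r0 + dth[1, 0] / r0 - T[1, 1] / r0
div_th = dr[0, 1] + T[0, 1] / r0 + T[1, 0] / r0 + dth[1, 1] / r0

# Cartesian finite-difference divergence, (div T)_j = d_x T_xj + d_y T_yj
dTx = (T_cart(x0 + h, y0) - T_cart(x0 - h, y0)) / (2 * h)
dTy = (T_cart(x0, y0 + h) - T_cart(x0, y0 - h)) / (2 * h)
div_cart = np.array([dTx[0, 0] + dTy[1, 0], dTx[0, 1] + dTy[1, 1]])

# project onto e_r, e_theta; both differences should vanish up to
# finite-difference error
e_r = np.array([np.cos(th0), np.sin(th0)])
e_t = np.array([-np.sin(th0), np.cos(th0)])
print(div_r - div_cart @ e_r, div_th - div_cart @ e_t)
```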
The quasi-static and incompressible case {#QS}
========================================
The aim of this appendix is to derive independently the linear perturbation theory for a quasi-static and incompressible case and to compare it to the inertial and compressible case in the limit of large bulk modulus $K$ and small velocities $v$. We show that the results in this limit agree, giving us some degree of confidence in the derivation and the numerical implementation of the equations in both cases.
The unperturbed problem in the quasi-static and incompressible limit was discussed in detail in [@07BLLP] and is obtained by taking the quasi-static and the incompressible limits in the equations of Sec. \[zeroth\]. Before considering the linear stability problem, we stress that the linear perturbation theory of the STZ equations, presented in Sec. \[perturbSTZ\], remains unchanged in the present analysis. Only the equations of motion and the kinematic equations are modified. In the absence of inertial terms, the equations of motion (\[eqmot1\]) become $$\frac{\partial \tau}{\partial r} + \frac{1}{r} \frac{
\partial s}{\partial \theta} - \frac{1}{r}
\frac{ \partial p} {\partial \theta} +\frac{2 \tau}{r} = 0 \ ,$$ $$\frac{1}{r}\frac{\partial \tau}{\partial \theta} -\frac{1}{r^2}
\frac{\partial }{\partial r} \left ( r^2 s \right) = \frac{\partial
p}{\partial r} \ .$$ To first order we obtain $$\label{force1}
-\frac{1}{r}n \tau^{(1)} - \frac {2 s^{(1)}}{r} = \frac{\partial
p^{(1)}}{\partial r} + \frac{\partial s^{(1)}}{\partial r} \ ,$$ $$\label{force2}
\frac{\partial \tau^{(1)}}{\partial r} + \frac{n }{r} (s^{(1)} -
p^{(1)}) +\frac{2 \tau^{(1)}}{r} = 0 \ .$$
The boundary conditions of Eqs. (\[spb\])-(\[taub\]) can be further simplified by using the force balance equation to zeroth order and the zeroth order boundary conditions of Eq. (\[BC0\]) $$\label{use_eom} \frac{\partial p^{(0)}}{\partial r} =
-\frac{\partial s^{(0)}}{\partial r} - \frac{2s^{(0)}}{r} \ .$$ Substituting into (\[spb\]) and (\[taub\]) we obtain $$\begin{aligned}
\label{spb1} s^{(1)}(R^{(0)})\!+\!p^{(1)}(R^{(0)})\!\!&=&\!\!\frac{2s^{(0)}(R^{(0)})R^{(1)}}{R^{(0)}} \ ,\\
\label{taub1}\tau^{(1)}(R^{(0)}) \!\!&=&\!\! n
\frac{2s^{(0)}(R^{(0)})R^{(1)}}{R^{(0)}}.\end{aligned}$$ In addition, all the first order fields decay as $r\!\to\!\infty$. In principle, the initial conditions for the partial differential equations for the first order fields depend on the type of perturbation under consideration. For explicit perturbations in the shape of the cavity, i.e. $R^{(1)}(0)\!\ne\!0$, we can determine the initial stress field by assuming it is simply the quasi-static linear elastic solution corresponding to the perturbed circle. In order to obtain this solution we start with the bi-Laplace equation for the Airy stress function $\Phi$ [@86LL] $$\nabla^2\nabla^2\Phi=0 \label{bi-Laplace} \ ,$$ where the stress tensor components are given by $$\begin{aligned}
\sigma_{rr}&=&\frac{1}{r}\frac{\partial \Phi}{\partial r}+\frac{1}{r^2}\frac{\partial^2 \Phi}{\partial \theta^2} \ , \nonumber\\
\sigma_{\theta \theta}&=&\frac{\partial^2 \Phi}{\partial r^2} \
,\quad \sigma_{r \theta}=-\frac{\partial}{\partial
r}\left(\frac{1}{r}\frac{\partial \Phi}{\partial \theta} \right)
\label{Airy_components} \ .\end{aligned}$$ We then expand the solution in the form $$\Phi(r,\theta)=\Phi^{(0)}(r)+\Phi^{(1)}(r)e^{in\theta} \label{Phi} \
.$$ The general solutions for $\Phi^{(1)}(r)$, that also decay at infinity, are given by $$\Phi^{(1)}(r) = a r^{-n+2}+ b r^{-n} \ ,$$ with $n\!>\!0$. Substituting in Eqs. (\[Airy\_components\]), using the boundary conditions to first order and the following zeroth order solution $$\sigma_{rr,\theta \theta}^{(0)} = \sigma^{\infty}\left(1 \mp
\frac{\left(R^{(0)}\right)^2}{r^2}\right)\ ,\quad
\sigma_{r\theta}^{(0)} = 0 \ ,$$ one obtains $$a=-\left[R^{(0)} \right]^n\!\!
\sigma^{\infty}\frac{R^{(1)}}{R^{(0)}}\ ,\quad b=\left[R^{(0)}
\right]^{n+2}\!\! \sigma^{\infty}\frac{R^{(1)}}{R^{(0)}}\ .$$ The resulting stress components are easily calculated, from which we obtain $$\begin{aligned}
\!\!\!&p&\!\!\!^{(1)} =
2\sigma^\infty\frac{R^{(1)}}{R^{(0)}}(1-n)\left(\frac{R^{(0)}}{r}\right)^n\
,\quad s^{(1)} = \tau^{(1)} = \nonumber\\
\!\!\!&\sigma&\!\!\!^\infty\frac{R^{(1)}}{R^{(0)}}\left[n(1-n)\left(\frac{R^{(0)}}{r}\right)^n\!+\!n(n+1)\left(\frac{R^{(0)}}{r}\right)^{n+2}\right]\ .\nonumber\\\end{aligned}$$ These are the initial conditions for the first order stress tensor components in terms of the initial $R^{(1)}$. To proceed we expand Eqs. (\[el\_pl\]) to first order, assuming $K\!\to\!\infty$, $$\begin{aligned}
\label{kinematic_1st_a} &&\frac{\partial v^{(1)}_r}{\partial
r}=\\
&& -\frac{1}{2 \mu} \left[\! \frac{\partial s^{(1)}}{\partial t}
\!+\! v_r^{(0)} \frac{\partial s^{(1)}}{\partial r} \!+\! v^{(1)}_r
\frac{\partial s^{(0)}}{\partial r}\right]
\!-\! {D^{pl}}^{(1)}\!,\nonumber\\
\label{kinematic_1st_b} &&\frac{\!-\!n v^{(1)}_\theta \!+\!
v^{(1)}_r}{r} = \\
&&\frac{1}{2 \mu} \left[\! \frac{\partial s^{(1)}}{\partial t} +
v_r^{(0)} \frac{\partial s^{(1)}}{\partial r} \!+\! v^{(1)}_r
\frac{\partial s^{(0)}}{\partial r}\right]
\!+\! {D^{pl}}^{(1)},\nonumber\\
\label{kinematic_1st_c} &&\frac{1}{2} \left[\!\frac {\partial
v^{(1)}_\theta}{\partial r} \!+ \!\frac{n v^{(1)}_r\!-\!
v^{(1)}_\theta}{r}
\right] = \\
&&\frac{1}{2 \mu} \left[\! \frac{\partial \tau^{(1)} }{\partial t} +
v^{(0)}_r \frac{\partial \tau^{(1)}}{\partial r}-\frac{2 s^{(0)} v_\theta^{(1)}}{r} \right] +
{D^{pl}_{r \theta}}^{(1)} \nonumber\ .\end{aligned}$$ In order to propagate $s^{(1)}$ and $\tau^{(1)}$ in time according to these equations we need to know $v^{(1)}_r$ and $v^{(1)}_\theta$ at each time step. However, a basic feature of the quasi-static problem is that there is no evolution equation for the velocity field. Therefore, we must calculate $v^{(1)}_r$ and $v^{(1)}_\theta$ in a different way.
We now discuss the major mathematical difficulty in the quasi-static formulation, i.e. the absence of an explicit evolution equation for the velocity field ${{\bm{v}}}^{(1)}$. To overcome this difficulty, we should derive new [*ordinary differential equations*]{} for $v^{(1)}_r$ and $v^{(1)}_\theta$ such that their time evolution is inherited from the other fields in the problem. The first equation can be obtained readily by adding (\[kinematic\_1st\_a\]) to (\[kinematic\_1st\_b\]) $$\label{final_1} \frac{\partial v^{(1)}_r}{\partial r}
+\frac{v_r^{(1)}}{r}-\frac{n v^{(1)}_\theta}{r}=0 \ ,$$ from which we can extract $v^{(1)}_\theta$ $$\label{theta} v^{(1)}_\theta = \frac{1}{n}\left(r\frac{\partial
v^{(1)}_r}{\partial r} +v_r^{(1)}\right) \ .$$ In order to obtain the second equation, we eliminate $p^{(1)}$ from the equations by operating with $\frac{\partial }{\partial r}
\frac{r}{n}$ on Eq. (\[force2\]), adding the result to Eq. (\[force1\]) and taking the partial time derivative to obtain $$\label{elim_p} 2\frac{\partial \dot s^{(1)}}{\partial r} +
\frac{1}{n} \frac{\partial}{\partial r}\left(r\frac{\partial \dot
\tau^{(1)}}{\partial r}\right) +\frac{2}{n}\frac{\partial\dot
\tau^{(1)} }{\partial r} + \frac{n\dot \tau^{(1)}}{r} + \frac{2
\dot s^{(1)}}{r}=0 \ .$$ Here and elsewhere the dot denotes partial time derivative. Using Eqs. (\[kinematic\_1st\_a\]) and (\[kinematic\_1st\_c\]) we obtain $$\begin{aligned}
\label{tau_dot} \!\!\!\!&\dot\tau^{(1)}&\!\!\!=2\mu\!
\left[\!-\!{D^{pl}_{r \theta}}^{(1)}\!+\!\frac{1}{2}\left(\! \frac
{\partial v^{(1)}_\theta}{\partial r} + \frac{n v^{(1)}_r -
v^{(1)}_\theta}{r}
\right)\!\right]\nonumber\\
&&-v_r^{(0)}\frac{\partial\tau^{(1)}}{\partial
r}+\frac{2s^{(0)}v_\theta^{(1)}}{r}\ ,\\
\label{s_dot}\!\!\!\!&\dot s^{(1)}&=-\!2\mu\! \left(\!\frac{\partial
v_r^{(1)}}{\partial
r}\!+\!{D^{pl}}^{(1)}\right)-\!v_r^{(1)}\frac{\partial
s^{(0)}}{\partial r}\nonumber\\
&&-v_r^{(0)}\frac{\partial s^{(1)}}{\partial r} \ .\end{aligned}$$ Substituting the last two relations in Eq. (\[elim\_p\]) and using Eq. (\[theta\]), we obtain a [*fourth order linear ordinary differential equation*]{} for $v_r^{(1)}$. Since it is straightforward to obtain, but very lengthy, we do not write it explicitly here. It is important to note that the coefficients in this equation depend on time and therefore by solving it at each time step we effectively have a time evolution for the velocity field. Once one solves for $v_r^{(1)}$, Eq. (\[theta\]) can be used to calculate $v^{(1)}_{\theta}$. The fourth order linear differential equation requires four boundary conditions.
The first boundary condition is obtained by using Eq. (\[taub1\]); together with Eqs. (\[kinematic\_1st\_c\]), (\[edge\_velocity1\]) and (\[theta\]), we obtain a linear relation between $v_r^{(1)}(R^{(0)})$, $\partial_r v_r^{(1)}(R^{(0)})$ and $\partial^2_r
v_r^{(1)}(R^{(0)})$, which is the required boundary condition. Another boundary condition is obtained by multiplying Eq. (\[force2\]) by $r$, operating with $\frac{{\cal D} }{{\cal D}
t}=\partial_t +v_r^{(0)} \partial_r$ on the result and using Eq. (\[spb1\]). Additional simple manipulations result in a linear relation between $v_r^{(1)}(R^{(0)})$, $\partial_r v_r^{(1)}(R^{(0)})$, $\partial^2_r
v_r^{(1)}(R^{(0)})$ and $\partial^3_r v_r^{(1)}(R^{(0)})$. This is a second boundary relation. Two other boundary conditions are obtained from the requirement that $v_r^{(1)}$ vanishes at $\infty$ with vanishing derivative $$\label{BCinfty} v_r^{(1)}(\infty)=0 \quad\hbox{and}\quad\
\frac{\partial v_r^{(1)}(\infty)}{\partial r}=0 \ .$$ With these four boundary conditions the fourth order differential equation can be solved in the following way: at each step we guess $v_r^{(1)}(R^{(0)},t)$ and $\partial_r v_r^{(1)}(R^{(0)},t)$ and use the first two boundary conditions to calculate $\partial_r^2
v_r^{(1)}(R^{(0)},t)$ and $\partial_r^3 v_r^{(1)}(R^{(0)},t)$. Then we use the fourth order differential equation to calculate $v_r^{(1)}$ and $\partial_r v_r^{(1)}$ at $\infty$. We improve our guess until the solution satisfies Eqs. (\[BCinfty\]) (the shooting method).
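To make the structure of this shooting procedure concrete, the following Python sketch implements a single quasi-static step under stated assumptions: the coefficients of the fourth order equation and the two boundary relations at $r=R^{(0)}$ are generic placeholders standing in for the lengthy expressions derived above, and all numerical values are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

R0, R_INF = 1.0, 40.0   # cavity radius and the radius standing in for "infinity"

def rhs(r, y):
    # y = (v, v', v'', v'''); c0..c3 are placeholders for the true,
    # time-dependent coefficients of the fourth order linear equation
    v, v1, v2, v3 = y
    c3, c2, c1, c0 = -6.0 / r, -6.0 / r**2, 6.0 / r**3, -3.0 / r**4
    return [v1, v2, v3, c3 * v3 + c2 * v2 + c1 * v1 + c0 * v]

def inner_bc(v, v1, b1=0.5, b2=-0.2):
    # placeholders for the two linear boundary relations at r = R0;
    # b1, b2 stand for the inhomogeneous terms built from s^(0), s^(1),
    # tau^(1) and R^(1) at the current time step
    v2 = b1 - (2.0 * v1 + v / R0) / R0
    v3 = b2 - (3.0 * v2 + v1 / R0) / R0
    return v2, v3

def residual(guess):
    v, v1 = guess
    v2, v3 = inner_bc(v, v1)
    sol = solve_ivp(rhs, (R0, R_INF), [v, v1, v2, v3], rtol=1e-10, atol=1e-12)
    # shooting targets: v_r^(1) and its radial derivative vanish at infinity
    return [sol.y[0, -1], sol.y[1, -1]]

v_R0, dv_R0 = fsolve(residual, [0.0, 0.0])
print("converged boundary values at the cavity:", v_R0, dv_R0)
```

Since the equation is linear, the two shooting parameters could equivalently be obtained from a single $2\times 2$ linear solve based on two trial integrations; a root finder is used here only to mirror the guess-and-correct description given above.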
Thus, we have a complete solution procedure (assuming that the plastic rate of deformation is known, see Section \[perturbSTZ\]); for a given $s^{(1)}(r,t)$ and $\tau^{(1)}(r,t)$ we solve the fourth order differential equation for $v_r^{(1)}(r,t)$ following the procedure described above. Having $v_r^{(1)}(r,t)$ we use Eq. (\[theta\]) to obtain $v_{\theta}^{(1)}(r,t)$. Then we use Eqs. (\[kinematic\_1st\_a\]), (\[kinematic\_1st\_c\]) and (\[edge\_velocity1\]) to propagate $s^{(1)}(r)$, $\tau^{(1)}(r)$ and $R^{(1)}$ in time. We follow the same procedure at each time step to obtain the full time evolution of the perturbation. We note that we have eliminated $p^{(1)}(r,t)$ from the problem, though we can calculate it at every time step using Eq. (\[force1\]) or (\[force2\]).
We are now able to compare the quasi-static and incompressible case to the inertial and compressible counterpart in the limit of small velocities and large bulk modulus $K$. We introduced at $t\!=\!0$ a perturbation of magnitude $R^{(1)}/R^{(0)}\!=\!10^{-3}$ to the radius of the cavity, with a discrete wave-number $n\!=\!2$, and solved the dynamics in both formulations. We chose $\sigma^\infty\!<\!\sigma^{th}$ such that the velocities are small and $K\!=\!1000$ in the inertial case in order to approach the incompressible limit. In Fig. \[compareR\] we compare $R^{(0)}$ and $R^{(1)}$ for both the quasi-static and the inertial formulations. The agreement is good. Note that the stability of the expanding cavity depends on the time dependence of the ratio $R^{(1)}/R^{(0)}$; however, we do not discuss the stability yet, but focus on the comparison between the two formulations. In Fig. \[compareFields\] we further compare the predictions of the two formulations for the zeroth and first order deviatoric stress field $s$ and effective disorder temperature $\chi$ at a given time. In all cases the differences are practically indistinguishable. We thus conclude that the quasi-static formulation and the inertial one agree with one another, giving us some confidence in the validity of both. In particular we conclude that the inertial formulation can be used with confidence also for high velocities where the quasi-static counterpart becomes invalid.
[99]{}
J. Fineberg and M. Marder, Phys. Rep. [**313**]{}, 1 (1999).
A. Yuse and M. Sano, Nature (London) [**362**]{}, 329 (1993).
M. Adda-Bedia and Y. Pomeau, Phys. Rev. E [**52**]{}, 4105 (1995).
E. Bouchbinder, H. G. E. Hentschel and I. Procaccia, Phys. Rev. E [**68**]{}, 036601 (2003).
A. Livne, O. Ben-David and J. Fineberg, Phys. Rev. Lett. [**98**]{}, 124301 (2007).
E. Bouchbinder and I. Procaccia, Phys. Rev. Lett. [**98**]{}, 124302 (2007).
L. B. Freund, [*Dynamic Fracture Mechanics*]{}, (Cambridge University Press, Cambridge, 1998).
A. S. Argon, Acta metall. [**27**]{}, 47 (1979).
A. S. Argon and H. Kuo, Mat. Sci. Eng. [**39**]{}, 101 (1979).
M. L. Falk and J. S. Langer, Phys. Rev. E [**57**]{}, 7192 (1998).
E. Bouchbinder, J. S. Langer and I. Procaccia, Phys. Rev. E [**75**]{}, 036107 (2007).
E. Bouchbinder, A. Pomyalov and I. Procaccia, Phys. Rev. Lett. [**97**]{}, 134301 (2006).
E. Bouchbinder, J. S. Langer, T. S. Lo and I. Procaccia, Phys. Rev. E, [**76**]{}, 026115 (2007).
E. Bouchbinder, T. S. Lo and I. Procaccia, Phys. Rev. E [**77**]{}, 025101 (2008).
S. Santucci, L. Vanel and S. Ciliberto, Phys. Rev. Lett. [**93**]{}, 095505 (2004).
J. S. Langer, Phys. Rev. E [**70**]{}, 041502 (2004).
L. Pechenik, Phys. Rev. E [**72**]{}, 021507 (2005).
, edited by A. J. Liu and S. R. Nagel (Taylor and Francis, New York, 2001).
E. Bouchbinder and T. S. Lo, arXiv:0707.4573 (2007).
J. Lubliner, [*Plasticity Theory*]{} (Macmillan, New York, 1990), pp. 69-82.
I. K. Ono, C. S. O’Hern, D. J. Durian, S. A. Langer, A. J. Liu and S. R. Nagel, Phys. Rev. Lett. [**89**]{}, 095703 (2002).
E. Bouchbinder, J. S. Langer and I. Procaccia, Phys. Rev. E [**75**]{}, 036108 (2007).
Y. Shi, M. B. Katz, H. Li and M. L. Falk, Phys. Rev. Lett. [**98**]{} 185505 (2007).
T. K. Haxton and A. J. Liu, Phys. Rev. Lett. [**99**]{}, 195701 (2007).
Note that for situations where $\chi_\infty$ can become very large one should rewrite Eq. (\[eq:chi\]) such that the term in the square brackets reads $[1-\chi/\chi_\infty]$. This modification is not needed here as we do not consider such situations.
Mainly in situations where all the tensors can be simultaneously diagonalized, resulting in effectively scalar equations.
M. L. Falk and J. S. Langer, MRS Bull. [**25**]{}, 40 (2000).
L. O. Eastgate, J. S. Langer and L. Pechenik, Phys. Rev. Lett. [**90**]{}, 045506 (2003).
M. L. Manning, J. S. Langer and J. M. Carlson, Phys. Rev. E [**76**]{}, 056106 (2007).
M. A. Grinfeld, S. E. Schoenfeld and T. W. Wright, Appl. Phys. Lett. [**88**]{}, 104102 (2006).
E. Bouchbinder, Phys. Rev. E [**77**]{}, 051505 (2008).
E. Bouchbinder and T. S. Lo (unpublished).
L. D. Landau and E. M. Lifshitz, [*Theory of Elasticity*]{}, 3rd ed. (Pergamon, London, 1986).
|
---
author:
- 'J. Scharwächter'
- 'A. Eckart'
- 'S. Pfalzner'
- 'J. Zuther'
- 'M. Krips'
- 'C. Straubmeier'
bibliography:
- 'h4735.bib'
date: 'Received (date); accepted (date)'
title: 'A multi-particle model of the 3C 48 host'
---
Introduction\[sec:intro\]
=========================
3C 48 [@2001AJ....121.2843B] was the first QSO to be discovered optically [@1961ST....21.148M; @1963ApJ...138...30M] and the first QSO to be directly identified with a host galaxy [@1982Natur.296..397B]. Its basic properties are listed in Table \[tab:3c48data\]. 3C 48 has attracted much attention regarding the proposed evolutionary sequence of active nuclei [@1988ApJ...325...74S]. According to this scheme, interactions and mergers of galaxies trigger an evolution via ultra-luminous infrared galaxies (ULIRGs) to QSOs. The observational evidence, however, is hampered by the fact that many transitionary objects show only dubious indications of past or recent mergers. Clarification requires detailed multi-particle modeling, which helps to disentangle the complex spatial structure of merger remnants.
$$
[lll]{} & & $\citet{1998AJ....116..516M}$\
& & $\citet{1998AJ....116..516M}$\
& 0.367 & $\citet{2001AJ....121.2843B}$\
& 1,581 &\
& 846 &\
& 1 4.1 &\
$$
$\mathrm{H_0=75\ km\ s^{-1}\ Mpc^{-1}}$ and $q_0=0.5 $ will be adopted throughout the paper.
3C 48 is an example of a transitionary object with prototypal properties in many respects: It has the typical far-infrared excess, originating from thermal radiation of dust which is heated by the quasar nucleus and by newly forming stars in the host galaxy [@1985ApJ...295L..27N; @1991AJ....102..488S]. Large amounts of molecular gas indicate the possibility of a young stellar population in the host. Finally, long-slit spectroscopy gives evidence for an ongoing starburst in the host which currently seems to be close to its maximum activity [@2000ApJ...528..201C]. But the merger scenario for 3C 48 is still unclear: Indeed, the host has a significant tail-like extension to the northwest whose tidal origin is rather compelling with regard to the kinematics and ages of its stars [@2000ApJ...528..201C]. However, the nature of the apparent second nucleus 3C 48A, about $1\arcsec $ northeast of the QSO [@1991AJ....102..488S], and the location of the expected counter tidal tail remain an unsolved problem. 3C 48A could as well be due to the radio jet [@1991Natur.352..313W] interacting with the dense interstellar medium. A feature at the southeast of the host, previously interpreted as a counter tidal tail [@1999MNRAS.302L..39B], has turned out to be a background galaxy [@2000ApJ...528..201C]. Instead, a counter tail extending from the southeast to the southwest is suspected [@2000ApJ...528..201C] but not yet identified. @2000ApJ...528..201C suggest that such a location of the two tidal tails might be explicable by a certain projection of the merger scenario used to simulate the ”Antennae”.
This paper reports the first successful multi-particle model for the 3C 48 host. Suggesting simple solutions for the 3C 48A problem and the counter tail problem, the model largely resolves doubts about the merger hypothesis for 3C 48.
Methods \[sec:methods\]
=======================
Numerical simulations
---------------------
The stellar-dynamical 3-dimensional N-body simulations are performed with TREESPH [@1989ApJS...70..419H], a tree code used in its non-collisional mode. Stable initial particle distributions for the model galaxies are set up with BUILDGAL [see @1993ApJS...86..389H]. The spatial orientation of the two disk galaxies is parameterized by their inclinations $i$ with respect to the orbital plane and their pericentric arguments $\omega $ as introduced by @1972ApJ...178..623T. Both galaxies have extended mass distributions so that the orbit of their encounter is not Keplerian but decaying. Two descriptions can be used to characterize these orbits: The first one is a pseudo-Keplerian description using the parameters of eccentricity $e$, pericentric distance $r_{\mathrm{peri}}$, and angle to pericentre $\Omega_{\mathrm{peri}}$ of the corresponding Keplerian orbit for which the total mass of each galaxy is associated with a point mass at the respective centre of mass. The second one is a direct description of the decaying orbit using the true apocentric and pericentric distances $r_{\mathrm{apo}}$ and $r_{\mathrm{peri}}$ of the first passage to define an eccentricity in its generalized formulation $e = (r_{\mathrm{apo}}-r_{\mathrm{peri}})/(r_{\mathrm{apo}}+r_{\mathrm{peri}})$. For convenience, the system of units, which remains intrinsically scale-free, is scaled to the system suitable for 3C 48.
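As a small worked example (in Python, purely for illustration), the generalized eccentricity can be evaluated directly from the apocentric and pericentric distances; with the decaying-orbit values quoted for the best-fit model in Sect. \[sec:results\] ($e=0.8$, $r_{\mathrm{peri}}=7$ kpc), the implied apocentric distance of the first passage is $r_{\mathrm{apo}}=r_{\mathrm{peri}}(1+e)/(1-e)=63$ kpc.

```python
def generalized_eccentricity(r_apo, r_peri):
    """Eccentricity in its generalized formulation for a decaying orbit."""
    return (r_apo - r_peri) / (r_apo + r_peri)

# best-fit decaying orbit: r_peri = 7 kpc, e = 0.8  ->  implied r_apo = 63 kpc
print(generalized_eccentricity(63.0, 7.0))   # 0.8
```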
The results of the simulations are analyzed as 2-dimensional projections. In order to mock the pixel array data of imaging observations, the particles are sorted into a $512\times 512$ grid. The virtual pixel values are computed by adding up all particles located in a grid cell along the line-of-sight. Without any special weighting of a nuclear component, the mock images are comparable to QSO-subtracted images of the host. Spectra for each grid cell are generated by sorting the particles into velocity channels according to their respective line-of-sight velocities. Thus, an average stellar line-of-sight velocity is assigned to each virtual pixel. The resulting data arrays are spatially smoothed by Gaussian convolution and converted into FITS format to facilitate the further data processing with standard astronomical software.
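A minimal Python sketch of this projection step is given below. It is not the code actually used for the analysis: the particle data are random placeholders standing in for a TREESPH snapshot, and the grid extent, smoothing width and file names are arbitrary; NumPy, SciPy and Astropy are assumed.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from astropy.io import fits

# projected particle positions and line-of-sight velocities; random
# placeholders standing in for a simulation snapshot
rng = np.random.default_rng(0)
x, y = rng.normal(size=(2, 24000))
v_los = rng.normal(scale=150.0, size=24000)      # km/s

npix = 512
edges = np.linspace(-5.0, 5.0, npix + 1)

# virtual pixel values: particles per grid cell along the line of sight
image, _, _ = np.histogram2d(x, y, bins=[edges, edges])

# average stellar line-of-sight velocity assigned to each virtual pixel
vsum, _, _ = np.histogram2d(x, y, bins=[edges, edges], weights=v_los)
with np.errstate(invalid="ignore", divide="ignore"):
    vmap = np.where(image > 0, vsum / image, np.nan)

# spatial smoothing by Gaussian convolution and conversion to FITS
fits.writeto("mock_image.fits",
             gaussian_filter(image, sigma=2.0).astype(np.float32), overwrite=True)
fits.writeto("mock_vlos.fits", vmap.astype(np.float32), overwrite=True)
```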
Observational data on 3C 48
----------------------
The data presented by @2000ApJ...528..201C are used for comparing the simulations with observations. They provide information about the optical surface-brightness of the QSO-subtracted host (Fig. 1 therein) and about the stellar kinematics along the four slits A, B, C, G (Fig. 1 and Table 2 therein). In reference to these data, the basic proportions of the main body of the host are classified by dimensionless length ratios (left panel of Fig. \[fig:ratios\] and left column of Table \[tab:ratios\]). Such a comparison is independent of the length scaling, in contrast to the comparison of line-of-sight velocities which requires a positioning of the four slits on the mock image.
Having determined the final physical length unit of the simulations, the curvature of the northwestern tidal tail is compared by using an angle-versus-distance plot (right panel of Fig. \[fig:ratios\] and Fig. \[fig:curv\]).
The look-alike\[sec:results\]
=============================
Different mass ratios of the initial galaxies, different snapshots during the merger process, and different projection angles of the merger remnants were probed in a still limited parameter study.
The nearest look-alike is found for the merger of two identical galaxies whose physical and numerical properties are given in Table \[tab:galparams\]. With these parameters the galaxies are similar to spirals of type Sb.
$$\begin{array}{l|ccc}
\hline
\noalign{\smallskip}
\hline
\mathrm{Parameter} & \mathrm{Bulge} & \mathrm{Disk} & \mathrm{Halo} \\
\hline
\mathrm{number\ of\ particles} & 8,000 & 8,000 & 8,000 \\
\mathrm{softening\ length\ [kpc]} & 0.21 & 0.28
& 1.4 \\
\mathrm{mass\ [10^{10}M_{\sun}]} & 1.86 & 5.60
& 32.48 \\
\mathrm{scale\ length\ [kpc]} & 0.88 & 3.50
& 35.0 \\
\mathrm{maximum\ radius\ [kpc]} & 7.0 & 52.5
& 105.0 \\
\hline
\mathrm{scale\ height\ [kpc]} & & 0.7 & \\
\mathrm{maximum\ height\ [kpc]} & & 7.0 & \\
\hline
\end{array}$$
The experimental setup is the same as used for simulations of the ”Antennae” – i.e. both galaxies are symmetrically oriented with $i_1 = i_2 = 60\degr $ and $\omega_1 = \omega_2 = -30\degr $ [see @1972ApJ...178..623T; @1988ApJ...331..699B]. The model galaxies are initialized near the apocentre of the corresponding elliptical Keplerian orbit which is defined by the eccentricity of $e = 0.5$, the pericentric distance of $r_{\mathrm{peri}}=20$ kpc, and the period of 1.2 Gyr. The time step in the simulations is fixed to 1 Myr which guarantees that energy is conserved to better than 1% during the merger. The true decaying orbit is characterized by the generalized eccentricity of $e = 0.8$ and the pericentric distance of $r_{\mathrm{peri}}=7$ kpc. The outer regions of the two galaxy bulges begin to merge after $\sim 272.5$ Myr, just before pericentric passage. The merging is not a single process but the centres of the two bulges are repeatedly flung apart before they settle down in one common density peak.
$$\begin{array}{l|cc}
\hline
\noalign{\smallskip}
\hline
\mathrm{Ratios} & \mathrm{\object{3C\ 48}\ host} & \mathrm{Look-alike} \\
\hline
L1/L2 & 0.76 & 0.70 \\
L1/W1 & 1.25 & 1.44 \\
L1/W2 & 1.67 & 1.86 \\
L1/D & 0.52 & 0.55 \\
L2/W1 & 1.65 & 2.06 \\
L2/W2 & 2.20 & 2.64 \\
L2/D & 0.69 & 0.79 \\
W1/W2 & 1.33 & 1.29 \\
W1/D & 0.42 & 0.38 \\
W2/D & 0.31 & 0.30 \\
\hline
\end{array}$$
The nearest look-alike emerges after 461.1 Myr. Two projections of this merger remnant are shown in Fig. \[fig:antennae\]. In the left panel (”Antennae” look-alike), the view is perpendicular to the orbital plane x-y. In the right panel (3C 48 look-alike), the orbital plane is tilted southwards, westwards, and counterclockwise by $120\degr $, $160\degr $, and $116\degr $, respectively. The proportions of the look-alike are listed in the right column of Table \[tab:ratios\]. The positions of the four slits A, B, C, G on the look-alike host and the resulting physical coordinate system are shown in Fig. \[fig:simdens\].
In Fig. \[fig:vlosfit\], the scaled velocities along the slits are compared to the confidence region of stellar line-of-sight velocities given for 3C 48 by @2000ApJ...528..201C. The angle-versus-distance comparison for the curvature of the northwestern tidal tail is shown in Fig. \[fig:curv\].
Discussion\[sec:discussion\]
============================
The nature of 3C 48A
--------------
Optical and near-infrared images of the host show two luminosity peaks at the positions of 3C 48 and 3C 48A, the latter being located about $1\arcsec $ northeast of 3C 48 [e.g. @1991AJ....102..488S; @2000ApJ...528..201C]. With softening lengths of $\sim 0.25$ kpc ($0\farcs06 $) for the bulge and disk particles (see Table \[tab:galparams\]), the spatial resolution of the simulations is high enough to identify corresponding density peaks in the model. As shown in the small inset in Fig. \[fig:simdens\], the centres of the bulge components of the two merging galaxies are still separated at the stage of the look-alike. Their distance of about $0\farcs 6 $ (2.5 kpc) and their relative positions on a southwestern to northeastern axis are similar to the observed configuration of 3C 48 and 3C 48A. Thus, a scenario with 3C 48 and 3C 48A being the two centres of merging galaxies is possible. However, the exact configuration of the density peaks in the simulations is very sensitive to the projection angle and to the time at which the snapshot is taken. About 20 Myr later, the two peaks have already merged into one. Furthermore, the purely stellar-dynamical model does not address the question of a possible nuclear activity at the positions of 3C 48 and 3C 48A. A detailed discussion of nuclear activity depends on whether or not a black hole exists at the mentioned positions and on the respective fueling rates.
Location of the counter tidal tail
----------------------------------
Since each of the two merging disk galaxies forms a tidal tail, the missing second tidal tail has always been a caveat of the merger hypothesis for 3C 48. Here, the simulations suggest a simple solution: At the projection angle of the look-alike, the second tidal tail is mainly located in front of the body of the host and, therefore, severely foreshortened. It extends from the southwest towards the northeast, roughly along slit B, so that measurements along this slit trace a mixture of line-of-sight velocities from the body and from the tail. This could explain why the observed and the simulated line-of-sight velocity signatures along slit B (Fig. \[fig:vlosfit\]) are dominated by scattering around a mean velocity close to zero. Slits A, C, and G, in contrast, are characterized by large absolute line-of-sight velocities (up to $\mathrm{\sim 200\ to\ 300\ km\ s^{-1}}$) and strong variations along the slits. A counter tidal tail extending from the southwest towards the northeast in front of the main body of the host is a completely new alternative. Regarding the information about stellar kinematics, this location seems to be more likely than the two suggested tails in the southeast and from the southeast towards the southwest [@1999MNRAS.302L..39B; @2000ApJ...528..201C] which have failed identification so far.
The evolutionary history of 3C 48
----------------------------
Conclusions about the orbital parameters for 3C 48 and the original parameters of the merging galaxies can only be tentative. The orbital period of the best fit model amounts to about 20% of the age of the universe at the redshift of 3C 48 ($\sim 5.4$ Gyr). A merger scenario with such an orbital period is plausible, assuming an initially highly eccentric orbit of the merging galaxies which is transformed into a bound orbit by dynamical friction of their dark matter halos [e.g. @1989AJ.....98.1557J]. It has been found that the morphology and the kinematics of tidal tails are very sensitive to the rotation curve of the interacting model galaxies [@1996ApJ...462..576D; @1998ApJ...494..183M; @1999ApJ...526..607D]. Thus, instead of two identical galaxies, an alternative model for 3C 48 could start from two galaxies with different rotation curves so that only one of them forms an extended tidal tail. However, even in its generality the multi-particle model presented in this paper gives rather compelling evidence that the formation of 3C 48 is linked to a merger process. Therewith, 3C 48 ranks among those transitional objects which support the evolutionary scenario [@1988ApJ...325...74S] in its original merger-driven definition.
Our special thanks go to Prof. Dr Lars Hernquist who kindly provided the codes TREESPH and BUILDGAL and gave helpful advice. This project was supported in part by the Deutsche Forschungsgemeinschaft (DFG) via grant SFB 494. J. Scharwächter is supported by a scholarship for doctoral students of the Studienstiftung des deutschen Volkes.
|
---
abstract: 'Autocatalytic fibril nucleation has recently been proposed to be a determining factor for the spread of neurodegenerative diseases, but the same process could also be exploited to amplify minute quantities of protein aggregates in a diagnostic context. Recent advances in microfluidic technology allow analysis of protein aggregation in micron-scale samples potentially enabling such diagnostic approaches, but the theoretical foundations for the analysis and interpretation of such data are so far lacking. Here we study computationally the onset of protein aggregation in small volumes and show that the process is ruled by intrinsic fluctuations whose volume dependent distribution we also estimate theoretically. Based on these results, we develop a strategy to quantify in silico the statistical errors associated with the detection of aggregate containing samples. Our work opens a new perspective on the forecasting of protein aggregation in asymptomatic subjects.'
author:
- Giulio Costantini
- Zoe Budrikis
- Alessandro Taloni
- 'Alexander K. Buell'
- Stefano Zapperi
- 'Caterina A. M. La Porta'
title: 'Fluctuations in protein aggregation: Design of preclinical screening for early diagnosis of neurodegenerative disease'
---
Introduction
============
The presence of aberrant conformations of the amyloid $\beta$ peptide and the protein $\alpha$-synuclein is considered to be a key factor behind the development of Alzheimer’s and Parkinson’s diseases, respectively. The polymerization kinetics of these proteins has been shown to consist of nucleation and growth processes and to be strongly accelerated by the presence in solution of pre-existing fibrils [@Jarrett1993; @Buell2014], thereby circumventing the slow primary nucleation of aggregates. It was found that surfaces, such as lipid bilayers [@Grey2015; @Galvagnion2015] and hydrophobic nanoparticles [@Vacha2014] can accelerate the nucleation process dramatically. Indeed, in the case of $\alpha$-synuclein, it was found that in the absence of suitable surfaces, the primary nucleation rate is undetectably slow [@Buell2014]. Under certain conditions, the surfaces of the aggregates themselves appear to be able to catalyse the formation of new fibrils, leading to autocatalytic behavior and exponential proliferation of the number of aggregates [@Ruschak2007; @Cohen2013; @Buell2014]. This so-called secondary nucleation process is likely to play an important role in the spreading of aggregate pathology in affected brains [@Peelaerts2015], as the transmission of a single aggregate into a healthy cell with a pool of soluble protein might be sufficient for the complete conversion of the soluble protein into aggregates.
An intriguing idea is to exploit this observation to screen biological samples based on the presence of very low concentrations of aggregates for pre-clinical diagnosis of neurodegenerative diseases. Indeed, this has been achieved in the case of the prion diseases in a methodology called protein misfolding cyclic amplification [@Morales2012], which is based on the amplification of aggregates through repeated cycles of mechanically induced fragmentation and growth. Recently, the applicability of this approach to the detection of aggregates formed from the amyloid $\beta$ peptide has been demonstrated [@Salvadores2014]. Furthermore, the autocatalytic secondary nucleation of amyloid $\beta$ fibrils has been exploited to demonstrate the presence of aggregates during the lag phase of aggregation [@Arosio2014].
However, none of these methods currently allow one to easily determine the absolute number of aggregates in a given sample. One strategy to address this problem is to divide a given sample into a large number of sub-volumes and determine for each of the sub-volumes whether it contains an aggregate or not. Due to advances in microfluidic technology and microdroplet fabrication [@Theberge2010], it is now possible to monitor protein aggregation in micron-scale samples [@Knowles2011], a technique that could be used to design microarrays targeted for protein polymerization assays. To be successful this program needs guidance from theory to quantify possible measuring errors due to false positive and negative detection. Current understanding of protein polymerization is based on mean-field reaction kinetics that have proved successful in describing key features of the aggregation process in macroscopic samples [@Knowles2009; @Cohen2011; @Cohen2013]. This theory is, however, designed to treat the system in the infinite volume limit, where the intrinsic stochasticity of the nucleation processes cannot manifest itself, so that its applicability to small volume samples is questionable. The importance of noise in protein aggregation was clearly illustrated in Ref. [@Szavits-Nossan2014], whose authors proposed and solved the master equation kinetics of a model for polymer elongation and fragmentation, obtaining good agreement with experiments on insulin aggregation [@Knowles2011].
Here we address the problem by numerical simulations of a three dimensional model of diffusion-limited aggregation of linear polymers [@budrikis2014], including explicitly auto-catalytic secondary nucleation processes [@Ruschak2007; @Cohen2013; @Buell2014]. A three dimensional model overcomes the limitations posed by both mean-field kinetics [@Knowles2009; @Cohen2011; @Cohen2013] and master equation approaches [@Szavits-Nossan2014], which do not consider diffusion and spatial fluctuations. Most practical realizations of protein aggregation reactions are not diffusion limited, due to the slow nature of the aggregation steps, caused by significant free energy barriers [@Buell2012]. This leads to the system being well mixed at all times and mean-field theories providing a good description. There are, however, cases both [*in vitro*]{} (e. g. when protein concentrations and ionic strengths are high, leading to gel formation [@Buell2014]) and [*in vivo*]{} (due to the highly crowded interior of the cell), where a realistic description cannot be achieved without the explicit consideration of diffusion. Here we use our model to study fluctuations in the aggregation process induced by small volumes and to provide predictions for the reliability of a seed detection assay.
Three dimensional model {#three-dimensional-model .unnumbered}
=======================
Simulations are performed using a variant of the protein aggregation model described in Ref. [@budrikis2014] where individual protein molecules sit on a three dimensional cubic lattice. The model considers primary nucleation due to monomer-monomer interaction, polymer elongation due to addition of monomers to the polymer endpoints and secondary nucleation processes in which the rate of monomer-monomer interaction is enhanced when the process occurs close to a polymer (see Fig. \[fig:1\]a for an illustration) [@Cohen2011c]. In particular, monomers diffuse with rate $k_{\rm D}$ and attach to neighboring monomers with rate $k_{\rm M}$ (primary nucleation), but when a monomer is nearest neighbor to a site containing a polymer composed of at least $n^*$ monomers, then the nucleation rate increases to $k_{\rm S} \gg k_{\rm M}$ (secondary nucleation). We do not consider polymer fragmentation, since this term is mostly relevant for samples under strong mechanical action [@Cohen2013], and some of the most important amyloid-forming proteins have been shown to exhibit aggregation kinetics dominated by secondary nucleation under quiescent conditions [@Cohen2013; @Buell2014]. A monomer can attach to a polymer with rate $k_{\rm H}$ if it meets its endpoints. Polymers move collectively by reptation with a length-dependent rate $k_{\rm R}/i^2$, where $i$ is the number of monomers in the polymer (see Ref. [@Binder1995] p. 89), and locally by end rotations, with rate $k_{\rm E}$, and kink moves, with rate $k_{\rm K}$ (for a review of lattice polymer models see [@Binder1995]). Simulations start with a constant number of $N$ monomers in a cubic system of size $L= m_0 L_0$ (with $m_0$ an integer) where $L_0$ is the typical monomer diameter, with periodic boundary conditions in all directions. We perform numerical simulations using the Gillespie Monte Carlo algorithm [@Gillespie1976] and measure time in units of $1/k_{\rm S}$ and rates in units of $k_{\rm S}$. We explore the behavior of the model by independently varying the monomer concentration $\rho\equiv N/m_0^3$, the number of monomers $N$ at fixed $\rho$, and the rate constants. For the simulation results reported in the following, the rates describing polymer motion are chosen to be $k_{\rm E}=k_{\rm R}=k_{\rm K}=10^{-2}$, which is smaller than or equal to the diffusion rate of the monomers $k_{\rm D}$.
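For readers unfamiliar with this class of kinetic Monte Carlo schemes, the following Python fragment sketches a single Gillespie step. The event list, rates and actions are toy placeholders (the two rates shown correspond to $k_{\rm D}=10^{-2}$ and $k_{\rm M}=4\cdot 10^{-4}$ in units of $k_{\rm S}$); in the actual model the list of possible moves (diffusion, nucleation, elongation, reptation, end rotations, kink moves) is rebuilt from the lattice configuration after every executed event.

```python
import numpy as np

rng = np.random.default_rng(1)

def gillespie_step(events, t):
    """One Gillespie step: draw a waiting time from the total rate and
    execute one of the currently possible events with probability
    proportional to its rate."""
    rates = np.array([rate for rate, _ in events])
    total = rates.sum()
    t += rng.exponential(1.0 / total)
    i = rng.choice(len(events), p=rates / total)
    events[i][1]()                      # update the configuration
    return t

# toy usage: two possible diffusion moves and one primary nucleation event
state = {"dimers": 0}

def nucleate():
    state["dimers"] += 1

events = [(1e-2, lambda: None), (1e-2, lambda: None), (4e-4, nucleate)]

t = 0.0
for _ in range(1000):
    t = gillespie_step(events, t)
print(t, state["dimers"])
```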
As expected, secondary nucleation efficiently decreases the half-time before rapid polymerization. We illustrate this by changing the critical polymer size $n^*$ needed to induce secondary nucleation. We observe that the lower $n^*$, the shorter the half-time (see Fig. \[fig:1\](b)). Currently, no experimental data exist on the value of $n^*$, but it can be expected to be of a magnitude similar to that of the smallest possible amyloid fibril, defined as the smallest structure for which monomer addition becomes independent of the size of the aggregate and an energetically downhill event.
Mean-field theory {#mean-field-theory .unnumbered}
=================
The progress of reactions observed experimentally in bulk systems can be well approximated by a mean-field model [@Knowles2009; @Cohen2011; @Cohen2013], without fragmentation or depolymerization of polymers. Such models contrast with our three dimensional computational model, which also describes monomer diffusion and polymer motion due to reptation, kink motion and end-rotations, none of which are treated by the mean-field approximation. Despite this, it is still possible to fit the polymerization curves resulting from the three dimensional simulations with mean-field theory using effective, diffusion-dependent parameters. The fact that both experimental and simulated polymerization curves are described by the same mean-field theory ensures that our model is appropriate to describe experiments. In the mean-field model, the evolution of the concentration $f_j$ of polymers of length $j\geq n_c$, where $n_c$ is the nucleation size, is given by [@Cohen2011] $$\label{eq_polymer_evolution}
\dot{f}_j(t) = k_n m(t)^{n_c} \delta_{j,n_c} + 2m(t)k_{+} f_{j-1}(t)-2m(t)k_{+}f_j(t) + k_2 m(t)^{n_2} \sum_{i=n_c}^{\infty}i f_i(t) \delta_{j,n_2},$$ where dots indicate time derivatives and $m(t)$ is the monomer concentration. The first term on the right-hand side represents an increase in the concentration of polymers of size $n_c$ due to polymer nucleation by aggregation of $n_c$ monomers with rate constant $k_n$; this is a generalized version of the dimer formation with rate constant $k_{\rm M}$ in the 3d model. The second term represents an increase in the concentration of polymers of size $j$ by attachment of a monomer to a polymer of size $j-1$, with rate constant $k_{+}$. The third term is the corresponding loss of concentration of polymers of size $j$ when they attach to a monomer. These two terms are the mean-field equivalent of the endpoint attachment of monomers to polymers with rate constant $k_{\rm H}$ in the 3d model. The final term represents secondary nucleation, which in the mean-field model is described as an increase in concentration of polymers of size $n_2$ (the secondary nucleus size) occurring at a rate proportional to the mass of polymers and with a rate constant $k_2$. By conservation of mass, the evolution of the monomer concentration is $$\label{eq_monomer_evolution}
\dot{m}(t) = - \sum_{i=n_c}^{\infty} i \dot{f}_i(t)$$
The evolution of the number concentration $P(t)=\sum_{j\geq n_c} f_j(t)$ and mass concentration $M(t)=\sum_{j\geq n_c} j f_j(t)$ can be found by summing over $j$ in . After some algebra, one obtains $$\begin{gathered}
\dot{P}(t) = k_2 M(t) m(t)^{n_2} + k_n m(t)^{n_c},\\
\dot{M}(t) = 2 k_{+} m(t) P(t) + n_2 k_2 m(t)^{n_2} M(t) + n_c k_n m(t)^{n_c},\end{gathered}$$ Analytical approximation [@Knowles2009; @Cohen2011; @Cohen2013] of the system of equations gives a solution that depends on two parameters, $\lambda=\sqrt{2k_+k_nm(0)^{n_c}}$ and $\kappa=\sqrt{2k_+k_2m(0)^{n_2+1}}$. We fit our data with the form given in Eq. 1 of Ref. [@Cohen2013], using a least squares method. Each curve is fitted independently. Diffusion plays an important role in the aggregation process, shifting the aggregation curves as shown in Fig. \[fig:2\]a and Fig. \[fig:2\]b. For a considerable parameter range, however, the time evolution of the fractional polymer mass can be fitted by mean-field theory (lines in Fig. \[fig:2\]a and Fig. \[fig:2\]b) with effective parameters that now depend on the diffusion rate $k_{\rm D}$ (see Fig. \[fig:2\]c and \[fig:2\]d). Similarly, mean-field theory describes the density dependence of the aggregation curves as shown in Fig. \[fig:2\]b.
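For concreteness, the moment equations can be integrated numerically in a few lines; the Python sketch below is only illustrative, and the rate constants, nucleus sizes and initial monomer concentration are arbitrary values rather than quantities fitted to the simulations or to experiments.

```python
import numpy as np
from scipy.integrate import solve_ivp

k_n, k_plus, k_2 = 1e-6, 1e2, 1e-3       # illustrative rate constants
n_c, n_2, m0 = 2, 2, 1.0                 # nucleus sizes and initial monomer conc.

def moments(t, y):
    P, M = y
    m = max(m0 - M, 0.0)                 # monomer conservation
    dP = k_2 * M * m**n_2 + k_n * m**n_c
    dM = 2 * k_plus * m * P + n_2 * k_2 * m**n_2 * M + n_c * k_n * m**n_c
    return [dP, dM]

sol = solve_ivp(moments, (0.0, 200.0), [0.0, 0.0], dense_output=True, rtol=1e-8)

t = np.linspace(0.0, 200.0, 4001)
M = sol.sol(t)[1]
t_half = t[np.searchsorted(M, 0.5 * m0)]
print("half-time of the mean-field polymerization curve:", t_half)
```

In practice, such an integration could be wrapped in a least-squares routine (for instance `scipy.optimize.curve_fit`) to extract effective parameters from a simulated curve; this is one possible way to realize the fits described above.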
Fluctuations in protein aggregation {#fluctuations-in-protein-aggregation .unnumbered}
===================================
Having confirmed that our computational model faithfully reproduces polymerization kinetics in macroscopic samples, we now turn to the main focus of the paper, the study of sample-to-sample fluctuations in small volumes, a feature that cannot be studied with mean-field kinetics. When the sample volume is reduced, we observe increasing fluctuations in the aggregation kinetics as shown in Figs. \[fig:3\]a and \[fig:3\]b. These results are summarized in Fig. \[fig:3\]c showing the complementary cumulative distributions of half-times for different monomer numbers $N$ and constant monomer concentration $$S(t_{1/2}) \equiv \int_{t_{1/2}}^\infty P(x)dx$$ where $P(x)$ is the probability density function and $t_{1/2}$ is defined as the half-time of the polymerization curve (i.e. the time at which $M/M_0=1/2$).
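The quantities entering this definition are straightforward to extract from a set of aggregation curves. The short Python sketch below, which uses synthetic sigmoidal curves as placeholders for simulated data, shows one possible way to compute $t_{1/2}$ for each sample and the empirical complementary cumulative distribution of the resulting half-times.

```python
import numpy as np

def half_time(t, M, M0):
    """First time at which M/M0 crosses 1/2 (linear interpolation)."""
    frac = np.asarray(M) / M0
    i = np.argmax(frac >= 0.5)
    return np.interp(0.5, frac[i - 1:i + 1], t[i - 1:i + 1])

def survival(samples):
    """Empirical complementary cumulative distribution of the half-times."""
    x = np.sort(samples)
    return x, 1.0 - np.arange(1, x.size + 1) / x.size

# synthetic ensemble of aggregation curves with random onset times
rng = np.random.default_rng(2)
t = np.linspace(0.0, 400.0, 4001)
halves = []
for _ in range(200):
    t0 = rng.exponential(20.0)
    halves.append(half_time(t, 1.0 / (1.0 + np.exp(-(t - t0 - 5.0))), 1.0))
x, S = survival(np.array(halves))
```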
The steepness of the aggregation curves in Figs. \[fig:3\]a and \[fig:3\]b suggests that, for $k_S\gg k_M$, fluctuations are mostly ruled by the time of the first primary nucleation event, $t_0$, whose complementary cumulative distribution $S_0(t_0)$ can be estimated analytically as $$S_0(t_0)=e^{-f_{M}k_MNt_0}
\label{eq:S0}$$ where $f_{M}$ is the average number of possible primary nucleation events per unit monomer. We estimate that $f_M =3\rho$ using a Poisson approximation, as we show in details in the following section. Note that Eq. \[eq:S0\] displays a size dependence that is reminiscent of extreme value statistics $S_0(x,N) = \exp(-N F(x))$ where $F(x)$ is a function that does not depend on $N$ [@gumbel; @weibull39]. If $k_S \gg k_M$, the half-time $t_{1/2}$ differs from the nucleation time by a weakly fluctuating time $\tau$. This comes from the observation that, once the first primary nucleation event has happened, the polymerization follows rapidly, thanks to fast growth and secondary nucleation. This yields a weakly fluctuating delay $\tau(N,\rho)=t_{1/2}-t_0$, which in general depends on the number density $\rho$ and on the number of monomers $N$. The distribution and average values of $\tau$ are reported in Fig. \[figSalpha\]. The average value of $\tau$ decreases with $\rho$ and displays only a smaller dependence on $N$. The distribution of $\tau$ is always peaked around its average but while at small values of $\rho$ the peak shifts with $N$ while the standard deviation remains constant, for higher values of $\rho$ only the standard deviation depends on $N$ and the peak position does not change. Since the fluctuations in $\tau$ are much smaller than the fluctuations of $t_0$ we can safely assume that $t_{1/2}\simeq t_0+\langle \tau\rangle$ for $t_0\geq 0$, so the complementary cumulative distribution takes the form $$S(t_{1/2}) =\left\{
\begin{array}{ll}
1 & t_{1/2}\leq \langle \tau\rangle\\
S_0(t_{1/2}-\langle\tau\rangle) & t_{1/2}> \langle \tau \rangle.
\end{array}
\right.
\label{eq:Slag}$$ The predictions of Eqs. \[eq:S0\] and \[eq:Slag\] are in agreement with numerical simulation results for $S_0(t_0)$ and $S(t_{1/2})$, respectively (see Figs. \[fig:3\]c and \[figS0\_32\]). In particular, the behavior of $S_0(t_0)$ is obtained without any fitting parameters, while $S(t_{1/2})$ only needs the estimate of the single parameter $\langle \tau \rangle$ (additional comparisons for different values of $\rho$ are reported in Figs. \[figS0\_16\] and \[figS0\_72\]). The corresponding average values of $\langle t_{1/2}\rangle$ are shown in the inset of Fig. \[fig:3\]c as a function of $N$ (see also Fig. \[figS0\_32\]b).
Theoretical derivation of the half-time distribution {#theoretical-derivation-of-the-half-time-distribution .unnumbered}
====================================================
In this section, we provide a detailed derivation of Eqs. \[eq:S0\] and \[eq:Slag\] in the limit of relatively large diffusion when the system is well mixed. To this end, we consider a cubic lattice composed of $m_0^3$ nodes, in which $N$ monomers are placed randomly at time $t=0$. As illustrated in Fig. \[fig:lattice\], each monomer $i$ sits near $l^{(i)}$ neighboring monomers and $6-l^{(i)}$ neighboring empty sites, where $l^{(i)}$ is in general a fluctuating time dependent quantity. In the model, each monomer $i$ can either diffuse into one of the $6-l^{(i)}$ empty sites or form a dimer with one of the $l^{(i)}$ neighboring monomers. Therefore, at any given time $t$ the number of possible diffusion events in the system is $n_D(t) = \sum_i (6-l^{(i)})$ and the number of possible aggregation events is $n_M(t) = 1/2\sum_i l^{(i)}$, where the factor $1/2$ is needed to correct for the double counting of the number of monomer pairs. We can compute the time of first aggregation of $N$ monomers using Poisson statistics, considering for simplicity the case in which the number of possible aggregation events $n_M$ does not depend on time. In this case, the probability of having an aggregation event within $\Delta t$ is $n_{M}k_M\Delta t$. Dividing the time interval $t_0$ into $n$ elementary time subintervals $\Delta t=\frac{t_0}{n}$, the rate of aggregation events at $t_0$, i.e. the probability per unit time to have the first dimer formed after a time interval $t_0$ has elapsed, is given by the following expression: $$\tilde{P}_0(t_0)=\lim_{n\to\infty}\left(1-n_{M}k_M\frac{t_0}{n}\right)^{n-1}n_{M}k_M=n_Mk_Me^{-n_{M}k_Mt_0}.
\label{P0_t0}$$ As stressed previously, $n_M$ and $n_D$ are in principle fluctuating quantities and therefore Eq. \[P0\_t0\] is not strictly valid. Yet, as shown in Fig. \[fig:ergodic\], $n_M$ and $n_D$ are both $i)$ stationary, $ii)$ ergodic, $iii)$ weakly fluctuating and $iv)$ linearly dependent on $N$, on average. Hence, the probability $\tilde{P}_0(t_0)$ for a monomer to form a dimer at $t_0$ can reasonably be approximated by its ensemble average $$P_0(t_0)\simeq \langle \tilde{P}_0(t_0)\rangle \simeq \langle n_{M}\rangle k_M e^{-f_{M}k_MNt_0}
\label{p_real_norm}$$ where we have replaced $n_M$ by its average value $\langle n_{M}\rangle$ and defined $f_{M}\equiv \frac{\langle n_{M}\rangle}{N}$. From Eq.(\[p\_real\_norm\]), we easily obtain the complementary cumulative distribution function $$S_0(t_0)=\int_{t_0}^{\infty} d\tau P_0(\tau)=e^{-f_{M}k_MNt_0},
\label{S0_real}$$ recovering Eq. \[eq:S0\]. To conclude our calculation, we still need to evaluate $f_M$. To this end, we perform a discrete enumeration of the possible configurations of a single monomer, in the spirit of cluster expansions for percolation models. In particular, the six relevant configurations for a single monomer in contact with other monomers are reported in Fig. \[fig:lattice\]. The weight $p_{l}$ of a configuration in which a monomer has $l$ occupied neighbors is assumed to be given by the binomial distribution $$p_{l} = \frac{6!}{l!(6-l)!}\rho^l(1-\rho)^{6-l}
\label{binomial}.$$ This single particle picture suggests that the average number of primary nucleation events per monomer $f_{M}$ corresponds to the average number of nearest neighbors $\langle l \rangle$, divided by a factor 2 since any nucleation event encompasses 2 particles. With a similar reasoning, we estimate $f_{D}=6 - \langle l \rangle$, i.e. the average number of empty directions. Then, from the binomial distribution (\[binomial\]) we get $f_{M}=\frac{6\rho}{2}$ and $f_{D}=6(1-\rho)$. Using these values in Eq. \[eq:S0\] and Eq. (\[eq:Slag\]), we obtain agreement with numerical simulations as illustrated in Figs. \[figS0\_16\] and \[figS0\_72\] (panels (a) and (b) respectively) for different values of $\rho$.
Finally, we calculate the averages of the first aggregation time and the half-time as $\langle t_0\rangle =\int_{0}^\infty dt_0\,t_0P_0(t_0)$ and $\langle t_{1/2}\rangle =\int_{0}^\infty dt_{1/2}\,t_{1/2}P(t_{1/2})$, where $P(t_{1/2})=-\frac{dS(t_{1/2})}{dt_{1/2}}$. The expression for $\langle t_0\rangle$ is given by $$\langle t_0\rangle= \frac{1}{3\rho k_M N}.
\label{mean_t0}$$ In Fig. \[figS0\_32\]b we show the perfect agreement of the theoretical estimate given by Eq. (\[mean\_t0\]) with the numerical values of the average time for the first primary nucleation event as a function of $1/N$, for several densities. Notice that no fitting parameters are involved. The average value of $t_{1/2}$ follows from $\langle t_{1/2}\rangle=\langle t_0\rangle+\langle \tau\rangle$: the inset of Fig. \[fig:3\]c confirms the agreement between the theoretical estimates (dashed lines) and the numerical values (symbols).
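A quick numerical sanity check of the estimate $f_M=3\rho$, and hence of Eq. (\[mean\_t0\]), can be obtained by placing $N$ monomers at random on a periodic cubic lattice and counting nearest-neighbour monomer pairs. The Python sketch below does this for one illustrative choice of $m_0$, $\rho$ and $k_M$; the values are placeholders and do not correspond to any specific figure.

```python
import numpy as np

rng = np.random.default_rng(3)
m0, rho, k_M = 20, 0.16, 4e-4
N = int(rho * m0**3)

# random single-occupancy configuration on the m0^3 periodic lattice
lattice = np.zeros((m0, m0, m0), dtype=bool)
occupied = rng.choice(m0**3, size=N, replace=False)
lattice[np.unravel_index(occupied, lattice.shape)] = True

# nearest-neighbour monomer pairs = possible primary nucleation events n_M
n_M = sum(np.sum(lattice & np.roll(lattice, 1, axis=a)) for a in range(3))

print("measured  f_M = n_M/N =", n_M / N)
print("predicted f_M = 3*rho =", 3 * rho)
print("predicted <t_0> = 1/(3 rho k_M N) =", 1.0 / (3 * rho * k_M * N))
```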
Statistical analysis of seed detection tests {#statistical-analysis-of-seed-detection-tests .unnumbered}
============================================
While the fluctuations we observe are intrinsic to the random nature of nucleation events, the ones usually encountered in bulk experiments are likely due to contamination or differences in initial conditions [@Giehm2010; @Uversky2001; @Grey2015]. In those bulk systems ($\mu$l and larger), the number of protein molecules involved in the aggregation process is extremely large, even at low concentrations, so that we can exclude intrinsic kinetic fluctuations. For instance, a volume of 100$\mu$l at a concentration of 1$\mu$M still contains $10^{14}$ monomers, leading to a large number (hundreds to thousands) of nucleation events per second for a realistic value of the nucleation rate [@Cohen2013; @Buell2014a]. However, if the relevant volumes are made small enough (pico- to nanolitres), the stochastic nature of primary nucleation can be directly observed. This has been exploited by aggregation experiments performed inside single microdroplets, where individual nucleation events could be observed, due to their amplification by secondary processes [@Knowles2011]. In these experiments, the average half-time is found to scale with volume in a similar manner to what is shown in the inset of Fig. \[fig:3\], thus in agreement with our simulations.
We are now in a position to use our model to design a test [*in silico*]{} to detect the presence of pre-formed polymers, that act as seeds and nucleation sites for the secondary nucleation process, and that are thus amplified. As illustrated in Fig. \[fig:6\]a, the test considers a set of small volume samples containing protein solutions with a given concentration at time $t_0=0$. The aim of the test is to detect the samples containing at least one seed (case B in Fig. \[fig:6\]a). In an ideal experiment, the size of the microdroplets would be adjusted such that most droplets contain no seeds, some contain one seed, and the proportion of droplets containing more than one seed is negligible. In practice, these conditions can be easily adjusted experimentally by progressively reducing droplet volumes until only a small fraction of them display aggregates. After a fixed time $t_1 \sim \langle \tau \rangle$, one can observe which samples contain macroscopic, detectable amounts of aggregates, enabling exact quantification of the number of aggregates present in the initial sample. Ideally the test should be positive only when at least one seed was initially present, but given the large fluctuations intrinsic to the nucleation processes we demonstrated above, as well as the competition with *de novo* nucleation, there is a chance for false tests. In particular, a false positive test occurs when an unseeded sample is found to contain aggregates, while a false negative test corresponds to the case in which a seeded sample does not produce detectable amounts of aggregates within the time scale of the experiment.
In Fig. \[fig:6\]b we report the complementary cumulative distribution of aggregation half-times $t_{1/2}$ as a function of the primary nucleation rate $k_{\rm M}$ for samples with or without seeds, in this case a single pre-formed trimer. For small values of $k_{\rm M}$, seeded and unseeded samples yield distinct results, as also illustrated by the average half-times reported in Fig. \[fig:6\]c. As the value of $k_{\rm M}$ increases, however, the distributions become closer in the two cases. In Fig. \[fig:6\]d, we quantify the fraction of false positive and false negative tests for two different testing times (e.g. $t_1=500$ and $t_1=600$). As expected, for large $k_{\rm M}$ errors are very likely and the test would not be reliable. For intermediate values of $k_{\rm M}$, one can try to adjust $t_1$ to reduce possible errors with a caveat: decreasing $t_1$ reduces false positive errors, but at the same time increases false negatives (Fig. \[fig:6\]d). It is, however, possible to optimize $t_1$ so that both types of errors are minimized. In an experimental realisation of such a setup, the most important system parameter that needs to be optimized for any given protein is the ratio of secondary nucleation rate to primary nucleation rate. Due to potentially different dependencies of these two rates on the monomer concentration [@Meisl2014], pH [@Buell2014] and potentially other factors, such as temperature, salt concentration etc., it is possible to fine tune this ratio and adjust it to a value that allows for an easy discrimination between droplets that do contain a seed aggregate and those that do not.
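The two error rates can be computed directly from the half-time distributions of seeded and unseeded samples. The Python fragment below illustrates this for synthetic placeholder distributions; the numbers are arbitrary and only stand in for the simulated ensembles of Fig. \[fig:6\]b.

```python
import numpy as np

def error_rates(t_half_unseeded, t_half_seeded, t1):
    """A sample is declared seed-containing if it shows detectable
    aggregation by the testing time t1, i.e. if its half-time is <= t1."""
    fp = np.mean(np.asarray(t_half_unseeded) <= t1)   # unseeded but detected
    fn = np.mean(np.asarray(t_half_seeded) > t1)      # seeded but undetected
    return fp, fn

# synthetic half-time ensembles standing in for the simulation data
rng = np.random.default_rng(4)
unseeded = 300.0 + rng.exponential(400.0, size=2000)  # slow de novo nucleation
seeded = 300.0 + rng.exponential(60.0, size=2000)     # fast seed amplification

for t1 in (400.0, 500.0, 600.0, 700.0):
    fp, fn = error_rates(unseeded, seeded, t1)
    print(f"t1 = {t1:5.0f}: false positives = {fp:.3f}, false negatives = {fn:.3f}")
```

Scanning $t_1$ and minimizing, for example, the sum of the two error rates is one simple way to select the testing time.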
Conclusions {#conclusions .unnumbered}
===========
In conclusion, we study protein polymerization in a three dimensional computational model and elucidate the role of protein diffusion in the polymerization process. Most theoretical studies of protein aggregation neglect completely the role of diffusion and any other spatial effect. When the polymer diffusion and elongation rates are large enough we recover the standard polymerization curves that can also be obtained from mean-field analytical treatments and that can be used to fit, for example, kinetic data of amyloid $\beta$ aggregation [@Cohen2013]. It would be interesting to explore whether for small diffusion rates and small densities mean-field kinetics would eventually fail to describe the results, but this is a challenging computational task. At low densities, diffusion could play an important role, since a critical timescale would be set by the time needed by two monomers to meet before aggregating. This time-scale can be estimated considering the time for a monomer to cover a distance $x_D \sim \rho^{-1/3}$, yielding $t_D\sim x_D^2/D \sim D^{-1} \rho^{-2/3}$. This timescale is not relevant for our simulations since at the relatively high densities we study a considerable fraction of monomers are close to at least one other monomer (see $n_M$ in Fig. \[fig:ergodic\]a). Consequently, the distribution of the first aggregation time does not depend on the diffusion rate $k_D$ (see Eq. \[eq:S0\]). The half-time distribution, however, depends on diffusion even in this regime (Fig. \[fig:2\]).
Our simulations show intrinsic sample-to-sample fluctuations that become very large in the limit of small volumes and low aggregation rates. We show that the corresponding half-time distributions are described by Poisson statistics and display size dependence. As a consequence of this, the average half-times scale as the inverse of the sample volume, in agreement with insulin aggregation experiments performed in microdroplets [@Knowles2011] and with calculations based on a master-equation approach [@Szavits-Nossan2014]. We use this result to design and validate [*in silico*]{} a pre-clinical screening test based on a subdivision of the macroscopic sample volume that will ultimately allow the determination of the exact number of aggregates that was initially present. This is the first step to develop microarray-based [*in vitro*]{} tests for early diagnosis of neurodegenerative diseases.
Acknowledgements: {#acknowledgements .unnumbered}
=================
GC, ZB, AT and SZ are supported by ERC Advanced Grant 291002 SIZEFFECTS. CAMLP thanks the visiting professor program of Aalto University where part of this work was completed. SZ acknowledges support from the Academy of Finland FiDiPro program, project 13282993. AKB thanks Magdalene College, Cambridge and the Leverhulme Trust for support.
doi:10.1016/0092-8674(93)90635-4
doi:10.1073/pnas.1315346111
doi:10.1074/jbc.M114.585703
doi:10.1038/nchembio.1750
doi:10.1021/ja505502e
doi:10.1073/pnas.0703306104
doi:10.1038/nature14547
doi:10.1038/nprot.2012.067
doi:10.1016/j.celrep.2014.02.031
doi:10.1021/ja408765u
doi:10.1002/anie.200906653
doi:10.1073/pnas.1105555108
doi:10.1126/science.1178250
doi:10.1063/1.3608917
doi:10.1103/PhysRevLett.113.098101
doi:10.1038/ncomms4620
doi:10.1002/anie.201108040
doi:10.1063/1.3608916
doi:10.1016/0021-9991(76)90041-3
doi:10.1016/j.ab.2010.02.001
doi:10.1074/jbc.M010907200
doi:10.1042/bse0560011
doi:10.1073/pnas.1401564111
![\[fig:1\] a) Schematic of the protein aggregation model describing the main processes involved: primary nucleation occurring with rate $k_{\rm M}$ (and correspondingly $k_{\rm n}$ in mean-field), polymer elongation with rate $k_{\rm H}$ ($k_{\rm +}$ in mean-field), secondary nucleation with rate $k_{\rm S}$ ($k_{\rm 2}$ in mean-field) and diffusion with rate $k_{\rm D}$, which is not described by mean-field theory. b) Simulations showing the dependence of the aggregation curve, the polymer mass fraction $M/M_{\rm tot}$, on the minimal polymer size $n^*$ needed to catalyze secondary nucleation, obtained for $k_{\rm M}=4\cdot 10^{-4}$, $k_D=10^{-2}$, $k_{\rm S}=1$, $k_{\rm H}=10^4$ and $\rho=0.16$. The dashed line is the curve obtained in the limit $n^* \to \infty$, or equivalently in the absence of secondary nucleation.](Fig1.pdf){width="\columnwidth"}
![\[fig:2\] [**Protein aggregation depends on monomer diffusion:**]{} a) Simulations showing the diffusion dependence of the aggregation curve, the polymer mass fraction $M/M_{\rm tot}$, obtained for $k_{\rm M}=4 \cdot 10^{-4}$, $k_{\rm S}=k_{\rm H}=1$, $N=2450$ and $\rho=0.048$. The curves are well fit by mean-field theory (full lines) with effective parameters that depend on $k_{\rm D}$. b) Simulations of the density dependence of the polymer mass fraction for $N=2450$, $k_{\rm M}=4 \cdot 10^{-4}$, $k_{\rm H}=1$ and $k_{\rm D}=10^{-2}$. Fits by mean-field theory are plotted as lines with effective parameters reported in the panels c) and d). Time is measured in units of $1/k_{\rm S}$. c)-d):The effective mean-field parameters $\sqrt{k_+k_2}$ and $\sqrt{k_+k_n}$ obtained by fitting simulations performed for $k_{\rm M}=4\cdot10^{-4}$, $k_{\rm S}=k_{\rm H}=1$ as a function of the concentration $\rho$ and the diffusion rate $k_{\rm D}$. ](Fig2.pdf){width="\columnwidth"}
![\[fig:3\] [Half-time sample-to-sample fluctuations are due to extreme value statistics]{}. Different replicates of the simulations display wide fluctuations in half-times, especially for small numbers of monomers. a) Simulation results obtained for $N=10000$ monomers at $\rho=0.32$, $k_{\rm M}=4 \cdot 10^{-6}$, $k_{\rm S}=1$, $k_{\rm H}=10^4$, $k_{\rm D}=10^{-2}$. The graph shows that the half-time $t_{1/2}$ is very close to the nucleation time $t_0$ at which the curves depart from zero. b) Same as panel a) but with $N=1250$. c) The complementary cumulative distributions of half-times obtained from simulations for different values of $N$ are in agreement with the theory described in the text. The inset shows the average half-times for different concentrations $\rho$ as a function of $N$. The general trend is in agreement with experiments [@Knowles2011]. Time is measured in units of $1/k_{\rm S}$.](Fig3.pdf){width="5.2in"}
![ a) The complementary cumulative distribution functions $S(t_0)$ for four different monomer numbers $N$ and density $\rho=0.32$. The symbols correspond to the numerical simulations while lines correspond to the theoretical predictions obtained from Eq. (6). b) The average time $\langle t_0 \rangle$ as a function of $1/N$ for four different number densities. The theoretical predictions (dashed lines) are obtained from Eq.(13). Here $k_D=10^{-2}$ and $k_M=4 \cdot 10^{-6}$ []{data-label="figS0_32"}](figS0_32-t0ave.pdf){width="4.5in"}
![The complementary cumulative distribution functions $S(t_0)$ (a) and $S(t_{1/2})$ (b) for four different monomer numbers $N$ and density $\rho=0.16$. The symbols correspond to the numerical simulations while the lines represent the theoretical predictions obtained from Eq. (6) and Eq. (7). Here $k_D=10^{-2}$ and $k_M=4 \cdot 10^{-6}$. []{data-label="figS0_16"}](figS0_16.pdf){width="4.5in"}
![The complementary cumulative distribution functions $S(t_0)$ (a) and $S(t_{1/2})$ (b) for four different monomer numbers $N$ and density $\rho=0.72$. The symbols correspond to the numerical simulations while the lines represent the theoretical predictions obtained from Eq. (6) and Eq. (7). Here $k_D=10^{-2}$ and $k_M=4 \cdot 10^{-6}$. []{data-label="figS0_72"}](figS0_72.pdf){width="4.5in"}
![ a) The mean delay time $\langle \tau\rangle$, obtained from numerical simulations, as a function of the number density $\rho$ for four different monomer numbers $N$. For any $N$ and $\rho$ the averages are calculated over the different numerical simulations outcomes. The distributions of delay times $\tau$ as a function of the number of monomers $N$ for b) $\rho=0.16$ and c) $\rho=0.49$. Here $k_D=10^{-2}$ and $k_M=4 \cdot 10^{-6}$.[]{data-label="figSalpha"}](fig_tau.pdf){width="4.5in"}
![A schematic representation of the possibilities of diffusion (dashed arrow) and aggregation (double arrow) for a monomer (red circle) placed in the center of a cubic lattice unit cell. The monomer partners for the dimerization (from $l=1$ to $l=6$) are colored in grey while the empty sites are represented by white circles. The aggregation and diffusion rates are, respectively, $k_M$ and $k_D$. []{data-label="fig:lattice"}](Fig4.pdf){width="4.5in"}
![ a) The number of possible primary nucleation and diffusion events ($n_M$ and $n_D$, respectively) are a linear function of the number of monomers $N$. b) The corresponding frequencies $f_M$ and $f_D$ fluctuate very little in time. Here $k_D=10^{-2}$, $k_M=4 \cdot 10^{-6}$, $N=2450$ and $\rho=0.16$. []{data-label="fig:ergodic"}](Fig5.pdf){width="\columnwidth"}
![\[fig:6\] [Intrinsic fluctuations rule errors in the detection of protein aggregation prone samples]{}. a) A test to detect seeds for protein aggregation is based on small volume sampling of protein solutions at time $t_0=0$ and on the assumption that only seeded samples (e.g. sample B) would form aggregates at time $t_1$. b) Simulations allow us to compute the distribution of half-times for samples with and without a seed as a function of the rate of primary nucleation $k_M$. Data are obtained by sampling over $n=1200$ independent realizations. c) Average half-time ($\pm$ standard deviation) as a function of the rate of primary nucleation $k_M$. d) Fraction of false positives (FP) and false negatives (FN) for testing times $t_1=600$ and $t_1=500$. Time is measured in units of $1/k_{\rm S}$, $N=1250$. ](Fig6.pdf){width="\columnwidth"}
|
---
abstract: |
A method for estimating the cross-correlation $C_{xy}(\tau)$ of long-range correlated series $x(t)$ and $y(t)$, at varying lags $\tau$ and scales $n$, is proposed. For fractional Brownian motions with Hurst exponents $H_1$ and $H_2$, the asymptotic expression of $C_{xy}(\tau)$ depends only on the lag $\tau$ (wide-sense stationarity) and scales as a power of $n$ with exponent ${H_1+H_2}$ for $\tau\rightarrow 0$. The method is illustrated on (*i*) financial series, to show the leverage effect; (*ii*) genomic sequences, to estimate the correlations between structural parameters along the chromosomes.
**Keywords**: persistence (experiment), sequence analysis (experiment), scaling in socio-economic systems, stochastic processes
address: |
Physics Department, Politecnico di Torino,\
Corso Duca degli Abruzzi 24, 10129 Torino, Italy
author:
- Sergio Arianos and Anna Carbone
title: 'Cross-correlation of long-range correlated series'
---
Introduction and overview
=========================
Interdependent behaviour and causality in coupled complex systems continue to attract considerable interest in fields as diverse as solid state science, biology, physiology, climatology [@Rosenblum; @Zhou; @Oberholzer; @Dhamala; @Verdes; @Palus; @Kreuz; @Du]. Coupling and synchronization effects have been observed for example in cardiorespiratory interactions, in neural signals, in glacial variability and in Milankovitch forcing [@Tass; @Huybers; @Ashkenazy]. In finance, the *leverage effect* quantifies the cause-effect relation between return $r(t)$ and volatility $\sigma_T(t+\tau)$ and eventually financial risk estimates [@Black; @Schwert; @Haugen; @Glosten; @Wu1; @Figlewski; @Bouchaud; @Perello; @Qiu; @Ahlgren; @Varga; @Montero]. In DNA sequences, causal connections among structural and compositional properties such as intrinsic curvature, flexibility, stacking energy, nucleotide composition are sought to unravel the mechanisms underlying biological processes in cells [@Moukhtar; @Allen; @Pedersen].
Many issues still remain unsolved, mostly due to problems with the accuracy and resolution of coupling estimates in long-range correlated signals. Such signals do not show the wide-sense stationarity needed to yield statistically meaningful information when cross-correlations and cross-spectra are estimated. In [@Jun; @Podobnik], a function $F_{xy}(n)$, based on the detrended fluctuation analysis - a measure of autocorrelation of a series at different scales $n$ - has been proposed to estimate the cross-correlation of two series $x(t)$ and $y(t)$. However, the function $F_{xy}(n)$ is independent of the lag $\tau$, since it is a straightforward generalization of the detrended fluctuation analysis, which is a *positive-definite* measure of autocorrelation for long-range correlated series. Therefore, $F_{xy}(n)$ holds only for $\tau=0$. Unlike the autocorrelation, the cross-correlation of two long-range correlated signals is *not a positive-definite function of $\tau$*, since the coupling could be delayed and have either sign.
In this work, a method to estimate the cross-correlation function $C_{xy}(\tau)$ between two long-range correlated signals at different scales $n$ and lags $\tau$ is developed. The asymptotic expression of $C_{xy}(\tau)$ is worked out for fractional Brownian motions $B_H(t)$, $H$ being the Hurst exponent, whose interest follows from their widespread use for modeling long-range correlated processes in different areas [@Mandelbrot]. Finally, the method is used to investigate the coupling between (*i*) returns and volatility of the DAX stock index and (*ii*) structural properties, such as deformability, stacking energy, position preference and propeller twist, of the Escherichia Coli chromosome.
The proposed method operates: (i) on the integrated rather than on the increment series, thus yielding the cross-correlation at varying windows $n$, as opposed to the standard cross-correlation; (ii) as a sliding product of two series, thus yielding the cross-correlation as a function of the lag $\tau$, as opposed to the method proposed in [@Jun; @Podobnik]. Features (i) and (ii) imply higher accuracy and $n$-windowed resolution, while capturing the cross-correlation at varying lags $\tau$.
Method
======
The *cross-correlation* $C_{xy}(t,\,\tau)$ of two nonstationary stochastic processes $x(t)$ and $y(t)$ is defined as: $$\label{crosscovariance}
C_{xy}(t,\tau)\equiv
\Big\langle[x(t)-\eta_x(t)][y^\ast(t+\tau)-\eta_y^\ast(t+\tau)]\Big\rangle$$ where $\eta_x(t)$ and $\eta_y^\ast(t+\tau)$ indicate time-dependent means of $x(t)$ and $y^\ast(t+\tau)$, the symbol $\ast$ indicates the complex conjugate and the brackets $\langle\,\cdot\,\rangle$ indicate the ensemble average over the joint domain of $x(t)$ and $y^\ast(t+\tau)$. This relationship holds for space-dependent sequences, such as the chromosomes, by replacing time with the space coordinate. Eq. (\[crosscovariance\]) yields sound information provided the two quantities in square parentheses are jointly stationary and thus $C_{xy}(t,\,\tau)\equiv C_{xy}(\tau)$ is a function only of the lag $\tau$.
In this work, we propose to estimate the cross-correlation of two nonstationary signals by choosing for $\eta_x(t)$ and $\eta_y^\ast(t+\tau)$ in Eq. (\[crosscovariance\]), respectively the functions: $$\label{xtil}
\widetilde{x}_n(t) = \frac{1}{n}\sum_{k=0}^nx(t-k)$$ and $$\label{ytil}
\widetilde{y}_n^*(t+\tau) = \frac{1}{n}\sum_{k=0}^ny^*(t+\tau-k)$$
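A minimal Python/NumPy sketch of an estimator based on Eqs. (\[crosscovariance\])-(\[ytil\]) is given below; it replaces the ensemble average with a time average over a single pair of real-valued series (so the complex conjugation is dropped), and the function names are ours rather than those of the authors' MATLAB and C++ codes referenced later in the paper.

```python
import numpy as np

def trailing_mean(x, n):
    # Eqs. (2)-(3): moving average over the last n+1 samples, defined for t >= n.
    c = np.cumsum(np.insert(np.asarray(x, dtype=float), 0, 0.0))
    return (c[n + 1:] - c[:-(n + 1)]) / (n + 1)

def windowed_cross_correlation(x, y, n, tau):
    # Estimate C_xy(tau) at scale n, Eq. (1), as a time average of the product
    # of the two moving-average-detrended series; assumes |tau| < len(x) - n.
    xt = np.asarray(x, dtype=float)[n:] - trailing_mean(x, n)
    yt = np.asarray(y, dtype=float)[n:] - trailing_mean(y, n)
    if tau >= 0:
        return float(np.mean(xt[:len(xt) - tau] * yt[tau:]))
    return float(np.mean(xt[-tau:] * yt[:len(yt) + tau]))
```

For $x=y$ and $\tau=0$ this reduces to the scale-dependent variance discussed below.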
Wide-sense stationarity
-----------------------
The wide-sense stationarity of Eq. (\[crosscovariance\]) can be demonstrated for fractional Brownian motions. By taking $x(t)=B_{H_1}(t)$, $y(t)=B_{H_2}(t)$, $\eta_x(t)$ and $\eta_y^\ast(t+\tau)$ calculated according to Eqs. (\[xtil\],\[ytil\]), $C_{xy}(t,\,\tau)$ writes: $$\begin{aligned}
\label{dcaB0} C_{xy}(t,\,\tau)
=\Big\langle\big[B_{H_1}(t)-\widetilde{B}_{H_1}(t)\big]\big[B_{H_2}^*(t+\tau)-\widetilde{B}_{H_2}^*(t+\tau)\big]\Big\rangle
\;\;\;.\end{aligned}$$ When writing $x(t)=B_{H_1}(t)$ and $y(t)=B_{H_2}(t)$, we assume the same underlying generating noise $dB(t)$ to produce a sample of $x$ and $y$. Eq. (\[dcaB0\]) is calculated in the limit of large $n$ (calculation details are reported in the Appendix). One obtains:
$$\begin{aligned}
\label{theta} C_{xy}(\hat{\tau}) & =
n^{H_1+H_2}D_{H_1,\,H_2}\Big[-\hat{\tau}^{H_1+H_2}\nonumber\\
&+\frac{(1+\hat{\tau})^{1+H_1+H_2}+(1-\hat{\tau})^{1+H_1+H_2}}{1+H_1+H_2}\nonumber\\
&-\frac{(1-\hat{\tau})^{2+H_1+H_2}-2\hat{\tau}^{2+H_1+H_2}+(1+\hat{\tau})^{2+H_1+H_2}}{(1+H_1+H_2)(2+H_1+H_2)}
\Big]\;\;\;,\end{aligned}$$
where $\hat{\tau} = \tau/n $ is the *scaled lag* and $D_{H_1,\,H_2}$ is defined in the Appendix. Eq. (\[theta\]) is independent of $t$, since the terms in square parentheses depend only on $\hat{\tau} = \tau/n$, and thus Eq. (\[crosscovariance\]) is made wide-sense stationary. It is worthy of note that, in Eq. (\[theta\]), the coupling between $B_{H_1}(t)$ and $B_{H_2}(t)$ reduces to the sum of the exponents $H_1 + H_2$. Eq. (\[theta\]), for $\tau=0$, reduces to: $$\begin{aligned}
\label{zero} C_{xy}(0) \propto n^{H_1+H_2} \;\;\;,\end{aligned}$$ indicating that the coupling between $B_{H_1}(t)$ and $B_{H_2}(t)$ scales as the product of $n^{H_1}$ and $n^{H_2}$. The property of the variance of fractional Brownian motion $B_{H}(t)$ to scale as $n^{2H}$ is recovered from the Eq. (\[zero\]) for $x=y$ and $H_1=H_2=H$, i.e.: $$\begin{aligned}
\label{auto} C_{xx}(0) \propto n^{2H} \;\;\;.\end{aligned}$$ Eq. (\[auto\]) has been studied in [@Carbone1; @Carbone2; @Carbone3; @Carbone4; @Carbone5].\
Examples
========
Financial series
----------------
The leverage effect is a *stylized fact* of finance. The level of volatility is related to whether returns are negative or positive. Volatility rises when a stock’s price drops and falls when the stock goes up [@Black]. Furthermore, the impact of negative returns on volatility seems much stronger than the impact of positive returns (*down market effect*) [@Wu1; @Figlewski]. To illustrate these effects, we analyze the correlation between returns and volatility of the DAX stock index $P(t)$, sampled every minute from 2-Jan-1997 to 22-Mar-2004, shown in Fig. \[fig:figure1\] (a). The returns and volatility are defined respectively as: $r(t)=\ln P(t+t')- \ln P(t)$ and $ \sigma_T(t)=\sqrt{{\sum_{t=1}^{T}\big[r(t)-\overline{r(t)}_T\big]^2}/{(T-1)}}
\;.$
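A short sketch (ours, not the authors' code) of how the return and volatility series can be constructed from the price series; the trailing alignment of the $\sigma_T$ window is an assumption, since the text does not specify it.

```python
import numpy as np

def log_returns(P, tprime):
    # r(t) = ln P(t + t') - ln P(t)
    logP = np.log(np.asarray(P, dtype=float))
    return logP[tprime:] - logP[:-tprime]

def rolling_volatility(r, T):
    # sigma_T(t): sample standard deviation of T consecutive returns
    r = np.asarray(r, dtype=float)
    out = np.empty(len(r) - T + 1)
    for i in range(len(out)):
        w = r[i:i + T]
        out[i] = np.sqrt(np.sum((w - w.mean()) ** 2) / (T - 1))
    return out
```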
![\[fig:figure1\] DAX stock index: (a) prices; (b) returns with $t'=1h$; (c) volatility with $T=300h$; (d) volatility with $T=660h$.](Figure1.JPG){width="8cm"}
Fig. \[fig:figure1\] (b) shows the returns for $t'=1 h$. The volatility series are shown in Figs. \[fig:figure1\] (c,d) respectively for $T=300h$ and $T=660h$. The Hurst exponents, calculated from the slope of the log-log plot of Eq. (\[auto\]) as a function of $n$, are $H=0.50$ (return), $H=0.77$ (volatility $T=300h$) and $H=0.80$ (volatility $T=660h$). Fig. \[fig:figure2\] shows the log-log plots of $C_{xx}(0)$ for the returns (squares) and volatility with $T=660$ (triangles). The scaling law exhibited by the DAX series is consistent with fractional Brownian motion behaviour. The function $C_{xy}(0)$ with $x=r(t)$ and $y=\sigma_T(t)$ with $T=660h$ is also plotted at varying $n$ in Fig. \[fig:figure2\] (circles). From the slope of the log-log plot of $C_{xy}(0)$ vs $n$, one obtains $H=0.65$, i.e. the average of $H_1$ and $H_2$, as expected from Eq. (\[zero\]).\
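The Hurst exponents quoted above follow from the slopes of such log-log plots. Reusing the `windowed_cross_correlation` sketch from the Method section, the fit could be coded as follows (the choice of scales is illustrative, and the fit assumes $C_{xy}(0;n)>0$ over the fitted range):

```python
import numpy as np

def scaling_exponent(x, y, scales):
    # Slope of log C_xy(0; n) versus log n: equals 2H for x = y (Eq. auto)
    # and H1 + H2 for two different series (Eq. zero).
    c0 = np.array([windowed_cross_correlation(x, y, n, 0) for n in scales])
    slope, _ = np.polyfit(np.log(scales), np.log(c0), 1)
    return slope

# Illustrative usage:
# scales = np.unique(np.logspace(2, 2.7, 15).astype(int))
# H_return = scaling_exponent(r, r, scales) / 2
# H_pair_average = scaling_exponent(r, sigma, scales) / 2   # (H1 + H2) / 2
```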
Next, the cross-correlation is considered as a function of $\tau$. The plots of $C_{xy}(\tau)$ for $x=r(t)$ and $y=\sigma_T(t)$ with $T=300h$ and $T=660h$ are shown respectively in Fig. \[fig:figure3\] (a,b) at different windows $n$.
![\[fig:figure2\] Log-log plot of $C_{xx}(0)$ for the DAX return (squares) and volatility (triangles) and of $C_{xy}(0)$ with $x=r(t)$ and $y=\sigma_T(t)$ (circles). Red lines are linear fits. The power-law behaviour is consistent with Eqs. (\[zero\],\[auto\]).](Figure2.JPG){width="8cm"}
![\[fig:figure3\] Cross-correlation $C_{xy}(\tau)$ with $x=r(t)$ and $y=\sigma_T(t)$ with (a) $T=300h$ and (b) $T=660h$; (c) with $x=r(t)$ and $y=\sigma_T(t)^2$ with $T=660h$. $n$ ranges from 100 to 500 with step 100.](Figure3.JPG){width="8cm"}
![\[fig:figure4\] Plot of the function $C_{xy}(\tau) n^{-(H_1+H_2)}$ with $x=r(t)$ and $y=\sigma_T(t)$ with $T=300h$, $H_1=0.5$ and $H_2=0.77$; $n$ ranges from 100 to 500 with step 100. One can note that the five curves collapse, within the numerical errors of the parameters entering the auto- and cross-correlation functions. This is in accord with the invariance of the product $C_{xy}(\tau) n^{-(H_1+H_2)}$ with the window $n$.](Figure4.JPG){width="8cm"}
![\[fig:figure5\] Leverage function with volatility windows $T$ = $100h$, $300h$, $660h$, $1000h$. The value of $n$ is $400$ equal for all the curves.](Figure5.JPG){width="8cm"}
The function $C_{xy}(\tau)$ for $x=r(t)$ and $y=\sigma_T(t+\tau)^2$ is shown in Fig. \[fig:figure3\](c). The cross-correlation takes negative values at small $\tau$ and reaches its minimum at about 10-12 days. This indicates that the volatility increases with negative returns (i.e. with price drops). Then $C_{xy}(\tau)$ changes sign, relaxing asymptotically to zero from positive values at large $\tau$. The positive values of $C_{xy}(\tau)$ indicate that the volatility decreases when the returns become positive (i.e. when the price rises) and are related to the restored equilibrium within the market (*positive rebound days*). It is worthy of remark that the (positive) maximum of the cross-correlation is always smaller in magnitude than the (negative) minimum. This is the stylized fact known as the *down market effect*. A relevant feature exhibited by the curves in Figs. \[fig:figure3\] (a-c) is that the zeroes and the extremes of $C_{xy}(\tau)$ occur at the same values of $\tau$ for all the values of $n$, which is consistent with wide-sense stationarity. A further check of wide-sense stationarity is provided by the plot of the function $C_{xy}(\tau)n^{-(H_1+H_2)}$. In Fig. \[fig:figure4\], $C_{xy}(\tau)n^{-(H_1+H_2)}$ is plotted with $x=r(t)$ and $y=\sigma_T(t)$ with $T=300h$, $H_1=0.5$ and $H_2=0.77$; $n$ ranges from 100 to 500 with step 100. One can note that the five curves collapse, in accord with the invariance of the product $C_{xy}(\tau) n^{-(H_1+H_2)}$ with $n$.
In Fig. \[fig:figure5\], the leverage correlation function $\mathcal{L}(\tau)=\langle \sigma_T(t+\tau)^2 r(t) \rangle/ \langle r(t)^2\rangle^2$, according to the definition put forward in [@Bouchaud], is plotted for different volatility windows $T$. The function $\langle \sigma_T(t+\tau)^2 r(t) \rangle$ has been calculated by means of Eq. (\[crosscovariance\]). The negative values of the cross-correlation (at smaller $\tau$) and the subsequent positive values (*positive rebound days*) at larger $\tau$ can be clearly observed for several volatility windows $T$. The function $\mathcal{L}(\tau)$ for the DAX stock index, estimated by means of the standard cross-correlation function, is shown in Figs. 1,2 of Ref. [@Qiu]. By comparing the curves shown in Fig. \[fig:figure5\] to those of Ref. [@Qiu], one can note the higher resolution related to the possibility to detect the correlation at smaller lags (note that the $\tau$ unit is hours, while in Refs. [@Bouchaud; @Perello; @Qiu; @Ahlgren] it is days) and at varying windows $n$, implying the possibility to estimate the degree of cross-correlation at different frequencies. As a final remark, we mention that the cross-correlation function between a fractional Brownian motion and its own width can be computed analytically in the large $n$ limit, following the derivation in the Appendix for two general fBm’s. The width of a fBm is one possible definition for the volatility, therefore the derivation in the Appendix provides a straightforward estimate of the leverage function.
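A sketch of how the leverage function could be assembled from the pieces above; the two series are assumed to have been aligned on a common time axis beforehand, and the names are ours.

```python
import numpy as np

def leverage_function(r, sigma2, n, taus):
    # L(tau) = <sigma_T(t+tau)^2 r(t)> / <r(t)^2>^2, with the numerator
    # estimated by the windowed cross-correlation (x = r, y = sigma_T^2).
    norm = np.mean(np.asarray(r, dtype=float) ** 2) ** 2
    return np.array([windowed_cross_correlation(r, sigma2, n, tau)
                     for tau in taus]) / norm
```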
Genomic Sequences
-----------------
Several studies are being addressed to quantify cross-correlations among nucleotide position, intrinsic curvature and flexibility of the DNA helix, which may ultimately shed light on biological processes such as protein targeting and transcriptional regulation [@Moukhtar; @Allen; @Pedersen]. One problem to overcome is the comparison of DNA fragments at di- and trinucleotide scales, hence the need for high-precision numerical techniques. We consider deformability, stacking energy, propeller twist and position preference sequences of the Escherichia Coli chromosome. The sequences, with details about the methods used to synthesize/measure the structural properties, are available at the CBS database - Center for Biological Sequence Analysis of the Technical University of Denmark (). In order to apply the proposed method, the average value is subtracted from the data, which are subsequently integrated to obtain the paths shown in Fig. \[fig:figure6\]. The series are $4938919\,$bp long and have Hurst exponents: $H=0.70$ (deformability), $H=0.65$ (position preference), $H=0.73$ (stacking energy), $H=0.70$ (propeller twist).
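The preprocessing just described (mean removal followed by integration) amounts to a one-line operation, sketched below; the resulting profiles can then be passed to the windowed cross-correlation sketch given earlier.

```python
import numpy as np

def integrated_profile(values):
    # Subtract the mean and integrate (cumulative sum) to obtain the path
    # analysed by the method, as done here for the E. coli structural series.
    v = np.asarray(values, dtype=float)
    return np.cumsum(v - v.mean())
```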
The cross-correlation functions $C_{xy}(\tau)$ between deformability, stacking energy, propeller twist and position preference are shown in Fig. \[fig:figure7\] (a-e). There is in general a remarkable cross-correlation along the DNA chain, indicating the existence of interrelated patches of the structural and compositional parameters. The high correlation level between DNA flexibility measures and protein complexes indicates that the conformation adopted by the DNA bound to a protein depends on the inherent structural features of the DNA. It is worth remarking that the present method provides the dependence of the coupling along the DNA chain rather than simply the values of the linear correlation coefficient $r$. In Table 4 of Ref. [@Pedersen] one can find the following values of the correlation obtained by either numerical analysis or experimental measurements (in parentheses) over DNA fragments: (a) $r=-0.80~(-0.86)$; (b) $r=0.06~(0.00)$; (c) $r=-0.15~(-0.22)$; (d) $r=-0.74~(-0.82)$; (e) $r=-0.80~(-0.87)$. Moreover, also for the genomic sequences the function $C_{xy}(\tau) n^{-(H_1+H_2)}$ is independent of $n$ within the numerical errors of the parameters entering the auto- and cross-correlation functions. In Fig. \[fig:figure8\], $C_{xy}(\tau) n^{-(H_1+H_2)}$ is shown for $x(t)$ the deformability, $y(t)$ the stacking energy, $H_1=0.7$ and $H_2=0.73$; $n$ ranges from 100 to 500 with step 100.
![\[fig:figure6\] Structural sequences of the Escherichia Coli chromosome.](Figure6.JPG){width="8cm"}
![\[fig:figure7\] Cross-correlation $C_{xy}(\tau)$ between (a) deformability and stacking energy; (b) position preference and deformability (c) propeller twist and position preference; (d) propeller twist and stacking energy; (e) propeller twist and deformability. $n$ ranges from 100 to 500 with step 100.](Figure7.JPG){width="8cm"}
![\[fig:figure8\] Plot of the function $C_{xy}(\tau) n^{-(H_1+H_2)}$ with $x(t)$ the deformability, $y(t)$ the stacking energy, $H_1=0.7$ and $H_2=0.73$. $n$ ranges from 100 to 500 with step 100. One can note that the five curves collapse, within the numerical errors of the parameters entering the auto- and cross-correlation functions. This is in accord with the invariance of the product $C_{xy}(\tau) n^{-(H_1+H_2)}$ with the window $n$.](Figure8.JPG){width="8cm"}
Conclusions
===========
A high-resolution, lag-dependent non-parametric technique based on Eqs. (\[crosscovariance\]-\[ytil\]) to measure cross-correlation in long-range correlated series has been developed. The technique has been implemented on (*i*) financial returns and volatilities and (*ii*) structural properties of genomic sequences [@note]. The results clearly show the existence of coupling regimes characterized by positive-negative feedback between the systems at different lags $\tau$ and windows $n$. We point out that - in principle - other methods might be generalized in order to yield estimates of the cross-correlation between long-range correlated series at varying $\tau$ and $n$. However, techniques operating over the series by means of a box division, such as the DFA and R/S methods, are *a priori* excluded. The box division causes discontinuities in the sliding product of the two series at the extremes of each box, and ultimately incorrect estimates of the cross-correlation. The present method is not affected by this drawback, since Eqs. (\[crosscovariance\]-\[ytil\]) do not require a box division.
Details of the calculation:
===========================
Let us start from Eq. (\[dcaB0\]):
$$\begin{aligned}
\label{dcaB0_A} C_{xy}(t,\,\tau)
=\Big\langle\big[B_{H_1}(t)-\widetilde{B}_{H_1}(t)\big]\big[B_{H_2}^*(t+\tau)-\widetilde{B}_{H_2}^*(t+\tau)\big]\Big\rangle
\;\;\;,\end{aligned}$$
that, after multiplying the terms in parentheses, becomes:
$$\begin{aligned}
\label{dca}
\nonumber
C_{xy}(t,\,t+\tau) &=\Big\langle[B_{H_1}(t)B_{H_2}^*(t+\tau)-B_{H_1}(t)\widetilde{B}_{H_2}^*(t+\tau) \\
& -\widetilde{B}_{H_1}(t)B_{H_2}^*(t+\tau)+\widetilde{B}_{H_1}(t)\widetilde{B}_{H_2}^*(t+\tau)]\Big\rangle \;\;\;.\end{aligned}$$
In general, the moving average may be referred to any point of the moving window, a feature expressed by replacing Eqs. (\[xtil\],\[ytil\]) with $$\tilde{x}_n(t)= \frac{1}{n}\sum_{k=-\theta n}^{n-\theta n}x(t-k) \label{tx} \qquad \quad
\tilde{y}_n(t+\tau)=\frac{1}{n}\sum_{k=-\theta n}^{n-\theta n}y(t+\tau-k) \label{ty}$$ with $0\le\theta\le 1$. In the limit of $n\to\infty$, the sums can be replaced by integrals, so that: $$\label{txy}
\tilde{x}(t)=\int_{-\theta}^{1-\theta}x(\hat{t}-\hat{k})\,d\hat{k} \qquad \quad \tilde{y}(t+\tau)=\int_{-\theta}^{1-\theta}y(\hat{t}+\hat{\tau}-\hat{k})\,d\hat{k}$$ where $t=n\hat{t}$, $ \tau=n\hat{\tau}$, $k=n\hat{k}$. For the sake of simplicity, the analytical derivation will be done by using the harmonizable representation of the fractional Brownian motion [@Benassi; @Cohen; @Dobric]: $$\label{harmo}
B_H(t)\equiv\int_{-\infty}^{+\infty}\frac{e^{it\xi}-1}{|\xi|^{H+\frac{1}{2}}}d\bar{B}(\xi)\;,$$ where $d\bar{B}(\xi)$ is a representation of $dB(t)$ in the $\xi$ domain. In the following we will consider the case of $t>0$ and $t+\tau>0$. By using Eq. (\[harmo\]), the cross-correlation of two fbms $B_{H_1}(t)$ and $B_{H_2}(t+\tau)$ can be written as: $$\label{xy}
\hspace{-15mm}\langle B_{H_1}(t)B_{H_2}^*(t+\tau)\rangle=\Big\langle\int_{-\infty}^{+\infty}\frac{e^{it\xi}-1}{|\xi|^{H_1+\frac{1}{2}}}\,d\bar{B}(\xi)
\,\int_{-\infty}^{+\infty}\frac{e^{-i(t+\tau)\eta}-1}{|\eta|^{H_2+\frac{1}{2}}}\,d\bar{B}(\eta)\Big\rangle\;.$$ Since $d\bar{B}$ is Gaussian, the following property holds for any $f,\,g\,\in\,L^2(\mathbb{R})$ : $$\label{gaussian}
\Big\langle\int_{-\infty}^{+\infty}f(\xi)d\bar{B}(\xi)\,\left(\int_{-\infty}^{+\infty}g(\eta)d\bar{B}(\eta)\right)^*\Big\rangle=\int_{-\infty}^{+\infty}f(\xi)g^*(\xi)\,d\xi$$ By using Eq. (\[gaussian\]), after some algebra Eq. (\[xy\]) writes: $$\label{teo2}
\hspace{-15mm} \langle B_{H_1}(t)B_{H_2}^*(t+\tau)\rangle =D_{H_1,\,H_2}\Big(t^{H_1+H_2}+(t+\tau)^{H_1+H_2}-|\tau|^{H_1+H_2}\Big)\;,\\$$ where $D_{H_1,\,H_2}$ is a normalization factor which depends on $H_1$ and $H_2$. In the harmonizable representation of fBm, $D_{H_1,\,H_2}$ takes the following form [@Ayache]: $$D_{H_1,\,H_2}=D_{H_1+H_2}=-\frac{2}{\pi}\cos\left[\frac{(H_1+H_2)\pi}{2}\right]\Gamma[-(H_1+H_2)]$$ normalized such that $D_{H_1,\,H_2}=1$ when $H_1=H_2=\frac{1}{2}$. Different representations of the fBm lead to different values of the coefficient $D_{H_1,\,H_2}$ [@Dobric; @Stoev].
Eq. (\[teo2\]) can be used to calculate each of the four terms in the right hand side of Eq. (\[dca\]). The mean value of each term in Eq. (\[dca\]) is obtained from the general formula in Eq. (\[teo2\]); thus, substituting the right hand side of Eq. (\[teo2\]) and Eq. (\[txy\]) into each term in Eq. (\[dca\]) we obtain: $$\begin{aligned}
\label{dca1}
\hspace{-25mm}&C_{xy} (\hat{t},\,\hat{\tau},\, \theta)=D_{H_1,\,H_2}n^{H_1+H_2}\Big[\Big(\hat{t}^{H_1+H_2}+(\hat{t}+\hat{\tau})^{H_1+H_2}-|\hat{\tau}|^{H_1+H_2}\Big)\nonumber \\
\hspace{-25mm}& -\Big(\hat{t}^{H_1+H_2}+\int_{\hat{h}=-\theta}^{1-\theta}|\hat{t}-\hat{h}+\hat{\tau}|^{H_1+H_2}d\hat{h}-\int_{\hat{h}=-\theta}^{1-\theta}|\hat{t}-\hat{h}|^{H_1+H_2}d\hat{h}\Big)\nonumber \\
\hspace{-25mm}& -\Big(\int_{\hat{k}=-\theta}^{1-\theta}|\hat{t}-\hat{k}|^{H_1+H_2}d\hat{k}+(\hat{t}+\hat{\tau})^{H_1+H_2}-\int_{\hat{k}=-\theta}^{1-\theta}|\hat{t}+\hat{k}|^{H_1+H_2}d\hat{k}\Big) \nonumber \\
\hspace{-25mm}& +\Big(\int_{\hat{k}=-\theta}^{1-\theta}|\hat{t}-\hat{k}|^{H_1+H_2}d\hat{k}+\int_{\hat{h}=-\theta}^{1-\theta}|\hat{t}-\hat{h}+\hat{\tau}|^{H_1+H_2}d\hat{h}\nonumber \\
\hspace{-25mm}& - \int_{\hat{h}=-\theta}^{1-\theta}\int_{\hat{k}=-\theta}^{1-\theta }|\hat{\tau}-\hat{h}-\hat{k}|^{H_1+H_2}d\hat{h}\,d\hat{k}\Big)\Big]\end{aligned}$$ where each term in round parentheses corresponds to each of the four terms in Eq. (\[dca\]). Summing the terms in Eq. (\[dca1\]), one can notice that time $t$ cancels out, thus one finally obtains: $$\begin{aligned}
\label{integral}
\hspace{-25mm}C_{xy}(\hat{\tau},\,\theta) &= n^{H_1+H_2}D_{H_1,\,H_2}\Big[-\hat{\tau}^{H_1+H_2}+\int_{-\theta}^{1-\theta}|\hat{\tau}-\hat{h}|^{H_1+H_2}\,d\hat{h}\nonumber \\
\hspace{-25mm} &+\int_{-\theta}^{1-\theta}|\hat{\tau}+\hat{k}|^{H_1+H_2}\,d\hat{k}
-\int_{-\theta}^{1-\theta}\int_{-\theta}^{1-\theta}|\hat{\tau}-\hat{h}+\hat{k}|^{H_1+H_2}\,d\hat{h}\,d\hat{k}\;\Big]\;, \nonumber \\\hspace{-25mm}&
\end{aligned}$$
Consistently with the large $n$ limit, we take $\tau<n$, namely $\hat{\tau}<1$. The integral (\[integral\]) admits four different solutions, depending on the values taken by the parameters $\hat{\tau}$ and $\theta$. Let us consider each case separately.
#### Case 1: $\hat{\tau}<\theta$ and $\hat{\tau}+\theta<1$ {#case-1-hattautheta-and-hattautheta1 .unnumbered}
$$\begin{aligned}
\label{case1}
\nonumber
\hspace{-25mm}C_{xy}(\hat{\tau},\,\theta)&= n^{H_1+H_2}D_{H_1,\,H_2}\Big[-\hat{\tau}^{H_1+H_2}-\frac{(1-\hat{\tau})^{2+H_1+H_2}-2\hat{\tau}^{2+H_1+H_2}+(1+\hat{\tau})^{2+H_1+H_2}}{(1+H_1+H_2)(2+H_1+H_2)} \nonumber\\
\hspace{-25mm}& \nonumber \\ \nonumber
\hspace{-25mm}&+\frac{(1+\hat{\tau}-\theta)^{1+H_1+H_2}+(\theta-\hat{\tau})^{1+H_1+H_2}+(1-\hat{\tau}-\theta)^{1+H_1+H_2}+(\hat{\tau}+\theta)^{1+H_1+H_2}}{1+H_1+H_2}\Big]
\nonumber \\\hspace{-25mm}&\end{aligned}$$
#### Case 2: $\hat{\tau}<\theta$ and $\hat{\tau}+\theta>1$ {#case-2-hattautheta-and-hattautheta1 .unnumbered}
$$\begin{aligned}
\label{case2}
\nonumber
\hspace{-25mm}C_{xy}(\hat{\tau},\,\theta)&= n^{H_1+H_2}D_{H_1,\,H_2}\Big[-\hat{\tau}^{H_1+H_2}-\frac{(1-\hat{\tau})^{2+H_1+H_2}-2\hat{\tau}^{2+H_1+H_2}+(1+\hat{\tau})^{2+H_1+H_2}}{(1+H_1+H_2)(2+H_1+H_2)} \nonumber\\
\hspace{-25mm}& \nonumber \\ \nonumber
\hspace{-25mm}&+\frac{(1+\hat{\tau}-\theta)^{1+H_1+H_2}+(\theta-\hat{\tau})^{1+H_1+H_2}-(\hat{\tau}+\theta-1)^{1+H_1+H_2}+(\hat{\tau}+\theta)^{1+H_1+H_2}}{1+H_1+H_2}\Big]
\nonumber \\\hspace{-25mm}&\end{aligned}$$
#### Case 3: $\hat{\tau}>\theta$ and $\hat{\tau}+\theta<1$ {#case-3-hattautheta-and-hattautheta1 .unnumbered}
$$\begin{aligned}
\label{case3}
\nonumber
\hspace{-25mm}C_{xy}(\hat{\tau},\,\theta)&= n^{H_1+H_2}D_{H_1,\,H_2}\Big[-\hat{\tau}^{H_1+H_2}-\frac{(1-\hat{\tau})^{2+H_1+H_2}-2\hat{\tau}^{2+H_1+H_2}+(1+\hat{\tau})^{2+H_1+H_2}}{(1+H_1+H_2)(2+H_1+H_2)} \nonumber\\
\hspace{-25mm}& \nonumber \\
\nonumber
\hspace{-25mm}&+\frac{(1+\hat{\tau}-\theta)^{1+H_1+H_2}-(\hat{\tau}-\theta)^{1+H_1+H_2}+(1-\hat{\tau}-\theta)^{1+H_1+H_2}+(\hat{\tau}+\theta)^{1+H_1+H_2}}{1+H_1+H_2}\Big]
\nonumber \\\hspace{-25mm}&\end{aligned}$$
It is easy to see that this case includes Eq. (\[theta\]) of the main text, which is recovered for $\theta=0$.
#### Case 4: $\hat{\tau}>\theta$ and $\hat{\tau}+\theta>1$ {#case-4-hattautheta-and-hattautheta1 .unnumbered}
$$\begin{aligned}
\label{case4}
\nonumber
\hspace{-25mm}C_{xy}(\hat{\tau},\,\theta)&= n^{H_1+H_2}D_{H_1,\,H_2}\Big[-\hat{\tau}^{H_1+H_2}-\frac{(1-\hat{\tau})^{2+H_1+H_2}-2\hat{\tau}^{2+H_1+H_2}+(1+\hat{\tau})^{2+H_1+H_2}}{(1+H_1+H_2)(2+H_1+H_2)} \nonumber\\
\hspace{-25mm}& \nonumber \\ \nonumber
\hspace{-25mm}&+\frac{(1+\hat{\tau}-\theta)^{1+H_1+H_2}-(\hat{\tau}-\theta)^{1+H_1+H_2}-(\hat{\tau}+\theta-1)^{1+H_1+H_2}+(\hat{\tau}+\theta)^{1+H_1+H_2}}{1+H_1+H_2}\Big]
\nonumber \\\hspace{-25mm}&\end{aligned}$$
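For completeness, the four closed-form cases can be evaluated numerically with a few lines of code. The sketch below (names and structure are ours) distinguishes the cases by the signs of $\hat{\tau}-\theta$ and $\hat{\tau}+\theta-1$, and for $\theta=0$ it reproduces the square bracket of Eq. (\[theta\]) in the main text; the normalisation $D_{H_1,\,H_2}$ has a removable singularity at $H_1+H_2=1$, which is not handled here.

```python
import math

def D_norm(H):
    # D_{H1,H2} in the harmonizable representation, with H = H1 + H2
    # (D = 1 in the limit H -> 1; that limit is not treated explicitly).
    return -(2.0 / math.pi) * math.cos(H * math.pi / 2.0) * math.gamma(-H)

def C_scaled(tau_hat, theta, H):
    # Bracketed factor common to Eqs. (case1)-(case4), i.e. C_xy / (n^H D_{H1,H2}),
    # valid for 0 <= tau_hat < 1 and 0 <= theta <= 1.
    a, b = 1.0 + H, (1.0 + H) * (2.0 + H)
    out = -tau_hat**H - ((1 - tau_hat)**(2 + H) - 2 * tau_hat**(2 + H)
                         + (1 + tau_hat)**(2 + H)) / b
    t2 = (theta - tau_hat)**a if tau_hat < theta else -(tau_hat - theta)**a
    t3 = ((1 - tau_hat - theta)**a if tau_hat + theta < 1
          else -(tau_hat + theta - 1)**a)
    return out + ((1 + tau_hat - theta)**a + t2 + t3 + (tau_hat + theta)**a) / a

# Example: the theta = 0 case, coinciding with Eq. (theta),
# evaluated for H1 = 0.5, H2 = 0.77 at scaled lag 0.3:
# C_scaled(0.3, 0.0, 1.27)
```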
References {#references .unnumbered}
==========
[10]{} M. Rosenblum and A. Pikovsky, (2007) Phys. Rev. Lett. [**98**]{}, 064101. T. Zhou, L. Chen and K. Aihara, (2005) Phys. Rev. Lett. [**95**]{}, 178103. S. Oberholzer et al. (2006) Phys. Rev. Lett. [**96**]{}, 046804. M. Dhamala, G. Rangarajan, (2008) M. Ding, Phys. Rev. Lett. [**100**]{}, 018701. P. F. Verdes, (2005) Phys. Rev. E [**72**]{}, 026222. M. Palus and M. Vejmelka, (2007) Phys. Rev. E [**75**]{}, 056211. T. Kreuz et al. (2007) Physica D [**225**]{}, 29 . Lu-Chun Du and Dong-Cheng Mei, (2008) J. Stat. Mech. P11020. P. Tass et al. (1998) Phys. Rev. Lett. [**81**]{}, 3291. P. Huybers, (2006) W. Curry, Nature [**441**]{}, 7091. Y. Ashkenazy, (2006) Climate Dynamics [**27**]{}, 421. F. Black, (1976) J. of Fin. Econ. [**3**]{}, 167. W. Schwert, J. of Finance (1989) [**44**]{}, 1115. R. Haugen, E. Talmor, W. Torous, (1991) J. of Finance [**44**]{}, 1115. L. Glosten, J. Ravi and D. Runkle, (1992) J. of Finance [**48**]{}, 1779. G. Bekaert, G. Wu, (2000) The Review of Financial Studies [**13**]{}, 1. S. Figlewski, X. Wang (2000) *Is the ’Leverage Effect’ a Leverage Effect?* , Working Paper, Stern School of Business, New York. J. P. Bouchaud, A. Matacz and M. Potters, (2001) Phys. Rev. Lett. [**87**]{}, 228701. J. Perello and J. Masoliver, (2003) Phys. Rev. E [**67**]{}, 037102. T. Qiu, B. Zheng, F. Ren and S. Trimper, (2006) Phys. Rev. E [**73**]{}, 065103(R). R. Donangelo, M. H. Jensen, I. Simonsen, K. Sneppen, (2006) J. Stat. Mech. L11001. I Varga-Haszonits and I Kondor, (2008) J. Stat. Mech. P12007. M. Montero, (2007) J. Stat. Mech. P04002. J. Moukhtar, E. Fontaine, C. Faivre-Moskalenko and A. Arneodo, (2007) Phys. Rev. Lett. [**98**]{}, 178101. T.E. Allen, N.D. Price, A. Joyce and B.O. Palsson, (2006) PLoS Computational Biology, [**2**]{}, e2. A.G. Pedersen, L.J. Jensen, S. Brunk, H.H. Staerfeld and D.W. Ussery, (2000) J. Mol. Biol. [**299**]{}, 907. W.C. Jun, G. Oh and S. Kim, (2006) Phys. Rev. E [**73**]{}, 066128. B. Podobnik, H.E. Stanley, (2008) Phys. Rev. Lett. [**100**]{}, 084102. B. B. Mandelbrot, J. W. Van Ness, (1968) SIAM Rev. [**4**]{}, 422. A. Carbone, G. Castelli, H. E. Stanley, Phys. Rev. E [**69**]{}, 026105 (2004). A. Carbone, Phys. Rev. E [**76**]{}, 056703 (2007). S. Arianos and A. Carbone, Physica A [**382**]{}, 9 (2007). A. Carbone and H. E. Stanley, Physica A [**384**]{}, 21 (2007). A. Carbone and H. E. Stanley, Physica A [**340**]{}, 544 (2004). The MATLAB and C++ codes implementing the proposed method, the DAX and E-COLI sequences used in this work are downloadable at:
A. Benassi, S. Jaffard, D. Roux, Rev. Mat. Iber. [**13**]{}, 19, (1997).
S. Cohen, Fractals: Theory and Applications in Engineering. M. Dekking, J. Lévy Véhel, E. Lutton and C. Tricot (Eds.). Springer Verlag, 1999.
A. Ayache, S. Cohen, J. Levy Vehel, Proceedings of the conference ICASSP, Istanbul June 2000.
V. Dobric, F. M. Ojeda, IMS Lecture Notes-Monograph Series, *High Dimensional Probability*, [**51**]{}, 77, (2006).
S. A. Stoev, M. S. Taqqu, Stochastic Processes and their Applications [**116**]{}, 200 (2006).
|
---
abstract: 'We use a series of high-resolution N-body simulations of a ‘Milky-Way’ halo, coupled to semi-analytic techniques, to study the formation of our own Galaxy and of its stellar halo. Our model Milky Way galaxy is a relatively young system whose physical properties are in quite good agreement with observational determinations. In our model, the stellar halo is mainly formed from a few massive satellites accreted early on during the galaxy’s lifetime. The stars in the halo do not exhibit any metallicity gradient, but higher metallicity stars are more centrally concentrated than stars with lower abundances. This is due to the fact that the most massive satellites contributing to the stellar halo are also more metal rich, and dynamical friction drags them closer to the inner regions of the host halo.'
title: 'The Galaxy and its stellar halo - insights from a hybrid cosmological approach'
---
Introduction
============
Our own galaxy - the Milky Way - is a fairly large spiral galaxy consisting of four main stellar components: (1) the thin disk, that contains most of the stars with a wide range of ages and on high angular momentum orbits; (2) the thick disk, that contains about 10-20 per cent of the mass in the thin disk and whose stars are on average older and have lower metallicity than those in the thin disk; (3) the bulge, which contains old and metal rich stars on low angular momentum orbits; and (4) the stellar halo which contains only a few per cent of the total stellar mass and whose stars are old and metal poor and reside on low angular momentum orbits.
While the Milky Way is only one galaxy, it is the one that we can study in unique detail. Over the past years, accurate measurements of ages, metallicities and kinematics have been collected for a large number of individual stars, and much larger datasets will become available in the near future thanks to a number of ongoing and planned astrometric, photometric and spectroscopic surveys. This wealth of detailed and high-quality observational data provides an important benchmark for current theories of galaxy formation and evolution.
In the following, we outline the main results of a recent study of the formation of the Milky Way and of its stellar halo in the context of a hybrid cosmological approach which combines high-resolution simulations of a ‘Milky Way’ halo with semi-analytic methods. We refer to [@DeLucia_Helmi_2008 De Lucia & Helmi (2008)] for a more detailed description of our method and of our results.
The simulations and the galaxy formation model
==============================================
We use the re-simulations of a ‘Milky-Way’ halo (the GA series) described in [@Stoehr_etal_2002 Stoehr et al. (2002)] and [@Stoehr_etal_2003 Stoehr et al. (2003)], with an underlying flat $\Lambda$-dominated CDM cosmological model. The candidate halo for re-simulations was selected from an intermediate-resolution simulation (particle mass $\sim 10^8\,{\rm M}_{\odot}$) as a relatively isolated halo which suffered its last major merger at $z>2$. The same halo was then re-simulated at four progressively higher resolutions, with particle masses $\sim 1.7\times10^8$ (GA0), $\sim
1.8\times 10^7$ (GA1), $\sim 1.9\times 10^6$ (GA2), and $\sim 2.1\times
10^5\,{\rm M}_{\odot}$ (GA3). Simulation data were stored in 108 outputs from $z=37.6$ to $z=0$, and for each simulation output we constructed group catalogues (using a standard friends-of-friends algorithm) and substructure catalogues (using the [SUBFIND]{} algorithm developed by [@Springel_etal_2001 Springel et al. 2001]). Substructure catalogues were then used to construct merger history trees for all self-bound haloes as described in [@Springel_etal_2005 Springel et al. (2005)] and [@DeLucia_Blaizot_2007 De Lucia & Blaizot (2007)]. Finally, these merger trees were used as input for our semi-analytic model of galaxy formation.
Physical properties and metallicity distributions
=================================================
![Evolution of the dark matter mass (panel a), total stellar mass (panel b), spheroid mass (panel c), and cold gas mass (panel d) for the model Milky Way in the four simulations used in our study (different colours).[]{data-label="fig1"}](massgrowth.ps){width="4.8in"}
Fig.\[fig1\] shows the evolution of different mass components for the model Milky Way galaxies in the four simulations used in our study (lines of different colours). The histories shown in the different panels have been obtained by linking the galaxy at each time-step to the progenitor with the largest stellar mass. Fig.\[fig1\] shows that approximately half of the final mass in the dark matter halo is already in place (in the main progenitor) at $z\sim 1.2$ (panel a) while about half of the final total stellar mass is only in place at $z\sim 0.8$ (panel b). About 20 per cent of this stellar mass is already in a spheroidal component (panel c). The mass of the spheroidal component grows in discrete steps as a consequence of our assumption that it grows during mergers and disk instability episodes, and approximately half of its final mass is already in a spheroidal component at $z\sim 2.5$. In contrast, the cold gas mass varies much more gradually.
Interestingly, the model produces consistent evolutions for all four simulations used in our study, despite the large increase in numerical resolution. Some panels (e.g. panel b) do not show perfect convergence, due to the lack of complete convergence in the N-body simulations (see panel a). Fig.\[fig1\] also shows that the total stellar mass of our model Galaxy ($6\times 10^{10}\,{\rm M}_{\odot}$) is in very good agreement with the estimated value $\sim 5-8\times 10^{10}\,{\rm M}_{\odot}$. The mass of the spheroidal component is instead slightly lower than the observed value (assumed to be about 25 per cent of the disk stellar mass), while our fiducial model gives a gas mass which is about twice the estimated value.
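The ‘main progenitor’ histories in Fig. \[fig1\] are obtained with the simple tree walk sketched below; `progenitors_of` and `stellar_mass` are placeholder accessors into the merger trees, not functions of the actual pipeline.

```python
def main_branch(galaxy, progenitors_of, stellar_mass):
    # Walk the merger tree back in time, at each snapshot following the
    # progenitor with the largest stellar mass.
    branch = [galaxy]
    while progenitors_of(branch[-1]):
        branch.append(max(progenitors_of(branch[-1]), key=stellar_mass))
    return branch
```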
![Metallicity distribution for stars in the disk (blue histogram in the left panel) and spheroid (red histogram in the right panel) of the model Milky Way from the highest resolution simulation in our study. The solid black histogram in the left panel shows the metallicity distribution for all stars in the model galaxy, while the dashed black histogram in the right panel shows the metallicity distribution of stars in the spheroidal component for our fiducial model if spheroid growth through disk instability is suppressed. The solid orange histograms show observational measurements by Wyse & Gilmore (1995, left panel) and Zoccali et al. (2003, right panel). The dashed orange histogram in the left panel has been obtained converting the \[Fe/H\] scale of the original distribution by Wyse & Gilmore into an \[O/H\] scale by using the observed \[O/H\]-\[Fe/H\] relation for thin disk stars by Bensby, Feltzing & Lundström (2004).[]{data-label="fig2"}](metdistr.ps){width="5.2in"}
Fig.\[fig2\] shows the metallicity distributions of the stars in the disk and spheroid of our model Milky Way from the highest resolution simulation used in our study. The left panel shows the metallicity distribution of all stars (black) and of the stars in the disk (blue) compared to the observational measurements by [@Wyse_Gilmore_1995 Wyse & Gilmore (1995)]. The right panel shows the metallicity distribution of the spheroid stars in our fiducial model (red) and in a model where the disk instability channel is switched off (dashed black). Model results are compared to observational measurements by [@Zoccali_etal_2003 Zoccali et al. (2003)]. The metallicity distribution of disk stars in our model peaks at approximately the same value as observed, but it exhibits a deficiency of low metallicity stars. When comparing model results and observational measurements, however, two factors should be considered: (1) the observational measurements have some uncertainties ($\sim
0.2$ dex) which tend to broaden the true underlying distributions; (2) the observational measurements provide [*iron*]{} distributions, and iron is not well described by our model, which adopts an instantaneous recycling approximation. In order to show the importance of this second caveat we have converted the measured \[Fe/H\] into \[O/H\] using a linear relation, obtained by fitting data for thin disk stars from [@Bensby_etal_2004 Bensby, Feltzing & Lundström (2004)]. The result of this conversion is shown by the dashed orange histogram in the left panel of Fig. \[fig2\]. The observed \[O/H\] metallicity distribution is now much closer to the modelled log\[Z/Z$_\odot$\] distribution. The same caveats apply to the comparison shown in the right panel, which indicates that our model spheroid is significantly less metal rich than the observed Galactic bulge.
The stellar halo
================
In order to study the structure and metallicity distribution of the stellar halo, we assume that it builds up from the cores of the satellite galaxies that merged with the Milky Way over its lifetime. The stars that end up in the stellar halo are identified by tracing back all galaxies that merged with the Milky Way progenitor, until they are central galaxies of their own halo. We select then a fixed fraction (10 per cent for the results shown in the following) of the most bound particles of their parent haloes, and tag them with the mean metallicity of the central galaxies (for details, see De Lucia & Helmi 2008).
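A minimal sketch of this tagging step is given below; the binding-energy convention (most bound = most negative) and the argument names are assumptions made for illustration, not the actual code used in the study.

```python
import numpy as np

def tag_stellar_halo(binding_energy, mean_metallicity, fraction=0.10):
    # Select the most-bound `fraction` of a satellite's parent-halo particles
    # and tag them with the mean metallicity of its central galaxy.
    energy = np.asarray(binding_energy, dtype=float)
    n_tag = max(1, int(fraction * len(energy)))
    most_bound = np.argsort(energy)[:n_tag]   # most negative energies first
    return most_bound, np.full(n_tag, mean_metallicity)
```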
![Stellar mass (left panel) of the galaxies contributing to the stellar halo, as a function of the lookback time of galaxy’s merger. The right panel shows the number of particles associated to the dark matter haloes at the time of accretion. Red symbols correspond to objects with more than 500 particles. Open symbols correspond to red filled circles but are plotted as a function of the time of accretion.[]{data-label="fig3"}](accretedgalaxies.ps){width="5.2in"}
Fig.\[fig3\] shows the stellar mass of the galaxies contributing to the stellar halo as a function of the lookback time of the galaxy’s merger (left panel), and the number of particles associated to the dark matter haloes at the time of accretion (right panel). Most of the accreted galaxies lie in quite small haloes and only a handful of them are attached to relatively more massive systems, which are accreted early on during the galaxy’s lifetime. These are the galaxies that contribute most to the build-up of the stellar halo. The results illustrated in Fig.\[fig3\] are in good agreement with those by [@Font_etal_2006 Font et al. (2006)] who combined mass accretion histories of galaxy-size haloes with a chemical evolution model for each accreted satellite to study the formation of the stellar halo.
![Left panel: Mean (filled circles) and median (empty circles) metallicity of star particles as a function of the distance from the most bound particle in the Milky Way halo, for the simulations GA2 (blue) and GA3 (red). Dashed lines correspond to the 15th and 85th percentiles of the distribution. Right panel: Projected density profile of the stellar halo (solid black line) and of the dark matter halo (dashed black line) for the simulation GA3. The solid green and orange lines show the projected density profiles for star particles with metallicity smaller and larger than $0.4\,Z_{\odot}$ respectively.[]{data-label="fig4"}](halogradient.ps "fig:"){width="2.6in"} ![Left panel: Mean (filled circles) and median (empty circles) metallicity of star particles as a function of the distance from the most bound particle in the Milky Way halo, for the simulations GA2 (blue) and GA3 (red). Dashed lines correspond to the 15th and 85th percentiles of the distribution. Right panel: Projected density profile of the stellar halo (solid black line) and of the dark matter halo (dashed black line) for the simulation GA3. The solid green and orange lines show the projected density profiles for star particles with metallicity smaller and larger than $0.4\,Z_{\odot}$ respectively.[]{data-label="fig4"}](haloprofile.ps "fig:"){width="2.6in"}
Fig.\[fig4\] shows the metallicity of star particles as a function of the distance from the most bound particle in the Milky Way halo for the simulations GA2 (blue) and GA3 (red). For both simulations, the mean metallicity decreases from ${\rm Log[Z/Z}_{\odot}{\rm]}\sim -0.4$ at the centre to $\sim -0.8$ at a distance of $\sim 40\,{\rm kpc}$. The median and upper 85th percentile of both distributions are approximately flat around $\sim -0.5$. Note that the metallicity of our stellar halo is higher than what is known for the Galactic halo near the Sun. We note also that both distributions are dominated in number by star particles associated to one or a few accreted galaxies with relatively high metallicity (hence the flat behaviour of the median and upper percentile of the distribution). The lower percentile declines with increasing distance from the centre, suggesting that the inner region is largely dominated by high-metallicity stars while the contribution from lower metallicity stars becomes more important moving to the outer regions. This is shown more explicitly in the right panel of Fig.\[fig4\], which shows the projected density profile of the stellar halo (black) for the simulation GA3. The solid orange and green lines in this panel show the projected profiles of star particles with metallicity larger and smaller than $0.4\,Z_{\odot}$ respectively. High metallicity stars are more centrally concentrated than stars with lower abundances, suggesting that the probability of observing low-metallicity stars increases at larger distances from the Galactic centre ($\gtrsim 10-20$ kpc), where the contribution from the inner more metal-rich stars is less dominant. Interestingly, this result appears to be in qualitative agreement with recent measurements by [@Carollo_etal_2007 Carollo et al. (2007)].
In our model, the ‘dual’ nature of the stellar halo originates from a correlation between the stellar metallicity and the stellar mass of accreted galaxies. Since the most massive galaxies decay through dynamical friction to the inner regions of the halo, this is where higher metallicity stars will be found preferentially.
Conclusions
===========
We have combined high-resolution resimulations of a ‘Milky Way’ halo with semi-analytic techniques to study the formation of our own Galaxy and of its stellar halo. The galaxy formation model used in our study has been used in a number of previous studies and has been shown to provide a reasonable agreement with a large number of observational data both in the local Universe and at higher redshifts (De Lucia & Helmi 2008 and references therein). Our study demonstrates that the same model is able to reproduce quite well the observed physical properties of our own Galaxy. The agreement is not perfect: our model Galaxy contains about twice the gas observed in the Milky Way, and the model bulge is slightly less massive and substantially less metal rich than the Galactic bulge. A detailed comparison between model results and observational measurements of metallicity distributions is complicated by the use of an instantaneous recycling approximation, which is not appropriate for the iron-peak elements, mainly produced by supernovae Type Ia. Relaxing this approximation in future work will allow us to carry out a more detailed comparison with observed chemical compositions, and to establish similarities and differences between present-day satellites and the building blocks of the stellar halo.
Our model stellar halo is made up of very old stars (older than $\sim 11$ Gyr) with low metallicity, although higher than what is known for the stellar halo of our Galaxy. Most of the stars in the halo are contributed by a few relatively massive satellites accreted early on during the galaxy’s lifetime. The building blocks of the stellar halo lie on a well defined mass-metallicity relation. Since the most massive galaxies are dragged closer to the inner regions of the halo by dynamical friction, this produces a stronger concentration of more metal rich stars, in qualitative agreement with recent observational measurements. The numerical resolution of our simulations is too low for the study of spatially and kinematically coherent structures in the model stellar halo. Higher resolution simulations are needed for this kind of study.
2004, *A&A*, 415, 155
2007, *Nature*, 450, 1020
2007, *MNRAS*, 375, 2
2008, *MNRAS* submitted, arXiv:0804.2465
2006, *ApJ*, 638, 585
2001, *MNRAS*, 328, 726
2005, *Nature*, 435, 629
2002, *MNRAS*, 335, L84
2003, *MNRAS*, 345, 1313
1995, *AJ*, 110, 2771
2003, *A&A*, 399, 931
|
---
author:
- '**[ Nabil L. Youssef$^{\,1}$ and S. G. Elgendi$^{2}$]{}**'
title: '**[Computing nullity and kernel vectors using NF-package: Counterexamples]{}**'
---
[$^{1}$Department of Mathematics, Faculty of Science,\
Cairo University, Giza, Egypt]{}\
[$^{2}$Department of Mathematics, Faculty of Science,\
Benha University, Benha, Egypt]{}
E-mails: nlyoussef@sci.cu.edu.eg, nlyoussef2003@yahoo.fr\
salah.ali@fsci.bu.edu.eg, salahelgendi@yahoo.com
**Abstract**
A computational technique for calculating nullity vectors and kernel vectors, using the new Finsler package, is introduced. As an application, three interesting counterexamples are given. The first counterexample shows that the two distributions $\mathrm{Ker}_R$ and ${\mathcal{N}}_R$ do not coincide. The second shows that the nullity distribution ${\mathcal{N}}_{P^\circ}$ is not completely integrable. The third shows that the nullity distribution ${\mathcal{N}}_\mathfrak{R}$ is not a sub-distribution of the nullity distribution ${\mathcal{N}}_{R^\circ}$.
[**Keywords:**]{} Maple program, New Finsler package, Nullity distribution, Kernel distribution.\
[**MSC 2010:**]{} 53C60, 53B40, 58B20, 68U05, 83-08.
**Introduction**
In the applications of Finsler geometry to mathematics, physics and other branches of science, the calculations are often very tedious to perform, taking a lot of effort and time, so an alternative way of carrying them out is needed. One of the benefits of using a computer is its ability to handle such complicated calculations, which makes it possible to study various examples in different dimensions and in various applications (cf., for example, [@r101], [@Rutz2], [@shen2005], [@r93], [@gamal], [@Portugal1], [@wanas]). The FINSLER package [@Rutz3] included in [@hbfinsler1] and the new Finsler package [@CFG] are good illustrations of the use of computers in applications of Finsler geometry.\
In this paper, we use the new Finsler (NF-) package [@CFG] to introduce a computational technique to calculate the components of nullity vectors and kernel vectors. As an application of this method, we construct three interesting counterexamples. The first shows that the kernel distribution $\mathrm{Ker}_R$ and the nullity distribution ${\mathcal{N}}_R$ associated with the h-curvature $R$ of Cartan connection do not coincide, in accordance with [@ND-Zadeh]. The second proves that the nullity distribution ${\mathcal{N}}_{P^\circ}$ associated with the hv-curvature ${\raisebox{10pt}{\tiny{$\circ$}}{\kern-7.5pt}\mbox{$P$}}$ of Berwald connection is not completely integrable. Finally, the third counterexample shows that the nullity distribution ${\mathcal{N}}_\mathfrak{R}$ associated with the curvature $\mathfrak{R}$ of Barthel connection is not a sub-distribution of the nullity distribution ${\mathcal{N}}_{R^\circ}$ associated with the h-curvature ${\raisebox{10pt}{\tiny{$\circ$}}{\kern-7.5pt}\mbox{$R$}}$ of Berwald connection.\
Following the Klein-Grifone approach to Finsler geometry ([@r21], [@r22], [@r27]), let $(M,F)$ be a Finsler space, where $F$ is a Finsler structure defined on an $n$-dimensional smooth manifold $M$. Let H(TM) (resp. V(TM)) be the horizontal (resp. vertical) sub-bundle of the bundle TTM. We use the notations $R$ and $P$ for the h-curvature and hv-curvature of Cartan connection respectively. We also use the notations ${\raisebox{10pt}{\tiny{$\circ$}}{\kern-7.5pt}\mbox{$R$}}$ and ${\raisebox{10pt}{\tiny{$\circ$}}{\kern-7.5pt}\mbox{$P$}}$ for the h-curvature and hv-curvature of Berwald connection respectively. Finally, $\mathfrak{R} $ will denote the curvature of the Cartan non-linear connection (Barthel connection).
In this section, we use the New Finsler (NF-) package [@CFG], which is an extended and modified version of [@Rutz3], to introduce a computational method for the calculation of nullity vectors and kernel vectors.
\[nr\] Let $R$ be the h-curvature tensor of Cartan connection. The nullity space of $R$ at a point $z\in TM$ is the subspace of $H_z(TM)$ defined by $$\mathcal{N}_R(z):=\{X\in H_z(TM) : \, R(X,Y)Z=0, \, \,\forall\, Y,Z\in H_z(TM)\}.$$ The dimension of $\mathcal{N}_R(z)$, denoted by $\mu_R(z)$, is the index of nullity of $R$ at $z$.
If $\mu_R(z)$ is constant, the map $\mathcal{N}_R:z\mapsto \mathcal{N}_R(z) $ defines a distribution $\mathcal{N}_R$ of rank $\mu_R$, called the nullity distribution of $R$.
Any vector field belonging to the nullity distribution is called a nullity vector field.
\[ker\] The kernel space $\mathrm{Ker}_{R}(z)$ of the h-curvature ${R}$ at a point $z\in TM$ is the subspace of $H_z(TM)$ defined by $$\mathrm{Ker}_{R}(z)=\{X\in H_z(TM): \, {R}(Y,Z)X=0, \, \forall\, Y,Z\in H_z(TM)\}.$$
As in Definition \[nr\], the map $z\mapsto \mathrm{Ker}_{R}(z) $ defines a distribution called the kernel distribution of $R$. Any vector field belonging to the kernel distribution is called a kernel vector field.
To calculate the nullity vectors and kernel vectors using the NF-package, let us recall some instructions to make the use of this package easier. When we write, for example, N\[i,-j\] we mean $N^i_j$, i.e., a positive (resp. negative) index means that it is contravariant (resp. covariant). To lower or raise an index by the metric or the inverse metric, just change its sign from positive to negative or vice versa. The command *tdiff*(N\[i,-j\], X\[k\]) means ${\partial}_k N^i_j$, the command *tddiff*(N\[i,-j\], Y\[k\]) means $\dot{\partial}_k N^i_j$ and the command *Hdiff*(N\[i,-j\], X\[k\]) means $\delta_k N^i_j$. To introduce the definition of a tensor, we use the command *definetensor*, and to display its components, we use the command *show*, as will be seen soon.\
Now, let $W\in{\mathcal{N}}_R$ be a nullity vector. Then, $W$ can be written locally in the form $W=W^ih_i$, where $W^i$ are the components of the nullity vector $W$ with respect to the basis $\{h_i\}$ of the horizontal space, where $h_i:=\frac{\partial}{\partial x^i}-N^j_i\frac{\partial}{\partial y^j}$ and $N^j_i$ are the coefficients of the Barthel connection; $i, j=1,...,n$. The equation ${R}(W,X)Y=0$, $\forall\, X, Y\in H(TM)$, is written locally in the form $$W^j{R}^i_{hjk}=0.$$ To derive the resulting system from $W^j{R}^i_{hjk}=0$, we first compute the components $R^i_{hjk}$ using the NF-package. Then, we define a new tensor by the command *definetensor* as follows:
Putting $RCW[h,-i,-k]=0$, we obtain a homogeneous system of algebraic equations. Solving this system, we get the components $W^i$.
*[It should be noted that we must not use the notation $X=X^ih_i$ or $Y=Y^ih_i$ for nullity vectors because, to Maple, $RC[h,-i,-j,-k]*X[j]$ and $RC[h,-i,-j,-k]*Y[j]$ mean $x^jR^h_{ijk}$ and $y^jR^h_{ijk}$ respectively, neither of which is the correct expression for a nullity vector. ]{}*
In a similar way, we compute the components of a kernel vector. Let $Z=Z^ih_i\in \mathrm{Ker}_R$; then $R(X,Y)Z=0$, $\forall\, X, Y\in H(TM)$. This gives locally the homogeneous system of algebraic equations: $$Z^h{R}^i_{hjk}=0.$$ Then, by the NF-package, we can define
Putting $RCZ[h,-j,-k]=0$ and solving the resulting system, we get the components $Z^i$ of the kernel vector $Z$.
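The same elimination step can be prototyped outside Maple. The following Python/SymPy sketch is not the NF-package; it uses a hypothetical three-dimensional curvature array whose components are placeholders (not those of the examples below) only to show how the homogeneous system $W^j{R}^i_{hjk}=0$ is assembled and solved for the components $W^i$. Contracting on the index $h$ instead of $j$ gives the analogous kernel system.

```python
import sympy as sp

n = 3
y1 = sp.symbols('y1')
# placeholder curvature components R[i][h][j][k] of a hypothetical 3-dimensional example
R = [[[[sp.S(0)] * n for _ in range(n)] for _ in range(n)] for _ in range(n)]
R[0][1][0][1] = y1
R[0][1][1][0] = -y1            # antisymmetry in the last two arguments

W = list(sp.symbols('W1:4'))   # unknown components W^1, W^2, W^3 of a nullity vector

# nullity condition  W^j R^i_{hjk} = 0  for every choice of the free indices i, h, k
exprs = [sum(W[j] * R[i][h][j][k] for j in range(n))
         for i in range(n) for h in range(n) for k in range(n)]
exprs = [e for e in exprs if e != 0]     # drop identically zero equations
print(sp.solve(exprs, W, dict=True))     # [{W1: 0, W2: 0}]: W^3 remains a free parameter
# the kernel system is obtained in the same way, contracting on h instead of j:
#   sum(W[h] * R[i][h][j][k] for h in range(n)) = 0  for all i, j, k
```

In the NF-package the corresponding step is carried out by *definetensor* and the solution of the resulting algebraic system, as described above.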
In this section, we provide three interesting counterexamples. We perform the computations using the above-mentioned technique and the NF-package. We also make use of the technique of simplification of tensor expressions [@CFG].
The nullity distributions associated with Cartan connection are studied in [@ND-cartan]. The following example shows that *the nullity space ${\mathcal{N}}_R$ of the h-curvature $R$ of Cartan connection and the kernel $\mathrm{Ker}_R$ do not coincide.*
**Example 1**
Let $M=\{(x^1,...,x^4)\in \mathbb{R}^4|\, x^2>0\}$, $U=\{(x^1,...,x^4;y^1,...,y^4)\in \mathbb{R}^4 \times \mathbb{R}^4: \, y^2\neq~0~, y^4\neq 0\}\subset TM$. Let $F$ be defined on $U$ by $$F := \, ({{{{\it x2}}^{2}{{\it y1}}^{4}+{{\it y2}}^{4}+{{\it y3}}^{4}+{{\it y4}}^{4}}})^{1/4}.$$
By Maple program and NF-package we can perform the following calculations.
[ **Barthel connection**]{}
[ **h-curvature R of Cartan connection** ]{}
[ **$R$-Nullity vectors**]{}
Putting ${\it RCW}^{\it h }_{\it ij }=0$, we obtain a system of algebraic equations. The NF-package yields the following solution: $W^1=W^2=0,\ W^3=s,\ W^4=t$; $s,t\in \mathbb{R}$. Then, any nullity vector $W$ has the form $$\label{null.1}
W=sh_3+th_4.$$
[ **$R$-Kernel vectors**]{}
Putting ${\it RCZ}^{\it h }_{\it ij }=0$, we obtain a system of algebraic equations. The NF-package yields the solution: $Z^1=\frac{sy_1}{y_2}$, $Z^2=s$, $Z^3=t$ and $Z^4=\frac{s(x_2y_1^4+y_2^4+2y_3^4+2y_4^4)-ty_2y_3^3}{y_2y_4^3}$.
Then, any kernel vector $Z$ should have the form $$\label{ker1}Z=s\left(\frac{y_1}{y_2}h_1+h_2+\frac{x_2y_1^4+y_2^4+2y_3^4+2y_4^4}{y_2y_4^3}h_4\right)+t\left(h_3-\frac{y_3^3}{y_4^3}h_4\right).$$ (for simplicity, we have written $x_i$ and $y_i$ instead of $x^i$ and $y^i$ respectively)
Comparing (\[null.1\]) and (\[ker1\]), we find no values of $s$ and $t$ which make $Z=W$. Consequently, ${\mathcal{N}}_R$ and $\mathrm{Ker}_R$ cannot coincide.
In [@Nabil.1] Youssef proved that the nullity distribution ${\mathcal{N}}_{R^\circ}$ associated with the h-curvature ${\raisebox{10pt}{\tiny{$\circ$}}{\kern-7.5pt}\mbox{$R$}}$ of Berwald connection is completely integrable. He conjectured that the nullity distribution ${\mathcal{N}}_{P^\circ}$ of the hv-curvature ${\raisebox{10pt}{\tiny{$\circ$}}{\kern-7.5pt}\mbox{$P$}}$ of Berwald connection is not completely integrable. In the next example, *we show that his conjecture is true*.
**Example 2**
Let $M=\mathbb{R}^3$, $U=\{(x^1,x^2,x^3;y^1,y^2,y^3)\in \mathbb{R}^3 \times \mathbb{R}^3: \,y^1\neq 0\}\subset TM$. Let $F$ be defined on $U$ by $$F := \,{{\rm e}^{-{\it x1}}} \left( {{\it y2}}^{3}+{{\rm e}^{-{\it x1x3}}}{\it y3}\,{{\it y1}}^{2} \right)^{1/3}.$$
By Maple program and NF-package, we can perform the following calculations.
[ **Barthel connection**]{}
[ **hv-curvature ${\raisebox{10pt}{\tiny{$\circ$}}{\kern-7.5pt}\mbox{$P$}}$ of Berwald connection**]{}
[ **${\raisebox{10pt}{\tiny{$\circ$}}{\kern-7.5pt}\mbox{$P$}}$-Nullity vectors**]{}
Putting ${\it PBW}^{{\it h} }_{{\it ij} }=0$, we get a system of algebraic equations. We have two cases:
The first case is $y2=0$, and the solution in this case is $W^1=s$, $W^2=0$ and $W^3=t$. Hence, any ${\raisebox{10pt}{\tiny{$\circ$}}{\kern-7.5pt}\mbox{$P$}}$-nullity vector is written in the form $W=sh_1+ th_3$. Take two nullity vectors $X,Y \in {\mathcal{N}}_{P^\circ}$ such that $X=h_1$ and $Y=h_3$. Their Lie bracket is $[X,Y]=-\frac{y_1}{2}\frac{\partial}{\partial y_1}+y_3\frac{\partial}{\partial y_3}$, which is vertical.
The second case is $y2\neq 0$ and the solution in this case is $W^1=s$, $W^2=\frac{y_2}{y_1}s$ and $W^3=t$. Then any ${\raisebox{10pt}{\tiny{$\circ$}}{\kern-7.5pt}\mbox{$P$}}$-nullity vector is written in the form $W=s(h_1+\frac{y_2}{y_1}h_2)+ th_3$. Let $X$ and $Y$ be the two nullity vectors in ${\mathcal{N}}_{P^\circ}$ given by $X=h_1+\frac{y_2}{y_1}h_2$ and $Y=h_3$. By computing their Lie bracket, we find that $[X,Y]=-\frac{y_1}{2}\frac{\partial}{\partial y_1}+y_3\frac{\partial}{\partial y_3}$, which is vertical.
Consequently, in both cases the Lie bracket $[X,Y]$ does not belong to ${\mathcal{N}}_{P^\circ}$.\
Let ${\mathcal{N}}_{R^\circ}$ and ${\mathcal{N}}_\mathfrak{R}$ be the nullity distributions associated with the h-curvature ${\raisebox{10pt}{\tiny{$\circ$}}{\kern-7.5pt}\mbox{$R$}}$ of Berwald connection and the curvature $\mathfrak{R}$ of the Barthel connection respectively. In [@Nabil.2], Youssef proved that ${\mathcal{N}}_{R^\circ} \subseteq {\mathcal{N}}_\mathfrak{R}$. The following example shows that *the converse is not true; that is, ${\mathcal{N}}_{R^\circ}$ is a proper sub-distribution of ${\mathcal{N}}_\mathfrak{R}$*.
**Example 3**
Let $M= \mathbb{R}^4$, $U=\{(x^1,\cdots,x^4;y^1,\cdots,y^4)\in \mathbb{R}^4 \times \mathbb{R}^4: \, y^2\neq 0, \,y^4\neq 0 \}\subset TM$. Let $F$ be defined on $U$ by $$F := \left(\,{{\rm e}^{-{\it x2}}}{\it y1}\,\sqrt [3]{{{\it y2}}^{3}+{{\it y3}}^{3}+{{\it y4}}^{3}}\right)^{1/2}.$$ By Maple program and NF-package, we can perform the following calculations.
[ **Barthel connection**]{}
[ **Curvature $\mathfrak{R}$ of the Barthel connection** ]{}
[ **$\mathfrak{R}$-nullity vectors**]{}
Putting ${\it RGZ}^{{\it h} }_{{\it i} }=0$, we get a system of algebraic equations. In the case where $y2^3+y3^3+y4^3=0$, we get the solution $Z^1=t_1$, $Z^2=t_2$ and $Z^3=Z^4=0$ where $t_1,t_2\in \mathbb{R}$. Then, $$\label{nullg}
Z=t_1h_1+t_2h_2.$$
[ **h-curvature ${\raisebox{10pt}{\tiny{$\circ$}}{\kern-7.5pt}\mbox{$R$}}$ of Berwald connection:**]{}
[ **${\raisebox{10pt}{\tiny{$\circ$}}{\kern-7.5pt}\mbox{$R$}}$-nullity vectors**]{}
Putting ${\it RBW}^{{\it h} }_{{\it ij} }=0$, we obtain a system of algebraic equations. This system has the solution $W^1=t, \, t\in\mathbb{R}$ and $W^2=W^3=W^4=0$. Then, $$\label{nullb}
W=th_1.$$ Consequently, (\[nullg\]) and (\[nullb\]) lead to ${\mathcal{N}}_\mathfrak{R} \not\subset {\mathcal{N}}_{R^\circ}$.
In this paper, we have mainly achieved two objectives:
$\bullet$ A computational technique for calculating the nullity and kernel vectors, based on the NF-package, has been introduced.
$\bullet$ Using this technique, three counterexamples have been presented: the first shows that the two distributions $\mathrm{Ker}_R$ and ${\mathcal{N}}_R$ do not coincide. The second proves that the nullity distribution ${\mathcal{N}}_{P^\circ}$ is not completely integrable. The third shows that the nullity distribution ${\mathcal{N}}_\mathfrak{R}$ is not a sub-distribution of ${\mathcal{N}}_{R^\circ}$.
[10]{}
P. L. Antonelli (Ed.), *Handbook of Finsler geometry I, II*, Kluwer Acad. publ., 2003.
P. L. Antonelli, R. Ingarden and M. Matsumoto, *The theory of sprays and Finsler spaces with applications in physics and biology*, Kluwer Acad. Publ., 1993.
P. L. Antonelli, S. F. Rutz and K. T. Fonseca, *The mathematical theory of endosymbiosis, II: Models of the Fungal Fusion hypothesis*, Nonlinear Anal., Real World Appl., 13 (2012) 2096.
S. S. Chern and Z. Shen, *Riemann-Finsler Geometry*, Singapore: World Scientific, 2005.
J. Grifone, *Structure presque-tangente et connexions, <span style="font-variant:small-caps;">I</span>*, Ann. Inst. Fourier, Grenoble, **[22, 1]{}** (1972), 287–334.
J. Grifone, *Structure presque-tangente et connexions, <span style="font-variant:small-caps;">II</span>*, Ann. Inst. Fourier, Grenoble, [**22, 3**]{} (1972), 291–338.
J. Klein and A. Voutier, *Formes extérieures génératrices de sprays*, Ann. Inst. Fourier, Grenoble, [**18, 1**]{} (1968), 241–260.
R. Miron and M. Anastasiei, *The geometry of Lagrange spaces: Theory and applications*, Kluwer Acad. Publ., 1994.
G. G. L. Nashed, *Reissner–Nordström solutions and energy in teleparallel theory*, Mod. Phys. Lett. A, **21** (2006), 2241–2250.
R. Portugal, S. L. Sautu, *Applications of Maple to General Relativity*, Comput. Phys. Commun., **105** (1997), 233–253.
S. F. Rutz and R. Portugal, *FINSLER: A computer algebra package for Finsler geometries*, Nonlinear Analysis, **47** (2001), 6121–6134.
M. I. Wanas, *On the relation between mass and charge: a pure geometric approach*, Int. J. Geom. Meth. Mod. Phys., **4** (2007), 373–388.
Nabil L. Youssef, *Distribution de nullité du tensor de courbure d’une connexion*, C. R. Acad. Sci. Paris, Sér. A, **290** (1980), 653–656.
Nabil L. Youssef, *Sur les tenseurs de courbure de la connexion de Berwald et ses distributions de nullité.* Tensor, N. S., **36** (1982), 275–280.
Nabil L. Youssef and S. G. Elgendi, *A note on Sur le noyau de l’opérateur de courbure d’une variété finslérienne, C. R. Acad. Sci. Paris, sér. A, t. 272 (1971), 807-810*, C. R. Math., Ser. I, **351** (2013), 829–832. ArXiv: 1305.4498 \[math. DG\].
Nabil L. Youssef and S. G. Elgendi, *New Finsler package*, Comput. Phys. Commun., **185** (2014) 986–997. ArXiv: 1306.0875 \[math. DG\].
Nabil L. Youssef, A. Soleiman and S. G. Elgendi, *Nullity distributions associated to Cartan connection*, Ind. J. Pure Appl. Math., **45**(2) (2014), 213–238. ArXiv: 1210.8359 \[math. DG\].
|
---
abstract: 'We address the ultimate charge detection scheme with a quantum point contact. It is shown that a [*superposed input state*]{} is necessary to exploit the full sensitivity of a quantum point contact detector. The coherence of the input state provides an improvement in charge sensitivity, and this improvement is a result of the fundamental property of the scattering matrix. Further, a quantum-limited (maximally efficient) detection is possible by controlling the interference between the two output waves. Our scheme provides the ultimate sensitivity and efficiency of charge detection with a generic quantum point contact.'
author:
- 'Kang-Ho Lee'
- Kicheon Kang
title: Ultimate charge sensitivity and efficiency of a quantum point contact with a superposed input state
---
Detection of single electrons [@Field93; @Devoret00; @Lu; @Sprinzak02] is an essential ingredient for realizing quantum information processing with charge qubits. A quantum point contact (QPC) is widely used as a charge detector, with a sensitivity that extends down to the level of single electrons. It also plays an important role in investigating fundamental issues in quantum theory, such as quantum mechanical complementarity [@Buks98; @Chang08]. It has been well understood that the phase, as well as the transmission probability, of a QPC can be utilized for charge sensing [@Stodolsky99; @Sprinzak00]. The sensitivity can be described in terms of the controlled dephasing rate of the qubit induced by the interaction with the QPC detector. This is because the dephasing rate is equivalent to the rate of the (qubit) information transfer to the detector. The dephasing rate is a function of the two independent variables, $\Delta T$ and $\Delta\phi$, the sensitivities of the transmission probability and of the phase difference between the transmitted and the reflected waves, respectively [@Aleiner97; @Levinson97; @Gurvitz97; @Stodolsky99; @Hackenbroich01]. Further, it has been shown that the phase-sensitive term is dominant in a generic QPC which reduces considerably the efficiency of charge detection (typically, below 5$\%$) [@Chang08; @Kang05; @Kang07]. In this context, utilizing the phase degree of freedom is important and useful for a quantum information architecture.
On the other hand, it is worth considering the following unnoticed, but fundamental, property of a QPC in the context of charge sensing. A single channel QPC is described by a $2\times2$ scattering matrix, which has SU(2) symmetry. Neglecting the physically irrelevant global phase factor, the S-matrix has three independent physical variables for charge detection. However, all the existing experiments and theoretical proposals are based on utilizing only one or two ($\Delta T$ and $\Delta\phi$) of these variables [@Aleiner97; @Buks98; @Chang08; @Sprinzak00; @Levinson97; @Gurvitz97; @Stodolsky99; @Hackenbroich01]. Therefore, this provides an interesting question: can we exploit the three independent variables for charge detection?
In this Letter, we show that this is indeed possible. In addition to the well-known sensitivities $\Delta T$ and $\Delta\phi$, another hidden phase variable exists, and can indeed be used. We propose a scheme that utilizes this hidden variable (as well as the two other variables) by using a “superposed input state”. The hidden phase variable appears in the expression of the dephasing rate if the input electron is in a coherent superposition of the two input ports. Naturally, it provides an advantage in high-sensitivity charge detection as well as a deeper understanding of the quantum mechanical complementarity realized in a QPC detector. The setup proposed here provides the maximum sensitivity with a generic QPC. Further, we show that the system can be tuned for a quantum-limited detection of the charge state.

One of the most remarkable features in quantum measurement is the trade-off between the information transfer of the state of the [*system*]{} into the measurement [*apparatus*]{} and the back-action dephasing of the system [@Korotkov01; @Pilgram; @Clerk; @Averin05]. The “potential” measurement sensitivity of a measurement apparatus is reflected in the dephasing rate induced by the apparatus. For an actual measurement, the information stored in the potential sensitivity should be transformed into an actual sensitivity. In general, the potential sensitivity may not be fully exploited in an actual measurement. Quantum-limited detection is a fully efficient measurement where the potential sensitivity is fully transformed into the actual sensitivity. For practical quantum information processing, both the sensitivity and the efficiency are important: the dephasing rate (sensitivity) is the speed of the information transfer, and the efficiency is the ratio of the actual measurement rate to the dephasing rate [@Korotkov01; @Pilgram; @Averin05; @Khym06].
Let us consider a QPC charge detector, which monitors the state of a charge qubit (being 0 or 1) through mutual capacitive interactions (Fig. 1). Controlled dephasing induced by a charge detection can be implemented, for instance, by constructing interferometers which include a quantum dot [@Buks98; @Chang08] or double quantum dots [@Sprinzak00]. We assume that the QPC circuit has only a single transverse channel at zero temperature. Generalization to finite temperature and to multiple channels is straightforward. The interaction between the qubit and the QPC detector is described as a continuous weak measurement [@Korotkov01; @Hackenbroich01]. The sensitivity of a possible measurement is encoded in the scattering matrix of the QPC, which depends on the qubit state $j$ ($=0$ or $1$): $${S_j}=
\left(\begin{array}{ccc}
r_j & t'_j \\
t_j & r'_j \\
\end{array} \right)~.
\label{eq:Sj}$$ The scattering matrix transforms the input states $\alpha$ and $\beta$ into the output $\gamma$ and $\delta$ as $$\left(\begin{array}{c}
c_\gamma \\ c_\delta
\end{array} \right)
= S_j
\left(\begin{array}{c}
c_\alpha \\ c_\beta
\end{array} \right) ,$$ where $c_l$ is the annihilation operator of an electron at lead $l (\in \alpha,\beta,\gamma,\delta)$. For a single scattering event in the QPC detector, the initial state of the system before scattering can be represented as a product state of the two subsystems: $$|\Psi_0\rangle=(a_{0}|0\rangle+a_{1}|1\rangle)\otimes|\chi_{in}\rangle ,$$ where $a_{0}|0\rangle+a_{1}|1\rangle$ is the initial state of the charge qubit, and $|\chi_{in}\rangle$ is the input state of the QPC detector.
Our strategy here is to introduce a [*superposed input*]{} state from the two input sources $\alpha$ and $\beta$: $$|\chi_{in}\rangle =
(\sqrt{p}c^{\dagger}_\alpha+\sqrt{1-p}e^{i\theta}c^{\dagger}_\beta)|F\rangle,
\label{eq:input}$$ instead of the conventional way of injecting the probe electrons from a single source. The parameters $p$ and $\theta$ determine the degree of splitting and the relative phase between the two input waves, respectively. In a real experiment, these parameters can be tuned by placing another QPC before injecting electrons into the interaction region. $|F\rangle$ denotes the ground state (Fermi sea) of the electrodes.
Upon a scattering, the system is entangled as $$|\Psi\rangle = a_{0}|{0}\rangle|\chi_{0}\rangle
+ a_{1}|{1}\rangle|\chi_{1}\rangle ,$$ where the output state of the QPC detector $|\chi_{j}\rangle$ is given by
$$\begin{aligned}
|\chi_{j}\rangle &=& (\tilde{r_j}c_{\gamma}^{\dagger}+\tilde{t_j}c_{\delta}^{\dagger})|{F}\rangle~,
\label{eq:output} \\
\tilde{r_j} &=& \sqrt{p}r_j+\sqrt{1-p}e^{i\theta}t'_j~, \\
\tilde{t_j} &=& \sqrt{p}t_j+\sqrt{1-p}e^{i\theta}r'_j~.\end{aligned}$$
Charge sensitivity is reflected in the reduced density matrix of the qubit, $\rho=\rm{Tr}_{QPC}${$|\Psi\rangle\langle\Psi|$}. Upon a single scattering event, its off-diagonal element $\rho_{01}$ is reduced ($\rho_{01}\rightarrow
\lambda\rho_{01}$) by the coherence factor $\lambda$ $$\lambda = \langle\chi_1|\chi_0\rangle .
\label{eq:lambda}$$ We consider the continuous weak measurement limit, where the single scattering event provides only a slight modification of the qubit state ($\lambda\approx1$). The scattering through the QPC takes place on a time scale much shorter than the relevant time scale in the qubit. In our particular case of a QPC with the applied bias voltage $V$, this corresponds to $\Delta{t}\ll 1/\Gamma_d$, where $\Delta{t}\equiv h/eV$ is the average time interval [@Delta_t] between two successive scattering events, and $\Gamma_d$ is the dephasing rate. In this process, the magnitude of $\rho_{01}$ decays as $$|\rho_{01}|=e^{-\Gamma_dt}|\rho^{0}_{01}|$$ with the dephasing rate $\Gamma_d=-\frac{\ln{|\lambda|}}{\Delta{t}}$. In a conventional scheme with single input port ($p=0$ or $p=1$), the dephasing rate is determined by the charge sensitivities of the two independent parameters, namely $T_j=|t_j|^2$ and $\phi_j = \arg{(t_j/r_j)}$. This is because the qubit state information can be extracted either through the transmission probability (with a direct current measurement), or through the relative phase shift between the transmitted and the reflected output waves (by constructing an interferometer). On the other hand, our scheme provides an additional sensitivity on the parameter $\varphi_j\equiv \arg{(t_j/r_j')}$, and the dephasing rate is given as $$\begin{aligned}
\Gamma_d &=& \frac{1}{\Delta t} \left[
u_1(\Delta T)^2 + u_2(\Delta\phi)^2 + u_3(\Delta\varphi)^2
\right. \nonumber \\
&+& \left. u_4(\Delta T\Delta\phi) + u_5(\Delta T\Delta\varphi)
+ u_6(\Delta\phi\Delta\varphi)
\right] ,
\label{eq:Gamma_d}\end{aligned}$$ with parameter-dependent dimensionless coefficients $u_i$ ($i=1,2,\cdots,6$). $\Delta T$ is the sensitivity of the transmission probability, that is, $\Delta T=|t_1|^2-|t_0|^2$. The phase sensitivities are defined in the same way as $\Delta\phi=\phi_1-\phi_0$ and $\Delta\varphi=\varphi_1-\varphi_0$.
The key point of Eq. (\[eq:Gamma\_d\]) is that $\Gamma_d$ is a function of the three independent charge sensitivities, $\Delta T$, $\Delta\phi$, and $\Delta\varphi$, in contrast to the well-known expression of the dephasing rate having only two sensitivities, $\Delta T$ and $\Delta\phi$ [@Aleiner97; @Stodolsky99; @Hackenbroich01]. The physical meaning behind Eq. (\[eq:Gamma\_d\]) can be understood as follows. First, a single channel QPC is in general described by a SU(2) matrix which has three independent physical variables (just as in any spin-$1/2$ problem). The third hidden variable $\Delta\varphi$ appears due to the superposed input. Physically, $\varphi_j$ is the relative phase between the two amplitudes, $t_j$ and $r_j'$. These are the two amplitudes injected from the two different inputs and combined into a single output. Naturally, the sensitivity of this phase appears only by using a superposed input. In the limit of single input ($p=0$ or $p=1$), Eq. (\[eq:Gamma\_d\]) reduces to the existing result $$\Gamma_d\rightarrow \Gamma_d^0 = \frac{1}{\Delta t} \left[
\frac{(\Delta T)^2}{8T(1-T)} + \frac{1}{2}T(1-T)(\Delta\phi)^2
\right] ,$$ where $T=(|t_0|^2+|t_1|^2)/2$.
With the additional phase sensitivity $\Delta\varphi$, we can achieve an improvement of the overall sensitivity. In the following, we discuss how it can be done in a systematic way. For simplicity, we consider a low efficiency limit ($\Delta T\ll \Delta\phi, \Delta\varphi$), where the direct current measurement through the QPC extracts only a very small portion of the charge state information. This limit is meaningful because of the great potential for improvement of detection by controlling the interference. In addition, it has been argued [@Kang05; @Kang07] that a generic QPC would show a low efficiency, which has also been observed experimentally with its efficiency below $5\%$ [@Chang08].
In this limit ($\Delta T\rightarrow 0$), $\Gamma_d$ of Eq. (\[eq:Gamma\_d\]) is reduced to $\Gamma_d=\{u_2(\Delta\phi)^2 + u_3(\Delta\varphi)^2
+ u_6(\Delta\phi\Delta\varphi)\}/\Delta t$. This value of $\Gamma_d$ can be controlled by the two input parameters $p$ and $\theta$ of the input state (Eq. (\[eq:input\])). It is straightforward to find that the maximum dephasing rate (maximum sensitivity)
$$\Gamma_d^M = \frac{1}{2\Delta t} \left\{
\frac{1}{4} \left[ (\Delta\phi)^2+(\Delta\varphi)^2 \right]
+ (T-1/2)(\Delta\phi)(\Delta\varphi) \right\}
\label{eq:dmax}$$
is achieved for the particular input state $|\chi_{in}\rangle=|\chi_{in}^M\rangle$: $$|\chi_{in}^M\rangle
= \frac{1}{\sqrt{2}} (c_\alpha^\dagger-ie^{i\varphi_0} c_\beta^\dagger)
|F\rangle~ .
\label{eq:chi_in_m}$$ Notably, $\Gamma_d^M$ is always larger than $\Gamma_d^0$ (the dephasing rate of the qubit state when the conventional input ($p=0$ or $p=1$) is used): $ \Gamma_d^0 = \frac{1}{2\Delta t} T(1-T)(\Delta\phi)^2$ . The amount of the sensitivity enhancement is found to be $$\Delta \Gamma_d \equiv \Gamma_d^M-\Gamma_d^0
= \frac{1}{2\Delta t} \left[
(2T-1)\Delta\phi+\Delta\varphi \right]^2 .
\label{eq:DGamma}$$ That is, the sensitivity enhancement depends on the parameters $\Delta\phi, \Delta\varphi$ and $T$. Interestingly, a sensitivity enhancement is obtained even for $\Delta\varphi=0$, where the third variable $\varphi_j$ has no charge sensitivity. \[eq:max-sens\]
Since the variables $\Delta\phi$ and $\Delta\varphi$ can be determined experimentally, a systematic improvement of the sensitivity is possible. Later we will briefly discuss how it can be done experimentally. The relation between $\Delta\phi$ and $\Delta\varphi$ is not universal but depends on the details of the qubit-QPC interaction. Here we consider a simple potential shift model [@Kang07] where an extra charge of a qubit provides a uniform shift of the potential. This model is suitable for describing the low efficiency limit ($\Delta T\rightarrow0$) of charge detection [@Kang07]. In this model, one can find that $\Delta\varphi=-\Delta\phi$ [@Lee12]. Fig. 2 displays a plot of the dephasing rate $\Gamma_d$ as a function of $p$ and $\theta$ for this case ($\Delta\varphi=-\Delta\phi$). The maximum dephasing rate $\Gamma_d^M$ is achieved for $p=1/2$ and $\theta = \varphi_0-\pi/2$, that is, for $|\chi_{in}\rangle=|\chi_{in}^M\rangle$, which is consistent with Eq. (\[eq:max-sens\]).

The setup of Fig. 1 is not sufficient for an actual measurement of the charge state. This limitation can be overcome by adding a [*measurement*]{} QPC (labeled QPC$_m$) to compose an interference between the transmitted and the reflected waves (see Fig. 3). This scheme is particularly useful in the limit of low efficiency of the QPC interacting with the qubit. For a conventional input scheme of electrons ($p=0$ or $p=1$ limit in our setup), it has been theoretically shown in Ref. that the full amount of information can be extracted (= “quantum-limited detection”, QLD) by controlling QPC$_m$. In the following, we show that a QLD is also possible in our scheme, with the improved sensitivity.
With a [*measurement*]{} QPC (QPC$_m$), the scattering matrix of the interacting QPC, $S_j$, of Eq. (\[eq:Sj\]) is transformed as $$S_j \longrightarrow S^m S_j ,$$ where $${S^m}=
\left(\begin{array}{ccc}
r^m & t'^m \\
t^m & r'^m \\
\end{array} \right)
\label{eq:Sm}$$ is the scattering matrix of QPC$_m$. The most interesting case is to inject the maximally sensitive input state, $|\chi_{in}^M\rangle$, of Eq. (\[eq:chi\_in\_m\]). For this particular input state, the probe electron state is transformed to the output
$$|\bar{\chi}_j\rangle = (\bar{r}_j c_\gamma^\dagger
+\bar{t}_j c_\delta^\dagger) |F\rangle ,$$
where $$\begin{aligned}
\bar{r}_j &=& \frac{1}{\sqrt{2}}
\{ r^m r_j + t'^m t_j -i e^{i\varphi_0}(r^m t'_j + t'^m r'_j) \}, \\
\bar{t}_j &=& \frac{1}{\sqrt{2}}
\{ t^m r_j + r'^m t_j - ie^{i\varphi_0}(t^m t'_j + r'^m r'_j) \}.\end{aligned}$$
Note that the dephasing rate of Eq. (\[eq:Gamma\_d\]) is invariant upon scattering at QPC$_m$, due to the unitarity of $S^m$. After passing through QPC$_m$, the output state $|\chi_j\rangle$ (Eq. (\[eq:output\])) is transformed to $S^m|\chi_j\rangle$. However, the scalar product ($\lambda$) of the two detector states (Eq. (\[eq:lambda\])) is invariant because $ \lambda\rightarrow\bar{\lambda}
= \langle\chi_1| S^{m\dagger} S^m |\chi_0\rangle
= \langle\chi_1|\chi_0\rangle = \lambda$. The QLD can be achieved from the condition $\Delta\bar{\phi} \equiv \bar{\phi}_1 - \bar{\phi}_0 = 0$, where $\bar{\phi}_j = \arg{\bar{t}_j/\bar{r}_j}$. This is the condition under which the measurement rate reaches the dephasing rate [@Korotkov01; @Pilgram; @Averin05; @Khym06]. We find that this leads to the condition $$\Delta\bar{\phi} = \frac{1-2T^m}{1-4T^m(1-T^m)\sin^2\Theta}\Delta\phi = 0,$$ where $T^m=|t^m|^2$ and $\Theta=\arg(t^m/r'^m)-\arg(t_0/r_0)$. Therefore, the QLD can be easily achieved by tuning the transmission probability of the [*measurement*]{} QPC as $$T^m = 1/2.
\label{eq:qld}$$ The two conditions, Eq. (\[eq:chi\_in\_m\]) and Eq. (\[eq:qld\]), provide the [*ultimate sensitivity and efficiency*]{} that can be extracted from a generic single-channel QPC.
Finally, we briefly describe how this ultimate scheme of maximum sensitivity and efficiency can be experimentally realized. In practice, we need three quantum point contacts that form a double interference scheme (see Fig. 3), which is an extension of the electronic Mach-Zehnder interferometer [@Ji03]. The superposed input state is generated by QPC$_i$ (the “[*input*]{} QPC”). The maximally sensitive input state $|\chi_{in}^M\rangle$ (Eq. (\[eq:chi\_in\_m\])) can be easily prepared by controlling QPC$_i$. This input state interacts with the qubit at the “[*main*]{} QPC”. The efficiency is independently controlled with QPC$_m$, the “[*measurement*]{} QPC”. Further, the phase sensitivities $\Delta\phi$ and $\Delta\varphi$ (or equivalently, $\phi_j$ and $\varphi_j$ with the two charge states $j=0,1$) can be measured in the setup of Fig. 3 as follows. $\phi_j=\arg{t_j/r_j}$ is the relative phase between the two split waves (at the [*main*]{} QPC) of a single incident wave. This can be directly achieved by injecting a conventional input state with $p=1$ ($|\chi_{in}\rangle=c_\alpha^\dagger|F\rangle$) (or with $p=0$ ($|\chi_{in}\rangle=c_\beta^\dagger|F\rangle$)). The phase $\phi_j$ appears in the interference pattern at the output electrode, with the condition $0<T^m<1$. On the other hand, $\varphi_j=\arg{t_j/r_j'}$ corresponds to the relative phase of the two [*merged*]{} waves initially incident from the two separated inputs $\alpha$ and $\beta$. This phase shift can be extracted by tuning $0<p<1$ and $T^m=0$ (or $T^m=1$). This measurement of $\phi_j$ and $\varphi_j$ would allow a quantitative study of the controlled dephasing and measurement discussed in our proposal.
In conclusion, we have investigated the ultimate sensitivity and efficiency of a single-channel QPC as a charge detector. In contrast to the conventional charge detection schemes that utilize only one or two variables, we have shown that a QPC provides three independent physical variables for charge detection, due to the SU(2) symmetry of a scattering matrix. This hidden third variable is revealed by injecting a superposed input state of the probe electrons.

This work was supported by the National Research Foundation of Korea under Grant Nos. 2009-0084606 and 2012R1A1A2003957, and by the LG Yeonam Foundation.
[99]{} M. Field, C. G. Smith, M. Pepper, D. A. Ritchie, J. E. F. Frost, G. A. C. Jones, and D. G. Hasko, Phys. Rev. Lett. [**[70]{}**]{}, 1311 (1993). M. H. Devoret and R. J. Schoelkopf, Nature [**406**]{}, 1039 (2000). W. Lu, Z. Ji, L. Pfeiffer, K. W. West, and A. J. Rimberg, Nature [**423**]{}, 422 (2003). D. Sprinzak, Y. Ji, M. Heiblum, and D. Mahalu, and H. Shtrikman, , 176805 (2002). E. Buks, R. Schuster, M. Heiblum, D. Mahalu and V. Umansky, Nature [**391**]{}, 871 (1998). D.-I. Chang, G. L. Khym, K. Kang, Y. Chung, H.-J. Lee, M. Seo, M. Heiblum, D. Mahalu, V. Umansky, Nature Physics [**4**]{}, 205 (2008). L. Stodolsky, Phys. Lett B [**[459]{}**]{}, 193 (1999). D. Sprinzak, E. Buks, M. Heiblum and H. Shtrikman, Phys. Rev. Lett. [**84**]{}, 5820 (2000). I. L. Aleiner, N. S. Wingreen, and Y. Meir, Phys. Rev. Lett. [**[79]{}**]{}, 3740 (1997). Y. Levinson, Europhys. Lett. [**[39]{}**]{}, 299 (1997). S. A. Gurvitz, Phys. Rev. B [**[56]{}**]{}, 15215 (1997). G. Hackenbroich, Phys. Rep. [**343**]{}, 463 (2001). K. Kang, Phys. Rev. Lett. [**[95]{}**]{}, 206808 (2005). K. Kang and G. L. Khym, New J. Phys. [**[9]{}**]{}, 121 (2007).
A. N. Korotkov and D. V. Averin, Phys. Rev. B [**[64]{}**]{}, 165310 (2001).
S. Pilgram and M. Büttiker, Phys. Rev. Lett. [**[89]{}**]{}, 200401 (2002).
A. A. Clerk, S. M. Girvin, and A. D. Stone, Phys. Rev. B [**[67]{}**]{}, 165324 (2003). D. V. Averin and E. V. Sukhorukov, Phys. Rev. Lett. [**[95]{}**]{}, 126803 (2005). G. L. Khym and K. Kang, J. Phys. Soc. Jpn. [**[75]{}**]{}, 063707 (2006).
This time scale is based on a spinless model. If the spin degeneracy is taken into account, it becomes $\Delta t = h/(2eV)$.
K.-H. Lee, Ph.D. thesis (Chonnam National University, 2012).
Y. Ji, Y. Chung, D. Sprinzak, M. Heiblum, D. Mahalu, and H. Shtrikman, Nature [**[422]{}**]{}, 415 (2003).
![\[fig1\] (a) Charge sensing scheme of a quantum point contact with a superposed input state, and (b) a possible realization with the quantum Hall edge state (Color online).](fig1.eps){width="3.5in"}
![\[fig2\] Dephasing rate ($\Gamma_d$) with the potential shift model ($\Delta\varphi=-\Delta\phi$) as a function of the two input parameters, $p$ and $\vartheta~(\vartheta\equiv\varphi_0-\theta-\pi/2)$ for (a) $T={1\over2}$, and for (b) $T={1\over4}$. 3D plots of the dephasing rate $\Gamma_d$ (in unit of $\Gamma_0\equiv (\Delta\varphi)^2/(2\Delta t)$) are given in the left panels. The right panels of (a) and (b) display the dephasing rate as a function of $p$ for three different values of the input phase $\vartheta=-\pi/2$ (red), $0$ (green), $\pi/2$ (blue), respectively (Color online). ](fig2.eps){width="3.5in"}
![\[fig3\] Schematic of a charge detection setup with full control of the sensitivity and the efficiency. The setup consists of the three QPCs, namely the [*input*]{} QPC (QPC$_i$), the main QPC, and the [*measurement*]{} QPC (QPC$_m$) (Color online). ](fig3.eps){width="3in"}
|
---
abstract: 'In this paper we show that the polyomino ideal of a simple polyomino coincides with the toric ideal of a weakly chordal bipartite graph and hence it has a quadratic Gröbner basis with respect to a suitable monomial order.'
address:
- 'Ayesha Asloob Qureshi, Department of Pure and Applied Mathematics, Graduate School of Information Science and Technology, Osaka University, Toyonaka, Osaka 560-0043, Japan'
- 'Takafumi Shibuta, Institute of Mathematics for Industry, Kyushu University, Fukuoka 819-0395, Japan'
- 'Akihiro Shikama, Department of Pure and Applied Mathematics, Graduate School of Information Science and Technology, Osaka University, Toyonaka, Osaka 560-0043, Japan'
author:
- 'Ayesha Asloob Qureshi, Takafumi Shibuta and Akihiro Shikama'
title: Simple polyominoes are prime
---
Introduction {#introduction .unnumbered}
============
Polyominoes are two-dimensional objects originally rooted in recreational mathematics and combinatorics. They have been widely discussed in connection with tiling problems of the plane. Typically, a polyomino is a plane figure obtained by joining squares of equal size, which are known as cells. In connection with commutative algebra, polyominoes were first discussed in [@Q] by assigning to each polyomino the ideal of its inner 2-minors, the [*polyomino ideal*]{}. The study of ideals of $t$-minors of an $m \times n$ matrix is a classical subject in commutative algebra. The class of polyomino ideals widely generalizes the class of ideals of 2-minors of an $m \times n$ matrix, as well as the ideals of inner 2-minors attached to two-sided ladders.
Let $\P$ be a polyomino and $K$ be a field. We denote by $I_{\P}$ the polyomino ideal attached to $\P$ in a suitable polynomial ring over $K$. The residue class ring defined by $I_{\Pc}$ is denoted by $K[\P]$. It is natural to investigate the algebraic properties of $I_{\P}$ depending on the shape of $\P$. In [@Q], it was shown that for a convex polyomino, the residue ring $K[\P]$ is a normal Cohen–Macaulay domain. More generally, it was also shown that the polyomino ideal attached to a row or column convex polyomino is a prime ideal. Later, in [@EHH], a classification of the convex polyominoes whose polyomino ideals are linearly related is given. For some special classes of polyominoes, the regularity of the polyomino ideal is discussed in [@ERQ].
In [@Q], it was conjectured that the polyomino ideal attached to a simple polyomino is a prime ideal. Roughly speaking, a simple polyomino is a polyomino without 'holes'. This conjecture was further studied in [@HQS], where the authors introduced [*balanced*]{} polyominoes and proved that polyomino ideals attached to balanced polyominoes are prime. They expected that all simple polyominoes are balanced, which would then prove that simple polyominoes are prime. This question was further discussed in [@HM], where the authors proved that balanced and simple polyominoes are equivalent. Independently of the proofs given in [@HM], in this paper we show that simple polyominoes are prime by identifying the attached residue class ring $K[\P]$ with the edge ring of a weakly chordal bipartite graph. Moreover, from [@OH1], it is known that the toric ideal of the edge ring of a weakly chordal bipartite graph has a quadratic Gröbner basis with respect to a suitable monomial order, which implies that $K[\P]$ is Koszul.
Polyominoes and Polyomino ideals
================================
First we recall some definitions and notation from [@Q]. Given $a=(i,j)$ and $b=(k,l)$ in $\NN^2$ we write $a\leq b$ if $i\leq k$ and $j\leq l$. The set $[a,b]=\{c\in\NN^2\:\; a\leq c\leq b\}$ is called an [*interval*]{}. If $i <k$ and $j<l$, then the elements $a$ and $b$ are called [*diagonal*]{} corners and $(i,l)$ and $(k,j)$ are called [*anti-diagonal*]{} corners of $[a,b]$. An interval of the form $C=[a,a+(1,1)]$ is called a [*cell*]{} (with left lower corner $a$). The elements (corners) $a, a+(0,1), a+(1,0), a+(1,1)$ of $[a,a+(1,1)]$ are called the [*vertices*]{} of $C$. The sets $\{a,a+(1,0)\}, \{a,a+(0,1)\}, \{a+(1,0), a+(1,1)\}$ and $\{a+(0,1), a+(1,1)\}$ are called the [*edges*]{} of $C$. We denote the set of edges of $C$ by $E(C)$.
Let $\Pc$ be a finite collection of cells of $\NN^2$. The vertex set of $\Pc$, denoted by $V(\Pc)$, is given by $V(\Pc)=\bigcup_{C \in \Pc} V(C)$. The edge set of $\Pc$, denoted by $E(\Pc)$, is given by $E(\Pc)=\bigcup_{C \in \Pc} E(C)$. Let $C$ and $D$ be two cells of $\Pc$. Then $C$ and $D$ are said to be [*connected*]{} if there is a sequence of cells $\mathcal{C}:C= C_1, \ldots, C_m =D$ of $\Pc$ such that $C_i \cap C_{i+1}$ is an edge of $C_i$ for $i=1, \ldots, m-1$. If, in addition, $C_i \neq C_j$ for all $i \neq j$, then $\mathcal{C}$ is called a [*path*]{} (connecting $C$ and $D$). The collection of cells $\Pc$ is called a [*polyomino*]{} if any two cells of $\Pc$ are connected, see Figure \[polyomino\].
Let $\Pc$ be a polyomino, and let $K$ be a field. We denote by $S$ the polynomial ring over $K$ with variables $x_{ij}$ with $(i,j)\in V(\Pc)$. Following [@Q] a $2$-minor $x_{ij}x_{kl}-x_{il}x_{kj}\in S$ is called an [*inner minor*]{} of $\Pc$ if all the cells $[(r,s),(r+1,s+1)]$ with $i\leq r\leq k-1$ and $j\leq s\leq l-1$ belong to $\Pc$. In that case the interval $[(i,j),(k,l)]$ is called an [*inner interval*]{} of $\Pc$. The ideal $I_\Pc\subset S$ generated by all inner minors of $\Pc$ is called the [*polyomino ideal*]{} of $\Pc$. We also set $K[\Pc]=S/I_\Pc$.
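For concreteness, the generators of $I_\Pc$ can be enumerated directly from this definition. The short Python sketch below lists the diagonal corners of all inner intervals, i.e. the inner $2$-minors $x_{ij}x_{kl}-x_{il}x_{kj}$; the cell set and the representation of a polyomino by the lower-left corners of its cells are our own illustrative conventions, not taken from the paper.

```python
from itertools import combinations

# A polyomino given by the lower-left corners of its cells (hypothetical example)
cells = {(1, 1), (2, 1), (2, 2), (3, 2)}

vertices = {(i + di, j + dj) for (i, j) in cells for di in (0, 1) for dj in (0, 1)}

def is_inner(a, b):
    """[a, b] is an inner interval iff every unit cell inside it belongs to the polyomino."""
    (i, j), (k, l) = a, b
    return all((r, s) in cells for r in range(i, k) for s in range(j, l))

generators = []
for a, b in combinations(sorted(vertices), 2):
    (i, j), (k, l) = a, b
    if i < k and j < l and is_inner(a, b):
        # inner 2-minor with diagonal corners a, b and anti-diagonal corners (i, l), (k, j)
        generators.append(((i, j), (k, l), (i, l), (k, j)))

for a, b, c, d in generators:
    print("x_%d%d x_%d%d - x_%d%d x_%d%d" % (a + b + c + d))
```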
Let $\Pc$ be a polyomino. Following [@HQS], an interval $[a,b]$ with $a=(i,j)$ and $b=(k,l)$ is called a [*horizontal edge interval*]{} of $\Pc$ if $j=l$ and the sets $\{(r,j),(r+1,j)\}$ for $r=i,\ldots,k-1$ are edges of cells of $\Pc$. If a horizontal edge interval of $\Pc$ is not strictly contained in any other horizontal edge interval of $\Pc$, then we call it a [*maximal*]{} horizontal edge interval. Similarly one defines vertical edge intervals and maximal vertical edge intervals of $\Pc$.
Let $\{V_1,\ldots,V_m\}$ be the set of maximal vertical edge intervals and $\{H_1,\ldots,H_n\}$ be the set of maximal horizontal edge intervals of $\P$. We denote by $G(\P)$, the associated bipartite graph of $\P$ with vertex set $\{v_1,\ldots,v_m\} \bigsqcup \{ h_1,\ldots,h_n\}$ and the edge set defined as follows $$E(G(\P)) = \{\{v_i,h_j\} \mid V_i \cap H_j \in V( \P)\}.$$
The Figure \[interval\] shows a polyomino $\P$ with maximal vertical and maximal horizontal edge intervals labelled as $\{V_1, \ldots, V_4\}$ and $\{H_1, \ldots, H_4\}$ respectively, and Figure \[graph\] shows the associated bipartite graph $G(\P)$ of $\Pc$.
![associate bipartite graph of $\P$[]{data-label="graph"}](graph){width="80mm"}
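The construction of $G(\P)$ is easy to automate. The sketch below (same hypothetical cell representation as in the previous snippet) computes the maximal horizontal and vertical edge intervals as sets of lattice vertices and joins $v_p$ and $h_q$ whenever $V_p\cap H_q$ is a vertex of $\P$.

```python
def runs(values):
    """Split a sorted list of integers into maximal blocks of consecutive values."""
    blocks, cur = [], [values[0]]
    for v in values[1:]:
        if v == cur[-1] + 1:
            cur.append(v)
        else:
            blocks.append(cur)
            cur = [v]
    blocks.append(cur)
    return blocks

cells = {(1, 1), (2, 1), (2, 2), (3, 2)}   # lower-left corners, hypothetical example

# maximal horizontal edge intervals: unit edges at height y come from cells
# directly below (j = y - 1) or directly above (j = y) that height
H = []
for y in sorted({j for _, j in cells} | {j + 1 for _, j in cells}):
    xs = sorted({i for (i, j) in cells if j in (y - 1, y)})
    for block in runs(xs):
        H.append(frozenset((x, y) for x in range(block[0], block[-1] + 2)))

# maximal vertical edge intervals, by the symmetric argument
V = []
for x in sorted({i for i, _ in cells} | {i + 1 for i, _ in cells}):
    ys = sorted({j for (i, j) in cells if i in (x - 1, x)})
    for block in runs(ys):
        V.append(frozenset((x, y) for y in range(block[0], block[-1] + 2)))

# bipartite graph G(P): an edge {v_p, h_q} whenever V_p and H_q meet in a vertex of P
edges = [(p, q) for p, Vp in enumerate(V) for q, Hq in enumerate(H) if Vp & Hq]
print(len(V), "maximal vertical and", len(H), "maximal horizontal intervals,",
      len(edges), "edges in G(P)")
```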
Let $S$ be the polynomial ring over field $K$ with variables $x_{ij}$ with $(i,j) \in V(\P)$. Note that $|V_p \cap H_q| \leq 1$. If $V_p \cap H_q = \{(i,j)\}$, then we may write $x_{ij} = x_{V_p \cap H_q}$, when required. To each cycle $\C: v_{i_1},h_{j_1}, v_{i_2}, h_{j_2}, \ldots, v_{i_r}, h_{j_r}$ in $G(\P)$, we associate a binomial in $S$ given by $f_{\C}= x_{V_{i_1}\cap H_{j_1}} \ldots x_{V_{i_r}\cap H_{j_r}} - x_{V_{i_2}\cap H_{j_1}} \ldots x_{V_{i_1}\cap H_{j_r}}$.
We recall the definition of a cycle in $\P$ from [@HQS]. A sequence of vertices $\Cc_{\P}= a_1,a_2, \ldots, a_m$ in $V(\Pc)$ with $a_m = a_1$ and such that $a_i \neq a_j$ for all $1 \leq i < j \leq m-1$ is called a [*cycle*]{} in $\Pc$ if the following conditions hold:
1. $[a_i, a_{i+1}] $ is a horizontal or vertical edge interval of $\Pc$ for all $i= 1, \ldots, m-1$;
2. for $i=1, \ldots, m$ one has: if $[a_i, a_{i+1}]$ is a horizontal edge interval of $\Pc$, then $[a_{i+1}, a_{i+2}]$ is a vertical edge interval of $\Pc$ and vice versa. Here, $a_{m+1} = a_2$.
We set $V(\C_{\Pc})= \{ a_1, \ldots, a_m\}$. Given a cycle $\Cc_{\P}$ in $\P$, we attach to $\Cc_{\P}$ the binomial $$f_{\C_{\P}} = \prod_{i=1}^{(m-1)/2} x_{a_{2i-1}} - \prod_{i=1}^{(m-1)/2} x_{a_{2i}}$$
Moreover, we call a cycle in $\P$ [*primitive*]{} if each maximal interval of $\P$ contains at most two vertices of $\Cc_{\Pc}$.
Note that if $\C: v_{i_1},h_{j_1}, v_{i_2}, h_{j_2}, \ldots, v_{i_r}, h_{j_r}$ defines a cycle in $G(\P)$, then the sequence of vertices $\C_{\P}: V_{i_1}\cap H_{j_1}, V_{i_2}\cap H_{j_1}, V_{i_2}\cap H_{j_2}, \ldots, V_{i_r}\cap H_{j_r}, V_{i_1}\cap H_{j_r}$ is a primitive cycle in $\P$ and vice versa. Also, $f_{\C}= f_{\C_{\P}}$.
We set $K[G(\P)]=K[v_ph_q \mid \{p,q\} \in E(G(\P))] \subset T= K[v_1, \ldots,v_m, h_1, \ldots, h_n]$. The subalgebra $K[G(\P)]$ is called the edge ring of $G(\P)$. Let $\varphi : S \rightarrow T$ be the surjective K-algebra homomorphism defined by $\varphi(x_{ij} ) = v_ph_q$, where $\{(i,j)\} = V_p \cap H_q$. We denote by $J_{\P}$, the toric ideal of $K[G(\P)]$. It is known from [@OH2], that $J_{\P}$ is generated by the binomials associated with cycles in $G(\P)$.
Simple polyominoes are prime
============================
Let $\Pc$ be a polyomino and let $[a,b]$ be an interval with the property that $\Pc\subset [a,b]$. According to [@Q], a polyomino $\Pc$ is called [*simple*]{} if for any cell $C$ not belonging to $\Pc$ there exists a path $C=C_1,C_2,\ldots,C_m=D$ with $C_i\not \in \Pc$ for $i=1,\ldots,m$ and such that $D$ is not a cell of $[a, b]$. For example, the polyomino illustrated in Figure \[polyomino\] is not simple but the one in Figure \[simple\] is simple. It is conjectured in [@Q] that $I_\Pc$ is a prime ideal if $\Pc$ is simple.
We recall from graph theory that a graph is called weakly chordal if every cycle of length greater than 4 has a chord. In order to prove the following lemma, we introduce some notation. We say that a cycle $\C_{\P}: V_{i_1}\cap H_{j_1}, V_{i_2}\cap H_{j_1}, V_{i_2}\cap H_{j_2}, \ldots, V_{i_r}\cap H_{j_r}, V_{i_1}\cap H_{j_r}$ in $\P$ has a self crossing if for some $i_p \in \{i_1, \ldots, i_{r-1}\}$ and $j_q \in \{j_1, \ldots, j_{s-1}\}$ there exist vertices $a=V_{i_p} \cap H_{i_p}, b=V_{i_p} \cap H_{i_{p+1}}, c=V_{i_s} \cap H_{i_s}, d=V_{i_{s+1}} \cap H_{i_s}$ and a vertex $e \notin \{a,b,c,d\}$ such that $e \in [a,b] \cap [c,d]$. In this situation $e = V_{i_p} \cap H_{i_s}$. If $\C$ is the associated cycle in $G(\P)$, this shows that $\{v_{i_p}, h_{i_s}\} \in E(G(\P))$, which gives a chord in $\C$.
Let $\C_{\P}:a_1, a_2, \ldots, a_r$ be a cycle in $\P$ which does not have any self crossing. Then we call the area bounded by the edge intervals $[a_i, a_{i+1}]$, $i \in \{1, \ldots, r-1\}$, and $[a_r, a_1]$ the [*interior*]{} of $\C_{\P}$. Moreover, we call a cell $C$ an [*interior*]{} cell of $\C_\P$ if $C$ belongs to the interior of $\C_{\P}$.
Let $\P$ be a simple polyomino. Then the graph $G(\P)$ is weakly chordal.
Let $\C$ be a cycle of $G(\P)$ of length $2n$ with $n \geq 3$ and $\C_{\P}$ be the associated primitive cycle in $\P$. We may assume that $\C_{\P}$ does not have any self crossing. Otherwise, by following the definition of self crossing, we know that $\C$ has a chord.
Let $\C: v_{i_1},h_{j_1}, v_{i_2}, h_{j_2}, \ldots, v_{i_r}, h_{j_r}$ and $\C_{\P}: V_{i_1}\cap H_{j_1}, V_{i_2}\cap H_{j_1}, V_{i_2}\cap H_{j_2}, \ldots, V_{i_r}\cap H_{j_r}, V_{i_1}\cap H_{j_r}$. We may write $a_1= V_{i_1}\cap H_{j_1}, a_2 =V_{i_2}\cap H_{j_1}, a_3=V_{i_2}\cap H_{j_2}, \ldots, a_{2r-1}=V_{i_r}\cap H_{j_r}, a_{2r}= V_{i_1}\cap H_{j_r}$. Also, we may assume that $a_1$ and $a_2$ belong to the same maximal horizontal edge interval. Then $a_{2r}$ and $a_1$ belong to the same maximal vertical edge interval.
First, we show that every interior cell of $\C_\P$ belongs to $\P$. Suppose that we have an interior cell $C$ of $\C_{\P}$ which does not belong to $\P$. Let $\mathcal{J}$ be any interval such that $\P \subset \mathcal{J}$. Then, by using the definition of a simple polyomino, we obtain a path of cells $C=C_1, C_2, \ldots, C_t$ with $C_i \notin \Pc$ for $i=1, \ldots, t$, where $C_t$ is a boundary cell of $\mathcal{J}$. This shows that $V(C_1) \cup V(C_2) \cup \ldots \cup V( C_t)$ intersects at least one of the intervals $[a_i, a_{i+1}]$, $i\in \{1, \ldots, r-1\}$, or $[a_r, a_1]$, which is not possible because $\C_{\P}$ is a cycle in $\P$. Hence $C \in \P$. It follows that an interval contained in the interior of $\C_{\P}$ is an inner interval of $\P$.
Let $\mathcal{I}$ be the maximal inner interval of $\C_\P$ to which $a_1$ and $a_2$ belong, and let $b,c$ be the corner vertices of $\mathcal{I}$. We may assume that $a_1$ and $c$ are the diagonal corners and $a_2$ and $b$ are the anti-diagonal corners of $\mathcal{I}$. If $b,c \in V(\C_{\P})$, then the primitivity of $\C$ implies that $\C$ is a cycle of length 4. We may assume that $b \notin V(\C_{\P})$. Let $H'$ be the maximal horizontal edge interval which contains $b$ and $c$. The maximality of $\mathcal{I}$ implies that $H' \cap V(\C_{\P}) \neq \emptyset$; see, for example, Figure \[something2\]. Therefore, $\{v_{i_1}, h'\}$ is a chord in $\C$, as desired.
Let $\P$ be a simple polyomino. Then $I_\P = J_\P$.
First we show that $I_{\P} \subset J_{\P}$. Let $f=x_{ij} x_{kl} - x_{il}x_{kj} \in I_{\P}$. Then there exist maximal vertical edge intervals $V_p$ and $ V_q$ and maximal horizontal edge intervals $H_m$ and $H_n$ of $\P$ such that $(i,j), (i,l) \in V_p$, $(k,j), (k,l) \in V_q$ and $(i,j), (k,j) \in H_m$, $(i,l), (k,l) \in H_n$. This gives $\varphi(x_{ij} x_{kl}) = v_ph_mh_nv_q= \varphi (x_{il} x_{kj})$, which shows that $f \in J_{\P}$.
Next, we show that $J_{\P} \subset I_{\P}$. It is known from [@OH1] and [@OH2] that the toric ideal of a weakly chordal bipartite graph is minimally generated by the quadratic binomials associated with cycles of length 4. It therefore suffices to show that $f_{\C} \in I_{\P}$, where $\C$ is a cycle of length 4 in $G(\P)$.
Let $\Ic$ be an interval such that $\P \subset \Ic$, and let $\C:h_1, v_1, h_2, v_2$. Then $\C_{\P}: a_{11}=H_1 \cap V_1, a_{21}=H_2 \cap V_1$, $a_{22}=H_2 \cap V_2$ and $a_{12}=H_1 \cap V_2$ is the associated cycle in $\P$, which also determines an interval in $\Ic$. Let $a_{11}$ and $a_{22}$ be the diagonal corners of this interval. We need to show that $[a_{11}, a_{22}]$ is an inner interval in $\P$. Assume that $[a_{11}, a_{22}]$ is not an inner interval of $\P$, that is, there exists a cell $C \in [a_{11}, a_{22}]$ which does not belong to $\P$. Using the fact that $\P$ is a simple polyomino, we obtain a path of cells $C= C_1, C_2, \ldots, C_r$ with $C_i \notin \P$, $i=1, \ldots, r$, and $C_r$ is a cell in $\Ic$. Then $V(C_1 \cup \ldots \cup C_r) $ intersects at least one of the maximal intervals $H_1, H_2, V_1, V_2$, say $H_1$, which contradicts the fact that $H_1$ is an interval in $\P$. Hence, $[a_{11}, a_{22}]$ is an inner interval of $\P$ and $f_{\C} \in I_{\P}$.
Let $\P$ be a simple polyomino. Then $K[\Pc]$ is Koszul and a normal Cohen–Macaulay domain.
From [@OH1], we know that $J_{\Pc}=I_{\Pc}$ has a squarefree quadratic Gröbner basis with respect to a suitable monomial order. Hence $K[\Pc]$ is Koszul. By a theorem of Sturmfels [@St], one obtains that $K[\Pc]$ is normal, and then, following a theorem of Hochster [@BH Theorem 6.3.5], we obtain that $K[\Pc]$ is Cohen–Macaulay.
A polyomino ideal may be prime even if the polyomino is not simple. The polyomino ideal attached to the polyomino in Figure \[prime\] is prime. However, the polyomino ideal attached to the polyomino shown in Figure \[notprime\] is not prime. It would be interesting to obtain a complete characterization of the polyominoes whose attached polyomino ideals are prime, but this is not easy to answer. However, as a first step, it is already an interesting question to classify polyominoes with only “one hole” such that their associated polyomino ideal is prime.
![polyomino with “one hole”[]{data-label="notprime"}](notprime){width="4cm"}
[10]{}
W. Bruns, J. Herzog, Cohen–Macaulay rings, Cambridge University Press, London, Cambridge, New York, (1993)
V. Ene, J. Herzog and T. Hibi, Linearly related polyominoes, to appear in J. Alg. Comb.
V. Ene, A. A. Qureshi and A. Rauf, Regularity of join-meet ideals of distributive lattices, Electron. J. Combin. [**20**]{} (3) (2013), \#P20.
J. Herzog, A. A. Qureshi and A. Shikama, Gröbner basis of balanced polyominoes, to appear in Math. Nachr.
J. Herzog and S. S. Madani, The coordinate ring of a simple polyomino, arXiv:1408.4275v1.
H. Ohsugi and T. Hibi, Koszul bipartite graphs, Adv. Appl. Math., [**22**]{}, 25–28, (1999).
H. Ohsugi and T. Hibi, Toric ideals generated by quadratic binomials, Journal of Algebra, [**218**]{}, 509–527, (1999).
A. A. Qureshi, Ideals generated by 2-minors, collections of cells and stack polyominoes, J. Algebra, [**357**]{}, 279–303, (2012).
B. Sturmfels, Gröbner Bases and Convex Polytopes, Amer. Math. Soc., Providence, RI, (1995)
|
---
abstract: 'Robust anomaly detection is a requirement for monitoring complex modern systems with applications such as cyber-security, fraud prevention, and maintenance. These systems generate multiple correlated time series that are highly seasonal and noisy. This paper presents a novel unsupervised deep learning architecture for multivariate time series anomaly detection, called Robust Seasonal Multivariate Generative Adversarial Network (RSM-GAN). It extends recent advancements in GANs with adoption of convolutional-LSTM layers and an attention mechanism to produce state-of-the-art performance. We conduct extensive experiments to demonstrate the strength of our architecture in adjusting for complex seasonality patterns and handling severe levels of training data contamination. We also propose a novel anomaly score assignment and causal inference framework. We compare RSM-GAN with existing classical and deep-learning based anomaly detection models, and the results show that our architecture is associated with the lowest false positive rate and improves precision by 30% and 16% in real-world and synthetic data, respectively. Furthermore, we report the superiority of RSM-GAN regarding accurate root cause identification and NAB scores in all data settings.'
author:
- |
Farzaneh khoshnevisan ^1^, Zhewen Fan ^2^\
^1^[North Carolina State University]{}\
^2^[Intuit Inc.]{}\
fkhoshn@ncsu.com, zhewen\_fan@intuit.com
bibliography:
- 'main.bib'
title: |
RSM-GAN: A Convolutional Recurrent GAN for Anomaly Detection in\
Contaminated Seasonal Multivariate Time Series
---
Introduction
============
Detecting anomalies in real-time data sources is becoming increasingly important thanks to the steady rise in the complexity of modern systems. Examples of these systems are an AWS Cloudwatch service that tracks metrics such as CPU Utilization and EC2 usage, or an enterprise data encryption process where multiple encryption keys coexist and are monitored. Anomaly detection (AD) applications include cyber-security, data quality maintenance, and fraud prevention. An effective AD algorithm needs to be accurate and timely to allow operators to take preventative and corrective measures before any catastrophic failure happens. Time series forecasting techniques such as Autoregressive Integrated Moving Average (ARIMA) [@arima] as well as Statistical Process Control (SPC) [@spc] were popular algorithms for such applications. However, a complex system often outputs multiple correlated information sources. These conventional AD techniques are not adequate to capture the inter-dependencies among multivariate time series (MTS) generated by the same system. As a result, many unsupervised density or distance-based models such as K-Nearest Neighbors [@knn] have been developed. However, these models usually ignore the temporal dependency and seasonality in time series. The importance of modeling temporal dependencies in time series has been well studied [@tsa], and failing to capture them results in model mis-specification and a high false positive rate (FPR) [@ts1]. Seasonality is hard to model due to its irregular and complex nature. Most algorithms such as [@arima] make a simplistic assumption that there exists only one seasonal component such as a weekly or monthly seasonality, while in real-world complex systems, multiple seasonal patterns can occur simultaneously. Not accurately accounting for seasonality in AD can also lead to false detection [@ts2].\
Recent advancements in computation have afforded rapid development in deep learning-based AD techniques. Auto-encoder based models coupled with Recurrent Neural Networks (RNN) are well suited for capturing temporal and spatial dependencies, and they detect anomalies by inspecting the reconstruction errors [@lstmed]. Generative Adversarial Networks (GANs) constitute another well-studied deep learning framework. The intuition behind using GANs for AD is to learn the data distribution, so that in the case of anomalies the generator fails to reconstruct the input and produces a large loss. GANs have enjoyed success in image AD [@ganomaly; @gan2; @gan3image], but have yet to be applied to the MTS structure. Despite such advancements, none of the previous deep learning AD models addressed the seasonality problem. Furthermore, most deep learning models rely on the assumption that the training data is normal with no contamination. However, real-world data generated by a complex system often contain noise or undetected anomalies (contamination). Lastly, the MTS anomaly detection task should not end at simply flagging the anomalous time points; a well-designed causal inference can help analysts narrow down the root cause(s) contributing to the irregularity, so that they can take more deliberate action.\
To address the aforementioned problems in MTS AD, we propose an unsupervised adversarial learning architecture fully adapted to MTS anomaly detection tasks, called Robust Seasonal Multivariate Generative Adversarial Networks (RSM-GAN). Motivated by [@ganomaly; @mscred], we first convert the raw MTS input into multi-channel correlation matrices with image-like structure, and employ convolutional and recurrent neural network (Convolutional-LSTM) layers to capture the spatial and temporal dependencies. Simultaneous training of an additional encoder addresses the issue of training data contamination. While training the GAN, we exploit the Wasserstein loss with gradient penalty [@wgan-gp] to achieve stable training; in our experiments it reduces the convergence time by half. Additionally, we propose a smoothed attention mechanism to model multiple seasonality patterns in MTS. In the testing phase, residual correlation matrices along with our proposed scoring and causal inference framework are utilized for real-time anomaly detection. We conduct extensive empirical studies on synthetic datasets as well as an encryption key dataset. The results show the superiority of RSM-GAN for timely and precise detection of anomalies over state-of-the-art baseline models.\
The contributions of our work can be summarized as follows: (1) we propose a convolutional recurrent Wasserstein GAN architecture (RSM-GAN), and extend the scope of GAN-based AD from image to MTS tasks; (2) we model seasonality as part of the RSM-GAN architecture through a novel smoothed attention mechanism; (3) we apply an additional encoder to handle contaminated training data; (4) we propose a scoring and causal inference framework to identify anomalies accurately and in a timely manner, and to pinpoint an unspecified number of root cause(s). The RSM-GAN framework enables system operators to react to abnormalities swiftly and in real time, while giving them critical information about the root causes and severity of the anomalies.
Related Work
============
MTS anomaly detection has long been an active research area because of its critical importance in monitoring high-risk tasks. There are $3$ main types of detection methods: 1) classical time series analysis (TSA) based methods; 2) classical machine learning based methods; and 3) deep learning based methods. The TSA-based models include Vector Autoregression (VAR) [@var], and latent state based models like Kalman Filters [@kalman]. These models are prone to mis-specification, and are sensitive to noisy training data. Classical machine learning methods can be further categorized into distance based methods such as the k-Nearest Neighbor (kNN) [@knn], classification based methods such as One-Class SVM [@one-svm], and ensemble methods such as Isolation Forest [@iforest]. These general-purpose AD methods account for neither the temporal dependencies nor the seasonality patterns that are ubiquitous in MTS; therefore, their performance is often lacking.
Deep learning models have garnered much attention in recent years, and two main types of algorithms have been used in the AD domain. One is autoencoder based [@autoencoder]. For example, [@autoencoder1] investigated the use of Gaussian classifiers with auto-encoders to model density distributions in multi-dimensional data. [@mscred] proposed a convolutional LSTM encoder-decoder structure to capture the temporal dependencies in time series, while assigning root causes for the anomalies. These models achieved better performance compared to the classical machine learning models. However, they do not model seasonality patterns, and they are built under the assumption that the training data contain no contamination. Furthermore, they did not fully explore the power of a discriminator and a generator, which have been shown to have superior performance in the computer vision domain. This leads to the other type of deep learning algorithm: generative adversarial networks (GANs). Several recent studies demonstrated that the use of GANs has great promise for detecting anomalies in images by mapping high-dimensional images to a low-dimensional latent space [@ganomaly; @gan3image; @gan4image]. However, these models make the unrealistic assumption that the training data is contamination free. A weak labeling step to verify this condition would make such algorithms not fully unsupervised. Further, applying GANs to data structures other than images is challenging and under-explored. To the best of our knowledge, we are among the first to extend the applications of GANs to MTS.
Methodology {#method}
===========
We define an MTS $X=(X_1^T,...,X_n^T)\in\mathbb{R}^{n\times T}$, where $n$ is the number of time series, and $T$ is the length of the historical training data. We aim to predict two AD outcomes: 1) the time points $t$ after $T$ that are associated with anomalies, and 2) the time series $x_i, i \in \{1,..,n\}$ causing the anomalies. In the following section, we first describe how we reconstruct the input MTS to be consumed by a convolutional GAN. Then we introduce the RSM-GAN framework by decomposing it into three components: the architecture, the inner structure, and the attention mechanism for seasonality adjustment. Finally, after the model is trained, we describe how we develop an anomaly scoring and causal inference procedure to identify anomalies and the root causes in the test data.
RSM-GAN Framework
-----------------
### MTS to Image Conversion
To extend GAN to MTS and to capture inter-correlation between multiple time series, we convert the MTS into an image-like structure through the construction of the so-called multi-channel correlation matrix (MCM), inspired by [@song2018deep; @mscred]. Specifically, we define multiple windows of different sizes $W=(w_1,...,w_c)$, and calculate the pairwise inner product (correlation) of time series within each window. For a specific time point $t$, we generate $c$ matrices (channels) of shape $n\times n$, where each element of matrix $S_t^w$ for a window of size $w$ is calculated by this formula: $$s_{ij}=\frac{\sum_{\delta=0}^{w}x_i^{t-\delta}\cdot x_j^{t-\delta}}{w}$$
In this work, we select windows $W=(5, 10, 30)$. This results in $3$ channels of $n\times n$ correlation matrices. To convert the whole span of the MTS into this shape, we use a step size $s_s=5$. Therefore, $X$ is transformed to $S=(S_1,...,S_M)\in\mathbb{R}^{M\times c \times n\times n}$, where $M=\lfloor\frac{T}{s_s}\rfloor$ is the number of steps, each represented by an MCM. Finally, to capture the temporal dependency between consecutive steps, we stack the $h=4$ previous steps onto the current step $t$ to prepare the input to the GAN-based model. Later, we extend the MCM to also capture seasonality unique to MTS.
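To make the construction concrete, the following NumPy sketch builds the MCM for one time step and for the whole series. It is illustrative only: the function names, the handling of the first incomplete windows, and the use of $w$ rather than $w+1$ terms per window are our own simplifying assumptions.

```python
import numpy as np

def build_mcm(X, t, windows=(5, 10, 30)):
    # X: (n, T) array of n time series; returns the (len(windows), n, n) MCM at step t.
    channels = []
    for w in windows:
        seg = X[:, t - w + 1:t + 1]        # last w observations up to time t
        channels.append(seg @ seg.T / w)   # pairwise inner products per channel
    return np.stack(channels)

def mts_to_mcm_sequence(X, windows=(5, 10, 30), step=5):
    # Slide over the series with step size s_s and stack the resulting MCMs.
    _, T = X.shape
    start = max(windows) - 1               # first step with complete windows
    return np.stack([build_mcm(X, t, windows) for t in range(start, T, step)])
```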
### RSM-GAN Architecture
The idea behind using a GAN to detect anomalies is intuitive. During training, a GAN learns the distribution of the input data. Then, if anomalies are present during testing, the networks fail to reconstruct the input and thus produce large losses. A GAN also exploits the power of the discriminator to optimize the network more efficiently towards the distribution of the input. However, in most GAN literature the training data is explicitly assumed to be normal with no contamination. [@encoder] have shown that simultaneous training of an encoder with a GAN improves the robustness of the model against contamination. To this end, we adopt an encoder-decoder-encoder structure [@ganomaly], with the additional encoder, to capture the training data distribution in both the original and the latent space. This improves the robustness of the model to training noise, because the joint encoder forces similar inputs to lie close to each other in the latent space as well. Specifically, in Figure \[fig:GAN\] the generator $G$ has an autoencoder structure in which the encoder ($G_E$) and decoder ($G_D$) interact to minimize the reconstruction or contextual loss: the $l_2$ distance between the input $x$ and the reconstructed input $x'$. Furthermore, an additional encoder $E$ is trained jointly with the generator to minimize the latent loss: the $l_2$ distance between the latent vector $z$ and the reconstructed latent vector $z'$. Finally, the discriminator $D$ is tasked with distinguishing between the original input $x$ and the generated input $G(x)$. Following recent improvements on GAN-based image AD [@gan2; @gan3image], we use a feature matching loss for the adversarial training. Feature matching exploits the internal representation of the input $x$ induced by an intermediate layer in $D$. Assuming that the function $f(\cdot)$ produces such a representation, the discriminator aims to maximize the distance between $f(x)$ and $f(x')$ to effectively distinguish between original and generated inputs. At the same time, the generator battles against the adversarial loss to confuse the discriminator. With multiple loss components and training objectives, we employ the Wasserstein GAN with gradient penalty (WGAN-GP) [@wgan-gp] to 1) enhance the training stability, and 2) converge faster and more optimally. The final objective functions for the generator and discriminator (critic) are as follows: $$L_D = \max_{w \in W} \mathbb{E}_{x\sim p_x}[f_w(x)] - \mathbb{E}_{x\sim p_x} [f_w(G(x))]$$
$$\begin{aligned}
L_G = \min_{G}\min_{E} \Big( & w_1\mathbb{E}_{x\sim p_x} \parallel x-G(x) \parallel_2 + \\
& w_2 \mathbb{E}_{x\sim p_x} \parallel G_E(x)-E(G(x)) \parallel_2 + \\
& w_3 \mathbb{E}_{x\sim p_x}[f_w(G(x))]\Big)
\end{aligned}$$
where $w_1$, $w_2$, and $w_3$ are weights controlling the effect of each loss on the total objective. We employ the Adam optimizer [@adam] to optimize the above losses.
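As a minimal illustration of how the loss terms above are combined, the sketch below evaluates the generator and critic objectives from precomputed network outputs. The default weights and the omission of the gradient-penalty term are our own simplifying assumptions, not the exact values or code used for the reported experiments.

```python
import numpy as np

def generator_loss(x, x_rec, z, z_rec, f_x_rec, w1=50.0, w2=1.0, w3=1.0):
    # x, x_rec: input MCM and its reconstruction G(x); z, z_rec: G_E(x) and E(G(x));
    # f_x_rec: discriminator feature map f(G(x)).  Weights w1-w3 are illustrative.
    contextual = np.linalg.norm(x - x_rec)      # l2 reconstruction (contextual) loss
    latent = np.linalg.norm(z - z_rec)          # l2 latent loss
    adversarial = np.mean(f_x_rec)              # feature-matching adversarial term
    return w1 * contextual + w2 * latent + w3 * adversarial

def critic_loss(f_x, f_x_rec):
    # Wasserstein critic objective; the gradient penalty of WGAN-GP is omitted here.
    return -(np.mean(f_x) - np.mean(f_x_rec))
```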
![GAN architecture with loss definitions[]{data-label="fig:GAN"}](Figures/GAN_architecture.png){width="0.88\columnwidth"}
In the next section, we describe how we design the internal structure of each network in RSM-GAN to capture the spatial as well as temporal dependencies in our input data.
### Internal Encoder and Decoder Structure
In addition to the convolutional layers in the encoders, we add recurrent layers to jointly capture the spatial patterns and temporality of our MCM input by using convolutional-LSTM (convLSTM) gates. We apply convLSTM to every convolutional layer due to its optimal mapping to the latent space [@mscred]. The convolutional decoder applies multiple deconvolutional layers in reverse order to reconstruct the MCM at the current time step. Starting from the last convLSTM output, the decoder applies a deconvolutional layer and concatenates the output with the convLSTM output of the previous step. The concatenated output is in turn the input to the next deconvolutional layer, and so on. Figure \[fig:ED\_internal\] illustrates the detailed inner structure of the encoder and decoder networks.
![Inner structure of convolutional recurrent encoders and convolutional decoder (with $n=10$) [@mscred][]{data-label="fig:ED_internal"}](Figures/ED_internal.png){width="50.00000%"}
The second encoder $E$ follows the same structure as the generator’s encoder $G_E$ to reconstruct the latent space $z'$. The input to the discriminator is the original or generated MCM of each time step. Therefore, the internal structure of the discriminator consists of three simple convolutional layers, with the last layer representing $f(\cdot)$.
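A minimal Keras sketch of such a convolutional-recurrent encoder is given below, assuming an input sequence of MCMs of shape (timesteps, n, n, channels). The filter counts, kernel sizes, and activations are illustrative choices and not the exact configuration used in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_encoder(n=10, channels=3, timesteps=5):
    # Each per-step convolution is followed by a ConvLSTM2D layer carrying temporal state.
    inp = tf.keras.Input(shape=(timesteps, n, n, channels))
    x = layers.TimeDistributed(
        layers.Conv2D(32, 3, padding="same", activation="selu"))(inp)
    x = layers.ConvLSTM2D(32, 3, padding="same", return_sequences=True)(x)
    x = layers.TimeDistributed(
        layers.Conv2D(64, 3, strides=2, padding="same", activation="selu"))(x)
    x = layers.ConvLSTM2D(64, 3, padding="same", return_sequences=False)(x)
    return tf.keras.Model(inp, x, name="G_E")
```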
### Seasonality Adjustment via Attention Mechanism
The construction of the initial MCM does not take seasonality into consideration. We propose to first stack previous seasonal data points onto the input data, and then let the convLSTM model temporal dependencies through an attention mechanism. Specifically, in addition to the $h$ previous immediate steps, we add $m_i$ previous seasonal steps per seasonal pattern $i$. To illustrate, assume the input has both daily and weekly seasonality. For a certain time $t$, we stack the MCMs of up to $m_1$ days ago at the same time, and up to $m_2$ weeks ago at the same time. Additionally, to account for the fact that seasonal patterns are often not exact, we smooth the seasonal steps by averaging over steps in a neighboring window of $30$ minutes.\
Moreover, even though the $h$ previous steps are closer to the current time step, the previous seasonal steps might be a better indicator for reconstructing the current step. Therefore, we further apply an attention mechanism to the convLSTM layers, and let the model decide the importance of all prior steps based on similarity rather than recency. Attention weights are calculated based on the similarity of the hidden state representations in the last layer, by the following formula: $$\mathcal{H'}_t = \sum_{i\in (t-N,t)} \alpha_i \mathcal{H}_i,
\alpha_i=\mathrm{softmax}\Big(\frac{Vec(\mathcal{H}_t)^T Vec(\mathcal{H}_i)}{\mathcal{X}}\Big)$$ where $N=h+\Sigma_i m_i$, $Vec(\cdot)$ denotes vectorization, and $\mathcal{X}=5$ is the rescaling factor. Figure \[fig:attention\] presents the structure of the described smoothed attention mechanism. Finally, to make our model even more adaptable to real-world datasets that often exhibit holiday effects, we multiply the attention weight $\alpha_i$ by a binary bit $b_i \in \{0,1\}$, where $b_i=0$ in case of holidays or other exceptional behavior in previous steps. This way, we eliminate the effect of undesired steps on the current step.
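A small NumPy sketch of this attention step, under the assumption that the hidden states are already available as a stacked array with the current step last, is:

```python
import numpy as np

def attention_combine(hidden_states, rescale=5.0, holiday_mask=None):
    # hidden_states: (N, ...) array of H_{t-N+1}, ..., H_t (current step last).
    N = hidden_states.shape[0]
    flat = hidden_states.reshape(N, -1)                # Vec(H_i)
    scores = flat @ flat[-1] / rescale                 # similarity to Vec(H_t)
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                               # softmax over prior steps
    if holiday_mask is not None:                       # binary bits b_i for holidays
        alpha = alpha * holiday_mask
    return np.tensordot(alpha, hidden_states, axes=1)  # H'_t = sum_i alpha_i H_i
```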
![Smoothed attention mechanism[]{data-label="fig:attention"}](Figures/attn.png){width="0.9\columnwidth"}
Testing Phase
-------------
### Anomaly Score Assignment
After training the RSM-GAN, an anomaly score is assigned based on the residual MCM of the first channel (context) and the latent vector of the RSM-GAN. We define broken tiles as the elements of the contextual or latent residual matrix that have an error value greater than $\theta_b$. [@mscred] defined a scoring method based on the number of broken tiles in the contextual or latent residual matrix (context$_{b}$ and latent$_{b}$). However, this method is more sensitive to severe anomalies, and lowering the threshold results in a high FPR. We propose a root cause-based counting procedure. Since each row/column in the contextual residual matrix is associated with a time series, the ones with the largest number of broken tiles contribute the most to the anomalies. Therefore, by defining a threshold $\theta_h \leq \theta_b$, we only count the number of broken tiles in rows/columns in which more than half of the tiles are broken with respect to $\theta_h$. We name our new scoring method context$_{h}$. The above thresholds take the form $\theta = \beta \times \eta_{.996}(E_{train})$, where $\eta_{.996}(E_{train})$ is the $99.6^{th}$ percentile of the error in the training residual matrices, and the best $\beta$ is found by grid search on the validation set.
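The sketch below gives one possible NumPy reading of the context$_h$ rule; the exact handling of rows versus columns and of tiles lying at their intersections is our own assumption.

```python
import numpy as np

def context_h_score(residual, theta_b, theta_h):
    # residual: (n, n) absolute reconstruction error of the contextual (first) channel.
    # theta_h <= theta_b selects rows/columns in which more than half the tiles are broken;
    # only broken tiles (error > theta_b) in those rows/columns are counted.
    n = residual.shape[0]
    broken = residual > theta_b
    dominant = (residual > theta_h).sum(axis=1) > n / 2
    return int(broken[dominant, :].sum() + broken[:, dominant].sum())
```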
### Root Cause Framework
Large errors associated with elements of rows/columns of the residual MCM are indicative of anomalous behavior in those time series. To identify the abnormal time series, we need a root-cause scoring system that assigns a score to each time series based on the severity of its associated errors. We present $3$ different methods: 1) the number of broken tiles (using the optimized $\theta$ from the previous step), 2) the weighted sum of broken tiles based on their absolute error, and 3) the sum of absolute errors. Furthermore, the number of root causes, $k$, is unknown in real-life applications. [@mscred] used an arbitrary number of 3. Here, we propose to use an elbow method [@elbow] to find the optimal number $k$ of time series from the root cause scores. In this approach, by sorting the scores and plotting the curve, we aim to find the point where the scores become very small and close to each other. Basically, for each point $n_i$ on the score curve, we find the point with maximum distance from the line that connects the first and last scores. Time series associated with scores greater than this point are identified as root causes.
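The elbow selection can be sketched as follows (NumPy); treating the distance to the line through the first and last sorted scores as a vertical offset, and including the elbow point itself, are simplifying assumptions.

```python
import numpy as np

def elbow_root_causes(scores):
    # scores: one root-cause score per time series; returns indices of the
    # series above the elbow of the sorted score curve.
    order = np.argsort(scores)[::-1]
    s = np.asarray(scores, dtype=float)[order]
    x = np.arange(len(s))
    line = s[0] + (s[-1] - s[0]) * x / (len(s) - 1)   # line from first to last score
    elbow = int(np.argmax(np.abs(s - line)))          # farthest point from that line
    return order[:elbow + 1]
```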
Experimental Setup
==================
Data
----
To evaluate different aspects of RSM-GAN, we conduct a comprehensive set of experiments by generating synthetic time series with multiple settings, along with a real-world encryption key dataset.
### Synthetic Data
To simulate data with different seasonality and contamination, we first generate sinusoidal-based waves of length $T$ and periodicity $F$: $$S(t, F) = \left\{
\begin{array}{ll}
\sin[(t-t_0)/F]+0.3\times\epsilon_t & s_{rand}=0 \\
\cos[(t-t_0)/F]+0.3\times\epsilon_t & s_{rand}=1
\end{array}
\right.$$ where $s_{rand}$ is 0 or 1, $t_0 \in [10,100]$ is a phase shift, and both are randomly selected for each time series. $\epsilon_t \sim \mathcal{N}(0,1)$ is the random noise, and $F \in [60,100]$ is the periodicity or seasonality. Ten time series with 2 months' worth of data at 1-minute sampling frequency are generated, i.e., $T=80,640$. Each time series with combined seasonality is generated by: $$S(t) = S(t, F_{rand})+ S(t, F_{day}) + S(t,F_{week})$$ where $F_{day} = \frac{2\pi}{60\times24}$ and $F_{week} = \frac{2\pi}{60\times24\times7}$. To simulate anomalies with varying duration and intensity, we shock time series with a random duration ($[5,60]$ minutes), direction, and number of root causes ($[2,6]$). Each experiment is conducted with different seasonality patterns and contamination settings.
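A NumPy sketch of this generator is given below. The frequency constants are written so that the daily and weekly components have periods of one day and one week at 1-minute sampling, and the anomaly-injection step is omitted; both are simplifications of the setup described above.

```python
import numpy as np

def seasonal_component(T, F, rng):
    # One sinusoidal component S(t, F) with random phase, random sin/cos shape and noise.
    t = np.arange(T)
    t0 = rng.integers(10, 101)
    wave = np.sin if rng.integers(2) == 0 else np.cos
    return wave((t - t0) / F) + 0.3 * rng.standard_normal(T)

def make_synthetic_mts(n=10, T=80_640, seed=0):
    # n series, two months at 1-minute resolution, with random/daily/weekly seasonality.
    rng = np.random.default_rng(seed)
    F_day = (60 * 24) / (2 * np.pi)          # period of one day (1440 minutes)
    F_week = (60 * 24 * 7) / (2 * np.pi)     # period of one week
    X = np.empty((n, T))
    for i in range(n):
        F_rand = rng.uniform(60, 100)
        X[i] = (seasonal_component(T, F_rand, rng)
                + seasonal_component(T, F_day, rng)
                + seasonal_component(T, F_week, rng))
    return X
```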
### Encryption Key Data
Our encryption-key dataset contains $7$ time series generated from a project’s encryption process. Each time series represents the number of requests for a specific encryption key per minute. The dataset contains $4$ months of data, or $T=156,465$. Four anomalies of various lengths and scales were identified in the test sequence by a security expert, and we randomly injected $5$ additional anomalies into both the train and test sequences.
Baseline Models
---------------
Three baseline models are used for comparison. Two are classical machine learning models, i.e., One-class SVM (OC-SVM) [@one-svm] and Isolation Forest [@iforest]. We also compare our model performance with that of MSCRED [@mscred] with the same input as ours. MSCRED is run for a sufficient number of epochs and its best performance is reported.
Evaluation Metrics
------------------
In addition to precision, recall, false positive rate, and F1 score, we include the **Numenta Anomaly Benchmark (NAB)** score [@numenta]. NAB is a standard open-source framework for evaluating real-time AD algorithms. NAB assigns a score to each positive detection based on its relative position to the anomaly window via a scaled *sigmoid* function (between -1 and 1). Specifically, it assigns a positive score to the earliest detection within the anomaly window (1 at the beginning of the window) and a negative score to detections after the window (false positives). Additionally, it assigns a negative score (-1) to missed anomalies. The NAB score is more comprehensive than standard metrics because it also rewards timely detection. Early detection is critical for high-stakes AD tasks such as cyber-attack monitoring. NAB also penalizes false positives more heavily as they get farther from the true anomaly window, due to the high cost of manual inspection of the system. In our experiments, the first half of each time series is used for training the model and the remainder for evaluation. RSM-GAN is implemented in TensorFlow and trained for 300 epochs, in batches of size $32$, on an AWS SageMaker instance with four $16$GB GPUs. All results on synthetic data are averaged over five runs.
Result and Discussion
=====================
Anomaly Score Assignment
------------------------
We first evaluate our new score assignment method context$_h$ against the $3$ other methods described before. Table \[tab:scores\] reports the performance of RSM-GAN on synthetic MTS with no contamination and no seasonality, using the different scoring methods. The reported threshold is the optimal threshold obtained by grid search. As we can see, our proposed context$_{h}$ method results in more precise predictions and has the highest NAB score. Specifically, context$_{h}$ improves the precision and FPR by $6.2\%$ and $0.08\%$ compared to the context$_{b}$ method. Scoring based on the latent residual loss results in the lowest performance. Also, combining the methods by calculating a weighted sum of context- and latent-based scores does not improve the performance. Further, this comparison remains the same for other, more complex synthetic settings. Thus, **context$_{h}$** will be the scoring method reported in the subsequent sections.
[width=0.96,center]{}
**Score** **Threshold** **Precision** **Recall** **F1** **FPR** **NAB Score**
--------------- --------------- --------------- ------------ ----------- ------------ ---------------
latent$_{b}$ 0.0099 0.648 0.819 0.723 0.0040 0.460
context$_{b}$ 0.0019 0.784 **0.958** 0.862 0.0023 0.813
context$_{h}$ 0.00026 **0.846** 0.916 **0.880** **0.0015** **0.859**
combined - 0.767 0.916 0.835 0.0025 0.721
: Model performance with different anomaly score assignment methods
\[tab:scores\]
Root Cause Identification Assessment
------------------------------------
RSM-GAN detects root causes using context-based residual matrix. In this section, we compare the results of MSCRED and RSM-GAN using $3$ root cause scoring methods. Root causes are identified based on the average of errors per time series in an anomaly window and by applying the aforementioned elbow-based identification method. Precision, recall and F1 scores are averaged over all detected anomalies.
[width=0.9,center]{}
**Model** **Scoring** **Precision** **Recall** **F1**
----------- ----------------------- --------------- ------------ ------------
MSCRED      Number of broken (NB) **0.5154** **0.7933** **0.6249**
Weighted broken (WB) 0.5071 0.6866 0.5834
Absolute error (AE) 0.5504 0.7066 0.6188
RSM-GAN     Number of broken (NB) 0.4960 0.8500 0.6264
Weighted broken (WB) **0.6883** **0.8666** **0.7672**
Absolute error (AE) **0.6883** **0.8666** **0.7672**
: Root cause identification performance with different root cause scoring methods
\[tab:rootcause\]
[width=0.73,center]{}
**Contamination** **Model** **Precision** **Recall** **F1** **FPR** **NAB Score** **Root Cause Recall**
------------------- ------------------ --------------- ------------ ------------ ------------ --------------- -----------------------
OC-SVM 0.1581 **1.0000** 0.2730 0.0473 -8.4370 -
Isolation Forest 0.0326 **1.0000** 0.0631 0.2640 -51.4998 -
MSCRED 0.8000 0.8450 0.8219 0.0018 0.7495 **0.7533**
RSM-GAN **0.8461** 0.9166 **0.8800** **0.0015** **0.8598** 0.6333
OC-SVM 0.2810 **1.0000** 0.4387 0.0218 -3.3411 -
Isolation Forest 0.3134 **1.0000** 0.4772 0.0187 -2.7199 -
MSCRED 0.6949 0.6029 0.6457 0.0023 0.2721 0.5483
RSM-GAN **0.8906** 0.7500 **0.8143** **0.0009** **0.8865** **0.7700**
OC-SVM 0.4611 **1.0000** 0.6311 0.0113 -1.2351 -
Isolation Forest 0.6311 **1.0000** 0.7739 0.0056 -0.1250 -
MSCRED 0.6548 0.7143 0.6832 0.0036 0.2712 0.6217
RSM-GAN **0.8553** 0.8442 **0.8497** **0.0014** **0.8511** **0.8083**
OC-SVM 0.5691 **1.0000** 0.7254 0.0102 -0.3365 -
Isolation Forest 0.8425 **1.0000** **0.9145** 0.0025 0.6667 -
MSCRED 0.5493 0.7290 0.6265 0.0080 0.0202 0.6611
RSM-GAN **0.8692** 0.8774 0.8732 **0.0018** **0.8872** **0.8133**
\[tab:contamination\]
The synthetic data used in this experiment has two combined seasonal patterns and ten anomalies in the training data. Table \[tab:rootcause\] shows the root cause identification performance of RSM-GAN and MSCRED. Overall, RSM-GAN outperforms MSCRED. As the results suggest, the NB method performs best for MSCRED, whereas for RSM-GAN the WB and AE methods lead to the best performance. Since the same result holds for other settings, we report NB for MSCRED and AE for RSM-GAN in subsequent sections.
Contamination Tolerance Assessment
----------------------------------
In this section, we assess the robustness of RSM-GAN to different levels of contamination in the training data, and the results are compared to the baseline models. In this experiment, the level of contamination starts with no contamination, and at each subsequent level we add $5$ more random anomalies with varying duration to the training data. The percentages presented in the Contamination column of Table \[tab:contamination\] show the proportions of anomalous time points in the train/test time spans.
The results in Table \[tab:contamination\] suggest that our proposed model outperforms all baseline models at all contamination levels for all metrics except recall. Note that the 100% recall of the classical baseline models comes at the expense of an FPR as high as $26.4\%$, especially at the less severe contamination levels. Furthermore, a comparison of the NAB scores shows that our model has more timely detections and that its false positives lie within a window of the true anomalies.
Lastly, as we can see, the MSCRED performance drops drastically as the contamination level increases. This is because the encoder-decoder structure of this model cannot handle high levels of contamination while training.
Seasonality Adjustment Assessment
---------------------------------
In this section, we assess the performance of our proposed attention mechanism for capturing seasonality in MTS. In many real-world AD applications, time series may contain single or multiple seasonal patterns (daily/weekly/monthly/etc.), along with the effects of special events such as holidays. The performance of RSM-GAN is assessed in different seasonality settings. In the first three experiments, synthetic MTS (2 months, sampled per minute) are generated with no training data contamination and no seasonality, and then daily and weekly seasonality patterns are added one by one. In the last experiment, we simulate $3$ years of hourly data, and add special patterns for the time steps related to US holidays in both the train and test sets. The test set of each experiment is contaminated with $10$ random anomalies.
Comparing the results in Table \[tab:seasonality\], RSM-GAN shows consistent performance thanks to the attention mechanism capturing the seasonality patterns. The performance of all other baseline models, especially MSCRED, deteriorated as the complexity of the seasonal patterns increased. Precision is the metric that drops most drastically for the baseline models as we add more seasonality. This is because they do not account for changes due to seasonality and identify them as anomalies, which also leads to a high FPR.
[width=0.73,center]{}
**Seasonality** **Model** **Precision** **Recall** **F1** **FPR** **NAB Score** **Root Cause Recall**
----------------- ------------------ --------------- ------------ ------------ ------------ --------------- -----------------------
OC-SVM 0.4579 0.9819 0.6245 0.0097 -8.6320 -
Isolation Forest 0.0325 **1.0000** 0.0630 0.2646 -51.606 -
MSCRED 0.8000 0.8451 0.8219 0.0019 0.7495 **0.7533**
RSM-GAN **0.8462** 0.9167 **0.8800** **0.0015** **0.8598** 0.6333
OC-SVM 0.1770 **1.0000** 0.3008 0.0532 -9.5465 -
Isolation Forest 0.1387 **1.0000** 0.2436 0.0710 -13.107 -
MSCRED 0.7347 0.7912 0.7619 0.0033 0.3775 **0.7467**
RSM-GAN **0.9012** 0.7935 **0.8439** **0.0010** **0.5175** 0.6717
OC-SVM 0.1883 **0.9487** 0.3142 0.0400 -6.9745 -
Isolation Forest 0.1783 **0.9487** 0.3002 0.0428 -7.5278 -
MSCRED 0.6548 0.7143 0.6832 0.0036 0.2712 **0.6217**
RSM-GAN **0.9000** 0.6750 **0.7714** **0.0008** **0.5461** 0.4650
OC-SVM 0.2361 **0.9444** 0.3778 0.0425 -1.7362 -
Isolation Forest 0.2783 0.8889 0.4238 0.0321 -1.0773 -
MSCRED 0.0860 0.7059 0.1534 0.0983 -5.1340 0.6067
RSM-GAN **0.6522** 0.8108 **0.7229** **0.0063** **0.5617** **0.8667**
\[tab:seasonality\]
![Performance comparison on synthetic data with weekly and monthly seasonality and holiday effect[]{data-label="fig:weekmo-holiday"}](Figures/weekmo_holiday.png){width="\columnwidth"}
In the last experiment in Table \[tab:seasonality\], the special patterns injected at holidays are wrongly flagged as anomalies by the baseline models, since no holiday adjustment is incorporated in these models. This results in low precision and a high FPR for those models. In RSM-GAN, multiplying the binary holiday vector with the attention weights enables it to account for the holidays, which leads to the best performance in almost all metrics. Figure \[fig:weekmo-holiday\] shows the ground truth anomaly labels (bottom), and the anomaly scores assigned by each model to each time step during testing. It is evident that our model accurately accounts for the holidays (18 Feb, 19 May, 4 Jul) and has a much lower FPR.
[width=0.7,center]{}
**Dataset** **Model** **Precision** **Recall** **F1** **FPR** **NAB Score** **Root Cause Recall**
------------- ------------------ --------------- ------------ ------------ ------------ --------------- -----------------------
Encryption key   OC-SVM 0.1532 0.2977 0.2023 0.0063 -17.4715 -
Isolation Forest 0.3861 **0.4649** 0.4219 0.0028 -6.9343 -
MSCRED 0.1963 0.2442 0.2176 0.0055 -1.1047 0.4709
RSM-GAN **0.6852** 0.4405 **0.5362** **0.0011** **0.2992** **0.5093**
Synthetic        OC-SVM 0.6772 0.9185 0.7772 0.0038 -2.7621 -
Isolation Forest 0.7293 **0.9610** 0.8221 0.0033 -2.2490 -
MSCRED 0.6228 0.7403 0.6746 0.0043 0.2753 0.6600
RSM-GAN **0.8884** 0.8438 **0.8649** **0.0010** **0.8986** **0.7870**
\[tab:realworld\]
![Anomaly score assignment of different algorithms. The bottom plot is the ground truth labels.[]{data-label="fig:final_plot"}](Figures/idps_final.png){width="\columnwidth"}
![Anomaly score assignment of different algorithms. The bottom plot is the ground truth labels.[]{data-label="fig:final_plot"}](Figures/synth_final.png){width="\columnwidth"}
Performance on Real-world dataset
---------------------------------
This section evaluates our model on a real-world encryption key dataset. A cursory look shows that this dataset is noisy and contains both daily and weekly seasonality. To be comprehensive, we also create a synthetic dataset with similar patterns, i.e., daily and weekly seasonality as well as medium contamination (10 anomalies) in the training set. From Table \[tab:realworld\], we make the following observations: 1) RSM-GAN consistently outperforms all the baseline models in terms of detection and root cause identification recall for both the encryption key and the synthetic dataset. 2) Not surprisingly, for all the models, performance on the synthetic data is better than on the encryption key data. This is due to the excessive irregularities and noise in the encryption key data, and errors arising from ground truth labeling by experts. 3) The plots in Figure \[fig:final\_plot\] illustrate the anomaly scores assigned to each time point in the test dataset by each algorithm. As we can see, even though Isolation Forest has the highest recall rate, it also detects many false positives not related to the actual anomaly windows, leading to negative NAB scores. As mentioned before, irrelevant false positives are costly in real-world applications. 4) By comparing our model to MSCRED in Figure \[fig:final\_real\] and Figure \[fig:final\_synth\], we can see that MSCRED not only has a much higher FPR, but also fails to capture some anomalies. We conjecture that this is because MSCRED’s encoder-decoder structure is not as robust to training data contamination, nor does it model the seasonality patterns.
Conclusion
==========
In this work, we presented the challenges in MTS anomaly detection and proposed a novel GAN-based MTS anomaly detection framework (RSM-GAN) to address them. RSM-GAN takes advantage of adversarial learning to accurately capture the temporal and spatial dependencies in the data, while simultaneously deploying an additional encoder to handle even severe levels of training data contamination. The novel attention mechanism in the recurrent layer of RSM-GAN enables the model to handle complex seasonal patterns often found in real-world data. Furthermore, training stability and optimal convergence of the GAN are attained through the use of a Wasserstein GAN with gradient penalty. We conducted extensive empirical studies, and the results show that our architecture, together with the new scoring and causal inference framework, leads to exceptional performance over state-of-the-art baseline models on both synthetic and real-world datasets.
---
abstract: 'In nearly antiferromagnetic (AF) metals such as high-$T_{\rm c}$ superconductors (HTSC’s), a single nonmagnetic impurity frequently causes a nontrivial, widespread change of the electronic states. To elucidate this long-standing issue, we study a Hubbard model with a strong onsite impurity potential based on an improved fluctuation-exchange (FLEX) approximation, which we call the $GV^I$-FLEX method. This model corresponds to an HTSC with dilute nonmagnetic impurity concentration. We find that (i) both local and staggered susceptibilities are strongly enhanced around the impurity. For this reason, (ii) the quasiparticle lifetime as well as the local density of states (DOS) are strongly suppressed in a wide area around the impurity (like a Swiss cheese hole), which causes the “huge residual resistivity” beyond the s-wave unitary scattering limit. We stress that the excess quasiparticle damping rate caused by impurities has a strong $\k$-dependence due to non-s-wave scatterings induced by many-body effects, so the structure of the “hot spot/cold spot” in the host system persists against impurity doping. This result could be examined by ARPES measurements. In addition, (iii) only a few percent of impurities can cause a “Kondo-like” upturn of the resistivity ($d\rho/dT<0$) at low $T$ when the system is very close to the AF quantum critical point (QCP). The results (i)-(iii) obtained in the present study, which cannot be derived by the simple FLEX approximation, naturally explain the main impurity effects in HTSC’s. We also discuss impurity effects in heavy fermion systems and organic superconductors.'
address: ' Department of Physics, Nagoya University, Furo-cho, Nagoya 464-8602, Japan. '
author:
- 'Hiroshi [Kontani]{} and Masanori [Ohno]{}'
title: |
Effect of Nonmagnetic Impurity in Nearly Antiferromagnetic Fermi Liquid:\
Magnetic Correlations and Transport Phenomena
---
Introduction {#sec:intro}
============
In strongly correlated electron systems, the presence of nonmagnetic impurities with low concentration can cause drastic changes in the electronic properties of the bulk system. Thus, the impurity effect is a useful probe to investigate the electronic states of the host system. In under-doped high-$T_{\rm c}$ superconductors (HTSC’s), nonmagnetic impurities (such as Zn) cause a huge residual resistivity beyond the s-wave unitary scattering limit. Moreover, NMR measurements reveal that both the local and the staggered spin susceptibilities are strongly enhanced around the impurity. Until now, a complete understanding of these impurity effects in HTSC’s has not been achieved in terms of the Fermi liquid theory. For this reason, nontrivial impurity effects in under-doped HTSC’s are frequently considered as evidence of the breakdown of the Fermi liquid state. However, similar impurity effects are observed in other strongly correlated metals such as heavy fermion (HF) systems and organic superconductors near their magnetic quantum critical points (QCP). Therefore, we have to extend previous theories of impurity effects to see whether these experimental results can be explained in terms of the Fermi liquid theory or not.
In HTSC’s without impurities, various physical quantities in the normal state deviate from the conventional Fermi liquid behaviors of usual metals; these are called non-Fermi liquid (NFL) behaviors. Famous examples of NFL behaviors are the Curie-Weiss-like behavior of $1/T_1T$ and the $T$-linear resistivity above the pseudo-gap temperature, $T^\ast \sim 200$K. One of the most prominent candidate explanations is the Fermi liquid state with strong antiferromagnetic (AF) fluctuations [@Yamada-rev; @Moriya; @Manske; @Pines; @Kontani-rev]. In fact, spin fluctuation theories like the SCR theory [@Moriya] and the fluctuation-exchange (FLEX) approximation [@Bickers; @Manske; @Monthoux-Scalapino; @Takimoto; @Koikegami; @Wermbter] have succeeded in explaining various NFL behaviors in a unified way. Recently, low-energy excitations in connection with HTSC’s were studied in ref. [@Manske-PRB]. These approximations satisfy the Mermin-Wagner theorem (see Appendix A). Moreover, pseudo-gap phenomena below $T^\ast$ are well reproduced by the FLEX+$T$-matrix approximation, where self-energy corrections due to strong superconducting (SC) fluctuations are taken into account [@Yamada-rev; @Dahm-T; @Kontani-rev].
One of the most remarkable NFL behaviors in HTSC’s would be the anomalous transport phenomena. For example, the Hall coefficient $R_{\rm H}$ is proportional to $T^{-1}$ above $T^\ast$, and $|R_{\rm H}|\gg 1/ne$ ($n$ being the electron filling number) at low $T$ [@Satoh]. Moreover, the magnetoresistance $\Delta\rho/\rho_0$ is proportional to $R_{\rm H}^2/\rho_0^2 \propto T^{-4}$, which is called the modified Kohler’s rule [@Kimura]. They were frequently cited as strong objections against a simple Fermi liquid picture, because analyses based on the relaxation time approximation (RTA) do not work. However, recent theoretical works have revealed that the current vertex corrections (CVC’s), which are dropped in the RTA, play important roles in HTSC’s. Due to the CVC’s, anomalous behaviors of $R_{\rm H}$, $\Delta\rho/\rho$, thermoelectric power and Nernst coefficient are naturally explained [*in a unified way, based on the Fermi liquid theory*]{} [@Kontani-rev; @Kontani-Hall; @Kontani-MR; @Kontani-S; @Kontani-N].
In the present paper, we study the effect of a single nonmagnetic impurity on a Fermi liquid with strong AF fluctuations. For that purpose, we develop a useful method of calculating the real-space structure of the self-energy and the susceptibility around a strong nonmagnetic impurity, on the basis of an improved FLEX approximation. When the AF fluctuations are strong, we find that both local and staggered spin susceptibilities are enhanced around the impurity. Moreover, the residual resistivity $\Delta\rho$ per impurity, which is determined from the nearly parallel shift of $\rho(T)$, can take a huge value beyond the s-wave unitary scattering limit. In addition, a “Kondo-like” insulating behavior ($d\rho/dT<0$) emerges in the close vicinity of the AF-QCP. These drastic impurity effects in nearly AF systems come from the fact that the electronic states are modified in a wide range around the impurity, whose radius is approximately the AF correlation length, $\xi_{\rm AF}$. The present study provides a unified understanding of various experimentally observed impurity effects in HTSC’s, [*as universal phenomena in nearly AF Fermi liquids*]{}. No exotic mechanism (breakdown of the Fermi liquid state) needs to be assumed to explain them.
We find that the simple FLEX approximation does not reproduce reliable electronic states around the impurity. For example, the spin susceptibility ${\hat \chi}^s$ is [*reduced*]{} around the impurity. This failure comes from the fact that the feedback effect on the vertex correction for ${\hat \chi}^s$ is neglected in the FLEX approximation. To overcome this difficulty, we propose a modified version of the FLEX approximation, which we call the $GV^I$-FLEX method.
In the present work, we want to calculate the impurity effect at sufficiently low temperatures, say 50K. For this purpose, we have to work on a large real-space cluster with an impurity site, which should be at least 64$\times$64 to obtain the correct bulk electronic state at low temperatures. The FLEX approximation for such a large cluster is, however, almost impossible to perform numerically. Here, we devise a method to calculate the single-impurity effect in a large cluster, using the fact that the range of the modification of electronic states due to the impurity is at most $5\sim6$ lattice spacings, except in heavily under-doped systems.
Previous Theoretical Studies
----------------------------
The effect of a nonmagnetic impurity embedded in the Hubbard model or in the $t$-$J$ model has been studied theoretically by various methods. The $t$-$J$ cluster model ($\sim20$ sites) with a single nonmagnetic impurity was studied using the exact diagonalization method [@Ziegler; @Poilblanc]. These studies found that the AF correlation is enhanced around the impurity. The same result is realized in two-dimensional Heisenberg models with a vacant site because quantum fluctuations are reduced around the impurity [@Bulut89; @Sandvik]. The same mechanism will account for the enhancement of the AF correlation around an impurity in the $t$-$J$ model. Moreover, in the $t$-$J$ model, an effective long-range impurity potential is induced due to the many-body effect. The authors argued that non-s-wave scattering from the effective long-range potential could give rise to a huge residual resistivity per impurity, beyond the s-wave unitary scattering limit. It is noteworthy that an extended Gutzwiller approximation was also applied to the impurity problem in the $t$-$J$ model [@Ogata].
Also, the Hubbard model with a single nonmagnetic impurity was studied using the random phase approximation (RPA) in refs. [@Bulut01; @Bulut00; @Ohashi]. By assuming an “extended impurity potential”, these works explained that the local susceptibility is enhanced in proportion to $\chi_{\rm Q}$ $[\propto T^{-1}]$ of the host, reflecting the lack of translational invariance. A similar analysis based on a phenomenological AF fluctuation model was performed in [@Prelovsek]. The staggered susceptibility is also enhanced due to the change of the local DOS (Friedel oscillation) [@Fujimoto]. However, the RPA could not explain the enhancement of local and staggered susceptibilities when a ($\delta$-functional) onsite impurity potential, which corresponds to Zn or Li substitution in HTSC’s, is assumed. Therefore, results given by the RPA are not universal in that they are very sensitive to the strength of the extended impurity potential.
In the present work based on the $GV^I$-FLEX approximation, we show that a $\delta$-functional onsite nonmagnetic impurity universally causes the enhancement of $\chi^s$. As a result, onsite impurities cause a huge residual resistivity due to the nonlocal, widespread change in the self-energy. We will see that the $GV^I$-method gives a unified understanding of the impurity problem in HTSC’s.
Experimental Results
--------------------
Here, we introduce several experimental results in HTSC’s and the related systems, which we focus on in the present work. In later sections, we will discuss the origin of these experimental facts, and show that they are qualitatively well explained in a unified way [*as the effect of nonmagnetic impurities or residual disorders*]{}, on the basis of the $GV^I$-method.
[**(a) Magnetic properties**]{}: In optimally or under-doped HTSC’s, a nonmagnetic impurity replacing a Cu site induces a localized moment. A Curie-like uniform spin susceptibility is induced by dilute doping of Zn in YBa$_2$Cu$_3$O$_{7-x}$ (YBCO). The Curie constant $C$ per Zn in under-doped compounds ($x\approx0.34$) is much larger than that in optimally doped ones ($x\approx0$) [@Alloul99-2]. The same work also reports an interesting relation $C\propto\Delta\rho$. A Curie-like susceptibility was also observed in Al-doped La$_{2-\delta}$Sr$_\delta$CuO$_4$ (LSCO) [@Ishida96]. In Zn-doped YBCO compounds, site-selective $^{89}$Y NMR measurements revealed that both the local spin susceptibility [@Alloul94; @Alloul00] and the staggered susceptibility [@Alloul00-2] are prominently enhanced around the Zn site, within the radius of the AF correlation length $\xi_{\rm AF}$. The same result was obtained by the $^7$Li Knight shift measurement in Li-doped YBCO compounds [@Alloul99], and by the $^{63}$Cu NMR measurement in Zn-doped YBCO compounds [@Jullien00]. These NMR studies show that the impurities do not trap holes, contrary to the suggestion in refs. [@Ziegler; @Poilblanc]. This fact means that the impurity-induced local moments result from the change of the magnetic properties of itinerant electrons around the impurity sites.
[**(b) Resistivity**]{}: Fukuzumi et al. observed $\rho(T)$ in Zn-doped YBCO and LSCO for Zn concentration $n_{\rm imp}=0.02\sim 0.04$ [@Uchida]. In over-doped systems, the residual resistivity $\Delta\rho(T)$ per impurity, which is determined from nearly parallel shift of $\rho(T)$ by impurities, is consistent with the value for 2D electron gas; $\rho_{\rm imp}^0=(4\hbar/e^2)n_{\rm imp}/n$. However, $\Delta\rho \sim (4\hbar/e^2)n_{\rm imp}/|1-n|
\gg\rho_{\rm imp}^0$ in under-doped systems. This fact suggests that the scattering cross section of Zn anomalously increases in under-doped HTSC’s, as if the effective radius of the impurity potential grows due to many-body effects. In addition, an upturn of $\rho(T)$ ($d\rho/dT<0$) is observed below 50K in under-doped compounds.
Such a prominent enhancement of the residual resistivity $\Delta\rho$ is also observed in HF compounds near the AF-QCP, which is realized under a critical pressure $P_{\rm c}$ [@Jaccard1; @Jaccard2]. Famous examples are CeCu$_5$Au ($P_{\rm c}\approx3.4$GPa) [@CeCuAu], CeRhIn$_5$ ($P_{\rm c}\approx2$GPa) and CeCoIn$_5$ ($P_{\rm c}\approx0$GPa) [@Ce115; @Ce115-2]. Note that the enhancement of $\Delta\rho$ has nothing to do with the increase of the renormalization factor $z$ ($\ll 1$) under applied pressure, because $\Delta\rho$ is independent of $z$ in the Fermi liquid theory. In addition, the residual resistivity of the organic superconductor $\kappa$-(BEDT-TTF)$_4$Hg$_{2.89}$Br$_8$ ($T_{\rm c}\approx4$K), which is close to the AF-QCP at ambient pressure, decreases to about 10% of its original value under applied pressure [@Taniguchi]. Such a drastic reduction of $\Delta\rho$ cannot be attributed to the change of the DOS by pressure.
Ando et al. have measured the resistivity in HTSC’s under high magnetic field ($\sim$60T), which totally suppresses the superconductivity [@Ando1; @Ando2]. In LSCO, they found that an insulating behavior emerges below the original $T_{\rm c}$ at ${\bf B}=0$. This upturn of $\rho$ occurs when $k_{\rm F}l \sim 13 \gg 1$ in the $ab$-plane ($l$ being the mean free path), so it has nothing to do with conventional localization in bad metals. Also, it should be independent of the opening of the pseudo-gap because the pseudo-gap temperature $T^\ast$ is much higher. Moreover, neither weak localization nor the Kondo effect due to magnetic impurities can be the origin of the upturn, because the (negative) magnetoresistance in the insulating region is independent of the field direction, and the insulating behavior persists under very high magnetic field.
Sekitani et al. also found a similar insulating behavior ($d\rho/dT<0$) in under-doped electron-doped systems, M$_{2-\delta}$Ce$_\delta$CuO$_4$ (M=Nd, Pr, and La), in the normal state under high magnetic field [@Sekitani]. They argued that the residual apical oxygens (about 1%), which work as impurity scattering potentials, give rise to the insulating behavior. They expected that the Kondo effect occurs. However, the ${\bf B}$-dependence of $\rho({\bf B},T)$ does not seem to be consistent with the Kondo effect. An upturn of $\rho_c$ along the $c$-axis is also observed in optimally doped Sm$_{2-\delta}$Ce$_\delta$CuO$_4$ ($\delta\approx0.14$) in ref. [@Shibauchi]; the upturn is robust against strong magnetic fields ($\sim45$T), both for ${\bf B}\parallel {\bf a}$ and ${\bf B}\parallel {\bf c}$. Recently, an upturn of $\rho$ in optimally doped PCCO has also been observed [@Greene-upturn].
[**(c) Local density of states**]{}:\
An STM measurement [@Pan] revealed that a single nonmagnetic impurity in optimally doped Bi$_2$Sr$_2$CaCu$_2$O$_{8+\delta}$ causes strong suppression of the superconducting state within a radius of $\sim15$Å. This “Swiss cheese structure” below $T_{\rm c}$ had been suggested by the $\mu$-SR measurement [@Uemura] as well as the specific heat measurement in Zn-doped LSCO [@Ido2]. Reference [@Ido2] reports that the radius of the Swiss cheese hole increases as the carrier doping decreases. The radius is approximately $\xi_{\rm AF}$, rather than the coherence length. This result suggests that the electronic properties in the Swiss cheese hole are strongly modified even above $T_{\rm c}$, because $\xi_{\rm AF}$ is a characteristic length scale in the normal state. Moreover, recent STM/STS measurements have revealed that the local density of states (DOS) in Bi$_2$Sr$_2$CaCu$_2$O$_{8+x}$ is very inhomogeneous on the atomic scale [@Davis; @Ido]. The observed non-uniformity originates from the weak scattering potentials of out-of-plane dopant atoms [@Davis]. These experimental observations suggest that the DOS and the electronic states in HTSC’s are quite sensitive to the disorder potential.
Formalism {#sec:formalism}
=========
In the present paper, we study a $(N\times N)$ square lattice Hubbard model with an impurity site: $$\begin{aligned}
H&=& H_0+H_{\rm imp} ,
\label{eqn:Ham}
\\
H_0&=& \sum_{\k\s}\e_\k c_{\k\s}^\dagger c_{\k\s}
+ U\sum_i n_{i \uparrow} n_{i \downarrow} ,
\\
H_{\rm imp}&=& I (n_{0 \uparrow} + n_{0 \downarrow}) ,
\label{eqn:Himp}\end{aligned}$$ where $H_0$ is the Hubbard model for the host system. In $H_{\rm imp}$, $I$ is the onsite nonmagnetic impurity potential at the origin (${\bf r}=0$). Because the translational invariance is violated in the case of $I\ne0$, the self-energy $\Sigma({\bf r},{\bf r}'; \e_n)$ and the Green function $G({\bf r},{\bf r}'; \e_n)$ cannot be functions of ${\bf r}-{\bf r}'$. In the present paper, we concentrate on the strong impurity potential case (unitary scattering case); $I=\infty$. We will take the $I=\infty$-limit in the course of the calculation.
We develop the method to study the impurity effect with strong potential on the basis of the FLEX approximation. When $I\ne0$, Green functions which compose the self-energy for $I=0$ get insertions of $I$’s in all possible manners. Then, the self-energy is divided into two terms: $$\begin{aligned}
\Sigma({\bf r}_i,{\bf r}_j;\e_n)=
\Sigma^{0}({\bf r}_i-{\bf r}_j;\e_n)+ \delta\Sigma({\bf r}_i,{\bf r}_j;\e_n),\end{aligned}$$ where $\Sigma^{0}$ is the self-energy for $I=0$, that is, the self-energy for the host system without impurity. $\delta\Sigma$ represents the cross terms between $I$ and $U$. Here, $\delta\Sigma$ does not contain any terms composed of only $I$’s. The full Green function for $H_0+H_{\rm imp}$ is composed of $\Sigma^0$, $\delta\Sigma$ and $I$. In the case of $I=\infty$, $\delta\Sigma({\bf r},0)=\delta\Sigma(0,{\bf r})=-\Sigma^0({\bf r},0)$ because $\Sigma({\bf r},0)=\Sigma(0,{\bf r})=0$. The impurity potential also causes a nonlocal change in the self-energy, that is, $\delta\Sigma({\bf r}_i,{\bf r}_j)$ is finite even for ${\bf r}_i, {\bf r}_j \ne 0$. $\delta\Sigma({\bf r}_i,{\bf r}_j)$ will quickly converge to zero as ${\bf r}_i$ or ${\bf r}_j$ moves away from the origin.
The Dyson equation for the real-space Green function in the matrix representation is given by $$\begin{aligned}
{\hat G}(\e_n)
&=& {\hat G^{00}}(\e_n)
+ {\hat G^{00}}(\e_n) ( {\hat \Sigma}(\e_n)+{\hat I} ) {\hat G}(\e_n)
\nonumber \\
&=& {\hat G^{0}}(\e_n)
+ {\hat G^{0}}(\e_n) (\delta{\hat \Sigma}(\e_n)+{\hat I} ) {\hat G}(\e_n) ,\end{aligned}$$ where $({\hat I})_{i,j}= I\delta_{i,0}\delta_{j,0}$ represents the impurity potential at the origin. $G^{00}({\bf r}_i-{\bf r}_j; \e_n)
=\frac1{N^2}\sum_\k (i\e_n+\mu-\e_\k)^{-1} e^{i\k \cdot ({\bf r}_i-{\bf r}_j)}$ is the non-interacting Green function, and ${\hat G^{0}}(\e_n)=([{\hat G^{00}}(\e_n)]^{-1}
-{\hat \Sigma^0}(\e_n))^{-1}$ is the interacting Green function without impurity ($I=0$).
The Dyson equation is also written as $$\begin{aligned}
{\hat G}(\e_n)
&=& {\hat G}^{I}(\e_n)
+ {\hat G}^{I}(\e_n) \delta{\hat \Sigma}(\e_n) {\hat G}(\e_n) ,
\label{eqn:Dyson-I} \\
{\hat G}^{I}(\e_n)
&=& {\hat G^{0}}(\e_n)+ {\hat G^{0}}(\e_n){\hat I}{\hat G}^{I}(\e_n) ,
\label{eqn:GI}\end{aligned}$$ where ${\hat G}^{I}$ represents the Green function composed of $I$ and $G^0$, that is, ${\hat G}^{I}={\hat G}|_{\delta\Sigma=0}$. Equations (\[eqn:Dyson-I\]) and (\[eqn:GI\]) are expressed in Fig.\[fig:Dyson\].
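Numerically, eq. (\[eqn:Dyson-I\]) is solved at each Matsubara frequency by a single matrix inversion in the site basis. A minimal NumPy sketch of this step, assuming ${\hat G}^I$ and $\delta{\hat \Sigma}$ are already available as dense matrices, is:

```python
import numpy as np

def full_green_function(G_I, delta_sigma):
    # Solve G = G^I + G^I dSigma G, i.e. G = (1 - G^I dSigma)^{-1} G^I,
    # for one Matsubara frequency; both inputs are (L, L) complex site-basis matrices.
    L = G_I.shape[0]
    return np.linalg.solve(np.eye(L) - G_I @ delta_sigma, G_I)
```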
In the present work, we calculate the self-energy for $I\ne0$ based on the improved FLEX approximation, without taking any average with respect to the position of the impurity. Hereafter, we propose three versions of the framework for calculating the self-energy, as follows.
[**(I) $GV^0$-method**]{} : First, we introduce the “$GV^0$-FLEX approximation”, where the full Green function $G$ is obtained self-consistently whereas any impurity effects on the effective interaction $V^0$ are neglected. Here, the self-energy is given by $$\begin{aligned}
\Sigma_{\rm [GV0]}({\bf r}_i,{\bf r}_j;\e_n)
&=& T\sum_l G_{\rm [GV0]}({\bf r}_i,{\bf r}_j;\w_l+\e_n)
\nonumber \\
& &\times V^0({\bf r}_i,{\bf r}_j;\w_l) ,
\label{eqn:self-GV0} \end{aligned}$$ where $\e_n= (2n+1)\pi T$ and $\w_l= 2l\cdot\pi T$, respectively. $G_{\rm [GV0]}$ and $\delta\Sigma_{\rm [GV0]} \equiv \Sigma_{\rm [GV0]}-\Sigma^0$ satisfy the Dyson equation, (\[eqn:Dyson-I\]). $V^0$ and $\Sigma^0$ are given by the FLEX approximation for the host system. Hereafter, we call eq. (\[eqn:self-GV0\]) the $GV^0$-method for simplicity, because the self-energy is symbolically written as $G\circ V^0$. Here, $V^0$ and $\Sigma^0$ are given by $$\begin{aligned}
{V}^0({\bf r}_i,{\bf r}_j;\w_l)
&=& \frac1{N^2}\sum_\q {V}^0(\q,\w_l)e^{i\q\cdot({\bf r}_i-{\bf r}_j)},
\\
V^0(\q,\w_l)
&=& U^2 \left( \frac32 {\chi}_\q^{0s}(\w_l)
+\frac12 {\chi}_\q^{0c}(\w_l)
- {\Pi}_\q^0(\w_l) \right) \mbox{,}
\nonumber \\ \\
\chi_\q^{0s(c)}(\w_l)
&=& {\Pi}_\q^0(\w_l) \cdot \left\{ {1} -(+)
U{\Pi}_\q^0(\w_l) \right\}^{-1} \mbox{,} \\
\Pi_\q^0(\w_l)
&=& -T\sum_{\k, n} G_{\q+\k}^0(\w_l+\e_n) G_\k^0(\e_n) \mbox{,}
\\
\Sigma_\k^0(\e_n)&=& T\sum_{\q,l}G_{\k+\q}^0(\e_n+\w_l)V^0(\q,\w_l) ,\end{aligned}$$ where $G_\k^0(\e_n)=(i\e_n+\mu-\e_k-\Sigma_\k^0(\e_n))^{-1}$. $\chi_\q^{0s}$ and $\chi_\q^{0c}$ are spin and charge susceptibilities, respectively, given by the FLEX approximation for the host system.
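For the host system, the susceptibilities and the effective interaction follow from the bubble $\Pi_\q^0(\w_l)$ by elementwise algebra on the momentum-frequency grid. A minimal NumPy sketch of this step, assuming $\Pi^0$ has already been computed, is:

```python
import numpy as np

def flex_effective_interaction(Pi0, U):
    # Pi0: complex array of the particle-hole bubble on a (q, omega_l) grid; U: onsite repulsion.
    chi_s = Pi0 / (1.0 - U * Pi0)     # spin susceptibility chi^{0s}
    chi_c = Pi0 / (1.0 + U * Pi0)     # charge susceptibility chi^{0c}
    return U**2 * (1.5 * chi_s + 0.5 * chi_c - Pi0)   # effective interaction V^0
```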
Using the $GV^0$-method, we can calculate the nonlocal change in the self-energy induced around the impurity, $\delta\Sigma$. However, the nonlocal change in the spin susceptibility is not taken into account in the $GV^0$-method. To calculate this effect, we introduce two other methods as follows.
[**(II) $GV$-method**]{} : Next, we explain the “$GV$-method”, which is equal to the FLEX approximation in real space. In this method, both the self-energy and the effective interaction are obtained fully self-consistently. The self-energy in the $GV$-method is given by $$\begin{aligned}
\Sigma_{\rm [GV]}({\bf r}_i,{\bf r}_j;\e_n)
&=& T\sum_l G_{\rm [GV]}({\bf r}_i,{\bf r}_j;\w_l+\e_n) ,
\nonumber \\
& &\times V({\bf r}_i,{\bf r}_j;\w_l)
\label{eqn:self-GV} \\
{\hat V}(\w_l)
&=& U^2\left( \frac32{\hat \chi}^s(\w_l)
+ \frac12{\hat \chi}^c(\w_l) -{\hat \Pi}(\w_l) \right) ,
\nonumber \\
\label{eqn:self-V}\end{aligned}$$ where $G_{\rm [GV]}$ and $\delta\Sigma_{\rm [GV]}=\Sigma_{\rm [GV]}-\Sigma^0$ satisfy Dyson equation (\[eqn:Dyson-I\]). The spin and charge susceptibilities in the $GV$-method, ${\hat \chi}^{s}$ and ${\hat \chi}^{c}$, are given by $$\begin{aligned}
{\hat \chi}^{s(c)}
&=& {\hat \Pi} \left( 1 -(+) U{\hat \Pi} \right)^{-1} ,
\label{eqn:chisc-GV}
\\
{\Pi}({\bf r}_i,{\bf r}_j;\w_l)
&=& -T\sum_{\e_n} G_{\rm [GV]}({\bf r}_i,{\bf r}_j;\e_n+\w_l)
\nonumber \\
& &\times G_{\rm [GV]}({\bf r}_j,{\bf r}_i;\e_n) .
\label{eqn:Pi-GV}\end{aligned}$$ In the $GV$-method, the impurity effect on $V$ is fully taken into account in terms of the FLEX approximation. However, we find that the numerical results given by the $GV$-method are totally inconsistent with experimental facts. This is because the vertex corrections (VC’s) for the spin susceptibility, which are dropped in the $GV$-method, become significant in strongly correlated systems. We will discuss this point in §\[sec:VC\].
[**(III) $GV^I$-method**]{}: To overcome the difficulty inherent in the $GV$-method, we propose the “$GV^I$-method”, where the Green function is obtained self-consistently whereas the impurity effect on the effective interaction is calculated in a partially self-consistent way. We will show that the $GV^I$-method is the most superior among (I)-(III). Here, the self-energy is given by $$\begin{aligned}
\Sigma_{\rm [GVI]}({\bf r}_i,{\bf r}_j;\e_n)
&=& T\sum_l G_{\rm [GVI]}({\bf r}_i,{\bf r}_j;\w_l+\e_n)
\nonumber \\
& &\times V^I({\bf r}_i,{\bf r}_j;\w_l) ,
\label{eqn:self-GVI} \\
{\hat V}^I(\w_l)
&=& U^2\left( \frac32{\hat \chi}^{Is}(\w_l)
+ \frac12{\hat \chi}^{Ic}(\w_l) -{\hat \Pi}^I(\w_l) \right) ,
\nonumber \\
\label{eqn:self-VI}\end{aligned}$$ where $G_{\rm [GVI]}$ and $\delta\Sigma_{\rm [GVI]}=\Sigma_{\rm [GVI]}-\Sigma^0$ satisfy Dyson equation (\[eqn:Dyson-I\]). The spin and charge susceptibilities in the $GV^I$-method, ${\hat \chi}^{Is}$ and ${\hat \chi}^{Ic}$, are given by $$\begin{aligned}
{\hat \chi}^{Is(c)}
&=& {\hat \Pi}^I \left( 1 -(+) U{\hat \Pi}^I \right)^{-1} ,
\label{eqn:chisc-GVI} \\
{\Pi}^I({\bf r}_i,{\bf r}_j;\w_l)
&=& -T\sum_{\e_n} G^I({\bf r}_i,{\bf r}_j;\e_n+\w_l)
\nonumber \\
& &\times G^I({\bf r}_j,{\bf r}_i;\e_n) .
\label{eqn:Pi-GVI}\end{aligned}$$ In ${\hat \chi}^{Is}$ and ${\hat \chi}^{Ic}$ of the $GV^I$-method, the self-energy correction given by the cross terms between $I$ and $U$ \[$\delta\Sigma$\] is not taken into account, whereas it is taken into account in the $GV$-method. Nonetheless, the results given by the $GV^I$-method are completely different from those given by the $GV$-method, and the former are in good agreement with experiments. For example, ${\hat \chi}^{Is}$ given in eq.(\[eqn:chisc-GVI\]) is strongly enhanced around the impurity, whereas ${\hat \chi}^{s}$ given in eq.(\[eqn:chisc-GV\]) is suppressed by $\delta\Sigma$. In §\[sec:VC\], we will show that the latter result is an artifact of the $GV$-method because the reduction of ${\hat \chi}^{s}$ due to $\delta\Sigma$ is overestimated. As a result, the $GV^I$-method is much superior to the $GV$-method.
Figure \[fig:Self\] shows the self-energies of the $GV$-, $GV^I$- and $GV^0$-methods diagrammatically. All three methods reduce to the FLEX approximation when $I=0$. In later sections, we solve the single impurity problem in the presence of the Coulomb interaction on the basis of these three methods.
Method of Numerical Calculation {#sec:method}
===============================
In this section, we study the two-dimensional Hubbard model with an impurity potential at the origin. We work on the $(N\times N)$-square lattice with periodic boundary conditions. $N$ should be large enough (at least 64) to reach the thermodynamic limit at low temperatures. On the other hand, the range of the nonlocal change in the electronic states due to the impurity is only a few lattice spacings. Taking this fact into account, we calculate $\delta\Sigma_{\a,\b}$ ($\a$ and $\b$ being the lattice points in real space) by the (improved) FLEX approximation only for $|\a|, |\b| \le M$ ($M\ll N$), and we put $\delta\Sigma_{\a,\b}=0$ for $|\a| > M$ or $|\b| > M$. We put $M=6\sim8$, which is enough to obtain reliable numerical results. Below, we explain how to reduce the working area to a $((2M+1)\times (2M+1))$-square lattice in solving the single impurity problem on the $(N\times N)$-square lattice. This technique helps us to avoid considerable numerical difficulty. Hereafter, we use $i,j,\cdots$ ($\a,\b,\cdots$) to represent the lattice points in region A+B (region A) in Fig.\[fig:region\].
According to eq.(\[eqn:GI\]), $G^I_{i,j}$ is given by $$\begin{aligned}
G^I_{0,j}&=& G^I_{j,0}= \frac{G^0_{0,j}}{1-I G^0_{0,0}},
\label{eqn:GI0} \\
G^I_{i,j}&=& G^0_{i,j} + I \frac{G^0_{i,0}G^0_{0,j}}{1-I G^0_{0,0}}
\ \ \ \mbox{for $i,j\ne0$},
\label{eqn:GIij}\end{aligned}$$ where $G^0_{i,j}(\e_n)=G^0({\bf r}_i,{\bf r}_j;\e_n)=
G^0({\bf r}_i-{\bf r}_j;\e_n)= \frac1{N^2} \sum_\k
G^0_\k(\e_n)\exp(i\k\cdot({\bf r}_i-{\bf r}_j))$; $G^0_\k(\e_n)$ is the Green function given by the FLEX approximation without the impurity potential. Therefore, $G^I_{i,j}$ in the limit of $I=\infty$ is $$\begin{aligned}
G^I_{0,j}&=& G^I_{j,0}= 0 ,
\label{eqn:GI0j-Iinf} \\
G^I_{i,j}&=& G^0_{i,j} - \frac{G^0_{i,0}G^0_{0,j}}{G^0_{0,0}}
\ \ \ \mbox{for $i,j\ne0$} .
\label{eqn:GIij-Iinf}\end{aligned}$$
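In practice, the construction of $G^I$ in the unitary limit is a rank-one update of the host Green function. A minimal sketch (in Python/NumPy, not part of the original formulation; `G0` is a hypothetical array holding $G^0_{i,j}(\e_n)$ at one Matsubara frequency, with index 0 assigned to the impurity site):

```python
import numpy as np

def g_impurity_unitary(G0):
    """Sketch: G^I_{i,j} in the I -> infinity limit from the host G^0_{i,j}.

    G0 : complex (L, L) array, G^0_{i,j}(eps_n) at one Matsubara frequency;
         row/column 0 corresponds to the impurity site r = (0, 0).
    """
    # eq. (GIij-Iinf): G^I = G^0 - G^0_{i,0} G^0_{0,j} / G^0_{0,0}
    GI = G0 - np.outer(G0[:, 0], G0[0, :]) / G0[0, 0]
    # eq. (GI0j-Iinf): the impurity row and column vanish
    GI[0, :] = 0.0
    GI[:, 0] = 0.0
    return GI
```

For $i,j\ne0$ the rank-one update is exactly eq. (\[eqn:GIij-Iinf\]); the explicit zeroing of the impurity row and column enforces eq. (\[eqn:GI0j-Iinf\]).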
According to eq.(\[eqn:Dyson-I\]), the Dyson equation for the full Green function $G_{\a,\b}$ inside of region A is given by $$\begin{aligned}
G_{\a,\b}&=& G^I_{\a,\b}+ \sum_{\b'}^{\rm A} A_{\a,\b'}G_{\b',\b} ,
\label{eqn:Dyson} \\
A_{\a,\b}&=& \sum_{\b'}^{\rm A} G^I_{\a,\b'}\delta\Sigma_{\b',\b} .\end{aligned}$$ Note that $\delta\Sigma_{\a,\b}$ is finite only when both $\a$ and $\b$ are inside of region A in the present approximation. By solving eq.(\[eqn:Dyson\]), we obtain $$\begin{aligned}
{\hat G}(\e_n)&=& \left( {\hat 1}-{\hat A}(\e_n) \right)^{-1}
{\hat G}^I(\e_n) ,\end{aligned}$$ which is a ($(2M+1)^2\times (2M+1)^2$)-matrix equation. In the limit of $I=\infty$, $G_{\a,\b}$ is given by $$\begin{aligned}
G_{\a,\b} &=&
\sum_{\delta\ne0}^{\rm A} \left( {\hat 1}-{\hat A} \right)_{\a\delta}^{-1}
\left( G^0_{\delta,\b}-\frac{G^0_{\delta,0}G^0_{0,\b}}{G^0_{0,0}}
\right)
\label{eqn:GIab-Iinf}\end{aligned}$$ Note that $G_{\a,0}=G_{0,\b}=0$. $A_{\a,\b}$ in eq.(\[eqn:GIab-Iinf\]) is given by $$\begin{aligned}
A_{\a,\b}&=& D_{\a\b}^{0}-\frac{G^0_{\a,0}}{G^0_{0,0}}D_{0,\b}^{0},
\\
D_{\a,\b}^{0}&=& \sum_{\b'}^{\rm A} G_{\a,\b'}^0\delta\Sigma_{\b',\b} .
\label{eqn:D0}\end{aligned}$$
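The reduction to region A therefore requires only one linear solve per Matsubara frequency. A schematic NumPy version of eq. (\[eqn:Dyson\]) (hypothetical inputs; `GI` and `dSigma` are $((2M+1)^2\times (2M+1)^2)$ matrices at a single frequency):

```python
import numpy as np

def dyson_region_A(GI, dSigma):
    """Sketch of eq. (Dyson): G = G^I + A G with A = G^I dSigma, inside region A."""
    A = GI @ dSigma                            # A_{a,b} = sum_{b'} G^I_{a,b'} dSigma_{b',b}
    L = GI.shape[0]
    return np.linalg.solve(np.eye(L) - A, GI)  # G = (1 - A)^{-1} G^I
```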
Next, we study the spin and charge susceptibilities in the presence of an impurity. In the FLEX approximation, the equation for the magnetic susceptibility in real space is $$\begin{aligned}
\chi_{i,j}^s = \Pi_{i,j}+ U \sum_{l}^{\rm A+B} \Pi_{i,l}\chi_{l,j}^s ,
\label{eqn:chis-org}\end{aligned}$$ where $i$, $j$ and $l$ represent the lattice points in region A+B. $\Pi_{i,j}$ is the irreducible susceptibility defined in eq.(\[eqn:Pi-GV\]) for the $GV$-method or in eq.(\[eqn:Pi-GVI\]) for the $GV^I$-method. This equation is not easy to solve because of its huge matrix size, $(N^2\times N^2)$. To overcome this difficulty, we reduce the above equation to a $((2M+1)^2\times (2M+1)^2)$-matrix equation (inside region A) by approximating $\Pi_{i,j}=\Pi_{i-j}^0$ when $i$ and/or $j$ are in region B. Here, $\Pi_{i-j}^0$ is the irreducible susceptibility for the host system: $\Pi_{i-j}^0(\w_l)= -T\sum_{\e_n}G_{i-j}^0(\e_n+\w_l)G_{j-i}^0(\e_n)$.
Here, $\chi_{\a,\b}^s$ inside region A is rewritten as $$\begin{aligned}
\chi_{\a,\b}^s &=&
\Pi_{\a,\b}+U\sum_{\gamma\ne0}^{\rm A}\Pi_{\a,\gamma}\chi_{\gamma,\b}^s
\nonumber \\
& &+ {\delta \chi}_{\a,\b}^s+U\sum_{\gamma\ne0}^{\rm A}
{\delta \chi}_{\a,\gamma}^s\chi_{\gamma,\b}^s,
\label{eqn:chis-I} \\
{\delta \chi}_{\a,\b}^s &=&
U\sum_l^{\rm B} \Pi_{\a,l}^0\Pi_{l,\b}^0
\nonumber \\
& &+U^2\sum_{l,m}^{\rm B} \Pi_{\a,l}^0\Pi_{l,m}^0\Pi_{m,\b}^0
+\cdots .
\label{eqn:chis-bar}\end{aligned}$$ The infinite series in eq.(\[eqn:chis-bar\]) can be summed up by using the following fact: in the case of $I=0$, eq.(\[eqn:chis-I\]) gives $\chi_{\a-\b}^{0s} \equiv\frac1{N^2}\sum_\k
\Pi_\k^{0}(1-U\Pi_\k^0)^{-1} \exp(i\k\cdot({\bf r}_\a-{\bf r}_\b))$. As a result, we obtain $$\begin{aligned}
\chi_{\a,\b}^{0s} &=&
\Pi_{\a,\b}^0+U\sum_{\gamma}^{\rm A}\Pi_{\a,\gamma}^0\chi_{\gamma,\b}^{0s}
\nonumber \\
& &+ {\delta \chi}_{\a,\b}^{s}+U\sum_{\gamma}^{\rm A}
{\delta \chi}_{\a,\gamma}^s\chi_{\gamma,\b}^{0s},
\label{eqn:chis0-I}\end{aligned}$$ where the summation over $\gamma$ includes the origin. By solving this $((2M+1)^2\times (2M+1)^2)$-matrix equation, ${\delta \chi}^{s}$ is obtained as $$\begin{aligned}
{\hat {\delta \chi}}^{s}
= \left( {\hat \chi}^{0s}-{\hat \Pi}^{0}
-U{\hat \Pi}^{0}{\hat \chi}^{0s} \right)
\left( 1+U{\hat \chi}^{0s} \right)^{-1} .
\label{eqn:chis-bar2}\end{aligned}$$ Using eq.(\[eqn:chis-bar2\]), the solution of eq.(\[eqn:chis-I\]) is given by $$\begin{aligned}
{\hat \chi}^s&=& \left( 1-U{\hat \Pi} -U{\hat {\delta \chi}}^{s*}
\right)^{-1} \left( {\hat \Pi}+{\hat {\delta \chi}}^{s*} \right) ,
\label{eqn:chis-I2} \\
{\delta \chi}_{\a,\b}^{s*}&\equiv&
{\delta \chi}_{\a,\b}^{s} (1-\delta_{\a\b,0}) ,
\label{eqn:deltachis_ast}\end{aligned}$$ where the factor $1-\delta_{\a\b,0}$ in eq. (\[eqn:deltachis\_ast\]) represents the elimination of $\gamma=0$ in the summation in eq. (\[eqn:chis-I\]). Numerical calculation of eq.(\[eqn:chis-I2\]) is easy because its matrix size $((2M+1)^2\times (2M+1)^2)$ is not large for $M=6\sim8$. In the numerical study, we have to check that all the eigenvalues of $1-U{\hat \Pi} -U{\hat {\delta \chi}}^{s*}$ in eq.(\[eqn:chis-I2\]) are positive, because a single impurity in a paramagnetic bulk system cannot induce a static magnetic order.
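Concretely, eqs. (\[eqn:chis-bar2\]) and (\[eqn:chis-I2\]) amount to a handful of matrix products and inversions per bosonic frequency. The following sketch uses hypothetical static ($\w_l=0$) input arrays; the masking of the impurity-site element of ${\delta\chi}^{s}$ reflects our reading of the factor $1-\delta_{\a\b,0}$ and should be adapted to the intended convention. It also includes the eigenvalue check mentioned above.

```python
import numpy as np

def chi_spin_region_A(chi0s, Pi0, Pi, U, origin):
    """Sketch of eqs. (chis-bar2) and (chis-I2) at a single (static) frequency.

    chi0s, Pi0 : host chi^{0s}_{a,b} and Pi^0_{a,b} restricted to region A
    Pi         : irreducible susceptibility with the impurity (GV or GV^I)
    origin     : matrix index of the impurity site r = (0, 0)
    """
    L = Pi.shape[0]
    one = np.eye(L)
    # eq. (chis-bar2): region-B contributions summed to infinite order
    dchi = (chi0s - Pi0 - U * Pi0 @ chi0s) @ np.linalg.inv(one + U * chi0s)
    # eq. (deltachis_ast): drop the impurity-site term (factor 1 - delta_{ab,0});
    # this masking convention is an assumption of the sketch
    dchi_star = dchi.copy()
    dchi_star[origin, origin] = 0.0
    # eq. (chis-I2), with the required stability check
    kernel = one - U * Pi - U * dchi_star
    if np.linalg.eigvals(kernel).real.min() <= 0.0:
        raise RuntimeError("impurity-induced magnetic instability: check parameters")
    return np.linalg.solve(kernel, Pi + dchi_star)
```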
In the same way, we derive the charge susceptibility, which is given by the following $(N^2\times N^2)$-matrix equation: $$\begin{aligned}
\chi_{i,j}^c = \Pi_{i,j}- U \sum_{l}^{\rm A+B} \Pi_{i,l}\chi_{l,j}^c .
\label{eqn:chic-org}\end{aligned}$$ Following the same procedure as in the derivation of eq.(\[eqn:chis-I2\]), $\chi_{\a,\b}^c$ inside region A is given by the following $((2M+1)^2\times (2M+1)^2)$-matrix equation: $$\begin{aligned}
{\hat \chi}^c&=& \left( 1+U{\hat \Pi} +U{\hat {\delta \chi}}^{c*}
\right)^{-1} \left( {\hat \Pi}+{\hat {\delta \chi}}^{c*} \right) .
\label{eqn:chic-I2} \end{aligned}$$ Here, ${\delta \chi}_{\a,\b}^{c*}$ is given by $$\begin{aligned}
{\delta \chi}_{\a,\b}^{c*}&\equiv&
{\delta \chi}_{\a,\b}^{c} (1-\delta_{\a\b,0}), \\
{\hat {\delta \chi}}^{c}
&=& \left( {\hat \chi}^{0c}-{\hat \Pi}^{0}
+U{\hat \Pi}^{0}{\hat \chi}^{0c} \right)
\left( 1-U{\hat \chi}^{0c} \right)^{-1} ,
\label{eqn:chic-bar2} \end{aligned}$$ where ${\chi}_{i-j}^{0c}=\frac1{N^2}\sum_\k
\Pi_\k^{0}(1+U\Pi_\k^0)^{-1} \exp(i\k\cdot({\bf r}_i-{\bf r}_j))$ is the charge susceptibility for the host system.
Finally, the spin and charge susceptibilities in the $GV^I$-method, ${\hat \chi}^{Is}$ and ${\hat \chi}^{Ic}$, are obtained as follows: $$\begin{aligned}
{\hat \chi}^{Is}&=& \left( 1-U{\hat \Pi}^I -U{\hat {\delta \chi}}^{s*}
\right)^{-1} \left( {\hat \Pi}^I+{\hat {\delta \chi}}^{s*} \right),
\label{eqn:chis-I3} \\
{\hat \chi}^{Ic}&=& \left( 1+U{\hat \Pi}^I +U{\hat {\delta \chi}}^{c*}
\right)^{-1} \left( {\hat \Pi}^I+{\hat {\delta \chi}}^{c*} \right) ,
\label{eqn:chic-I3} \end{aligned}$$ where ${\hat \Pi}^I$ is given in eq.(\[eqn:Pi-GVI\]). Note that we have to check that all the eigenvalues of $1-U{\hat \Pi}^I -U{\hat {\delta \chi}}^{s*}$ in eq.(\[eqn:chis-I3\]) are positive in the numerical study.
Numerical Results for Local DOS and Spin Susceptibilities {#sec:numerical}
=========================================================
In this section, we show several numerical results given by the $GV^0$, $GV^I$ and $GV$ methods. In each method, the self-energy is obtained self-consistently. We find that the $GV^I$-method gives the most reliable results, even though the fully self-consistent condition for the quasiparticle interaction is not imposed. In the $GV$-method, on the other hand, the reduction of $\chi^s$ due to $\delta\Sigma$ \[i.e., the nonlocal change in the self-energy given by the cross terms between $I$ and $U$\] is overestimated. In §\[sec:VC\], we will show that the reduction of $V$ due to $\delta\Sigma$ is almost recovered if one takes account of the VC due to the excess spin fluctuations induced by the impurity. For this reason, the $GV^I$-method is the most reliable formalism. In the present section, we mainly focus on the numerical results given by the $GV^I$-method.
In the present numerical study, the dispersion of the conduction electron is given by $$\begin{aligned}
\e_\k&=& 2t(\cos(k_x)+\cos(k_y))
+ 4t'\cos(k_x)\cos(k_y)
\nonumber \\
& & + 2t''(\cos(2k_x)+\cos(2k_y)),\end{aligned}$$ where $t$, $t'$, and $t''$ are the nearest, the next nearest, and the third nearest neighbor hopping integrals, respectively. In the present study, we use the following sets of parameters: (I) YBCO (hole-doped): $t=-1$, $t'=1/6$, $t''=-1/5$, $U=6\sim8$. (II) NCCO (electron-doped): $t=-1$, $t'=1/6$, $t''=-1/5$, $U=5.5$. (III) LSCO (hole-doped): $t=-1$, $t'=1/10$, $t''=-1/10$, $U=4\sim5$. These hopping parameters are equal to those used in ref.[@Kontani-Hall]. They were determined so as to qualitatively reproduce the Fermi surface (FS) given by ARPES measurements or LDA band calculations. The shapes of the FS’s for (I)-(III), all of which are hole-like, are shown in ref. [@Kontani-Hall]. Because $|t|\sim4000$K in real systems, $T=0.01$ corresponds to $\sim40$K. In the present numerical study, $64\times64$ $\k$-meshes and 1024 Matsubara frequencies are used.
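For orientation, the bare dispersion and the $\k$-mesh quoted above can be set up in a few lines (a sketch; the parameter values are the YBCO set listed in the text):

```python
import numpy as np

N = 64                                    # N x N k-mesh
t, tp, tpp = -1.0, 1.0 / 6.0, -1.0 / 5.0  # YBCO: t, t', t''
k = 2.0 * np.pi * np.arange(N) / N
kx, ky = np.meshgrid(k, k, indexing="ij")
eps_k = (2.0 * t * (np.cos(kx) + np.cos(ky))
         + 4.0 * tp * np.cos(kx) * np.cos(ky)
         + 2.0 * tpp * (np.cos(2.0 * kx) + np.cos(2.0 * ky)))
```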
The value of $U$ used in the present study is rather smaller than the bandwidth ($W_{\rm band}$), although the real Coulomb interaction is larger than $W_{\rm band}$. This is justified by considering that the $U$ used here is the [*effective*]{} Coulomb interaction $U_{\rm eff}$ between quasiparticles with low energies. In fact, $U_{\rm eff} \sim W_{\rm band}$ according to the Kanamori theory based on the two-particle approximation [@Kanamori].
In Ref.[@Kontani-Hall], the shape of the Fermi surface, the temperature and the momentum dependences of the spin susceptibility \[$\chi_\q^s(0)$\] and the quasiparticle damping rate \[Im$\Sigma_\q(-i\delta)$\] given by the FLEX approximation are explained in detail. The obtained results are well consistent with experiments. For example, the $\q$-dependence of $\chi_\q^s(0)$ shows that $\xi_{\rm AF}=2\sim3$a in YBCO (n=0.85) at $T=0.02$, which is consistent with neutron measurements.
Local Density of States {#sec:DOS}
-----------------------
Figure \[fig:DOS\] shows the density of states (DOS) for a hole-doped system (LSCO, $n=0.9$) at $T=0.02$, at the nearest-neighbor site (${\bf r}=(1,0)$) and the next-nearest-neighbor site (${\bf r}=(1,1)$) of the impurity. Here, “host” represents the DOS without the impurity given by the FLEX approximation, $N_{\rm host}(\w)\equiv \frac1{\pi N^2}\sum_\k {\rm Im}G_\k^0(\w-i\delta)$. On the other hand, $N_l^I(\w)\equiv \frac1{\pi}{\rm Im}G_{l,l}^I(\w-i\delta)$ is the local DOS at site $l$, where ${\hat G}^I$ is given in eq. (\[eqn:GIij-Iinf\]). In $N_l^I(\w)$, the effect of $\delta\Sigma$ induced around the impurity is dropped. At (1,0) \[at (1,1)\], $N_l^I(\w)$ is larger \[smaller\] than $N_{\rm host}(\w)$, which is recognized as the Friedel oscillation [@Fujimoto]. This result is changed only slightly by the $GV^0$ or $GV$ methods (not shown in Fig. \[fig:DOS\]). However, the DOS given by the $GV^I$-method is much smaller than $N_l^I(\w)$ in under-doped systems, because Im$\delta\Sigma$ takes a large value around the impurity. The Green function is given by $$\begin{aligned}
{\hat G}_{\rm [GVI]}&=&
\left([{\hat G}^I]^{-1}-{\hat \delta\Sigma}_{\rm [GVI]} \right)^{-1},\end{aligned}$$ which is easily obtained by eq.(\[eqn:GIab-Iinf\]) in the present numerical study. As shown in Fig. \[fig:DOS\], the local DOS given by the $GV^I$-method is strongly suppressed, especially at $(1,0)$. This suppression becomes more prominent for $U=5$. The reason for this suppression is the extremely short quasiparticle lifetime, which is caused by the huge local spin susceptibility ${\hat \chi}^{Is}$ around the impurity. We will discuss the impurity effect on the spin susceptibility in the next subsection.
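Schematically, once $\delta{\hat \Sigma}_{\rm [GVI]}$ has been continued to the real axis (e.g., by Pad\'e approximants, which we assume has already been done here), the local DOS follows from one linear solve per energy; using the Dyson form of eq. (\[eqn:Dyson\]) avoids inverting ${\hat G}^I$, whose impurity row and column vanish for $I=\infty$:

```python
import numpy as np

def local_dos(GI_w, dSigma_w):
    """Sketch: N_l(w) = (1/pi) Im G_{l,l}(w - i*delta) in the GV^I-method.

    GI_w, dSigma_w : complex (L, L) arrays at one real frequency
                     (already analytically continued).
    """
    L = GI_w.shape[0]
    G = np.linalg.solve(np.eye(L) - GI_w @ dSigma_w, GI_w)  # eq. (Dyson)
    return np.diag(G).imag / np.pi
```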
Figure \[fig:DOS2D\] shows the reduction of the local DOS at the Fermi level obtained by the $GV^I$-method, both for hole-doped and electron-doped systems at $T=0.02$. We see that the DOS is prominently suppressed in a wide region around the impurity, especially along the diagonal direction. The suppression of the DOS is also recognized along the $x$- and $y$-axes, and is caused by the enhanced quasiparticle damping, Im$\delta\Sigma(-i\delta)$, given by the $GV^I$-method. The strong suppression of the DOS around the impurity site is consistent with the “Swiss cheese structure” observed in the STM measurement [@Pan]. For LSCO, the radius of the Swiss cheese hole in Fig. \[fig:DOS2D\] is about 3a (a being the lattice spacing). It increases further for $U=5$, approximately in proportion to the AF correlation length $\xi_{\rm AF}$ $(\propto 1/\sqrt{T})$. Because the quasiparticle lifetime is extremely short in the Swiss cheese hole, a small number of impurities will induce a huge residual resistivity and a prominent reduction of $T_{\rm c}$ in under-doped systems. In §\[sec:rho\], we will calculate the residual resistivity and confirm this expectation.
Static Spin Susceptibilities {#sec:kaiS}
----------------------------
Figure \[fig:kaiS\] shows the obtained local spin susceptibility $\chi^s({\bf r},{\bf r})$ along the $(1,0)$ and $(1,1)$ directions at $T=0.02$. We see that $\chi^s({\bf r},{\bf r})$ is significantly enhanced around the impurity as $U$ increases, or as $n$ approaches unity. The radius of the area where $\chi^s$ is enhanced is about $3\sim 4$a, which would correspond to the Swiss cheese hole in the DOS. This result is consistent with the impurity effect in under-doped HTSC’s observed by NMR measurements. On the other hand, an opposite result is given by the $GV$-method, which is not reliable as discussed above. Note that we have checked that all the eigenvalues of $1-U{\hat \Pi} -U{\hat {\delta \chi}}^{s*}$ in eq.(\[eqn:chis-I2\]) are positive in the present numerical study.
Figure \[fig:kaiS-T\] represents the uniform susceptibility $\chi_{\rm uniform}^s= N^{-2}\sum_{i,j}\chi_{i,j}^{Is}(0)$ for LSCO ($n=0.9$, $U=5$) given by the $GV^I$-method. $n_{\rm imp}$ is the concentration of the nonmagnetic impurity. Here, each impurity is assumed to be independent. We see that the uniform susceptibility without impurities decreases slightly at lower temperatures, which corresponds to the “weak pseudo-gap behavior” in HTSC’s above the strong pseudo-gap temperature, $T^\ast \sim200$K. Surprisingly, Fig. \[fig:kaiS-T\] shows that a nonmagnetic impurity induces an approximately Curie-Weiss-like uniform susceptibility $\Delta\chi \approx n_{\rm imp}\cdot \mu_{\rm eff}^2/3(T+\Theta)$. For $U=5$, $\mu_{\rm eff} = 0.74\mu_{\rm B}$ and $\Theta=0$, which means that 42% of a magnetic moment of spin-$\frac12$ ($\mu_{\rm eff} = 1.73\mu_{\rm B}$) is induced by a single nonmagnetic impurity. A similar behavior is obtained for YBCO in the present study. The obtained induced moment is slightly smaller than the experimental value $\mu_{\rm eff} \sim 1\mu_{\rm B}$ in YBa$_2$Cu$_3$O$_{6.66}$ ($T_{\rm c}\approx60K$) [@Alloul99-2]. In the present calculation, a simple relation $\Delta\chi \propto \chi_Q^0$ predicted by previous theoretical studies [@Bulut01; @Bulut00; @Ohashi; @Prelovsek] approximately holds. The $\mu_{\rm eff}$ obtained in the present study is much larger than these earlier estimates and is consistent with experiments.
Figure \[fig:kaiS-AF\] shows the nonlocal spin susceptibility around the impurity given by the $GV^I$-method, $\chi^s({\bf r}, {\bf r}')$. This result means that staggered susceptibility given by the $GV^I$-method is strongly enhanced around the impurity. On the other hand, it is slightly suppressed in the $GV$-method. We consider that the former result is correct whereas the latter result is an artifact of the $GV$-method because of the lack of the vertex corrections. (see §\[sec:VC\].)
Here, we discuss the origin of the enhancement of $\chi^{Is}$: Reflecting the large $N_l^I(\w)$ at $l=(1,0)$, the absolute value of $\Pi_{i,j}^I(0)$ in the $GV^I$-method is strongly enhanced especially for $i=(1,0)$ and (a) $j=(1,0)$, (b) $j=(1,1)$, and (c) $j=(2,1)$. In the case of LSCO ($U=4$, $n=0.9$) at $T=0.02$, $\Pi_{i,j}^I(0)$ \[$\Pi_{i-j}^0(0)$\] becomes $0.172$ \[$0.160$\] for (a), $-0.160$ \[$-0.100$\] for (b), and $0.104$ \[$0.0784$\] for (c). This fact gives rise to the enhancement of ${\hat \chi}^{Is}={\hat \Pi}^I (1-U{\hat \Pi}^I)^{-1}$ around the impurity site, since the eigenvalues of $(1-U{\hat \Pi}^I)$ become smaller. In contrast, an “on-site” nonmagnetic impurity does not give the enhancement of the spin susceptibility in the RPA [@Bulut01; @Bulut00; @Ohashi]. The reason would be that the strong reduction of ${\hat \Pi}^0$ due to thermal and quantum fluctuations is well described in the FLEX approximation; the local suppression of these fluctuations by the impurity then gives rise to the enhancement of the susceptibilities.
In summary, we find that both local and staggered spin susceptibilities are increased around the impurity site, within a radius of about 3a $(\sim \xi_{\rm AF})$ at $T=0.02$. Similar results were obtained for the hole-doped systems (YBCO and LSCO) and for the electron-doped one (NCCO), and they are rather insensitive to the model parameters. Moreover, similar impurity effects were obtained in the $t$-$J$ model by exact diagonalization studies [@Ziegler; @Poilblanc]. As a result, the impurity effects obtained in this section should be universal in systems close to the AF-QCP.
Feedback Effect and Vertex Corrections {#sec:VC}
======================================
In the previous sections, we showed that the local spin susceptibility given by the $GV^I$-method, $\chi^{Is}$, is prominently enhanced around the impurity. However, $\chi^{Is}$ could be modified if we go beyond the $GV^I$-method because the induced susceptibility around the impurity, $\Delta\chi \equiv \chi^{Is}-\chi^{0s}$, changes the susceptibility itself, in the form of the self-energy correction ($\delta\Sigma$) and the vertex correction (VC). We call this self-interaction effect “the feedback effect”. In the feedback effect, the susceptibility is enhanced by the VC whereas it is reduced by $\delta\Sigma$. In the $GV$-method, where only the latter effect is taken into account, the local susceptibility becomes smaller than the host’s value. This inconsistent result suggests the importance of the VC’s. In the present section, we study the VC’s for the spin susceptibility, and show that the VC’s almost cancel the self-energy correction. We find that the total feedback effect is very small in the $GV$-method with VC’s ($GV$+VC-method): the obtained spin susceptibility is similar to that given by the $GV^I$-method. Therefore, we conclude that the $GV^I$-method is superior to the $GV$-method.
The irreducible susceptibilities given by the $GV^I$-method ($\Pi^{I,\s\s}$) and by the $GV$-method ($\Pi^{\s\s}$) are given in eqs.(\[eqn:Pi-GVI\]) and (\[eqn:Pi-GV\]), respectively. Hereafter, we discuss the feedback effect for the irreducible susceptibility perturbatively with respect to $\Delta {\hat V} = {\hat V}^I-{\hat V}^0$. In this respect, $\Pi^{\s\s}$ can be expanded with respect to $\Delta {\hat V}$ as
$$\begin{aligned}
\Pi^{\s\s}_{i,j}(0) &\approx& \Pi^{I,\s\s}_{i,j}
-2T\sum_{i',j',\e_m,\e_n} G_{j',j}^I(\e_n)G_{j,i}^I(\e_n)G_{i,i'}^I(\e_n)
\cdot \Delta V_{i',j'}(\e_n-\e_m)G_{i',j'}^I(\e_m) ,
\label{eqn:Pi-exp}\end{aligned}$$
up to the lowest order, which is expressed in Fig. \[fig:VC\]. We have checked numerically that eq. (\[eqn:Pi-exp\]) is satisfied well.
Next, we study the VC for the irreducible susceptibility up to second order with respect to $\Delta {\hat V}$. The lowest order term, which we customarily call the Maki-Thompson (MT) term, is given by $$\begin{aligned}
\Delta \Pi_{\rm MT}^{\uparrow\uparrow}(i,j)
&=& -\frac{U^2T^2}{2} \sum_{i',j';\e_m,\e_n}F(i,j,i',j';\e_m,\e_n)
\left(\Delta\chi_{i',j'}^s(\e_m-\e_n)
+\Delta\chi_{i',j'}^c(\e_m-\e_n) \right)
\label{eqn:Pi-MT},
\\
\Delta \Pi_{\rm MT}^{\uparrow\downarrow}(i,j)
&=& -\frac{U^2T^2}{2} \sum_{i',j';\e_m,\e_n}F(i,j,i',j';\e_m,\e_n)
\Delta\chi_{i',j'}^s(\e_m-\e_n)
\label{eqn:Pi-MT2},
\\
\Delta\chi_{i,j}^{s,c} &=& \chi_{i,j}^{Is,c}-\chi_{i,j}^{0s,c} ,
\\
F(i,j,i',j';\e_m,\e_n) &=&
G_{j',i}^I(\e_m)G_{i,i'}^I(\e_m)G_{i',j}^I(\e_n)G_{j,j'}^I(\e_n) ,\end{aligned}$$ which is shown in Fig. \[fig:VC\]. We also discuss the second-order term given as $$\begin{aligned}
\Delta \Pi_{\rm AL}^{\s\s'}(i,j)
&=& T \sum_{i_1,i_2,j_1,j_2;\w_l}
F'(i,i_1,i_2;\w_l)(F'(j,j_1,j_2;\w_l)+F'(j,j_1,j_2;-\w_l))
\nonumber \\
& &\times (X_{\s\s'}^I(i_1,i_2,j_1,j_2;\w_l) - X_{\s\s'}^0(i_1,i_2,j_1,j_2;\w_l))
\label{eqn:Pi-AL},
\\
F'(i,i_1,i_2;\w_l) &=& T\sum_{\e_n}
G_{i_2,i}^I(\e_n)G_{i,i_1}^I(\e_n)G_{i_1,i_2}^I(\e_n+\w_l) ,
\\
X_{\uparrow\uparrow}^\xi(i_1,i_2,j_1,j_2;\w_l)
&=& \frac{5U^4}{4}\chi_{i_1,j_1}^{\xi s}\chi_{i_2,j_2}^{\xi s}
+ \frac{U^3}{4} \chi_{i_1,j_1}^{\xi s}(U\chi_{i_2,j_2}^{\xi c}+4\delta_{i_2,j_2})
+\frac{U^3}{4}\chi_{i_2,j_2}^{\xi s}(U\chi_{i_1,j_1}^{\xi c}+4\delta_{i_1,j_1})
\nonumber \\
& & + \frac{U^4}{4} \chi_{i_1,j_1}^{\xi c}\chi_{i_2,j_2}^{\xi c} ,
\\
X_{\uparrow\downarrow}^\xi(i_1,i_2,j_1,j_2;\w_l)
&=& \frac{U^4}{4}(\chi_{i_1,j_1}^{\xi s}-\chi_{i_1,j_1}^{\xi c})
(\chi_{i_2,j_2}^{\xi s}-\chi_{i_2,j_2}^{\xi c})
+ \frac{U^3}{2}\delta_{i_1,j_1} (\chi_{i_2,j_2}^{\xi s}-\chi_{i_2,j_2}^{\xi c})
\nonumber \\
& &+ \frac{U^3}{2}\delta_{i_2,j_2} (\chi_{i_1,j_1}^{\xi s}-\chi_{i_1,j_1}^{\xi c}) ,\end{aligned}$$ where $\xi=0$ or $I$. We call eq.(\[eqn:Pi-AL\]) the Aslamazov-Larkin term.
In the FLEX approximation, the irreducible VC’s given by the Ward identity, $\Gamma^{\rm irr}=\delta\Sigma/\delta G$, are composed of the MT-term and the AL-term [@Bickers].
As a result, the irreducible susceptibility given by the “$GV$-method with VC’s up to the second-order with respect to $\Delta\chi^{s,c}$” ($GV$+VC method) is given by $$\begin{aligned}
\Pi^{\s\s'}(i,j)
&=& \Pi(i,j)\delta_{\s,\s'}
\nonumber \\
& &+ \Delta \Pi_{\rm MT}^{\s\s'}(i,j)+\Delta \Pi_{\rm AL}^{\s\s'}(i,j) .\end{aligned}$$ The spin susceptibility in the $GV$+VC method is obtained as $$\begin{aligned}
{\hat \chi}_{GV+{\rm VC}}^s &=& {\hat \chi}_{\uparrow\uparrow}+
{\hat \chi}_{\uparrow\downarrow} ,
\label{eqn:chi-GVVC}
\\
{\hat \chi}_{\uparrow\uparrow} &=&
{\hat \Pi}^{\uparrow\uparrow}
+{\hat \Pi}^{\uparrow\uparrow}U{\hat \chi}_{\uparrow\downarrow}
-{\hat \Pi}^{\uparrow\downarrow}U{\hat \chi}_{\uparrow\uparrow},
\label{eqn:chi-uu}
\\
{\hat \chi}_{\uparrow\downarrow} &=&
-{\hat \Pi}^{\uparrow\uparrow}
+{\hat \Pi}^{\uparrow\uparrow}U{\hat \chi}_{\uparrow\uparrow}
-{\hat \Pi}^{\uparrow\downarrow}U{\hat \chi}_{\uparrow\downarrow},
\label{eqn:chi-ud}\end{aligned}$$
In the numerical study of this section, we calculate $\Delta \Pi_{\rm MT,AL}(\a,\b)$ only inside region A in Fig. \[fig:region\], with $M=3$. Unfortunately, it is not easy to calculate all the elements of eqs. (\[eqn:Pi-MT\]), (\[eqn:Pi-MT2\]) and (\[eqn:Pi-AL\]) in region A because of the huge computation time. Therefore, we calculate $\Delta \Pi_{\rm MT,AL}(\a,\b)$ only for $|\a-\b|\le4$ in region A, and derive the static spin susceptibility ${\hat \chi}_{GV+{\rm VC}}^s$ by solving eqs.(\[eqn:chi-GVVC\])-(\[eqn:chi-ud\]) for region A+B.
Figure \[fig:kaiS-VC\] shows the local spin susceptibilities given by the $GV^I$-method \[${\hat \chi}^{Is}(i,i)$\], the $GV$-method \[${\hat \chi}^{s}(i,i)$\], and the $GV$+VC-method \[${\hat \chi}_{GV+{\rm VC}}^{s}(i,i)$\] for LSCO ($U=4$, $n=0.9$). At $i=(1,0)$, we see that ${\hat \chi}^{Is}(i,i)$ increases whereas ${\hat \chi}^{s}(i,i)$ decreases, as we have shown in Fig. \[fig:kaiS\]. Note that we have put $\Pi(i,j)= \Pi_I(i,j)= \Pi_{GV+{\rm VC}}(i,j)=0$ for $|i-j|>4$ in deriving Fig. \[fig:kaiS-VC\] to allow a fair comparison between the different methods. Because of this fact, both ${\hat \chi}^{Is}(i,i)$ and ${\hat \chi}^{s}(i,i)$ in Fig. \[fig:kaiS-VC\] are smaller than those in Fig. \[fig:kaiS\] for $U=4$.
As shown in Fig. \[fig:kaiS-VC\], ${\hat \chi}_{GV+{\rm VC}}^{s}(i,i)$ at $i=(1,0)$ is strongly enhanced due to the VC’s, to be comparable with ${\hat \chi}^{Is}(i,i)$. This enhancement is brought mainly by the MT-term, whereas the AL-term slightly reduces the spin susceptibility. Therefore, the suppression of ${\hat \chi}^{s}$ in the $GV$-method, which is caused by the non-local self-energy correction $\delta\Sigma$, is almost recovered by the VC’s. Therefore, the $GV^I$-method gives a reliable spin susceptibility around the impurity, because the total feedback effects almost cancel.
We comment that the superiority of the $GV^I$-method over the $GV$-method and the importance of the VC’s would remind us of the $GW$ approximation [@Holm1; @Holm2; @Godby]. It is a first-principles scheme for the self-energy, which is given by the convolution of the Green function $G$ and the screened interaction $W$ within the RPA. In the “fully self-consistent $GW$”, both $G$ and $W$ are obtained self-consistently. In the $GW_0$ scheme, on the other hand, the self-energy is given by $G$ and $W_0$, where $W_0$ is the screened interaction without self-energy correction. References [@Holm1; @Holm2; @Godby] show that the descriptions of the bandwidth reduction and the satellite structure in the quasiparticle spectrum are satisfactory in the $GW_0$ scheme, whereas the results given by the fully self-consistent $GW$ are much worse. This result clearly indicates the necessity of VC’s in the fully self-consistent $GW$ scheme.
Transport Phenomena {#sec:transport}
===================
In previous sections, we showed that the magnetic susceptibility is strongly enhanced around the impurity. This fact will cause the strong nonlocal change in the self-energy around the impurity, $\delta\Sigma$. In the present section, we will show that $\delta\Sigma$ gives rise to a huge residual resistivity at finite temperatures in nearly AF Fermi liquids, which is the most important finding in the present paper. Moreover, a small number of nonmagnetic impurities can cause a “Kondo-like” insulating behavior ($d\rho/dT<0$) at low temperatures, when the system is very close to AF-QCP. Different from a conventional single-channel Kondo effect, the residual resistivity can be much larger than the value for s-wave unitary scattering. These findings naturally explain various long-standing problems on the transport phenomena in HTSC’s, heavy fermion systems and organic superconductors.
$T$-matrix {#sec:Tmatrix}
----------
First, we derive the expression for the $t$-matrix, ${\hat t}(\e)$, which is defined by ${\hat G}= {\hat G}^0 + {\hat G}^0 {\hat t} {\hat G}^0$. The $t$-matrix due to the impurity potential $I$ at the origin ${\bf r}=(0,0)$ is given by $$\begin{aligned}
t_{\a,\b}&=& \sum_{n=1}^4 t_{\a,\b}^{(n)},
\label{eqn:t1234} \\
t_{\a,\b}^{(1)}&=&
(I+I^2 G_{0,0}) \delta_{\a,0}\delta_{\b,0},
\label{eqn:t1} \\
t_{\a,\b}^{(2)}&=&
I\sum_{\nu}^{\rm A} \left( G_{0,\nu}\delta\Sigma_{\nu,\b}\cdot \delta_{\a,0}
+ \delta\Sigma_{\a,\nu}G_{\nu,0}\cdot \delta_{\b,0} \right),
\label{eqn:t2} \\
t_{\a,\b}^{(3)}&=&
\sum_{\nu,\mu}^{\rm A} \delta\Sigma_{\a,\nu}G_{\nu,\mu}
\delta\Sigma_{\mu,\b},
\label{eqn:t3} \\
t_{\a,\b}^{(4)}&=& \delta\Sigma_{\a,\b} ,
\label{eqn:t4} \end{aligned}$$ where $t_{\a,\b}^{(1)} \sim t_{\a,\b}^{(4)}$ are schematically shown in Fig. \[fig:Tmatrix\]. $\a$ and $\b$ represent sites in region A.
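The four contributions of eqs. (\[eqn:t1\])-(\[eqn:t4\]) are again elementary matrix operations on region A. A schematic assembly for finite $I$ at one frequency (hypothetical inputs: `G` is the full $G_{\a,\b}$, `dSigma` is $\delta\Sigma_{\a,\b}$, and `origin` labels ${\bf r}=(0,0)$):

```python
import numpy as np

def t_matrix(G, dSigma, I, origin):
    """Sketch of t_{a,b} = t^(1) + t^(2) + t^(3) + t^(4), eqs. (t1)-(t4)."""
    L = G.shape[0]
    e0 = np.zeros(L)
    e0[origin] = 1.0                                          # delta_{a,0} as a vector
    t = np.zeros((L, L), dtype=complex)
    t += (I + I**2 * G[origin, origin]) * np.outer(e0, e0)    # eq. (t1)
    t += I * (np.outer(e0, G[origin, :] @ dSigma)             # eq. (t2), first term
              + np.outer(dSigma @ G[:, origin], e0))          # eq. (t2), second term
    t += dSigma @ G @ dSigma                                  # eq. (t3)
    t += dSigma                                               # eq. (t4)
    return t
```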
Here, we derive the expression for the $t$-matrix in the limit $I\rightarrow \infty$. First, the second term of eq. (\[eqn:t1\]) is rewritten as $$\begin{aligned}
I^2 G_{0,0}&=& I^2 G_{0,0}^I + I^2\sum_{\a,\b}^{\rm A}
G_{0,\a}^I \delta\Sigma_{\a,\b} G_{\b,0}^I
\nonumber \\
& &+ I^2 \sum_{\a,\b,\gamma,\delta}^{\rm A}
G_{0,\a}^I \delta\Sigma_{\a,\b} G_{\b,\gamma}
\delta\Sigma_{\gamma,\delta} G_{\delta,0}^I .\end{aligned}$$ According to eq. (\[eqn:GI0\]), $$\begin{aligned}
I^2 G_{0,0}^I &=& -I -\frac1{G_{0,0}^0} + O(I^{-1}),
\\
I G_{0,\a}^I &=& -\frac{G_{0,\a}^0}{G_{0,0}^0} + O(I^{-1}) .\end{aligned}$$ Thus, eq. (\[eqn:t1\]) in the limit $I\rightarrow \infty$ is given by $$\begin{aligned}
I+I^2 G_{0,0}&=& -\frac1{G_{0,0}^0}
+ \sum_{\a\b}^{\rm A} \frac{G_{0,\a}^0 G_{\b,0}^0}{(G_{0,0}^0)^2}
\nonumber \\
& &\times\left( \delta\Sigma_{\a,\b} + \sum_{\gamma\delta}^{\rm A}
\delta\Sigma_{\a,\gamma}G_{\gamma,\delta}
\delta\Sigma_{\delta,\b} \right) .
\label{eqn:t1-a}\end{aligned}$$
In the same way, eq. (\[eqn:t2\]) in the limit $I\rightarrow \infty$ is given by $$\begin{aligned}
& &-\sum_\delta^{\rm A} \frac{\delta\Sigma_{\a,\delta} }{G_{0,0}^0}
\left( G_{\delta,0}^0
+\sum_{\xi\eta}^{\rm A} G_{\delta,\xi}\delta\Sigma_{\xi,\eta}
G_{\eta,0}^0 \right) \delta_{0,\b}
\nonumber \\
& &\ \ \ \
+ \langle \a \leftrightarrow \b \rangle .\end{aligned}$$ Note that both eqs. (\[eqn:t3\]) and (\[eqn:t4\]) contain $I$ only through $\delta\Sigma$, which does not diverge even when $I=\infty$.
To study the effect of the impurity on the transport phenomena, we take the average of the t-matrix with respect to the position of the impurity. The obtained result is $$\begin{aligned}
T_l &\equiv& \sum_\a^{\rm A} t_{l+\a,\a}
\nonumber \\
&=& -\frac1{G_{0,0}^0}\left(
1- \frac1{G_{0,0}^0}\sum_{\a\b}^{\rm A} D_{0,\a}^{0}
(\delta_{\a,\b}+D_{\a,\b}) G_{\b,0}^0 \right) \delta_{l,0}
\nonumber \\
\label{eqn:T1} \\
& &- \frac2{G_{0,0}^0}\sum_{\a}^{\rm A}
D_{0,\a}^{0} \left( \delta_{\a,l} + D_{\a,l} \right)
\label{eqn:T2} \\
& &+ \sum_{\a\b\gamma}^{\rm A} \delta\Sigma_{l+\a,\b}G_{\b,\gamma}
\delta\Sigma_{\gamma,\a}
\label{eqn:T3} \\
& &+ \sum_{\a}^{\rm A} \delta\Sigma_{l+\a,\a} ,
\label{eqn:T4}\end{aligned}$$ where eqs. (\[eqn:T1\]), (\[eqn:T2\]),(\[eqn:T3\]) and (\[eqn:T4\]) come from eqs. (\[eqn:t1\]), (\[eqn:t2\]), (\[eqn:t3\]) and (\[eqn:t4\]), respectively. $D_{\a,\b}^{0}$ is given in eq. (\[eqn:D0\]). Similarly, $D_{\a,\b}$ is defined as $$\begin{aligned}
D_{\a,\b}&=& \sum_\gamma^{\rm A} G_{\a,\gamma}\delta\Sigma_{\gamma,\b}
= \sum_\gamma^{\rm A} \delta\Sigma_{\b,\gamma} G_{\gamma,\a} .
\label{eqn:D}\end{aligned}$$ Note that $D_{\a,0}=D_{0,\a}=0$ and $D_{\a,0}^{0}=0$, whereas $D_{0,\a}^{0}\ne 0$. After the analytic continuation of $T_l(\e_n)$, the quasiparticle damping rate (without the renormalization factor) due to the impurity is given by [@Eliashberg; @Langer] $$\begin{aligned}
\gamma_\k^{\rm imp}(\e) &=& \frac{n_{\rm imp}}{N^2}
\sum_l {\rm Im}T_l(\e-i\delta)e^{i\k \cdot {\bf r}_l} ,
\label{eqn:gammak}\end{aligned}$$ where $n_{\rm imp}$ is the density of impurities.
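Equation (\[eqn:gammak\]) is a discrete Fourier transform of ${\rm Im}\,T_l$ over the relative coordinate ${\bf r}_l$. A minimal sketch (with a hypothetical dictionary `T_l` mapping lattice vectors $(l_x,l_y)$ of region A to the site-averaged $t$-matrix at $\e-i\delta$; inversion symmetry ${\rm Im}\,T_{-l}={\rm Im}\,T_{l}$ is assumed, so only the cosine terms survive):

```python
import numpy as np

def gamma_imp(T_l, kx, ky, n_imp, N):
    """Sketch of eq. (gammak): gamma_k^imp = (n_imp/N^2) sum_l Im T_l exp(i k . r_l)."""
    gam = np.zeros_like(np.asarray(kx, dtype=float))
    for (lx, ly), T in T_l.items():
        gam += T.imag * np.cos(kx * lx + ky * ly)  # sine terms cancel by inversion symmetry
    return n_imp * gam / N**2
```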
Figure \[fig:Gamma-comp\] shows $\gamma_\k^{\rm imp}(0)$ along the Fermi surface for LSCO at $T=0.02$, obtained by the $GV^0$, $GV$ and $GV^I$-methods. Here, we put $n_{\rm imp}=1$. $\gamma_\k^{\rm host}={\rm Im}\Sigma_\k^0(-i\delta)$ is the damping rate in the host system given by the FLEX approximation. Filled squares and triangles represent $$\begin{aligned}
\gamma_{\rm imp}^0(\e)\equiv
-{\rm Im}\frac{n_{\rm imp}}{G_0^0(\e-i\delta)} ,
\label{eqn:gamma0}\end{aligned}$$ for $n_{\rm imp}=1$, where $G_0^0(\e)=\frac1{N^2}\sum_\k G_\k^0(\e)$ is the local Green function of the host. Equation (\[eqn:gamma0\]) is a well-known expression for the quasiparticle damping rate due to s-wave unitary impurities. In fact, $\gamma_{\rm imp}^0$ is derived from eq. (\[eqn:gammak\]) by putting $\delta\Sigma_{\a,\b}=-\Sigma_{\a-\b}^0
\cdot\delta_{\a\b,0}$ in eqs.(\[eqn:T1\])-(\[eqn:T4\]). When the particle-hole symmetry is approximately satisfied around the Fermi level, eq.(\[eqn:gamma0\]) becomes $\gamma_{\rm imp}^0(\e)= n_{\rm imp}/\pi N_{\rm host}(\e)$, where $N_{\rm host}(\e)={\rm Im}G_0^0(\e-i\delta)/\pi$ is the DOS of the host system.
In each method ($GV$, $GV^I$ and $GV^0$), $\gamma_\k^{\rm imp}$ has a strong $\k$-dependence similar to $\gamma_\k^{\rm host}$, as shown in Fig. \[fig:Gamma-comp\]. As a result, the structure of the “hot spot” and the “cold spot”, which are located around $(\pi/2,\pi/2)$ and $(\pi,0)$ respectively, is not smeared out by strong non-magnetic impurities. This highly nontrivial result is brought by the $\k$-dependence of $\delta\Sigma_\k$. This finding strongly suggests that the enhancement of the Hall coefficient near the AF-QCP, which is brought by the strong back-flow (current vertex correction) around the cold spot [@Kontani-Hall; @Kontani-rev], does not decrease due to the strong impurities. In fact, the Hall coefficient for under-doped YBCO slightly increases upon doping of non-magnetic impurities [@Ong].
On the other hand, in the case of weak impurities where the Born approximation is reliable, $\gamma_\k^{\rm imp}$ should be almost $\k$-independent. Thus, the structure of the “hot/cold spots” will be smeared out by (large numbers of) weak non-magnetic impurities. By this reason, the enlarged Hall coefficient near the AF-QCP, which is brought by the back-flow around the cold spot, is reduced by weak non-magnetic impurities [@future]. This theoretical result would be able to explain the reduction of $R_{\rm H}$ in CeCoIn$_5$ at very low temperatures [@Ce115; @Ce115-2].
Another important finding is that $\gamma_\k^{\rm imp}$ given by the $GV^I$-method becomes larger than $\gamma_{\rm imp}^0$, due to the non-local scattering (non s-wave scattering) given by $\delta\Sigma$. Figure \[fig:Gamma\] shows $\gamma_\k^{\rm imp}(0)$ for $n_{\rm imp}=1$ given by the $GV^I$-method, at $T=0.02$ for LSCO, YBCO and NCCO. In both hole and electron-doped systems, the hot/cold spot structure in the host system remains even in the presence of impurities. The absolute value of $\gamma_\k^{\rm imp}(0)$ increases drastically when the system is close to the AF-QCP, as shown in Fig. \[fig:Gamma\]. This finding explains the huge residual resistivity in metals near the AF-QCP, which has been a long-standing problem in strongly correlated electron systems. In contrast, $\gamma_\k^{\rm imp}$ given by the $GV^0$-method is comparable to $\gamma_{\rm imp}^0$, because the enhancement of ${\hat \chi}^s$ around the impurity is not taken into account. Also, $\gamma_\k^{\rm imp}$ by the $GV$-method is much smaller than $\gamma_{\rm imp}^0$. This result should be an artifact of the $GV$-method, as discussed in §\[sec:VC\].
Here, we examine the origin of the enhancement of $\gamma_\k^{\rm imp}$ in more detail. Figure \[fig:Gamma-1234\] shows $\gamma_\k^{\rm imp(l)}$ ($l=1\sim4$) for LSCO and YBCO, which represent the contributions from eqs.(\[eqn:T1\])-(\[eqn:T4\]), respectively. They are also shown in Fig. \[fig:Tmatrix\]. Note that $\gamma_\k^{\rm imp}= \sum_{l=1}^4
\gamma_\k^{\rm imp(l)}$. In both YBCO and LSCO, $\gamma_\k^{\rm imp(2)}$ and $\gamma_\k^{\rm imp(4)}$ give main contributions: The latter is dominant for YBCO, whereas $\gamma_\k^{\rm imp(2)}$ is comparable to $\gamma_\k^{\rm imp(4)}$ around $\k=(\pi/2,\pi/2)$ for LSCO. In both systems, $\gamma_\k^{\rm imp(4)}$ grows drastically below $T=0.02$ as $U$ is increased. In the same way, $\gamma_\k^{\rm imp(4)}$ takes a large value for NCCO at lower temperatures. This enhancement of $\gamma_\k^{\rm imp(4)}$ at lower temperatures gives rise to the insulating behavior of the resistivity, as we will show later.
Resistivity {#sec:rho}
-----------
Here, we calculate the resistivity in nearly AF metals in the presence of strong impurities. Hereafter, we take account of the impurity effect only up to $O(n_{\rm imp})$. In other words, we neglect the interference effect (e.g., the weak localization effect) which is given by the higher order terms with respect to $n_{\rm imp}$. In fact, many anomalous impurity effects of HTSC’s in which we are interested are of the order of $O(n_{\rm imp})$. For example, the residual resistivity in HTSC’s is proportional to the impurity concentration for $n_{\rm imp}\lesssim4$% [@Uchida]. Surprisingly, the residual resistivity per impurity increases drastically as the system approaches half-filling. The relation $\Delta\rho \sim (4\hbar/e^2)n_{\rm imp}/\delta$ holds in the under-doped region, which is $n/\delta$ times larger than the residual resistivity in a 2D electron gas. ($\delta=|1-n|$ is the carrier doping concentration.) Hereafter, we will explain this experimental fact based on the idea that the effective cross section of an impurity is enlarged due to many-body effects near the AF-QCP.
The conductivity is given by the two-particle Green function; we show some diagrams in Fig. \[fig:Rho-diagram\]. Here, the cross represents the impurity potential, and the filled circle is the three-point vertex due to the electron-electron correlation. In Fig. \[fig:Rho-diagram\], the type (a) diagrams give corrections of order $O(n_{\rm imp})$, whereas the type (b) diagrams are $O(n_{\rm imp}^2)$ because they contain cross terms between different impurities [@Langer]. Thus, we drop all the diagrams which contain cross terms. The type (c) diagrams contain current vertex corrections (CVC’s) due to impurities, which are necessary to cancel the effect of forward scattering on the conductivity. However, we drop all the diagrams with CVC’s for the simplicity of the calculations, which would be allowed for a qualitative discussion. We also drop the CVC due to the electron-electron correlation because it gives merely a small correction to the conductivity, as shown in ref. [@Kontani-Hall].
As a result, by neglecting the CVC’s, the conductivity without and with impurities up to $O(n_{\rm imp})$, $\sigma_0$ and $\sigma_{\rm imp}$, are given by the following equations [@Eliashberg; @Langer; @comment]: $$\begin{aligned}
\sigma_0&=& e^2\sum_\k \int\frac{d\e}{\pi}
\left( -\frac{\d f}{\d\e} \right)
|G_\k^0(\e)|^2 v_{\k x}^2(\e) ,
\\
\sigma_{\rm imp}&=& e^2\sum_\k \int\frac{d\e}{\pi}
\left( -\frac{\d f}{\d\e} \right)
|{\bar G}_\k(\e)|^2 v_{\k x}^2(\e) ,
\label{eqn:sigma-imp}\end{aligned}$$ where $v_{\k x}(\e)= \d\e_\k/\d k_x + \d{\rm Re}\Sigma_\k(\e)/\d k_x$ is the quasiparticle velocity. $G_\k^0(\e)= (\e+\mu-\e_\k-\Sigma_\k(\e))^{-1}$ is the Green function obtained by the FLEX approximation without impurities. The averaged Green function in the presence of impurities, ${\bar G}_\k$, is given by $$\begin{aligned}
{\bar G}_\k(\e-i\delta)= \left(
\{G_\k^0(\e-i\delta)\}^{-1}-i\gamma_\k^{\rm imp}(\e) \right)^{-1} .
\label{eqn:Gbar}\end{aligned}$$
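Without the CVC’s, eq. (\[eqn:sigma-imp\]) is a weighted sum over the $\k$-mesh and an energy integral over the thermal window set by $-\d f/\d\e$. A rough sketch (hypothetical real-frequency inputs on a uniform energy grid; units with $e^2=1$):

```python
import numpy as np

def sigma_imp(G0_ke, gamma_imp_ke, v_ke, eps, T):
    """Sketch of eq. (sigma-imp) without current vertex corrections.

    G0_ke        : complex (Nk, Ne) host Green function G^0_k(eps - i*delta)
    gamma_imp_ke : real    (Nk, Ne) impurity damping rate, eq. (gammak)
    v_ke         : real    (Nk, Ne) velocity v_{kx}(eps)
    eps          : uniform energy grid, T : temperature
    """
    mdf = 1.0 / (4.0 * T * np.cosh(eps / (2.0 * T)) ** 2)   # -df/d(eps)
    Gbar = 1.0 / (1.0 / G0_ke - 1j * gamma_imp_ke)          # eq. (Gbar)
    de = eps[1] - eps[0]
    return np.sum(np.abs(Gbar) ** 2 * v_ke ** 2 * mdf) * de / np.pi
```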
Figure \[fig:rho\] shows the temperature dependences of the resistivities for LSCO, YBCO and NCCO obtained by the $GV^I$-method; $\rho_{\rm imp}=1/\s_{\rm imp}$ for $n_{\rm imp}=0.02$. $\rho_0=1/\s_0$ is the resistivity without impurities. We see that the “residual resistivity” at finite $T$, which is the increment of the resistivity due to impurities, $\Delta\rho\equiv \rho_{\rm imp}-\rho_0$, is approximately constant over a wide range of $T$, except for the abrupt increase at lower temperatures in YBCO and NCCO. $\Delta\rho$ grows as we enlarge $U$ (YBCO, LSCO) or move the filling $n$ towards half-filling (NCCO). This result comes from the fact that the scattering cross section of an impurity is enlarged by the nonlocal modulation of the self-energy, $\delta\Sigma$. In other words, non-s-wave (elastic and inelastic) scatterings caused by $\delta\Sigma$ give an anomalously large residual resistivity near the AF-QCP.
Figure \[fig:rho\] also shows $\rho_{\rm imp}^0$, which is derived from eq.(\[eqn:sigma-imp\]) by replacing $\gamma_\k^{\rm imp}(\e)$ in eq.(\[eqn:Gbar\]) with the momentum-independent $\gamma_{\rm imp}^0(\e)$. We call this the “local scattering approximation” because the non-s-wave impurity scattering processes caused by the nonlocal effect ($\delta\Sigma$) are dropped. In contrast to $\Delta\rho$, the doping dependence of $\Delta\rho^0\equiv \rho_{\rm imp}^0-\rho_0$ is much more moderate. Moreover, $\Delta\rho^0$ slightly decreases at low temperatures as $\gamma_\k$ becomes anisotropic: in fact, $\displaystyle \langle \frac1{\gamma_\k+\gamma_{\rm imp}^0}
\rangle^{-1}- \langle \frac1{\gamma_\k} \rangle^{-1} \ll
\gamma_{\rm imp}^0$ when $\gamma_\k (\gg \gamma_{\rm imp}^0)$ is very anisotropic.
In each compound (YBCO, LSCO, NCCO), the average spacing between CuO$_2$-layers is about 6Å. Using the relation $h/e^2=26$k$\Omega$, $\rho=1$ in the present calculation corresponds to 250$\mu\Omega\cdot$cm. As shown in Fig. \[fig:rho\], the residual resistivities for $n_{\rm imp}=0.02$ at $T=0.05$ for LSCO ($U=5$), YBCO ($U=8$) and NCCO ($n=1.13$) are 0.7, 1.0 and 0.8, respectively. They correspond to $175\sim 250\mu\Omega\cdot$cm. These values are close to the experimental residual resistivity in under-doped HTSC’s; $\Delta\rho\sim (4\hbar/e^2)n_{\rm imp}/|1-n|$ [@Uchida].
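For completeness, the conversion used above follows, we presume, from measuring the dimensionless two-dimensional resistivity in units of $\hbar/e^2$ and multiplying by the interlayer spacing $c\approx 6$Å: $$\rho=1 \ \ \Leftrightarrow \ \ \frac{\hbar}{e^2}\,c \simeq \frac{26\,{\rm k}\Omega}{2\pi}\times 6\times10^{-8}\,{\rm cm} \simeq 2.5\times10^{-4}\,\Omega\,{\rm cm} = 250\,\mu\Omega\cdot{\rm cm}.$$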
Another important finding is the insulating behavior $(d\rho_{\rm imp}/dT<0)$ in YBCO and NCCO at lower temperatures. We also checked that a similar upturn is observed in LSCO for $U=6$. This insulating behavior of $\rho_{\rm imp}$ is caused by the steep increment of $\gamma_\k^{\rm imp}$ at lower temperatures, which is mainly given by $\gamma_\k^{\rm imp(4)} \propto {\rm Im}\delta{\hat \Sigma}(0)$. Therefore, the physical origin of this phenomenon is the strong inelastic scattering around the impurity. The obtained insulating behavior should be universal for systems with strong nonmagnetic impurities in the close vicinity of the AF-QCP. Actually, the upturn of $\rho$ is widely observed in under-doped HTSC’s, both in hole-doped and electron-doped compounds [@Uchida; @Ando1; @Ando2; @Sekitani; @Shibauchi]. The origin is not weak localization, as we explained in §\[sec:intro\]. Based on the present study, we expect that residual disorder is the origin of the insulating behavior in HTSC’s.
In Fig. \[fig:rho-T002\], we show the $U$ dependences of $\Delta\rho$ and $\Delta\rho^0$ (per impurity) at $T=0.02$ given by the $GV^I$-method, both for LSCO and YBCO. $\Delta\rho^0$ increases as $U$ is raised, inversely proportional to the DOS given by the FLEX approximation, $N_{\rm host}(0)$. We stress that $\Delta\rho$ increases drastically as $U$ is raised, much faster than $\Delta\rho^0$ does. This prominent increment originates from the strong inelastic scattering around the impurity, Im$\delta{\hat \Sigma}(0)$. For comparison, we also show results given by the $GV$ and $GV^0$-methods in Fig. \[fig:rho-T002\]. $\Delta\rho_{\rm [GV0]}$ is close to $\Delta\rho^0$ because the enhancement of the AF fluctuations around the impurity is not taken into account. Even worse, $\Delta\rho_{\rm [GV]}$ decreases for $U>3$ in LSCO. This result should be an artifact of the $GV$-method, because it fails to reproduce the enhancement of spin fluctuations as explained in previous sections.
The prominent $U$-dependence of $\Delta\rho$ given by the $GV^I$-method would correspond to the pressure dependence of $\Delta\rho$ observed in $\kappa$-(BEDT-TTF)$_4$Hg$_{2.89}$Br$_8$, which stays near the AF-QCP at ambient pressure [@Taniguchi]. In this compound, $\Delta\rho$ decreases to one tenth of its original value upon applying pressure. Considering that the applied pressure reduces $U/W_{\rm band}$, this experimental result is consistent with Fig. \[fig:rho-T002\]. We also note that the residual resistivity in heavy fermion systems near magnetic instabilities frequently shows a prominent pressure dependence; $\Delta\rho$ takes a maximum value at the AF-QCP, and it decreases quickly as the system moves away from the AF-QCP [@Jaccard1; @Jaccard2; @CeCuAu; @Ce115; @Ce115-2]. This experimental fact is well explained by Fig. \[fig:rho-T002\].
Figure \[fig:rho-n\] shows the filling dependence of $\Delta\rho$ given by the $GV^I$-method at $T=0.02$, which is above the upturn temperature of $\rho$ for YBCO as recognized in Fig. \[fig:rho\]. In both LSCO and YBCO, $\Delta\rho$ increases drastically as $n$ approaches unity, far beyond the s-wave unitary scattering value ($\sim 4/n$). The obtained result is consistent with experimental observations in HTSC’s, where $\Delta\rho \sim \Delta\rho^0$ in the over-doped region, whereas $\Delta\rho \gg \Delta\rho^0$ in the under-doped region [@Uchida].
In the present calculation, we dropped all the CVC’s for simplicity. As shown in ref. [@Kontani-Hall], $\rho_0$ without impurities is slightly enlarged by the CVC; the effect of the CVC on $\rho_0$ is small because the origin of the resistivity is the large-angle scattering due to AF fluctuations ($\q \approx (\pi,\pi)$). \[In general, CVC’s for the resistivity are important when small-angle scatterings are dominant.\] In the present calculation, the origin of the huge $\Delta\rho \propto \gamma_\k^{\rm imp(4)}$ is also the AF fluctuations induced around the impurity. Therefore, we expect that the $\Delta\rho$ obtained in this section is qualitatively reliable. On the other hand, CVC’s play quite important roles in $R_{\rm H}$, $\Delta\rho/\rho$ and $\nu$ [@Kontani-rev]: it is an important future issue to study these transport coefficients in the presence of impurities by taking the CVC’s into account. Finally, we note that the obtained insulating behavior of $\rho_{\rm imp}$ might be underestimated because the area of region A in the present numerical study ($M=6\sim8$) would not be large enough in the very close vicinity of the AF-QCP ($\xi_{\rm AF}\gg 1$).
Decrease of the Hole Density $n_{\rm h}$ ($=|1-n|$) around the Impurity, and Increase of $\Delta\rho$ at $T=0$ {#sec:rho0}
--------------------------------------------------------------------------------------------------------------
Up to now, we have found that $\gamma_\k^{\rm imp(4)}$ (and $\gamma_\k^{\rm imp(2)}$) give the main contribution to the huge residual resistivity $\Delta\rho_{\rm imp}$ at finite temperatures. However, $\gamma_\k^{\rm imp(4)}(0)=0$ at zero temperature because $\delta\Sigma_{l,m}(0)$ becomes a real function at $T=0$. Therefore, it is highly nontrivial to predict the value of $\Delta\rho_{\rm imp}$ at $T=0$. On the other hand, $\gamma_\k^{\rm imp(1)}$, $\gamma_\k^{\rm imp(2)}$ and $\gamma_\k^{\rm imp(3)}$ could give an enlarged $\Delta\rho_{\rm imp} \gg \Delta\rho_{\rm imp}^0$ at zero temperature, because they are finite even at $T=0$.
Hereafter, we discuss the residual resistivity at $T=0$. For simplicity, we assume a two-dimensional isotropic system with the dispersion $\e_k= \k^2/2m$. At $T=0$, $\gamma_\k^{\rm imp}$ is given by $$\begin{aligned}
\gamma_\k^{\rm imp}&=& n_{\rm imp}{\rm Im}T_{\k,\k}
\nonumber \\
&=& n_{\rm imp} \frac{2}{m}\sum_l \sin^2\delta_l ,\end{aligned}$$ where $l$ represents the angular momentum with respect to the impurity potential ($l=0,\pm1,\pm2,\cdots$), and $\delta_l$ is the phase shift for channel $l$. If we drop the CVC, the resistivity at $T=0$ is given by [@Langer] $$\begin{aligned}
\rho_{\rm imp}=\frac{4\hbar n_{\rm imp}}{e^2 n} \sum_l \sin^2\delta_l .\end{aligned}$$ Here, $\sin^2\delta_l$ is replaced with $\sin^2\delta_l - \cos(\delta_l-\delta_{l+1})\sin\delta_l\sin\delta_{l+1}$ if the CVC due to impurity is taken into account. On the other hand, the number of localized electrons around the impurity, $\Delta n_{\rm tot}$, in an isotropic 2D system is given by [@sumrule] $$\begin{aligned}
\Delta n_{\rm tot}= \frac{2}{\pi}\sum_l \delta_l .\end{aligned}$$ Thus, the residual resistivity at $T=0$ will grow as the number of electrons in bound states increases.
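As a small numerical illustration of these two sum rules, the following sketch evaluates both expressions for a hypothetical set of phase shifts (which in practice would be extracted from the $T=0$ $t$-matrix):

```python
import numpy as np

def residual_resistivity_T0(phase_shifts, n_imp, n):
    """Sketch: rho_imp = (4 hbar n_imp / e^2 n) sum_l sin^2(delta_l), in units hbar/e^2 = 1."""
    d = np.asarray(phase_shifts, dtype=float)
    return 4.0 * n_imp / n * np.sum(np.sin(d) ** 2)

def friedel_charge_2d(phase_shifts):
    """Sketch of the 2D Friedel sum rule: Delta n_tot = (2/pi) sum_l delta_l."""
    return 2.0 / np.pi * np.sum(np.asarray(phase_shifts, dtype=float))
```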
YBCO
$T$ 0.01 0.015 0.02 0.05
-------------------------- --------- -------- --------- ---------
$100\times\Delta n(1,0)$ 6.15 1.50 1.06 0.60
$\Delta n_{\rm tot}$ 2.86 0.334 0.190 0.052
$\a_{\rm St}$ 0.9968 0.9950 0.9932 0.9807
: Temperature dependences of $\Delta n(1,0)$, $\Delta n_{\rm tot}$ and the Stoner factor $\a_{\rm St}={\rm max}_\q U\Pi_\q^0(0)$. []{data-label="table1"}
NCCO
$T$ 0.02 0.025 0.03 0.06
-------------------------- --------- -------- --------- ---------
$100\times\Delta n(1,0)$ -3.84 -3.12 -2.78 -2.13
$\Delta n_{\rm tot}$ -1.52 -0.897 -0.639 -0.254
$\a_{\rm St}$ 0.9965 0.9958 0.9948 0.9839
: Temperature dependences of $\Delta n(1,0)$, $\Delta n_{\rm tot}$ and the Stoner factor $\a_{\rm St}={\rm max}_\q U\Pi_\q^0(0)$. []{data-label="table1"}
Table \[table1\] shows $\Delta n(1,0) \equiv n(1,0)- n$ and $\Delta n_{\rm tot} \equiv \sum_{\bf r} \Delta n({\bf r})$ for both YBCO ($n=0.9$, $U=8$) and NCCO ($n=1.13$, $U=5.5$). We see that the electron number at ${\bf r}=(1,0)$ approaches half-filling ($n=1$) at low temperatures for both YBCO and NCCO. Moreover, $\Delta n_{\rm tot}$ increases (decreases) prominently in YBCO (NCCO) as $T$ decreases. This result suggests that the residual resistivity at $T=0$ will be larger than $\rho_{\rm imp}^0$ in the vicinity of the AF-QCP.
Finally, we discuss why $n(1,0)$ approaches unity and $\Delta n_{\rm tot}$ increases (decreases) in hole-doped (electron-doped) systems. In the FLEX approximation, the thermodynamic potential $\Omega$ in a uniform system is given by [@Ikeda] $$\begin{aligned}
\Omega&=& -T\sum_{\q,l}{\rm Tr}\left[{\Sigma}{G}
+ {\rm ln}(-[{G}^0]^{-1}+{\Sigma}) \right]
\nonumber \\
& &+ T\sum_{\q,l} {\rm Tr}\left[ \frac32 {\rm ln}(1-U{\Pi^0})
+ \frac12 {\rm ln}(1+U{\Pi^0}) \right.
\nonumber \\
& & \ \ \left. + U {\Pi^0}+U^2 [{\Pi^0}]^2 \right] .
\label{eqn:Omega}\end{aligned}$$ As we explained, the AF fluctuations are enhanced around the impurity. This effect would be expressed by increasing $\Pi^0(\q,\w_l)$ in eq. (\[eqn:Omega\]) by $\Delta\Pi^0$ $(>0)$, or $U$ by $\Delta U$ $(>0)$. According to eq. (\[eqn:Omega\]), $$\begin{aligned}
\frac{\d\Omega}{\d U}&\approx&
-\frac{3}{2} T\sum_{\q,l} {\Pi^0}(\q,\w_l)
\left( 1-U{\Pi^0}(\q,\w_l) \right)^{-1} ,
\label{eqn:dOdU}\end{aligned}$$ in the case of $1-U{\Pi^0}(\Q,0)\ll 1$. In deriving (\[eqn:dOdU\]), we used the fact that the implicit derivative through $\Sigma$ vanishes because of the stationary condition $\delta \Omega/ \delta \Sigma=0$ in the conserving approximation [@Bickers; @Baym-Kadanoff]. According to eq.(\[eqn:dOdU\]), we obtain $$\begin{aligned}
\frac{\d n}{\d U}
&=& -\frac{\d}{\d U}\left( \frac{\d \Omega}{\d \mu} \right)
\nonumber \\
&=& \frac32 T\sum_{\q,l} \frac1{(1-U\Pi^0(\q,\w_l))^{2}}
\frac{\d \Pi^0(\q,\w_l)}{\d\mu} .
\label{eqn:dndU}\end{aligned}$$ As a result, the electron density $n$ around the impurity, where $\Delta U>0$ is satisfied as mentioned above, increases (decreases) when $\d\Pi^0(\Q,0)/\d\mu$ is positive (negative). Therefore, we conclude that $\Delta n_{\rm tot}$ will increase (decrease) in hole-doped (electron-doped) systems, as is seen in the numerical results given by the $GV^I$-method.
Discussions {#sec:disc}
===========
Summary of the Present Work and Future Problems {#sec:sum}
-----------------------------------------------
The present study reveals that a single impurity strongly affects the electronic states in a wide area around the impurity in the vicinity of the AF-QCP. For this purpose, we developed the $GV^I$-FLEX method, which is a powerful method to study impurity effects in strongly correlated systems. The $GV^I$-method is much superior to the $GV$-method, which is a fully self-consistent FLEX approximation. Using the $GV^I$ method, characteristic impurity effects in under-doped HTSC’s are well explained in a unified way, without introducing any exotic mechanisms assuming the breakdown of the Fermi liquid state. The main numerical results are shown in Fig. \[fig:DOS2D\] (local DOS around the impurity site), Figs. \[fig:kaiS\]-\[fig:kaiS-AF\] (local and staggered susceptibilities), and Figs. \[fig:rho\]-\[fig:rho-n\] (resistivity in the presence of impurities). Qualitatively, these numerical results are very similar for YBCO, LSCO and NCCO. Therefore, the novel impurity effects in nearly AF metals revealed by the present work would be universal.
Based on the $GV^I$ method, we found that both local and staggered susceptibilities are prominently enhanced around the impurity site, as shown in Figs. \[fig:kaiS\] and \[fig:kaiS-AF\]. In particular, a nonmagnetic impurity causes a Curie-like spin susceptibility, $\mu_{\rm eff}^2/3T$. The $GV^I$-method gives $\mu_{\rm eff}\approx 0.74\mu_{\rm B}$ for LSCO ($n=0.9$, $U=5$), as shown in Fig. \[fig:kaiS-T\]. Note that $\mu_{\rm eff} \sim 1\mu_{\rm B}$ in YBa$_2$Cu$_3$O$_{6.66}$ ($T_{\rm c}\approx60K$). We also found that the quasiparticle damping rate takes a huge value around the impurity, owing to the enhanced AF fluctuations. For this reason, the local DOS at the Fermi level is strongly suppressed around the impurity site, which forms the so-called “Swiss cheese structure” shown in Fig. \[fig:swiss\]. Its radius is about the AF correlation length of the host, $\xi_{\rm AF}$, which is about $3\sim4$a (a being the lattice spacing) in slightly under-doped HTSC’s at $T=0.02$. We expect that Swiss cheese holes stay in the normal state even below $T_{\rm c}$, because of the extremely short quasiparticle lifetime there. In fact, the residual specific heat ($T\ll T_{\rm c}$) induced by an impurity becomes very large in under-doped systems [@Ido2]. This experimental fact will be explained in our future study based on the $GV^I$-method [@future].
Near the AF-QCP, the short quasiparticle lifetime inside the Swiss cheese hole gives rise to a huge residual resistivity $\Delta\rho$, as shown in Figs. \[fig:rho\]-\[fig:rho-n\]. In the under-doped region, $\Delta\rho$ grows far beyond the s-wave unitary scattering limit $\sim (4\hbar/e^2)n_{\rm imp}/n$. We find that $\Delta\rho$ is almost $T$-independent over a wide range of temperature, and it increases drastically as the system approaches the AF-QCP. This result is consistent with experiments on HTSC’s. The obtained values of $\Delta\rho$, $175\sim250\mu\Omega\cdot$cm for $n_{\rm imp}=0.02$, are indeed observed in under-doped HTSC’s [@Uchida]. Furthermore, in the close vicinity of the AF-QCP, the resistivity given by the $GV^I$-method shows a “Kondo-like” insulating behavior ($d\rho/dT<0$) in the presence of nonmagnetic impurities with low concentration ($\sim2$%). This surprising result would explain the “upturn of resistivity” which is frequently observed in under-doped HTSC’s, by assuming the existence of residual disorder. The mechanism of this insulating behavior had been a long-standing unsolved issue in under-doped HTSC’s. In contrast to a conventional single-channel Kondo effect, the residual resistivity given by the $GV^I$-method grows far beyond $(4\hbar/e^2)n_{\rm imp}/n$.
We stress that $\gamma_\k^{\rm imp}$ given by the $GV^I$-method has a strong $\k$-dependence, so the structure of the “hot/cold spots” is maintained against impurity doping. This result could be examined by ARPES measurements. We also comment that the CVC’s due to spin fluctuations cause a finite “residual resistivity”, if we define it as the extrapolated value at $T=0$, even in the absence of impurities [@Kontani-Hall]. We have to take this fact into account in analysing experimental data.
In the FLEX approximation, the AF-order (in the RPA) is suppressed by thermal and quantum fluctuations, which is expressed by $\Sigma^0$. In the $GV^I$-method, the reduction of fluctuations around the impurity site gives rise to the enhancement of susceptibility. However, this mechanism is absent in the RPA. Therefore, the enhancement of susceptibility is tiny within the RPA. In addition, in the $GV^I$-method, the spin (charge) susceptibility $\chi^{\rm Is(c)}$ contains both the self-energy correction by $U$ and that by $I$, treated on the same footing, whereas the cross term $\delta\Sigma$ is dropped because it will be cancelled out by VC’s in large part. On the other hand, $\chi^{\rm RPA}$ given by the RPA contains only the self-energy correction by $I$ [@Bulut01; @Bulut00; @Ohashi]. This equal-footing treatment of the correlation effect and the impurity effect in the $GV^I$-method would be the reason for the superiority of this method.
In future work, we will study transport phenomena by taking the current vertex corrections (CVC) into account accurately, in order to clarify the impurity effect on various transport coefficients in HTSC’s and in related systems [@future]. We will also study the superconducting state around a nonmagnetic impurity in under-doped HTSC’s [@future], to explain experimental observations given by STM/STS measurements [@Pan; @Davis; @Ido]. In HF systems and in organic metals, the strength of the residual impurity (or disorder) potential would be comparable to the bandwidth $W_{\rm band}$. Therefore, we have to study to what extent the results obtained in the present paper (for $I=\infty$) hold in the case of $I\sim W_{\rm band}$.
Possibility of the Impurity-Induced Magnetic Order {#sec:comparison}
--------------------------------------------------
In the $GV^I$-method, the self-energy for the host system is given by the FLEX approximation, which is not applicable to heavily under-doped systems near the Mott insulating state. However, the FLEX approximation gives qualitatively reliable results from the slightly under-doped region ($|1-n|\sim 0.1$) to the over-doped region [@Takimoto; @Koikegami; @Wermbter; @Kontani-Hall; @Manske-PRB]. To obtain quantitatively reliable results for the under-doped region, vertex corrections (VC) for the self-energy will be necessary, as indicated by ref. [@Schmalian]. The pseudo-gap phenomena under $T^\ast$ are well reproduced by the FLEX+$T$-matrix approximation, where the self-energy correction due to strong superconducting (SC) fluctuations is taken into account [@Yamada-rev; @Dahm-T; @Kontani-rev; @Kontani-N]. By taking the CVC due to SC fluctuations into consideration, anomalous transport phenomena in HTSC’s under $T^\ast$ (e.g., prominent enhancement of the Nernst coefficient) are well understood in a unified way [@Kontani-rev; @Kontani-N].
One of the merits of the FLEX approximation is that the Mermin-Wagner theorem with respect to the magnetic instability is satisfied. In fact, previous numerical studies based on the FLEX report that no SDW-order emerges in two dimensional systems at finite $T$. In Appendix A, we offer strong analytical evidence that the FLEX approximation satisfies the Mermin-Wagner theorem. Also, the $GV^I$-method does not predict any SDW-order in two dimensional systems with a single impurity, at least for the model parameters studied in the present paper. However, when the concentration of impurities is finite, Swiss cheese holes will overlap once $\xi_{\rm AF}$ exceeds the mean distance between impurities ($l$). Therefore, an SDW-order or a spin-glass order would occur when $\xi_{\rm AF}>l$. In fact, in Zn-doped YBCO, the freezing of local moments due to Zn was observed by a $\mu$-SR study at very low temperatures [@mSR]. Note that impurity-induced AF order occurs in two-leg ladder Heisenberg models [@Fukuyama].
Comments on Related Theoretical Works {#sec:comment}
-------------------------------------
Here, we discuss the impurity effect on the electronic states of the “host system”. We did not study this effect because it is a higher-order effect with respect to $n_{\rm imp}$. In Zn-doped YBa$_2$Cu$_4$O$_8$, Itoh et al. measured $^{63}$Cu $1/T_1T$ in the host system (i.e., away from the Zn sites) [@Itoh], and found that the host AF fluctuations above $T^\ast$ are reduced by a few percent of Zn doping. They suggest that the localization effect would be the origin of this reduction.
Based on the FLEX approximation (the $GV$-method in the present paper), the authors of ref. [@Kudo] studied the impurity effect on the electronic state by neglecting the $\k$-dependence of the self-energy. They reported that the AF fluctuation in the host system is depressed by impurities, which seems to be consistent with ref. [@Itoh]. As we have shown, however, the $GV$-method fails to reproduce correct electronic states around the impurity: in fact, both the Curie-like $\Delta\chi^s$ and the huge $\Delta\rho$ are satisfactorily reproduced only by the $GV^I$-method. We note that $\rho(T)$ in HTSC shows a nearly parallel shift upon impurity doping, which means that the inelastic scattering in the host system is independent of impurities. This fact would suggest that the AF fluctuations in the host system are unchanged, in contrast to the NMR result [@Itoh]. It would be an important issue to understand these two experimental facts consistently.
Next, we comment that Miyake et al. have intensively studied the impurity effects near a QCP [@Miyake02]. They found that $\Delta\rho$ due to weak impurities (Born scattering) is enlarged when the charge fluctuations are well developed. Their analysis corresponds to the $GV^0$-method since the susceptibility is assumed to be independent of impurities. In contrast, the present work based on the $GV^I$-method shows that the $\Delta\rho$ due to strong impurities increases near the AF-QCP, which originates from the enhancement of $\chi^{Is}$ around the impurity. We note that $\chi^{\rm c}_{\rm FLEX} \equiv d n/d\mu$ is less than half of its non-interacting value in the present range of parameters. In analysing experiments, we have to consider carefully which kind of criticality may occur in the compound under consideration.
Finally, we review previous studies on the disordered Hubbard model based on the dynamical mean-field theory (DMFT) in the $d=\infty$ limit [@Kotliar; @Vollhardt; @Mutou], and we make a comparison with the present study. In the DMFT, local-moment formation is found when strong hopping disorder (off-diagonal disorder) exists [@Kotliar]. Also, it is found that the Néel temperature increases due to weak onsite disorder (diagonal disorder); $I<U$ [@Vollhardt]. However, local-moment formation outside the impurity site and a huge residual resistivity, which are realized in HTSC’s, cannot be explained by the DMFT. For this purpose, a nonlocal modulation of the self-energy around the impurity has to be taken into account, which is possible in the $GV^I$-method.
We would like to thank Y. Ando, Y. Matsuda, T. Shibauchi, T. Sekitani, M. Ido, M. Oda, N. Momono, H. Taniguchi, M. Sato, K. Yamada, D.S. Hirashima, Y. Tanaka, M. Ogata, Y. Yanase and S. Onari for valuable comments and discussions.
Mermin-Wagner theorem in 2D electron systems {#sec:Ap}
============================================
The Mermin-Wagner (M-W) theorem states that any magnetic instability is absent at finite $T$ in 2D systems. Actually, the SCR theory satisfies the M-W theorem [@Moriya]. As for the FLEX approximation, however, the M-W theorem had been confirmed only by numerical studies. In this appendix, we present strong analytical evidence that the FLEX-type self-consistent spin fluctuation theory satisfies the M-W theorem.
Here we introduce the phenomenological expression for the dynamical spin susceptibility as $$\begin{aligned}
\chi_\q(\w) = \frac{\chi_{\bf Q}}{1+\xi_{\rm AF}^2(\q-\Q)^2-i\w/\w_{\rm sf}},\end{aligned}$$ where $\xi_{\rm AF}$ is the AF correlation length. $\Q$ is one of the nesting vectors which minimize $|\q-\Q|$. Apparently, $\chi_\q(0)=\chi_{-\q}(0)$. Both $\chi_{\bf Q}$ and $\w_{\rm sf}^{-1}$ are proportional to $\xi_{\rm AF}^2$ in the FLEX approximation.
When the system is very close to the AF phase at finite temperatures where $\w_{\rm sf} \gg T$ is satisfied, the self-energy within the scheme of the FLEX approximation is given by $$\begin{aligned}
\Sigma_\k(i\w_n)
&\approx& T\sum_\q G_{\k+\q}(i\w_n)\chi_\q(0)
\nonumber \\
&\approx& G_{\k+\Q}(i\w_n)A
\label{eqn:S0}
\\
A&=&T\frac{3U^2}{2}\sum_\q \chi_\q(0)
\label{eqn:A}\end{aligned}$$ where the static approximation is applied, which gives an upper limit for $T_N$. $A$ diverges as $T\rightarrow T_N$ in proportion to $T\ln\xi_{\rm AF}$. In particular, $A=\infty$ at $T=T_N(>0)$. On the other hand, $A$ is finite even at $T=T_N$ in 3D systems. According to eq.(\[eqn:S0\]), $$\begin{aligned}
\Sigma_\k(i\w_n)
&=& \frac{A}{i\w_n-\e_{\k+\Q}- \Sigma_{\k+\Q}(i\w_n)}
\nonumber \\
&=& \frac{A}{i\w_n-\e_{\k+\Q}- \frac{A}{i\w_n-\e_{\k}- \Sigma_{\k}(i\w_n)}}
\label{eqn:S1}\end{aligned}$$ Equation (\[eqn:S1\]) can be solved analytically. Considering $\Sigma_\k(\w)=0$ when $A=0$, the self-energy and the Green function for real frequencies are given by
$$\begin{aligned}
\Sigma_\k(\w)
&=& \frac12 (\w-\e_\k) - {\rm sgn}(\w-\e_\k)\frac12
\sqrt{(\w-\e_\k)^2-4A\frac{\w-\e_\k}{\w-\e_{\k+\Q}}}
\\
G_\k(\w)
&=& \left(
\frac12 (\w-\e_\k) + {\rm sgn}(\w-\e_\k)\frac12
\sqrt{(\w-\e_\k)^2-4A\frac{\w-\e_\k}{\w-\e_{\k+\Q}}}
\right)^{-1}
\label{eqn:G1}\end{aligned}$$
One can check that $\w\cdot{\rm Im}\Sigma_\k(\w) \le0$. This is not a Fermi liquid because the renormalization factor $z$ is zero (owing to the static approximation). Hereafter, we assume $\e_\k = 2t(\cos k_x+ \cos k_y)$ at half filling ($n=1$), that is, both the perfect nesting and the particle-hole symmetry exist. Apparently, $Q=(\pi,\pi)$, $\e_{\k+\Q}=-\e_\k$, and $\mu=0$. The irreducible susceptibility at $\q=\Q$ and $\w=0$ is given by $$\begin{aligned}
\Pi_\Q(0) &=&
-\sum_\k\int\frac{d\w}{2\pi} {\rm th}\frac{\w}{2T}
{\rm Im} \{ G_{\k+\Q}^R(\w)G_{\k}^R(\w) \}
\label{eqn:C1}\end{aligned}$$ According to eq. (\[eqn:G1\]), $$\begin{aligned}
G_{\k+\Q}(\w)G_{\k}(\w)
&=& \left(\frac14 (\w^2-\e_\k^2)
+ \frac12 {\rm sgn}(\w^2-\e_\k^2)\sqrt{g_\k(\w)}
+\frac14 {\rm sgn}(\w^2-\e_\k^2)
|\w^2-\e_\k^2-4A| \right)^{-1}
\\
g_\k(\w)
&=& (\w^2-\e_\k^2)(\w^2-\e_\k^2-4A)\end{aligned}$$ $g_\k(\w)$ is negative when $|\e_\k| < |\w| < \sqrt{\e_\k^2+4A}$. Apparently, the integrand in eq.(\[eqn:C1\]) is finite only when $g_\k(\w)<0$. $$\begin{aligned}
-{\rm Im} \{ G_{\k+\Q}(\w)G_{\k}(\w) \}
&=& -{\rm Im} \{ ( \ A+\frac{i}{2} \sqrt{-g_\k(\w)} \ )^{-1} \}
< \frac{1}{2A} \ \ \ \ \mbox{for $g_\k(\w)<0$}
\\
&=& 0 \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\mbox{for $g_\k(\w)>0$} \end{aligned}$$
According to eq.(\[eqn:C1\]), in the case of $\sqrt{A} \gg W_{\rm band}$, $$\begin{aligned}
\Pi_\Q(0) \sim O(A^{-1/2})
\label{eqn:chi}\end{aligned}$$ Note that Im$G_\k(\w)$ is non-zero when $g_\k(\w)<0$, and one can check, e.g. using MATHEMATICA, that $\int_{-\infty}^\infty d\w {\rm Im} G_\k^R(\w)=-\pi$ for any $A$. Considering that $\chi_\Q(0)=\Pi_\Q(0)/(1-U\Pi_\Q(0))$ in the FLEX approximation, eq. (\[eqn:chi\]) means that $\chi_\Q(0)$ approaches zero as $T \rightarrow T_N$ (because $A\rightarrow\infty$ in the case of $T_N>0$) in 2D systems. Equation (\[eqn:chi\]) suggests that $\xi_{\rm AF} \propto e^{1/T}$ when the ground state is an ordered state.
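The spectral-weight sum rule quoted above can also be verified numerically. The following minimal Python sketch (not part of the original analysis) solves the quadratic equation for $G_\k(\w)$ that is equivalent to eqs. (\[eqn:S1\]) and (\[eqn:G1\]) at perfect nesting, selects the retarded branch, and integrates its imaginary part; the values of $A$, $\e_\k$, the broadening and the frequency grid are illustrative choices.

```python
import numpy as np

# Numerical check of the sum rule  int dw Im G_k^R(w) = -pi  for the Green
# function of eq.(G1) at half filling (e_{k+Q} = -e_k).  Illustrative sketch.

def green_retarded(w, ek, A, eta=1e-4):
    """Physical (Im G <= 0) root of  A*b*G^2 - b*c*G + c = 0,
    where b = w - e_k and c = w - e_{k+Q} = w + e_k (perfect nesting)."""
    z = w + 1j * eta
    b, c = z - ek, z + ek
    disc = np.sqrt((b * c) ** 2 - 4.0 * A * b * c)
    r1 = (b * c + disc) / (2.0 * A * b)
    r2 = (b * c - disc) / (2.0 * A * b)
    return np.where(r1.imag < r2.imag, r1, r2)

A, ek = 2.0, 0.5                      # illustrative values (units of t)
w = np.linspace(-12.0, 12.0, 200001)  # spectral weight extends to ~sqrt(ek^2 + 4A)
G = green_retarded(w, ek, A)

weight = np.sum(G.imag) * (w[1] - w[0])
print(weight / np.pi)                 # -> approximately -1.0 for any A > 0
```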
As a result, $T_N$ cannot take a finite value in the two dimensional model with perfect nesting at half filling, which is the case most favorable to a magnetic instability. Therefore, the present analysis gives compelling evidence that the FLEX approximation satisfies the M-W theorem for general 2D systems.
Here we rewrite $A$ as $A'+A''$, where $\displaystyle
A'=T \frac{3U^2}{2}\sum_\q^{|\q-{\bf Q}|<q_{\rm c}} \chi_\q(0)$, and $q_{\rm c}$ is a cutoff momentum. Then, $A''$ is a smooth function around the AF-QCP. When $T>\xi_{\rm AF}^{-2}+q_{\rm c}^2$, $A'$ will be the number of free bosons (magnons) with energy $\e_\q= q^2$ within the radius of $q_{\rm c}$ at the chemical potential $\mu=-\xi_{\rm AF}^{-2}$. Thus, a magnetic instability ($\xi_{\rm AF}\rightarrow\infty$) corresponds to the Bose-Einstein condensation of magnons ($\mu=0$). Therefore, a meaningful correspondence between the M-W theorem and the absence (presence) of the Bose-Einstein condensation in 2D (3D) systems is recognized.
[99]{}
Y. Yanase, T. Jujo, T. Nomura, H. Ikeda, T. Hotta and K. Yamada: Phys. Rep. [**387**]{} (2003) 1.
T. Moriya and K. Ueda: Adv. Physics [**49**]{} (2000) 555.
D. Manske, [*Theory of Unconventional Superconductors: Cooper-Pairing Mediated By Spin Excitations*]{} (Springer 2004, Berlin)
P. Monthoux and D. Pines: Phys. Rev. B [**47**]{} (1993) 6069.
H. Kontani and K. Yamada: J. Phys. Soc. Jpn. [**74**]{} (2005) 155.
N. E. Bickers and S. R. White: Phys. Rev. B [**43**]{} (1991) 8044.
P. Monthoux and D. J. Scalapino, Phys. Rev. Lett. [**72**]{} (1994) 1874.
T. Takimoto and T. Moriya: J. Phys. Soc. Jpn. [**66**]{} (1997) 2459.
S. Koikegami, S. Fujimoto and K. Yamada: J. Phys. Soc. Jpn. [**66**]{} (1997) 1438.
S. Wermbter: Phys. Rev. B [**55**]{} (1997) R10149.
D. Manske, I. Eremin and K.H. Bennemann: Phys. Rev. B [**67**]{} (2003) 134520.
T. Dahm, D. Manske and L. Tewordt: Europhys. Lett. [**55**]{} (2001) 93.
J. Takeda, T. Nishikawa, and M. Sato: Physica C [**231**]{} (1994) 293.
T. Kimura, S. Miyasaka, H. Takagi, K. Tamasaku, H. Eisaki, S. Uchida, K. Kitazawa, M. Hiroi, M. Sera, and N. Kobayashi: Phys. Rev. B [**53**]{} (1996) 8733.
H. Kontani, K. Kanki and K. Ueda: Phys. Rev. B [**59**]{} (1999) 14723, K. Kanki and H. Kontani: J. Phys. Soc. Jpn. [**68**]{} (1999) 1614.
H. Kontani: J. Phys. Soc. Jpn. [**70**]{} (2001) 1873.
H. Kontani: J. Phys. Soc. Jpn. [**70**]{} (2001) 2840.
H. Kontani: Phys. Rev. Lett. [**89**]{} (2003) 237003.
W. Ziegler, D. Poilblanc, R. Preuss, W. Hanke, and D. J. Scalapino: Phys. Rev. B [**53**]{} (1996) 8704
D. Poilblanc, D.J. Scalapino, and W. Hanke: Phys. Rev. Lett. [**72**]{} (1994) 884.
N. Bulut, D. Hone, D.J. Scalapino, and E.Y. Loh: Phys. Rev. Lett. [**62**]{} (1989) 2192.
A.W. Sandvik, E. Dagotto, and D.J. Scalapino: Phys. Rev. B [**56**]{} (1997) 11701.
H. Tsuchiura, Y. Tanaka, M. Ogata, and S. Kashiwaya: Phys. Rev. B [**64**]{} (2001) 140501(R).
N. Bulut: Physica C [**363**]{} (2001) 260.
N. Bulut: Phys. Rev. B [**61**]{} (2000) 9051.
Y. Ohashi: J. Phys. Soc. Jpn. [**70**]{} (2001) 2054.
P. Prelovsek, and I. Sega: Phys. Rev. Lett. [**93**]{} (2004) 207202.
S. Fujimoto: Phys. Rev. B [**63**]{} (2001) 024406.
P. Mendels, J. Bobroff, G. Collin, H. Alloul, M. Gabay, J.F. Marucco, N. Blanchard and B. Grenier: Europhys. Lett. [**46**]{} (1999) 678.
K. Ishida, Y. Kitaoka, K. Yamazoe, K. Asayama, and Y. Yamada: Phys. Rev. Lett. [**76**]{} (1996) 531.
A. V. Mahajan, H. Alloul, G. Collin, and J. F. Marucco: Phys. Rev. Lett. [**72**]{} (1994) 3100.
W. A. MacFarlane, J. Bobroff, H. Alloul, P. Mendels, N. Blanchard, G. Collin, and J.-F. Marucco: Phys. Rev. Lett. [**85**]{} (2000) 1108.
A. V. Mahajan, H. Alloul, G. Collin, and J.-F. Marucco: Eur. Phys. J. [**B**]{} 13 (2000) 457.
J. Bobroff, W. A. MacFarlane, H. Alloul, P. Mendels, N. Blanchard, G. Collin, and J.-F. Marucco: Phys. Rev. Lett. [**83**]{} (1999) 4381.
M.-H. Julien, T. Feher, M. Horvatic, C. Berthier, O.N. Bakharev, P. Segransan, G. Collin, J.-F. Marucco: Phys. Rev. Lett. [**84**]{} (2000) 3422.
Y. Fukuzumi, K. Mizuhashi, K. Takenaka, and S. Uchida: Phys. Rev. Lett. [**76**]{} (1996) 684.
D.Jaccard, E. Vargoz, K. Alami-Yadri, H. Wilhelm: cond-mat/9711089.
J. Flouquet, P. Haen, F. Lapierre, C. Fierz, A. Amato and D. Jaccard: J. Mag. Mag. Matt. [**76&77**]{} (1998) 285.
H. Wilhelm, S. Raymond, D. Jaccard, O. Stockert, H.V. Lohneysen and A. Rosch: J. Phys.: Condens. Matter [**13**]{} (2001) L329.
Y. Nakajima, K. Izawa, Y. Matsuda, S. Uji, T. Terashima, H. Shishido, R. Settai, Y. Onuki and H. Kontani, J. Phys. Soc. Jpn. [**73**]{} (2004) 5.
Y. Nakajima, K. Izawa, Y. Matsuda, K. Behnia, H. Kontani, M. Hedo, Y. Uwatoko, T. Matsumoto, H. Shishido, R. Settai and Y. Onuki: to be published in J. Phys. Soc. Jpn. [**75**]{} (2006) 023705.
H. Taniguchi et al.: unpublished. Y. Ando, G.S. Boebinger, A. Passner, T. Kimura and K. Kishio: Phys. Rev. Lett. [**75**]{} (1995) 4662.
G.S. Boebinger, Y. Ando, A. Passner, T. Kimura, M. Okuya, J. Shimoyama, K. Kishio, K. Tamasaku, N. Ichikawa, and S. Uchida: Phys. Rev. Lett. [**77**]{} (1996) 5417.
T. Sekitani, M. Naito, and N. Miura: Phys. Rev. B [**67**]{} (2003) 174503.
T. Kawakami, T. Shibauchi, Y. Terao, M. Suzuki, and L. Krusin-Elbaum: Phys. Rev. Lett. [**95**]{} (2005) 017001.
R.L. Greene: private communication.
S.H. Pan, E.W. Hudson, K.M. Lang, H. Eisaki, S. Uchida and J.C. Davis: Nature [**403**]{} (2000) 746.
B. Nachumi, A. Keren, K. Kojima, M. Larkin, G. M. Luke, J. Merrin, O. Tchernyshov, and Y. J. Uemura, N. Ichikawa, M. Goto, and S. Uchida: Phys. Rev. Lett. [**77**]{} (1996) 5421.
T. Nakano, N. Momono, T. Nagata, M. Oda and M. Ido: Phys. Rev. B [**58**]{} (1998) 5831.
J. Lee, A. Slezak and J.C. Davis: J. Phys. Chem. Solids [**66**]{} (2005) 1370.
N. Momono, A. Hashimoto, Y. Kobatake, M. Oda and M. Ido: J. Phys. Soc. Jpn. [**74**]{} (2005) 2400.
J. Kanamori: Prog. Theor. Phys. [**30**]{} (1963) 275.
B. Holm and F. Aryasetiawan: Phys. Rev. B [**56**]{} (1997) 12825.
B. Holm and U. von Barth: Phys. Rev. B [**57**]{} (1998) 2108.
P. Garcia-Gonzalez and R.W. Godby: Phys. Rev. B [**63**]{} (2001) 075112.
G. M. Eliashberg : Sov. Phys. JETP [**14**]{} (1962), 886.
J.S. Langer: Phys. Rev. [**120**]{} (1960) 714.
In these equations, we drop the “incoherent part of conductivity $\s_{\rm imc}$”, which gives a finite (quantitatively important) contribution at higher temperatures. The expression for $\s_{\rm imc}$ is derived in H. Kontani and H. Kino: Phys. Rev. B [**63**]{} (2001) 134524.
T. R. Chien, Z. Z. Wang, and N. P. Ong: Phys. Rev. Lett. [**67**]{} (1991) 2088.
present authors: unpublished.
J.S. Langer: Phys. Rev. [**121**]{} (1961) 1090.
H. Ikeda, Y. Nishikawa and K. Yamada: J. Phys.: Condens. Matter [**15**]{} (2003) S2241.
G. Baym and L.P. Kadanoff: Phys. Rev. [**124**]{} (1961), 287; G. Baym: Phys. Rev. [**127**]{} (1962), 1391.
J. Schmalian, D. Pines and B. Stojkovic: Phys. Rev. B [**60**]{} (1999) 667.
P. Mendels, H. Alloul, J. H. Brewer, G. D. Morris, T. L. Duty, S. Johnston, E. J. Ansaldo, G. Collin, J. F. Marucco, C. Niedermayer, D. R. Noakes and C. E. Stronach: Phys. Rev. B [**49**]{} (1994) 10035.
H. Fukuyama, N. Nagaosa, M. Saito and T. Tanimoto: J. Phys. Soc. Jpn. [**65**]{} (1996) 2377.
Y. Itoh, T. Machi, N. Watanabe, S. Adachi and N. Koshizuka: J. Phys. Soc. Jpn. [**70**]{} (2001) 1881.
K. Kudo and K. Yamada: J. Phys. Soc. Jpn. [**73**]{} (2004) 2219.
K. Miyake and O. Narikiyo: J. Phys. Soc. Jpn. [**71**]{} (2002) 867.
V. Dobrosavljevic and G. Kotliar: Phys. Rev. B [**50**]{} (1994) 1430.
M. Ulmke, V. Janis and D. Vollhardt: Phys. Rev. B [**51**]{} (1995) 10411.
T. Mutou: Phys. Rev. B [**60**]{} (1999) 2268.
|
---
address: |
Institute of Nuclear and Particle Physics, Department of Physics\
University of Virginia, Charlottesville, USA
author:
- 'X. SONG'
title: 'SPIN AND ORBITAL ANGULAR MOMENTUM IN THE CHIRAL QUARK MODEL [^1] '
---
[**Synopsis**]{}: Recent measurements show (1) [*strong flavor asymmetry of the antiquark sea*]{} [@1]: $\bar d>\bar u$, (2) [*nonzero strange quark content*]{} [@2]: $<\bar ss>\neq 0$, (3) [*sum of quark spins is small*]{} [@3]: $<s_z>_{q+\bar
q}=0.1-0.2$, and (4) [*antiquark sea is unpolarized*]{} [@4]: $\Delta\bar u, \Delta\bar d\simeq 0$. All these features can be well and naturally understood in the chiral quark model ($\chi$QM) (for an incomplete list of references see \[5,6\]).
[**The Chiral quark model**]{}: The effective Lagrangian in the model is [@6] $${\it L}_I=g_8{\bar q}\pmatrix{({\rm GB})_+^0
& {\pi}^+ & {\sqrt\epsilon}K^+\cr
{\pi}^-& ({\rm GB})_-^0
& {\sqrt\epsilon}K^0\cr
{\sqrt\epsilon}K^-& {\sqrt\epsilon}{\bar K}^0
&({\rm GB})_s^0
\cr }q,
\eqno (1)$$ where $({\rm GB})_{\pm}^0=\pm {\pi^0}/{\sqrt 2}+{\sqrt{\epsilon_{\eta}}}
{\eta^0}/{\sqrt 6}+{\zeta'}{\eta'^0}/{\sqrt 3}$, $({\rm GB})_s^0=
-{\sqrt{\epsilon_{\eta}}}{\eta^0}/{\sqrt 6}+{\zeta'}{\eta'^0}/{\sqrt 3}$. The breaking effects are explicitly included. $a\equiv|g_8|^2$ denotes the probability of chiral fluctuation or splitting $u(d)\to d(u)+\pi^{+(-)}$, and $\epsilon a$, the probability of $u(d)\to s+K^{-(0)}$ and so on. Since the coupling between the quarks and GBs is rather weak, the fluctuation $q\to q'+{\rm GB}$ can be treated as a small perturbation ($a\simeq 0.12-0.15$).
[**Spin, flavor and orbital contents**]{}: For a spin-up valence $u$-quark, the allowed fluctuations are $$u_{\up}\to d_{\dw}+\pi^+,~~u_{\up}\to s_{\dw}+K^+,~~
u_{\up}\to u_{\dw}+({\rm GB})_+^0,~~u_{\up}\to u_{\up}.
\eqno (2)$$ The important feature of these fluctuations is that a quark [*flips*]{} its spin and changes (or maintains) its flavor by emitting a charged (or neutral) Goldstone boson. The light antiquark sea asymmetry $\bar d>\bar u$ is attributed to the existing [*flavor asymmetry of the valence quark numbers*]{} (two $u_v$ and one $d_v$) in the proton. The quark spin reduction is due to the [*spin dilution*]{} in the chiral splitting processes $q_{\up}\to q_{\dw}$+GB. Most importantly, since the quark spin flips in a fluctuation with GB emission, the quark spin component changes by one unit of angular momentum, $(s_z)_f-(s_z)_i=+1$ or $-1$; angular momentum conservation then requires an [*equal change*]{} of the orbital angular momentum (OAM) but with [*opposite sign*]{}, i.e. $(L_z)_f-(L_z)_i=-1$ or $+1$. This [*induced orbital motion*]{} is distributed among the quarks and antiquarks, and compensates the spin reduction in the chiral splitting. Assuming that the valence quark structure of the nucleon is SU(3)$_f\otimes$SU(2)$_s$, and $\epsilon_\eta=\epsilon$, one obtains [@6] $$\Delta u^p={4\over 5}\Delta_3-a,~~\Delta d^p=-{1\over 5}\Delta_3-a,~~
\Delta s^p=-\epsilon a,
\eqno (3)$$ The total spin carried by quarks and antiquarks is ${1\over 2}\Delta
\Sigma^p={1\over 2}(\Delta u^p+\Delta d^p+\Delta s^p)={1\over 2}
-a(1+\epsilon+f)$, where $f={1\over 2}+{{\epsilon}\over 6}
+{{\zeta'^2}\over 3}$. The best fit to the existing data (see Table 1) leads to $a\simeq 0.12-0.15$, $\epsilon\simeq 0.4-0.5$ and $\zeta'^2\simeq
0$, which gives $a(1+\epsilon+f)\simeq 0.25$. It means that about one half of the proton spin is carried by quarks and antiquarks. In addition, all antiquark sea helicities are zero, $\Delta\bar q=0$ ($\bar q=\bar u,\bar d,
\bar s$) due to equal components of $\bar q_{\up}$ and $\bar q_{\dw}$ in the GB.
-----------------------------------------------------------------------------------------------------------
Quantity Data $\chi$QM NQM
--------------------------------- ---------------------------------------------------- ----------- --------
$\bar d-\bar u$ $0.147\pm 0.039$ [@1], $0.100\pm 0.018$ [@1] $0.130^*$ 0
  ${{\bar u}/{\bar d}}$             $[{{\bar u(x)}\over {\bar d(x)}}]_{x=0.18}=0.51\pm 0.06$ [@1], $0.67\pm 0.06$ [@1]   0.68        $-$
${{2\bar s}/{(\bar u+\bar d)}}$ $0.477\pm 0.051$ [@2] 0.72 $-$
${{2\bar s}/{(u+d)}}$ $0.099\pm 0.009$ [@2] 0.13 0
${{\sum\bar q}/{\sum q}}$ $0.245\pm 0.005$ [@2] 0.24 0
$f_s$ $0.10\pm 0.06$ [@2], $0.15\pm 0.03$ [@2] 0.10 0
$f_3/f_8$ $0.21\pm 0.05$ 0.22 1/3
$\Delta u$ $0.85\pm 0.05$ [@3] 0.86 4/3
$\Delta d$ $-0.41\pm 0.05$ [@3] $-$0.40 $-$1/3
$\Delta s$ $-0.07\pm 0.05$ [@3] $-0.07$ 0
$\Delta\bar u$, $\Delta\bar d$ $-0.02\pm 0.11$ [@4] 0 0
$\Delta_3/\Delta_8$ 2.17$\pm 0.10$ 2.12 5/3
$\Delta_3$ 1.2601$\pm 0.0028$ 1.26$^*$ 5/3
$\Delta_8$ 0.579$\pm 0.025$ 0.60$^*$ 1
-----------------------------------------------------------------------------------------------------------
: Quark spin and flavor observables in the proton in the chiral quark model and nonrelativistic quark model (NQM).
Using the hybrid quark-gluon model [@7] and assuming the induced orbital motion is [*equally shared*]{} among quarks, antiquarks and gluon, one has $$({\rm cos}^2\theta-{1\over 3}{\rm sin}^2\theta)[{1\over 2}-a(1-3k)
(1+\epsilon+f)]+<J_z>_{\rm G}={1\over 2}
\eqno (4)$$ where $\theta$ is the mixing angle and $k$ is the [*sharing factor*]{}. If independent gluon degrees of freedom are neglected, $<J_z>_{\rm G}=0$, $k=1/3$, $<L_z>_{q+\bar q}^p
=a(1+\epsilon+f)$, and $<J_z>_{q+\bar q}^p=1/2$. The missing part of the quark spin [*is entirely transferred*]{} to the orbital motion of quarks and antiquarks. If intrinsic gluons do exist, then $k<1/3$. The spin and OAM carried by quarks and antiquarks in the $\chi$QM without gluons, as well as in other nucleon models, are listed in Table II. Extensions to other octet and decuplet baryons were given in \[6\]. Including both spin and orbital contributions, the baryon magnetic moments are calculated. The result shows that the Franklin sum rule, $\mu_p-\mu_n+\mu_{\Sigma^-}-\mu_{\Sigma^+}+\mu_{\Xi^0}-\mu_{\Xi^-}=0$, still holds even when the orbital contributions are included [@6].
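For concreteness, the spin and OAM bookkeeping of eqs. (3) and (4) can be reproduced with a few lines of code. The sketch below uses illustrative parameter values inside the quoted best-fit range ($a\simeq 0.12-0.15$, $\epsilon\simeq 0.4-0.5$, $\zeta'^2\simeq 0$) together with $\Delta_3=1.26$ and the no-gluon case ($<J_z>_{\rm G}=0$, $k=1/3$); the printed numbers are close to, but not identical with, the $\chi$QM entries of Tables 1 and 2.

```python
# Minimal sketch of the chi-QM spin/OAM bookkeeping, eqs.(3)-(4), for the
# no-gluon case (k = 1/3, <J_z>_G = 0).  Parameter values are illustrative.
a, eps, zeta2 = 0.13, 0.45, 0.0   # chiral splitting probability a, breaking parameters
Delta3 = 1.26                     # isovector axial charge Delta_3 (input)

f = 0.5 + eps / 6.0 + zeta2 / 3.0
du = 0.8 * Delta3 - a             # eq.(3)
dd = -0.2 * Delta3 - a
ds = -eps * a

lz = a * (1.0 + eps + f)          # induced orbital angular momentum <L_z>
sz = 0.5 - lz                     # quark + antiquark spin, (1/2) Delta Sigma
print(du, dd, ds)                 # ~  0.88, -0.38, -0.06
print(sz, lz, sz + lz)            # ~  0.24,  0.26,  0.50
```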
[**Summary**]{}: The $\chi$QM prediction for the quark spin and flavor contents is in good agreement with the existing data. The probabilities of the chiral splittings $q\to q+\pi$, $q\to q+K(\eta)$, and $q\to q+\eta'$ are 10-15$\%$, 5-7$\%$ and 0.1$\%$ respectively. The OAM carried by quarks and antiquarks is given. The OAM contributions to the baryon magnetic moments are also discussed.
NQM MIT bag $\chi$QM CS [@8] Skyrme
---------------------- ----- --------- ---------- --------- --------
$<s_z>^p_{q+\bar q}$ 1/2 0.32 0.24 0.08 0
$<L_z>^p_{q+\bar q}$ 0 0.18 0.26 0.42 1/2
: Quark spin and orbital angular momenta in different models.
[**References**]{}
[99]{}
A. Baldit [*et al.*]{} (NA51), [**B332**]{}, [244]{} (1994); E. A. Hawker [*et al.*]{} (E866), , [**80**]{} 3715 (1998).
J. Gasser et al., [**B253**]{}, 252 (1991); A. O. Bazarko, [*et al*]{}, [**C65**]{}, 189 (1995); S. J. Dong et al., [**75**]{}, 2096 (1995).
D. Adams, [*et al*]{} (SMC), [**D56**]{}, 5330 (1997); K. Abe, [*et al*]{} (E143), hep-ph/9802357.
B. Adeva, [*et al*]{} (SMC), , [**B420**]{}, 180 (1998).
A. Manohar and H. Georgi, [**B234**]{}, [189]{} (1984); E. J. Eichten et al., [**D45**]{}, [2269]{} (1992); T. P. Cheng and L.-F. Li, [**74**]{}, [2872]{} (1995); X. Song et al., [**D55**]{}, 2624 (1997); H. J. Weber et al., [**A12**]{}, 729 (1997); J. Linde et al., [**D57**]{}, 452 (1998).
X. Song, $\bf D57$, 4114 (1998); hep-ph/9802206; hep-ph/9804461, and references therein.
H. J. Lipkin, [**B251**]{}, 613 (1990).
M. Casu and L. M. Sehgal, [**D55**]{}, 2644 (1997).
[^1]: Talk given at 13th International Symposium on High Energy Spin Physics, Protvino, Russia, September 8-12, 1998
|
---
author:
- Weicheng Kuo
- Anelia Angelova
- 'Tsung-Yi Lin'
- Angela Dai
bibliography:
- 'egbib.bib'
title: ' : 3D Shape Prediction by Learning to Segment and Retrieve '
---
Acknowledgments {#acknowledgments .unnumbered}
===============
We would like to thank Georgia Gkioxari for her advice on Mesh R-CNN and the support of the ZD.B (Zentrum Digitalisierung.Bayern) for Angela Dai.
Appendix A: Additional Results on Pix3D {#appendix-a-additional-results-on-pix3d .unnumbered}
=======================================
In Figure \[fig:pix3d\_ours\], we show more qualitative results of our approach on Pix3D [@sun2018pix3d]. Furthermore, we conduct ablation studies to shed light on the role of each component in the system. Our analysis shows that shape, pose, and translation are all important for estimating the viewer-centered geometry, with shape retrieval having the most room for improvement, and box detection having the least. The analysis was done by replacing each predicted component with its ground truth counterpart. In terms of Mesh AP, ground truth shapes help by +14.6, rotation by +10.2, and translation by +7.5. Surprisingly, ground truth 2D boxes offer no improvement because the detections on Pix3D are very good (about 90 Box AP, similar to Mesh R-CNN) and the small advantage is offset by the distribution shift between train (jittered boxes) and test (perfect boxes) time. This agrees with what Mesh R-CNN reports, i.e. they also observed a loss when using ground truth boxes (a 6-point loss on Mesh AP50).
Appendix B: Network Architecture Details {#appendix-b-network-architecture-details .unnumbered}
========================================
The image-stream network architecture comprises 2D detection (bounding box, class label, and instance mask prediction) as well as our 3D shape retrieval and pose estimation. For the 2D detection, our architecture borrows from that of ShapeMask [@kuo2019shapemask]. The 3D inference branches for shape embedding, pose classification, pose regression, and object center prediction all use the same architecture as the coarse mask prediction branch of [@kuo2019shapemask] (with the exception of the output layers). The inputs of these branches are the features from the region of interest (ROI) of the detection backbone's feature pyramid network. We detail each branch in Tables \[table:arch-shape-embed\], \[table:arch-pose-class\], and \[table:arch-center-reg\].
Appendix C: t-SNE visualizations for image-CAD embeddings {#appendix-c-t-sne-visualizations-for-image-cad-embeddings .unnumbered}
=========================================================
Figures \[fig:cadmask\_tsne\_2\], \[fig:cadmask\_tsne\_0\], \[fig:cadmask\_tsne\_1\] show the t-SNE visualizations of the image-shape embedding spaces for the bed, wardrobe, desk, table, tool, misc, and chair classes.
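As a rough indication of how such plots can be produced, the sketch below projects a joint image-CAD embedding with scikit-learn's t-SNE; the array names, dimensions and perplexity are illustrative assumptions rather than the exact setup used for the figures.

```python
# Illustrative sketch: t-SNE projection of a joint image/CAD-shape embedding
# space for one object class.  `image_emb` and `shape_emb` stand in for the
# learned (N, D) embedding matrices; here they are random placeholders.
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

image_emb = np.random.randn(200, 128)   # per-ROI image embeddings (placeholder)
shape_emb = np.random.randn(300, 128)   # CAD shape embeddings (placeholder)

joint = np.concatenate([image_emb, shape_emb], axis=0)
xy = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(joint)

n = len(image_emb)
plt.scatter(*xy[:n].T, c="red", s=4, label="images")
plt.scatter(*xy[n:].T, c="blue", s=4, label="shapes")
plt.legend(); plt.axis("off"); plt.show()
```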
![t-SNE embedding of for the chair class. Red points correspond to images, and blue to shapes. []{data-label="fig:cadmask_tsne_2"}](./figures/tsne/tsne-chair-0.jpg "fig:"){width="\textwidth"}\
![Additional qualitative results of on Pix3D [@sun2018pix3d].[]{data-label="fig:pix3d_ours"}](./figures/pix3d_ours.jpg "fig:"){width="\textwidth"}\
![t-SNE embeddings of for the bed (top), wardrobe (middle) and desk (bottom) classes. Red points correspond to images, and blue to shapes. []{data-label="fig:cadmask_tsne_0"}](./figures/tsne/tsne-bed-0.jpg "fig:"){width="90.00000%"}\
![t-SNE embeddings of for the bed (top), wardrobe (middle) and desk (bottom) classes. Red points correspond to images, and blue to shapes. []{data-label="fig:cadmask_tsne_0"}](./figures/tsne/tsne-wardrobe-0.jpg "fig:"){width="70.00000%"}\
![t-SNE embeddings of for the bed (top), wardrobe (middle) and desk (bottom) classes. Red points correspond to images, and blue to shapes. []{data-label="fig:cadmask_tsne_0"}](./figures/tsne/tsne-desk-0.jpg "fig:"){width="90.00000%"}\
![t-SNE embeddings of for the table (top), tool (middle) and misc (bottom) classes. Red points correspond to images, and blue to shapes. []{data-label="fig:cadmask_tsne_1"}](./figures/tsne/tsne-table-0.jpg "fig:"){width="\textwidth"}\
![t-SNE embeddings of for the table (top), tool (middle) and misc (bottom) classes. Red points correspond to images, and blue to shapes. []{data-label="fig:cadmask_tsne_1"}](./figures/tsne/tsne-tool-0.jpg "fig:"){width="85.00000%"}\
![t-SNE embeddings of for the table (top), tool (middle) and misc (bottom) classes. Red points correspond to images, and blue to shapes. []{data-label="fig:cadmask_tsne_1"}](./figures/tsne/tsne-misc-0.jpg "fig:"){width="85.00000%"}\
|
---
abstract: 'Modern ontology debugging methods allow efficient identification and localization of faulty axioms defined by a user while developing an ontology. The ontology development process in this case is characterized by rather frequent and regular calls to a reasoner resulting in an early user awareness of modeling errors. In such a scenario an ontology usually includes only a small number of conflict sets, i.e. sets of axioms preserving the faults. This property allows efficient use of standard model-based diagnosis techniques based on the application of hitting set algorithms to a number of given conflict sets. However, in many use cases such as ontology alignment the ontologies might include many more conflict sets than in usual ontology development settings, thus making precomputation of conflict sets and consequently ontology diagnosis infeasible. In this paper we suggest a debugging approach based on a direct computation of diagnoses that omits calculation of conflict sets. Embedded in an ontology debugger, the proposed algorithm is able to identify diagnoses for an ontology which includes a large number of faults and for which application of standard diagnosis methods fails. The evaluation results show that the approach is practicable and is able to identify a fault in adequate time.'
author:
- Kostyantyn Shchekotykhin
- Philipp Fleiss
- Patrick Rodler
- Gerhard Friedrich
bibliography:
- 'V-Know.bib'
title: 'Direct computation of diagnoses for ontology debugging[^1]'
---
Introduction
============
Ontology development and maintenance relies on the ability of users to express their knowledge in the form of logical axioms. However, the knowledge acquisition process might be problematic since a user can make a mistake in an axiom being modified or a correctly specified axiom can trigger a hidden bug in an ontology. These bugs might be of different nature and are caused by violation of *requirements* such as consistency of an ontology, satisfiability of classes, or presence or absence of some entailments. In scenarios such as ontology matching the complexity of faults might be very high because multiple disagreements between ontological definitions and/or modeling problems can be triggered by alignments at once.
Ontology debugging tools [@Kalyanpur.Just.ISWC07; @friedrich2005gdm; @Horridge2008] can simplify the development process by allowing their users to specify requirements for the intended (target) ontology. If some of the requirements are broken, i.e. an ontology ${\mathcal{O}}$ is *faulty*, the debugging tool can compute a set of axioms ${\mathcal{D}}\subseteq {\mathcal{O}}$ called a *diagnosis*. An expert should remove or modify at least all axioms of a diagnosis in order to be able to formulate the *target ontology* ${\mathcal{O}}_t$. Nevertheless, in real-world scenarios debugging tools can return a set of alternative diagnoses ${{\bf{D}}}$, since it is quite hard for a user to specify a set of requirements that allows formulation of the target ontology ${\mathcal{O}}_t$ only. Consequently, the user has to differentiate between multiple diagnoses ${\mathcal{D}}\in {{\bf{D}}}$ in order to find the *target diagnosis* ${\mathcal{D}}_t$, whose application allows formulation of the intended ontology. Diagnosis discrimination methods [@Kalyanpur2006; @Shchekotykhin2012] allow their users to reduce the number of diagnoses to be considered. The first approach, presented in [@Kalyanpur2006], uses a number of heuristics that rank diagnoses depending on the structure of the axioms ${\mathit{ax}}_i \in {\mathcal{D}}$, usage in test cases, provenance information, etc. Only diagnoses with the highest ranks are returned to the user. A more sophisticated approach suggested in [@Shchekotykhin2012] identifies the target diagnosis by asking an oracle, like an expert or an information extraction system, a sequence of questions: whether some axiom is entailed by the target ontology or not. Given an answer, the algorithm removes all diagnoses that are inconsistent with it. Furthermore, the query is used to create an additional test case, which allows the search algorithm to prune the search space and reduce the number of diagnoses to be computed. Moreover, in order to speed up the computations the method approximates the set of all diagnoses with a set of $n$ leading diagnoses, i.e. the $n$ best diagnoses with respect to a given measure.
All the approaches listed above follow the standard model-based diagnosis approach [@Reiter87] and compute diagnoses using minimal conflict sets $CS$, i.e. irreducible sets of axioms ${\mathit{ax}}_i \in {\mathcal{O}}$ that preserve violation of at least one requirement. The computation of the conflict sets can be done within a polynomial number of calls to the reasoner, e.g. by <span style="font-variant:small-caps;">QuickXPlain</span> algorithm [@junker04]. To identify a diagnosis of cardinality $|{\mathcal{D}}|=m$ the hitting set algorithm suggested in [@Reiter87] requires computation of $m$ minimal conflict sets. In the use cases when an ontology is generated by an ontology learning or matching system the number of minimal conflict sets $m$ can be large, thus making the ontology debugging practically infeasible.
In this paper we present two algorithms, <span style="font-variant:small-caps;">Inv-HS-Tree</span> and <span style="font-variant:small-caps;">Inv-QuickXPlain</span>, which invert the standard model-based approach to ontology debugging and compute diagnoses directly, rather than by means of minimal conflict sets. Thus, given some predefined number of leading diagnoses $n$, the breadth-first search algorithm <span style="font-variant:small-caps;">Inv-HS-Tree</span> executes the direct diagnosis algorithm <span style="font-variant:small-caps;">Inv-QuickXPlain</span> exactly $n$ times. This property allows the new search approach to perform well when applied to ontologies with a large number of conflicts. The evaluation shows that the direct computation of diagnoses makes ontology debugging applicable to scenarios that require diagnosis of generated ontologies. The system based on <span style="font-variant:small-caps;">Inv-QuickXPlain</span> is able to compute diagnoses and identify the target one in cases where common model-based diagnosis techniques fail. Moreover, the suggested algorithms maintain a comparable or slightly better performance in the cases that can be analyzed by both debugging strategies. The remainder of the paper is organized as follows: Section \[sec:diag\] gives a brief introduction to the main notions of ontology debugging. The details of the suggested algorithms and their application are presented in Section \[sec:details\]. In Section \[sec:eval\] we provide evaluation results and conclude in Section \[sec:conc\].
Ontology debugging {#sec:diag}
==================
Let us exemplify the ontology debugging process by the following use case:
\[ex:simple\] Consider an ontology ${\mathcal{O}}$ with the terminology ${\mathcal{T}}$:
----------------------------------------------------- ------------------------------------------- --------------------------------------------------------------
${\mathit{ax}}_1 : A \sqsubseteq B$ ${\mathit{ax}}_2 : B \sqsubseteq E$ ${\mathit{ax}}_3 : B \sqsubseteq D \sqcap \lnot \exists s.C$
${\mathit{ax}}_4 : C \sqsubseteq \lnot(D \sqcup E)$ ${\mathit{ax}}_5 : D \sqsubseteq \lnot B$
----------------------------------------------------- ------------------------------------------- --------------------------------------------------------------
and assertions ${\mathcal{A}}:\{A(w), A(v), s(v,w)\}$. Because of the axioms ${\mathit{ax}}_3$ and ${\mathit{ax}}_5$ the given terminology is incoherent and it includes two unsatisfiable classes $A$ and $B$. Moreover, the assertions $A(w)$ and $A(v)$ make the ontology inconsistent.
Assume that the user is sure that the assertional axioms are correct, i.e. this part of the ontology is included to the *background knowledge* ${\mathcal{B}}$ of the debugging system and, thus, cannot be considered as faulty in the debugging process. In this case, the debugger can identify the only irreducible set of axioms $CS': \tuple{{\mathit{ax}}_3,{\mathit{ax}}_5}$ – minimal conflict set – preserving both inconsistency and incoherency of ${\mathcal{O}}$. Modification of at least one axiom of one of the minimal (irreducible) diagnoses ${\mathcal{D}}'_1: \diag{{\mathit{ax}}_3}$ or ${\mathcal{D}}'_2: \diag{{\mathit{ax}}_5}$ is required in order to restore both consistency and coherency of the ontology.
In some debugging systems, e.g. [@Shchekotykhin2012], the user can also provide *positive* ${P}$ and *negative* ${N}$ test cases, where each test case is a set of axioms that should be entailed (positive) and not entailed (negative) by the ontology resulting from the application of the debugger. If in the example the user specifies $${P}=\setof{\setof{B(w)}} \text{ and } {N}=\setof{\setof{\lnot C(w)}}$$ then the debugger returns another set of minimal conflict sets: $$CS_1: \tuple{{\mathit{ax}}_1,{\mathit{ax}}_3} \quad CS_2: \tuple{{\mathit{ax}}_2,{\mathit{ax}}_4} \quad CS_3: \tuple{{\mathit{ax}}_3,{\mathit{ax}}_5}\quad CS_4: \tuple{{\mathit{ax}}_3, {\mathit{ax}}_4}$$ and diagnoses: $${\mathcal{D}}_1: \diag{{\mathit{ax}}_3,{\mathit{ax}}_4} \quad {\mathcal{D}}_2: \diag{{\mathit{ax}}_2,{\mathit{ax}}_3} \quad {\mathcal{D}}_3: \diag{{\mathit{ax}}_1,{\mathit{ax}}_4,{\mathit{ax}}_5}$$ The reason is that both ontologies ${\mathcal{O}}'_1 = {\mathcal{O}}\setminus {\mathcal{D}}'_1$ and ${\mathcal{O}}'_2 = {\mathcal{O}}\setminus {\mathcal{D}}'_2$ resulting from the application of the diagnoses ${\mathcal{D}}'_1$ and ${\mathcal{D}}'_2$ do not fulfill the test cases. For instance, ${\mathcal{O}}'_2$ is invalid since the axioms $\setof{{\mathit{ax}}_1,{\mathit{ax}}_3} \subset {\mathcal{O}}'_2$ entail $\lnot C(w)$, which must not be entailed.
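The relation between these conflict sets and diagnoses can be checked mechanically: by Reiter's hitting-set duality, every minimal diagnosis is a minimal hitting set of all minimal conflict sets. The following few lines (an illustrative check, with axioms represented by their indices) confirm this for the example.

```python
# Check that D1, D2, D3 of the example are minimal hitting sets of CS1..CS4.
from itertools import combinations

conflict_sets = [{1, 3}, {2, 4}, {3, 5}, {3, 4}]
diagnoses = [{3, 4}, {2, 3}, {1, 4, 5}]

def is_hitting_set(h):
    return all(h & cs for cs in conflict_sets)

def is_minimal_hitting_set(h):
    proper_subsets = (set(s) for r in range(len(h)) for s in combinations(h, r))
    return is_hitting_set(h) and not any(is_hitting_set(s) for s in proper_subsets)

print([is_minimal_hitting_set(d) for d in diagnoses])   # [True, True, True]
```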
\[def:target\] The target ontology ${\mathcal{O}}_t$ is a set of axioms that is characterized by a background knowledge ${\mathcal{B}}$, sets of positive ${P}$ and negative ${N}$ test cases. The target ontology ${\mathcal{O}}_t$ should fulfill the following necessary requirements[^2]:
- ${\mathcal{O}}_t \cup {\mathcal{B}}$ must be consistent and, optionally, coherent
- ${\mathcal{O}}_t \cup {\mathcal{B}}\models{p}\qquad \forall {p}\in {P}$
- ${\mathcal{O}}_t \cup {\mathcal{B}}\not\models{n}\qquad \forall {n}\in {N}$
The ontology ${\mathcal{O}}$ is *faulty* with respect to a predefined ${\mathcal{B}}, {P}$ and ${N}$ iff ${\mathcal{O}}$ does not fulfill the necessary requirements.
\[def:diagproblem\] $\tuple{{\mathcal{O}},{\mathcal{B}},{P},{N}}$ is a *diagnosis problem instance*, where ${\mathcal{O}}$ is a faulty ontology, ${\mathcal{B}}$ is a background theory, ${P}$ is a set of test cases that must be entailed by the target ontology ${\mathcal{O}}_t$ and ${N}$ is a set of test cases that must not be entailed by ${\mathcal{O}}_t$. The instance is *diagnosable* if ${\mathcal{B}}\cup \bigcup_{{p}\in {P}} {p}$ is consistent (coherent) and ${\mathcal{B}}\cup \bigcup_{{p}\in {P}} {p}\not\models {n}$ for each ${n}\in{N}$.
The ontology debugging approaches [@Horridge2008; @Kalyanpur.Just.ISWC07; @Shchekotykhin2012] can be applied to any knowledge representation language for which there is a sound and complete procedure for deciding whether an ontology entails an axiom or not. Moreover, the entailment relation $\models$ *must* be extensive, monotone and idempotent.
Another important aspect of the ontology debugging systems comes from the model-based diagnosis techniques [@Reiter87; @dekleer1987] that they are based on. The model-based diagnosis theory considers the modification operation as a sequence of add/delete operations and focuses only on deletion. That is, if an ontology includes a faulty axiom then the simplest way to remove the fault is to remove the axiom. However, removing an axiom might be too coarse a modification, since the ontology can lose some of the entailments that must be preserved. Therefore, model-based diagnosis also takes into account axioms (an ontology extension) $EX$, which are added by the user to the ontology after removing all axioms of a diagnosis. Usually the set of axioms $EX$ is either formulated by the user or generated by a learning system [@Lehmann2010]. If $EX$ is not empty then all axioms ${\mathit{ax}}_i \in EX$ are added to the set of positive test cases ${P}$, since each axiom ${\mathit{ax}}_i$ must be entailed by the intended ontology ${\mathcal{O}}_t$.
\[def:diag\] For a diagnosis problem instance $\tuple{{\mathcal{O}},{\mathcal{B}},{P},{N}}$ a subset of the ontology axioms ${\mathcal{D}}\subset{\mathcal{O}}$ is a *diagnosis* iff ${\mathcal{O}}_t = ({\mathcal{O}}\setminus{\mathcal{D}})$ fulfills the requirements of Definition \[def:target\].
Due to computational complexity of the diagnosis problem, in practice the set of all diagnoses is approximated by the set of minimal diagnoses. For a diagnosis problem instance $\tuple{{\mathcal{O}},{\mathcal{B}},{P},{N}}$ a diagnosis ${\mathcal{D}}$ is minimal iff there is no diagnosis ${\mathcal{D}}'$ of the same instance such that ${\mathcal{D}}'\subset{\mathcal{D}}$.
The computation of minimal diagnoses in model-based approaches is done by means of conflict sets, which are used to constrain the search space.
\[def:conf\] For a problem instance $\tuple{{\mathcal{O}},{\mathcal{B}},{P},{N}}$ a set of axioms $CS\subseteq{\mathcal{O}}$ is a *conflict set* iff one of the following conditions holds:
- $CS\cup{\mathcal{B}}\cup\bigcup_{{p}\in {P}} {p}$ is inconsistent (incoherent) or
- $\exists{n}\in{N}$ such that $CS\cup{\mathcal{B}}\cup\bigcup_{{p}\in {P}} {p}\models{n}$
Just as for diagnoses, computation of conflict sets is reduced to computation of minimal conflict sets. A conflict set $CS$ is *minimal* iff there is no conflict set $CS'$ such that $CS'\subset CS$.
#### Computation of minimal conflict sets.
In practice, diagnosis systems use two types of strategies for the computation of conflict sets, namely, brute-force [@junker01; @Kalyanpur.Just.ISWC07] and divide-and-conquer [@junker04]. The first strategy can be split into an acquisition and a minimization stage. During the acquisition stage the algorithm adds axioms of the ontology ${\mathcal{O}}\setminus{\mathcal{B}}$ to a buffer as long as the set of axioms in the buffer is not a conflict set. As soon as at least one conflict set is contained in the buffer, the algorithm switches to the minimization stage. In this stage axioms are removed from the buffer such that the set of axioms in the buffer remains a conflict set after each deletion. The algorithm outputs a minimal conflict set or ’no conflicts’. In the worst case a brute-force algorithm requires $O(m)$ calls to the reasoner, where $m$ is the number of axioms in the faulty ontology. The algorithm implementing the divide-and-conquer strategy starts with a buffer containing all axioms of the ontology, i.e. the conflict set is in the buffer, and splits it into smaller and simpler sub-problems. The algorithm continues splitting until it identifies a sequence of sub-problems, each including only one axiom, such that the set comprising all these axioms is a minimal conflict set. The divide-and-conquer algorithm requires in the worst case $O(k\log(\frac{m}{k}))$ calls to the reasoner, where $k$ is the cardinality of the returned conflict set. Taking into account that in practice $k\ll m$, the divide-and-conquer strategy is preferred to the brute-force one.
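For illustration, the divide-and-conquer strategy can be sketched in a few lines of Python. The sketch below is in the spirit of <span style="font-variant:small-caps;">QuickXPlain</span> but is not the exact implementation of [@junker04]; `is_faulty(axioms)` stands for the assumed reasoner-based check of the conditions of Definition \[def:conf\] (together with ${\mathcal{B}}$ and the test cases), and axioms are treated as opaque list elements.

```python
def quickxplain(b, axioms, is_faulty):
    """Return one minimal conflict subset of `axioms` relative to the
    background `b`, or None if `b + axioms` contains no conflict."""
    if not is_faulty(b + axioms):
        return None                       # 'no conflicts'
    if not axioms:
        return []
    return _qx(b, bool(b), axioms, is_faulty)

def _qx(b, delta_added, axioms, is_faulty):
    if delta_added and is_faulty(b):
        return []                         # the conflict lies entirely in b
    if len(axioms) == 1:
        return list(axioms)               # a single necessary axiom
    k = len(axioms) // 2
    c1, c2 = axioms[:k], axioms[k:]       # divide the problem in half
    cs2 = _qx(b + c1, bool(c1), c2, is_faulty)
    cs1 = _qx(b + cs2, bool(cs2), c1, is_faulty)
    return cs1 + cs2
```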
#### Identification of minimal diagnoses.
The computation of minimal diagnoses in modern ontology debugging systems is implemented using Reiter’s Hitting Set <span style="font-variant:small-caps;">HS-Tree</span> algorithm [@Reiter87; @greiner1989correction]. The algorithm constructs a directed tree from the root to the leaves, where each node $nd$ is labeled either with a minimal conflict set $CS(nd)$ or $\checkmark$ (consistent) or $\times$ (pruned). The latter two labels indicate that the node is closed. Each edge outgoing from an open node $nd$ is labeled with an element $s \in CS(nd)$. $H(nd)$ is the set of edge labels on the path from the root to the node $nd$. Initially the algorithm creates an empty root node and adds it to the *queue*, thus implementing a breadth-first search strategy. While the queue is not empty, the algorithm retrieves the first node $nd$ from the queue and labels it with either:
1. $\times$ if there is a node $nd'$, labeled with either $\checkmark$ or $\times$, such that $H(nd')\subseteq H(nd)$ (pruning non-minimal paths), or
2. $CS(nd')$ if a node $nd'$ exists such that its label $CS(nd') \cap H(nd) = \emptyset$ (reuse), or
3. $CS$ if $CS$ is a minimal conflict set computed for the diagnosis problem instance $\tuple{{\mathcal{O}}\setminus H(nd),{\mathcal{B}},{P},{N}}$ by one of the algorithms mentioned above (compute), or
4. $\checkmark~~$ (consistent).
The leaf nodes of a complete tree are either pruned ($\times$) or consistent ($\checkmark$) nodes. The set of labels $H(nd)$ of each consistent node $nd$ corresponds to a minimal diagnosis. The minimality of the diagnoses is guaranteed by the minimality of the conflict sets, the pruning rule and the breadth-first search strategy. Moreover, because of the latter, the minimal diagnoses are generated in order of increasing cardinality.
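A compact sketch of this breadth-first construction, reusing the `quickxplain` function sketched above, is given below; it is an illustration of Reiter's scheme rather than the implementation used in our system, and `is_faulty` again denotes the assumed reasoner-based requirements check.

```python
from collections import deque

def hs_tree(o, b, is_faulty, max_diagnoses=None):
    """Breadth-first hitting-set tree over the possibly faulty axioms `o`
    with background `b`; returns minimal diagnoses as sets of axioms."""
    diagnoses, conflicts = [], []          # minimal diagnoses / reusable conflict sets
    queue = deque([frozenset()])           # a node is represented by its label set H(nd)
    while queue:
        h = queue.popleft()
        if any(d <= h for d in diagnoses):
            continue                                            # prune non-minimal paths
        cs = next((c for c in conflicts if not (c & h)), None)  # reuse a known conflict
        if cs is None:
            found = quickxplain(b, [ax for ax in o if ax not in h], is_faulty)
            if found is None:                                   # consistent: H(nd) is a diagnosis
                diagnoses.append(h)
                if max_diagnoses and len(diagnoses) >= max_diagnoses:
                    break
                continue
            cs = frozenset(found)
            conflicts.append(cs)
        for ax in cs:                                           # expand the open node
            queue.append(h | {ax})
    return diagnoses
```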
#### Diagnoses discrimination.
In many real-world scenarios an ontology debugger can return a large number of diagnoses, thus placing the burden of diagnosis discrimination on the user. Without adequate tool support the user is often unable to understand the difference between the minimal diagnoses and to select an appropriate one. The diagnosis discrimination method suggested in [@Shchekotykhin2012] uses the fact that different ontologies, e.g. ${\mathcal{O}}_1={\mathcal{O}}\setminus{\mathcal{D}}_1$ and ${\mathcal{O}}_2={\mathcal{O}}\setminus{\mathcal{D}}_2$, resulting from the application of different diagnoses, entail different sets of axioms. Consequently, there may exist a set of axioms ${Q}$ such that ${\mathcal{O}}_1\models{Q}$ and ${\mathcal{O}}_2\not\models{Q}$. If such a set of axioms ${Q}$ exists, it can be used as a query to some oracle such as the user or an information extraction system. If the oracle answers $yes$ then the target ontology ${\mathcal{O}}_t$ should entail ${Q}$ and, hence, ${Q}$ should be added to the set of positive test cases ${P}\cup\setof{{Q}}$. Given the answer $no$, the set of axioms is added to the negative test cases ${N}\cup\setof{{Q}}$ to ensure that the target ontology does not entail ${Q}$. Thus, in the first case the set of axioms ${\mathcal{D}}_2$ can be removed from the set of diagnoses ${{\bf{D}}}$ because ${\mathcal{D}}_2$ is not a diagnosis of the updated diagnosis problem instance $\tuple{{\mathcal{O}},{\mathcal{B}},{P}\cup\setof{{Q}},{N}}$ according to Definition \[def:diag\]. Similarly, in the second case the set of axioms ${\mathcal{D}}_1$ is not a diagnosis of $\tuple{{\mathcal{O}},{\mathcal{B}},{P},{N}\cup\setof{{Q}}}$.
However, many different queries might exist for a set of diagnoses with $|{{\bf{D}}}| > 2$. In the extreme case there are $2^n-2$ possible queries for a set of diagnoses including $n$ elements. To select the best query the authors in [@Shchekotykhin2012] suggest two measures: <span style="font-variant:small-caps;">split-in-half</span> and <span style="font-variant:small-caps;">entropy</span>. The first measure is a greedy approach preferring the queries which allow half of the minimal diagnoses to be removed from ${{\bf{D}}}$, given an answer of the oracle. The second is an information-theoretic measure, which estimates the information gain for both outcomes of each query and returns the query that maximizes the information gain. The *prior fault probabilities* required for the <span style="font-variant:small-caps;">Entropy</span> measure can be obtained from statistics of previous diagnosis sessions. For instance, if the user has problems with understanding restrictions then the diagnosis logs will contain more repairs of axioms including restrictions. Consequently, the prior fault probabilities of axioms including restrictions should be higher. Given the fault probabilities of axioms, one can calculate prior fault probabilities of the minimal diagnoses including these axioms as well as evaluate <span style="font-variant:small-caps;">Entropy</span> (see [@Shchekotykhin2012] for more details).
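One possible realization of these two measures is sketched below; it assumes a predicate `entails(diagnosis, query)` that returns `True`, `False`, or `None` depending on whether the ontology resulting from the diagnosis entails the query, does not entail it, or leaves it undetermined, and a dictionary `prob` with (normalized) fault probabilities of the leading diagnoses. The exact scoring functions of [@Shchekotykhin2012] may differ in details such as constant offsets.

```python
import math

def partition(query, diagnoses, entails):
    """Split the leading diagnoses by their prediction for `query`."""
    dp = [d for d in diagnoses if entails(d, query) is True]
    dn = [d for d in diagnoses if entails(d, query) is False]
    d0 = [d for d in diagnoses if entails(d, query) is None]
    return dp, dn, d0

def split_in_half_score(query, diagnoses, entails):
    dp, dn, d0 = partition(query, diagnoses, entails)
    return abs(len(dp) - len(dn)) + len(d0)       # 0 is best: either answer halves D

def entropy_score(query, diagnoses, prob, entails):
    dp, dn, d0 = partition(query, diagnoses, entails)
    p0 = sum(prob[d] for d in d0)
    p_yes = sum(prob[d] for d in dp) + p0 / 2.0   # undetermined mass split evenly
    p_no = sum(prob[d] for d in dn) + p0 / 2.0
    h = lambda p: -p * math.log2(p) if p > 0 else 0.0
    return -(h(p_yes) + h(p_no)) + p0             # lower score = more informative query
```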
A general algorithm of the interactive ontology diagnosis process can be described as follows:
1. Generate a set of diagnoses ${{\bf{D}}}$ including at most $n$ diagnoses.
2. Compute a set of queries and select the best one according to some predefined measure.
3. Ask the oracle and, depending on the answer, add the query either to ${P}$ or to ${N}$.
4. Update the set of diagnoses ${{\bf{D}}}$ and remove the ones that do not comply with the newly acquired test case, according to the Definition \[def:diag\].
5. Update the tree and repeat from Step 1 if the queue contains open nodes.
6. Return the set of diagnoses ${{\bf{D}}}$.
The resulting set of diagnoses ${{\bf{D}}}$ includes only diagnoses that are not differentiable in terms of their entailments, but have some syntactical differences. The preferred diagnosis in this case should be selected by the user using some text differencing and comparison tool.
Note that a similar idea can be found in [@Nikitina2011], where the authors use queries to an oracle to revise an ontology. Given a consistent and coherent ontology ${\mathcal{O}}$ the system partitions it into two ontologies ${\mathcal{O}}^{\models}$ and ${\mathcal{O}}^{\not\models}$ containing the required and the incorrect consequences, respectively. The system can deal with inconsistent/incoherent ontologies if the union of all minimal conflict sets is put into the initial set of incorrect consequences ${\mathcal{O}}^{\not\models}_0$. Computation of the set ${\mathcal{O}}^{\not\models}_0$ requires application of an ontology debugger and is not addressed in [@Nikitina2011].
#### Example 1, continued.
Assume that the user mistakenly negated only the restriction in ${\mathit{ax}}_3$ instead of the whole description $B \sqsubseteq \lnot (D \sqcap \exists s.C)$. Moreover, in ${\mathit{ax}}_4$ a disjunction was placed instead of conjunction because of a typo, i.e. $C \sqsubseteq \lnot(D \sqcap E)$.
The interactive diagnosis process, illustrated in Fig. \[fig:hs:ex\], applies the three techniques described above to find the target diagnosis ${\mathcal{D}}_t=\diag{{\mathit{ax}}_3,{\mathit{ax}}_4}$. In the first iteration the system starts with the root node, which is labeled with $\tuple{{\mathit{ax}}_1,{\mathit{ax}}_3}$ – the first conflict returned by <span style="font-variant:small-caps;">QuickXPlain</span>. Next, <span style="font-variant:small-caps;">HS-Tree</span> generates successor nodes and labels the edges leading to these nodes with corresponding axioms. The algorithm extends the search tree until two *leading minimal diagnoses* are computed. Given the set of minimal diagnoses the diagnosis discrimination algorithm identifies a query using entailments of the two ontologies ${\mathcal{O}}_1={\mathcal{O}}\setminus{\mathcal{D}}_1$ and ${\mathcal{O}}_2={\mathcal{O}}\setminus{\mathcal{D}}_2$, which are deduced by the classification and realization services of a standard Description Logic reasoner. One of the entailments $E(w)$ can be used as a query, since ${\mathcal{O}}_1\models E(w)$ and ${\mathcal{O}}_2\not\models E(w)$. Given a positive answer of an oracle the algorithm updates the search tree and closes the node corresponding to the invalid minimal diagnosis ${\mathcal{D}}_2$. Since there are some open nodes, the algorithm continues and finds the next diagnosis ${\mathcal{D}}_3$. The two more nodes, expanded by the <span style="font-variant:small-caps;">HS-Tree</span> in the second iteration, are closed since both sets of labels on the paths to these nodes from the root are supersets of the closed paths $\setof{{\mathit{ax}}_3,{\mathit{ax}}_2}$ and $\setof{{\mathit{ax}}_3,{\mathit{ax}}_4}$. For the two minimal diagnoses ${\mathcal{D}}_1$ and ${\mathcal{D}}_3$ the diagnosis discrimination finds a query ${Q}=\setof{B(v)}$, which is answered positively by the oracle. Consequently, the algorithm removes ${\mathcal{D}}_3$ and continues with the expansion of the last node labeled with $\tuple{{\mathit{ax}}_3, {\mathit{ax}}_4}$. Since the paths to the successors of this node are supersets of existing closed paths in the tree, the algorithm closes these nodes and terminates. The diagnosis ${\mathcal{D}}_1$ suggesting modification of the axioms ${\mathit{ax}}_4$ and ${\mathit{ax}}_3$ is returned to the user.
The example shows that a modern ontology debugger can efficiently identify the target diagnosis. As demonstrated by different evaluation studies [@Kalyanpur.Just.ISWC07; @Shchekotykhin2012], the debuggers work well in an ontology development and maintenance process in which users modify an ontology manually. In such a process the users classify the ontology regularly and, therefore, are able to identify the presence of faults early, i.e. a user introduces only a small number of modifications to the ontology between two calls to a reasoner. Therefore, faulty ontologies in this scenario can be characterized by a small number of minimal conflict sets that can generate a large number of possible diagnoses. For instance, the Transportation ontology (see [@Shchekotykhin2012]) includes only $9$ minimal conflict sets that generate $1782$ minimal diagnoses. In such a case <span style="font-variant:small-caps;">HS-Tree</span> makes only $9$ calls to <span style="font-variant:small-caps;">QuickXPlain</span> and then reuses the identified minimal conflicts to label all other nodes. The number of calls to the reasoner, which is the main “source of complexity”, is rather low and can be approximated by $9k\log{\frac{n}{k}}+ |Nodes|$, where $|Nodes|$ is the cardinality of the set containing all nodes of the search tree, $k$ is the maximum cardinality of all computed minimal conflict sets and $n$ is the number of axioms in the faulty ontology. The combination of ontology debugging with diagnosis discrimination allows the number of calls to the reasoner to be reduced, because the acquired test cases often invalidate not only diagnoses that are already computed by <span style="font-variant:small-caps;">HS-Tree</span>, but also those that are not. All these factors together with such techniques as module extraction [@SattlerSZ09] make the application of ontology debuggers feasible in the described scenario.
However, in applications such as ontology matching or ontology learning the number of minimal conflict sets can be much higher, because all axioms are generated at once. For instance, the ontology alignments identified by most ontology matching systems in the last Ontology Alignment Evaluation Initiative (OAEI) are often incoherent and, in some cases, inconsistent [@Ferrara2011]. The large number of minimal conflict sets makes the application of ontology debugging problematic because of the large number of calls to the reasoner and the memory required by the breadth-first search algorithm. To overcome this problem we suggest a novel ontology debugging approach that computes diagnoses directly, i.e. without precomputing minimal conflict sets.
Direct diagnosis of ontologies {#sec:details}
==============================
The main idea behind the approach is to start with the set ${\mathcal{D}}_0=\emptyset$ and extend it until a subset of ontology axioms ${\mathcal{D}}\subseteq {\mathcal{O}}$ is found such that ${\mathcal{D}}$ is a minimal diagnosis with respect to Definition \[def:diag\]. In the first step Algorithm \[algo:qx\] verifies the input data, i.e. whether the input ontology ${\mathcal{O}}$, the background knowledge ${\mathcal{B}}$, and the positive ${P}$ and negative ${N}$ test cases together constitute a valid diagnosis problem instance $\tuple{{\mathcal{O}},{\mathcal{B}},{P},{N}}$ (Definition \[def:diagproblem\]). Thus it verifies: a) whether the background theory together with the positive and negative test cases is consistent; and b) whether the ontology is faulty. In both cases <span style="font-variant:small-caps;">Inv-QuickXPlain</span> calls the <span style="font-variant:small-caps;">verifyRequirements</span> function, which implements Definition \[def:diag\] and tests whether a given set of axioms ${\mathcal{D}}$ is a diagnosis. The test function requires a reasoner that implements consistency/coherency checking (<span style="font-variant:small-caps;">isConsistent</span>) and can decide whether a set of axioms is entailed by the ontology (<span style="font-variant:small-caps;">entails</span>).
<span style="font-variant:small-caps;">findDiagnosis</span> is the main function of the algorithm; it takes six arguments as input. The values of the arguments ${\mathcal{B}}$, ${\mathcal{O}}$ and ${N}$ remain constant during the recursion and are required only for the verification of the requirements, whereas the values of ${\mathcal{D}}$, $\Delta$ and ${\mathcal{O}}_\Delta$ provide the set of axioms corresponding to the current diagnosis candidate and the two diagnosis sub-problems for the next level of the recursion. The sub-problems are constructed during the execution of <span style="font-variant:small-caps;">findDiagnosis</span> by splitting a given diagnosis problem with the <span style="font-variant:small-caps;">split</span> function. In most implementations <span style="font-variant:small-caps;">split</span> simply partitions the set of axioms into two sets of equal cardinality. The algorithm continues to divide diagnosis problems (<span style="font-variant:small-caps;">findDiagnosis</span>, line 12) until it identifies that the set ${\mathcal{D}}$ is a diagnosis (line 7). In further iterations the algorithm minimizes the diagnosis by splitting it into sub-problems of the form ${\mathcal{D}}= {\mathcal{D}}' \cup {\mathcal{O}}_\Delta$, where ${\mathcal{O}}_\Delta$ contains only one axiom. If ${\mathcal{D}}$ is a diagnosis and ${\mathcal{D}}'$ is not, the algorithm concludes that ${\mathcal{O}}_\Delta$ is a subset of the sought minimal diagnosis. Just like the original algorithm, <span style="font-variant:small-caps;">Inv-QuickXPlain</span> always terminates and returns a minimal diagnosis for a given diagnosis problem instance.
(Algorithm \[algo:qx\], initialization: ${\mathcal{O}}' \leftarrow {\mathcal{O}}\setminus{\mathcal{B}}$; ${\mathcal{B}}' \leftarrow {\mathcal{B}}\cup \bigcup_{{p}\in{P}} {p}$; then call $\generate({\mathcal{B}}', \emptyset, {\mathcal{O}}', {\mathcal{O}}', {N})$.)
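The following Python sketch illustrates the divide-and-conquer recursion described above. It is not a verbatim transcription of Algorithm \[algo:qx\]: the names `find_diagnosis` and `is_faulty` are illustrative, and the reasoner calls of <span style="font-variant:small-caps;">verifyRequirements</span> are abstracted into a single oracle `is_faulty(axioms)` that returns *true* iff the given axioms, together with the (implicit) background theory and test cases, violate the requirements.

```python
def find_diagnosis(ontology, is_faulty):
    """Return one minimal diagnosis D (a list of axioms) such that
    ontology \\ D passes the requirements, via QuickXPlain-style splitting.

    ontology  -- list of axioms of the faulty ontology
    is_faulty -- oracle standing in for verifyRequirements
    """
    if not is_faulty(ontology):
        return []                                  # nothing to repair

    def fd(removed_something, candidates, kept):
        # kept = axioms currently left in the ontology
        if removed_something and not is_faulty(kept):
            return []                              # previous removal already repaired it
        if len(candidates) == 1:
            return list(candidates)                # this axiom must belong to the diagnosis
        half = len(candidates) // 2                # split-in-half
        c1, c2 = candidates[:half], candidates[half:]
        d1 = fd(True, c2, [a for a in kept if a not in c1])
        d2 = fd(bool(d1), c1, [a for a in kept if a not in d1])
        return d1 + d2

    return fd(False, ontology, ontology)
```

In Example \[ex:simple\], `is_faulty` would be implemented with the consistency/coherency and entailment checks of the Description Logic reasoner.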
#### Example \[ex:simple\], continued.
Let us look again at the ontology diagnosis example and show how a diagnosis is computed by <span style="font-variant:small-caps;">Inv-QuickXPlain</span> (see Fig. \[fig:qx\]). The algorithm starts with an empty diagnosis ${\mathcal{D}}=\emptyset$ and ${\mathcal{O}}_\Delta$ containing all axioms of the problem. <span style="font-variant:small-caps;">verifyRequirements</span> returns $false$ since ${\mathcal{B}}\cup ({\mathcal{O}}\setminus \emptyset)$ is inconsistent. Therefore, the algorithm splits ${\mathcal{O}}_\Delta$ into $\setof{{\mathit{ax}}_1,{\mathit{ax}}_2}$ and $\setof{{\mathit{ax}}_3,{\mathit{ax}}_4,{\mathit{ax}}_5}$ and passes the sub-problem to the next level of recursion. Since the set ${\mathcal{D}}=\setof{{\mathit{ax}}_1,{\mathit{ax}}_2}$ is not a diagnosis, that is, the ontology ${\mathcal{B}}\cup ({\mathcal{O}}\setminus {\mathcal{D}})$ is inconsistent, the problem in ${\mathcal{O}}_\Delta$ is split one more time. On the second level of recursion the set ${\mathcal{D}}$ is a diagnosis, although not a minimal one. The function <span style="font-variant:small-caps;">verifyRequirements</span> returns *true* and the algorithm starts to analyze the found diagnosis. Namely, it verifies whether the last extension of the set ${\mathcal{D}}$ is a subset of a minimal diagnosis. Since the extension includes only one axiom ${\mathit{ax}}_3$ and the set $\setof{{\mathit{ax}}_1,{\mathit{ax}}_2}$ without this extension is not a diagnosis, the algorithm concludes that ${\mathit{ax}}_3$ is an element of the target diagnosis. The left-most branch of the recursion tree terminates and returns $\setof{{\mathit{ax}}_3}$. This axiom is added to the set ${\mathcal{D}}$ and the algorithm starts investigating whether the axioms ${\mathit{ax}}_1$ and ${\mathit{ax}}_2$ also belong to a minimal diagnosis. First, it tests the set $\setof{{\mathit{ax}}_3,{\mathit{ax}}_1}$, which is not a diagnosis, and on the next iteration it identifies the correct result $\setof{{\mathit{ax}}_3,{\mathit{ax}}_2}$.
![Recursive calls of <span style="font-variant:small-caps;">Inv-QuickXPlain</span> for the diagnosis problem instance in Example \[ex:simple\]. The background theory ${\mathcal{B}}$, original ontology ${\mathcal{O}}$ and the set of negative test cases ${N}$ remain constant and are therefore omitted. Solid arrows show recursive calls (line 12 – left and line 13 – right) and dashed arrows indicate returns.[]{data-label="fig:qx"}](treeQX.pdf){width=".9\textwidth"}
<span style="font-variant:small-caps;">Inv-QuickXPlain</span> is a deterministic algorithm and returns the same minimal diagnosis if applied twice to the same diagnosis problem instance. In order to obtain different diagnoses, the problem instance has to be modified so that <span style="font-variant:small-caps;">Inv-QuickXPlain</span> identifies the next diagnosis. Therefore, we suggest <span style="font-variant:small-caps;">Inv-HS-Tree</span>, a modification of the <span style="font-variant:small-caps;">HS-Tree</span> algorithm presented in Section \[sec:diag\]. The inverse algorithm labels each node $nd$ of the tree with a minimal diagnosis ${\mathcal{D}}(nd)$. Rules 1, 2 and 4 of the original algorithm remain the same in <span style="font-variant:small-caps;">Inv-HS-Tree</span>, and rule 3 is modified as follows:
- The open node $nd$ is labeled with ${\mathcal{D}}$ if ${\mathcal{D}}$ is a minimal diagnosis for the diagnosis problem instance $\tuple{{\mathcal{O}}, {\mathcal{B}}\cup H(nd),{P},{N}}$ computed by <span style="font-variant:small-caps;">Inv-QuickXPlain</span> (compute)
where $H(nd)$ is the set of edge labels on the path from the root to $nd$. In this case the elements of $H(nd)$ are axioms of the minimal diagnoses that were used as labels of the nodes on the path. Adding an axiom ${\mathit{ax}}_k$ of a minimal diagnosis ${\mathcal{D}}_i$ to the background theory forces <span style="font-variant:small-caps;">Inv-QuickXPlain</span> to search for a minimal diagnosis that suggests the modification of axioms other than ${\mathit{ax}}_k$. That is, the diagnosis ${\mathcal{D}}_i$ will not be rediscovered by the direct diagnosis algorithm.
A modified update procedure is another important feature of <span style="font-variant:small-caps;">Inv-HS-Tree</span>. In the diagnosis discrimination setting the ontology debugger acquires new knowledge that can invalidate some of the diagnoses used as labels of the tree nodes. During the tree update <span style="font-variant:small-caps;">Inv-HS-Tree</span> searches for the nodes with invalid labels. Given such a node, the algorithm removes its label and places it in the list of open nodes. Moreover, the algorithm removes all nodes of the subtree originating from this node. After all nodes with invalid labels have been cleaned up, the algorithm attempts to reconstruct the tree by reusing the remaining valid minimal diagnoses (rule 2, <span style="font-variant:small-caps;">HS-Tree</span>). Such aggressive pruning of the tree is feasible since a) the tree never contains more than $n$ nodes that were computed with <span style="font-variant:small-caps;">Inv-QuickXPlain</span> and b) computing a possible modification of a minimal diagnosis that would restore its validity requires an invocation of <span style="font-variant:small-caps;">Inv-QuickXPlain</span> and is therefore as hard as computing a new diagnosis. Note also that in a common diagnosis discrimination setting $n$ is often set to a small number, e.g. $10$, in order to achieve good responsiveness of the system. Consequently, in this setting the size of the tree remains small. The latter is another advantage of the direct method, as it requires much less memory than a debugger based on the breadth-first strategy.
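A compact sketch of this enumeration scheme, reusing the `find_diagnosis` function from the previous sketch, could look as follows. The names are again illustrative, and the interactive update and pruning steps are omitted; only the core mechanism is shown: the axioms in $H(nd)$ are treated as background and can no longer be removed, so a previously found diagnosis cannot be returned again at a successor node.

```python
from collections import deque

def enumerate_diagnoses(ontology, is_faulty, n_max):
    """Breadth-first enumeration of up to n_max minimal diagnoses in the
    spirit of Inv-HS-Tree; every node is identified by the set H(nd) of
    edge labels (axioms) on its path from the root."""
    diagnoses, open_nodes = [], deque([frozenset()])
    while open_nodes and len(diagnoses) < n_max:
        h = open_nodes.popleft()
        # Rule 2 (reuse): a known diagnosis that avoids H(nd) can label the node.
        label = next((d for d in diagnoses if not (set(d) & h)), None)
        if label is None:
            label = diagnosis_with_protected_axioms(ontology, h, is_faulty)
            if label is None:          # no diagnosis exists for this node: close it
                continue
            diagnoses.append(label)
        for ax in label:               # one successor per axiom of the label
            open_nodes.append(h | {ax})
    return diagnoses

def diagnosis_with_protected_axioms(ontology, protected, is_faulty):
    """Direct diagnosis of the instance in which the axioms in `protected`
    (that is, H(nd)) are moved to the background and cannot be removed."""
    removable = [ax for ax in ontology if ax not in protected]
    oracle = lambda kept: is_faulty(list(kept) + list(protected))
    if oracle([]):                     # faulty even if every removable axiom is deleted
        return None
    return find_diagnosis(removable, oracle)
```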
#### Example \[ex:simple\], continued.
Applied to the sample diagnosis problem instance, the direct ontology debugger computes two minimal diagnoses $\diag{{\mathit{ax}}_2, {\mathit{ax}}_3}$ and $\diag{{\mathit{ax}}_3, {\mathit{ax}}_4}$ in the first iteration (see Fig. \[fig:dual:ex\]). For these diagnoses the discrimination method identifies the query $E(w)$, which is answered *yes* by the oracle. In this case the label of the root node becomes invalid. Consequently, the algorithm removes the label of the root, deletes its subtree and places the root in the list of open nodes. Next, according to rule 2, the valid minimal diagnosis ${\mathcal{D}}_2$ is reused to label the root. Given the diagnosis problem instance $\tuple{{\mathcal{O}}, {\mathcal{B}}\cup \setof{{\mathit{ax}}_3}, {P}, {N}}$, <span style="font-variant:small-caps;">Inv-QuickXPlain</span> computes the last minimal diagnosis ${\mathcal{D}}_3$. Given the positive answer to the query $B(v)$, the algorithm labels both open nodes with $\checkmark$ and returns ${\mathcal{D}}_2$ as the result. Moreover, the labels on the edges of the tree correspond to the minimal conflicts $\tuple{{\mathit{ax}}_3}$ and $\tuple{{\mathit{ax}}_4}$ of the final diagnosis problem instance.
Evaluation {#sec:eval}
==========
We evaluated the direct ontology debugging technique using aligned ontologies generated in the framework of OAEI 2011 [@Ferrara2011]. These ontologies represent a real-world scenario in which a user generates ontology alignments by means of some (semi-)automatic tools. In this case the size and configuration of the minimal conflict sets might be substantially different from those encountered in a manual ontology development process, e.g. [@Kalyanpur.Just.ISWC07; @Shchekotykhin2012]. In the first experiment we demonstrate that <span style="font-variant:small-caps;">Inv-HS-Tree</span> is able to identify minimal diagnoses in cases where <span style="font-variant:small-caps;">HS-Tree</span> fails. The second test shows that the direct diagnosis approach is scalable and can be applied to ontologies comprising thousands of axioms.
The ontology matching problem can be formulated as follows: given two ontologies ${\mathcal{O}}_i$ and ${\mathcal{O}}_j$, the goal of the ontology matching system is to identify a set of alignments $M_{ij}$. Each element of this set is a tuple $\tuple{x_i,x_j,r,v}$, where $x_i \in Q({\mathcal{O}}_i)$, $x_j \in Q({\mathcal{O}}_j)$, $r$ is a semantic relation and $v$ is a confidence value. $Q({\mathcal{O}})$ denotes the set of all matchable elements of an ontology ${\mathcal{O}}$, such as classes or properties. The result of the ontology matching process is the aligned ontology ${\mathcal{O}}_{ij} = {\mathcal{O}}_i \cup M_{ij} \cup {\mathcal{O}}_j$. In the ontologies used in both experiments, only classes and properties were considered as matchable elements, and the set of relations was limited to $r \in \setof{\sqsubseteq, \equiv, \sqsupseteq}$.
![Time required to compute 1, 9 and 30 diagnoses by <span style="font-variant:small-caps;">HS-Tree</span> and <span style="font-variant:small-caps;">Inv-HS-Tree</span> for the Conference problem.[]{data-label="fig:solvable"}](solvable.pdf){width=".8\textwidth"}
In the first experiment we applied the debugging technique to the set of aligned ontologies resulting from the “Conference” set of problems, which is characterized by a lower precision and recall of the applied systems (best F-measure 0.65) in comparison to, for instance, the “Anatomy” problem (average F-measure 0.8)[^3]. The Conference test suite[^4] includes 286 ontology alignments generated by 14 ontology matching systems. We tested all the ontologies of the suite and found that: a) 140 ontologies are consistent and coherent; b) 122 ontologies are incoherent; c) 26 ontologies are inconsistent; and in 8 cases HermiT [@Motik2009] was unable to finish the classification in two hours[^5]. The results show that only two of the 14 systems, `CODI` and `MaasMtch`, were able to generate consistent and coherent alignments. This observation confirms the importance of high-performance ontology debugging methods.
The 146 ontologies of cases b) and c) were analyzed with both <span style="font-variant:small-caps;">HS-Tree</span> and <span style="font-variant:small-caps;">Inv-HS-Tree</span>. For each of the ontologies the system computed 1, 9 and 30 leading minimal diagnoses. The results of the experiment, presented in Fig. \[fig:solvable\], show that for 133 ontologies out of 146 both approaches were able to compute the required number of diagnoses. In the experiment where only 1 diagnosis was requested, the direct approach outperforms <span style="font-variant:small-caps;">HS-Tree</span>, as expected. In the next two experiments the time difference between the approaches decreases. However, the direct approach was able to avoid a rapid increase of the computation time for very hard cases.
**Matcher**   **Ontology 1**   **Ontology 2**   **1 Diag**   **9 Diags**   **30 Diags**
------------- ---------------- ---------------- ------------ ------------- --------------
ldoa          cmt              edas             2540         6919          15470
csa           conference       edas             3868         14637         39741
ldoa          ekaw             iasted           10822        71820         229728
mappso        edas             iasted           11824        89707         293746
csa           edas             iasted           15439        134049        377361
csa           conference       ekaw             11257        31010         62823
ldoa          cmt              ekaw             5602         19730         42284
ldoa          conference       confof           8291         23576         48062
ldoa          conference       ekaw             7926         27324         56988
mappso        conference       ekaw             11394        33763         70469
mappso        confof           ekaw             9422         25921         55667
optima        conference       ekaw             11108        29837         62131
optima        confof           ekaw             7424         22506         44528
------------- ---------------- ---------------- ------------ ------------- --------------
: Ontologies diagnosable only with <span style="font-variant:small-caps;">Inv-HS-Tree</span>.[]{data-label="tab:unsolv30"}
In the 13 cases presented in Table \[tab:unsolv30\], <span style="font-variant:small-caps;">HS-Tree</span> was unable to find all requested diagnoses in each of the experiments. Within 2 hours the algorithm calculated only 1 diagnosis for `csa-conference-ekaw`, and for `ldoa-conference-confof` it was able to find 1 and 9 diagnoses. The results of <span style="font-variant:small-caps;">Inv-HS-Tree</span> are comparable with those presented in Fig. \[fig:solvable\]. This experiment shows that direct diagnosis is a stable and practically applicable method even in cases where an ontology matching system outputs results of only moderate quality.
**Matcher** **Ontology 1** **Ontology 2** **Scoring** **Time (ms)** **\#Query** **React (ms)** **\#CC** **CC (ms)**
------------- ---------------- ---------------- ------------- --------------- ------------- ---------------- ---------- -------------
ldoa conference confof ENT 11624 6 1473 430 3
ldoa conference confof SPL 11271 7 1551 365 4
ldoa cmt ekaw ENT 48581 21 2223 603 16
ldoa cmt ekaw SPL 139077 49 2778 609 54
mappso confof ekaw ENT 9987 5 1876 341 7
mappso confof ekaw SPL 31567 13 2338 392 21
optima conference ekaw ENT 16763 5 2553 553 8
optima conference ekaw SPL 16055 8 1900 343 12
optima confof ekaw ENT 23958 20 1137 313 14
optima confof ekaw SPL 17551 10 1698 501 6
ldoa conference ekaw ENT 56699 35 1458 253 53
ldoa conference ekaw SPL 25532 9 2742 411 16
csa conference ekaw ENT 6749 2 2794 499 3
csa conference ekaw SPL 22718 8 2674 345 20
mappso conference ekaw ENT 27451 13 1859 274 28
mappso conference ekaw SPL 70986 16 4152 519 41
ldoa cmt edas ENT 24742 22 1037 303 8
ldoa cmt edas SPL 11206 7 1366 455 2
csa conference edas ENT 18449 6 2736 419 5
csa conference edas SPL 240804 37 6277 859 36
csa edas iasted ENT 1744615 3 349247 1021 1333
csa edas iasted SPL 7751914 8 795497 577 11497
ldoa ekaw iasted ENT 23871492 10 1885975 287 72607
ldoa ekaw iasted SPL 20448978 9 2100123 517 37156
mappso edas iasted ENT 18400292 5 2028276 723 17844
mappso edas iasted SPL 159298994 11 13116596 698 213210
: Diagnosis discrimination using direct ontology debugging. **Scoring** denotes the query selection strategy, **React** the system reaction time between queries, **\#CC** the number of consistency checks, and **CC** the average time needed for one consistency check.[]{data-label="tab:querysessionsinvhstree"}
Moreover, in the first experiment we evaluated the efficiency of the interactive direct debugging approach applied to the cases listed in Table \[tab:unsolv30\]. In order to select the target diagnosis we searched for all possible minimal diagnoses of the following diagnosis problem instance $\tuple{M_{f}, {\mathcal{O}}_i \cup {\mathcal{O}}_j \cup M_t, \emptyset, \emptyset}$, where $M_{f}$ and $M_t$ are the sets of *false* and *true* positive alignments. Both sets can be computed from the set of correct alignments $M_c$, provided by the organizers of OAEI 2011, and the set $M_{ij}$ generated by an ontology matching system, as $M_{f} = M_{ij} \setminus M_c$ and $M_t = M_{ij} \cap M_c$. From this set of diagnoses we chose one diagnosis at random as the target. In the experiment the prior fault probability of an alignment axiom was assumed to be $1-v$, where $v$ is the confidence value of the ontology matching system that the alignment is correct. Moreover, all axioms of both ontologies ${\mathcal{O}}_i$ and ${\mathcal{O}}_j$ were assumed to be correct and were assigned small fault probabilities.
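In code, this construction of the evaluation instance could look as follows (a sketch; representing the matcher's output as a map from alignment axioms to confidence values is an assumption made only for illustration):

```python
def build_target_selection_dpi(m_ij, m_c, o_i, o_j):
    """Build <M_f, O_i u O_j u M_t, {}, {}> used to pick a random target
    diagnosis, together with the prior fault probabilities 1 - v.

    m_ij -- dict mapping each alignment axiom produced by the matcher to its
            confidence value v (illustrative representation)
    m_c  -- set of correct (reference) alignment axioms provided by OAEI
    """
    m_f = set(m_ij) - set(m_c)                 # false positive alignments
    m_t = set(m_ij) & set(m_c)                 # true positive alignments
    background = set(o_i) | set(o_j) | m_t
    dpi = (m_f, background, set(), set())      # <O, B, P, N>
    fault_prob = {ax: 1.0 - m_ij[ax] for ax in m_f}
    return dpi, fault_prob
```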
The results presented in Table \[tab:querysessionsinvhstree\] were computed using both the split-in-half (SPL) and the entropy (ENT) measure for query selection, for the diagnosis problem instance $\tuple{M_{ij}, {\mathcal{O}}_i \cup {\mathcal{O}}_j, \emptyset, \emptyset}$. The entropy measure solved the problem more efficiently because it exploits the information provided by the ontology matcher in terms of confidence values. The experiment also shows that the efficiency of any debugging method depends highly on the ability of the underlying reasoner to classify the ontology. Note that a comparison of the suggested debugging technique with the ones built into ontology matching systems such as CODI [@noessner2010] or LogMap [@Jimenez-Ruiz2011] is inappropriate, since these systems use greedy diagnosis techniques (e.g. [@MeilickeStuck2009]), whereas the method presented in this paper is complete. However, the results presented in Table \[tab:unsolv30\] as well as in Fig. \[fig:solvable\] indicate that the suggested approach can find one minimal diagnosis in 25 seconds on average, which is comparable with the times of the mentioned systems.
In the second evaluation scenario we applied the direct method to the unsatisfiable ontologies generated for the Anatomy problem. The source ontologies ${\mathcal{O}}_1$ and ${\mathcal{O}}_2$ include 11545 and 4838 axioms respectively, whereas the size of the alignments varies between 1147 and 1461 axioms. The diagnosis selection process was performed in the same way as in the first experiment, i.e. we randomly selected one of the diagnoses of the instance $\tuple{M_{f}, {\mathcal{O}}_1 \cup {\mathcal{O}}_2 \cup M_t, \emptyset, \emptyset}$. The tests were performed for the diagnosis problem instance $\tuple{M_{12}, {\mathcal{O}}_1 \cup {\mathcal{O}}_2, \emptyset, \emptyset}$ for 7 of the 12 systems. We excluded the results of CODI, CSA, MaasMtch, MapEVO and Aroma, because CODI produced coherent alignments and the output of the other systems was not classifiable within 2 hours. The results of the experiment show that the target diagnosis can be computed within 40 seconds on average. Moreover, <span style="font-variant:small-caps;">Inv-HS-Tree</span> slightly outperformed <span style="font-variant:small-caps;">HS-Tree</span>.
**Matcher** **Scoring** **<span style="font-variant:small-caps;">Inv-HS-Tree</span>** **<span style="font-variant:small-caps;">HS-Tree</span>**
------------- ------------- --------------------------------------------------------------- -----------------------------------------------------------
AgrMaker ENT 19.62 20.833
AgrMaker SPL 36.035 36.034
GOMMA-bk ENT 18.343 14.472
GOMMA-bk SPL 18.946 19.512
GOMMA-nobk ENT 18.261 14.255
GOMMA-nobk SPL 18.738 19.473
Lily ENT 78.537 82.524
Lily SPL 82.944 115.242
LogMap ENT 6.595 13.406
LogMap SPL 6.607 15.133
LogMapLt ENT 14.847 12.888
LogMapLt SPL 15.589 17.45
MapSSS ENT 81.064 56.169
MapSSS SPL 88.316 77.585
: Scalability test for <span style="font-variant:small-caps;">Inv-HS-Tree</span>, time given in seconds.[]{data-label="tab:scalability"}
Conclusions {#sec:conc}
===========
In this paper we present an approach to the direct computation of diagnoses for ontology debugging. By avoiding the computation of conflict sets, the algorithms suggested in the paper are able to diagnose ontologies for which the standard model-based diagnosis technique fails. Moreover, we show that the method can also be used with diagnosis discrimination algorithms, thus allowing interactive ontology debugging. The experimental results presented in the paper indicate that the performance of a system using the direct computation of diagnoses is comparable with or better than that of the existing approach in settings where faulty ontologies are generated by ontology matching or learning systems. The scalability of the algorithms was demonstrated on a set of large ontologies comprising thousands of axioms.
[^1]: The research project is funded by the Austrian Science Fund (Project V-Know, contract 19996).
[^2]: In the following we assume that the user intends to formulate only one ontology. In the paper we refer to the intended ontology as the target ontology.
[^3]: see http://oaei.ontologymatching.org/2011.5/results/index.html for preliminary results of the evaluation based on reference alignments
[^4]: All ontologies used in the evaluation can be downloaded from http://code.google.com/p/rmbd/wiki/DirectDiagnosis. The webpage also contains tables presenting detailed results of the experiment presented in Fig. \[fig:solvable\].
[^5]: The tests were executed on a Core i7 (3930K) 3.2 GHz machine with 32 GB RAM, running Ubuntu Server 11.04, Java 6 and HermiT 1.3.6.
---
abstract: 'We give a detailed analysis of long range cumulative scattering effects from rough boundaries in waveguides. We assume small random fluctuations of the boundaries and obtain a quantitative statistical description of the wave field. The method of solution is based on coordinate changes that straighten the boundaries. The resulting problem is similar from the mathematical point of view to that of wave propagation in random waveguides with interior inhomogeneities. We quantify the net effect of scattering at the random boundaries and show how it differs from that of scattering by internal inhomogeneities.'
author:
- 'Ricardo Alonso, Liliana Borcea and Josselin Garnier'
title: Wave propagation in waveguides with random boundaries
---
**Keywords:** Waveguides, random media, asymptotic analysis.

**AMS subject classifications:** 76B15, 35Q99, 60F05.
Introduction {#sect:intro}
============
We consider acoustic waves propagating in a waveguide with axis along the range direction $z$. In general, the waveguide effect may be due to boundaries or the variation of the wave speed with cross-range, as described for example in [@kohler77; @gomez]. We consider here only the case of waves trapped by boundaries, and take for simplicity the case of two dimensional waveguides with cross-section ${\mathcal D}$ given by a bounded interval of the cross-range $x$. The results extend to three dimensional waveguides with bounded, simply connected cross-section ${\mathcal D} \subset \mathbb{R}^2$.
The pressure field $p(t,x,z)$ satisfies the wave equation $$\label{we}
\left[\partial^2_{z} + \partial^2_x
- \frac{1}{c^2(x)} \partial^2_t \right]p(t,x,z) = F(t,x,z) \, ,$$ with wave speed $c(x)$ and source excitation modeled by $F(t,x,z)$. Since the equation is linear, it suffices to consider a point-like source located at $(x_0,z=0)$ and emitting a pulse signal $f(t)$, $$F(t,x,z) = f(t) \delta(x - x_0 ) \delta(z) \,.
\label{eq:source}$$ Solutions for distributed sources are easily obtained by superposing the wave fields computed here.
The boundaries of the waveguide are rough in the sense that they have small variations around the values $x = 0$ and $x = X$, on a length scale comparable to the wavelength. Explicitly, we let $$B(z) \le x \le T(z) \, , \quad \mbox{where} ~~ |B(z)| \ll X, ~ ~
|T(z)-X| \ll X,
\label{eq:boundaries}$$ and take either Dirichlet boundary conditions $$p(t,x,z) = 0 \,, \quad \mbox{for} ~ x = B(z) ~~\mbox{and} ~~x = T(z),
\label{eq:Dirichlet}$$ or mixed, Dirichlet and Neumann conditions $$p(t,x= B(z),z) = 0\,, \quad \frac{\partial}{\partial n} p(t,x=T(z),z) = 0\,,
\label{eq:mixed}$$ where $n$ is the unit normal to the boundary $x = T(z)$.
The goal of the paper is to quantify the long range effect of scattering at the rough boundaries, that is, to characterize in detail the statistics of the random field $p(t,x,z)$. This is useful in sensor array imaging, for designing robust source or target localization methods, as shown recently in [@borcea] in waveguides with internal inhomogeneities. Examples of other applications are long range secure communications and time reversal in shallow water or in tunnels [@garnier_papa; @kuperman].
The paper is organized as follows. We begin in section \[sect:homog\] with the case of ideal waveguides, with straight boundaries $B(z) = 0$ and $T(z) = X$, where energy propagates via guided modes that do not interact with each other. Rough, randomly perturbed boundaries are introduced in section \[sect:rand\]. The wave speed is assumed to be known and dependent only on the cross-range. Randomly perturbed wave speeds due to internal inhomogeneities are considered in detail in [@kohler77; @kohler_wg77; @dozier; @garnier_papa; @book07]. Our approach in section \[sect:rand\] uses changes of coordinates that straighten the randomly perturbed boundaries. We carry out the analysis in detail for the case of Dirichlet boundary conditions in sections \[sect:rand\] and \[sect:diffusion\], and discuss the results in section \[sect:comparisson\]. The extension to the mixed boundary conditions is presented in section \[sect:mixed\]. We end in section \[sect:summary\] with a summary.
Our approach based on changes of coordinates that straighten the boundary leads to a transformed problem that is similar from the mathematical point of view to that in waveguides with interior inhomogeneities, so we can use the techniques from [@kohler77; @kohler_wg77; @dozier; @garnier_papa; @book07] to obtain the long range statistical characterization of the wave field in section \[sect:diffusion\]. However, the cumulative scattering effects of rough boundaries are different from those of internal inhomogeneities, as described in section \[sect:comparisson\]. We quantify these effects by estimating in a high frequency regime three important, mode dependent length scales: the scattering mean free path, which is the distance over which the modes lose coherence, the transport mean free path, which is the distance over which the waves forget the initial direction, and the equipartition distance, over which the energy is uniformly distributed among the modes, independently of the initial conditions at the source. We show that the random boundaries affect most strongly the high order modes, which lose coherence rapidly, that is, they have a short scattering mean free path. Furthermore, these modes do not exchange energy efficiently with the other modes, so they have a longer transport mean free path. The lower order modes can travel much longer distances before they lose their coherence and, remarkably, their scattering mean free path is similar to the transport mean free path and to the equipartition distance. That is to say, in waveguides with random boundaries, when the waves travel distances that exceed the scattering mean free path of the low order modes, not only are all the modes incoherent, but the energy is also uniformly distributed among them. At such distances the wave field has lost all information about the cross-range location of the source in the waveguide. These results can be contrasted with the situation in waveguides with interior random inhomogeneities, in which the main mechanism for the loss of coherence of the fields is the exchange of energy between neighboring modes [@kohler77; @kohler_wg77; @dozier; @garnier_papa; @book07], so the scattering mean free paths and the transport mean free paths are similar for all the modes. The low order modes lose coherence much faster than in waveguides with random boundaries, and the equipartition distance is longer than the scattering mean free path of these modes.
Ideal waveguides {#sect:homog}
================
Ideal waveguides have straight boundaries $x = 0$ and $x = X$. Using separation of variables, we write the wave field as a superposition of waveguide modes. A waveguide mode is a monochromatic wave $P(t,x,z) =
{\widehat}{P}(\omega,x,z) e^{- i \omega t}$ with frequency $\omega$, where ${\widehat}{P}(\omega,x,z)$ satisfies the Helmholtz equation $$\label{eqe0}
\left[{\partial_z^2} + {\partial_x^2} + {\omega^2}/{c^{2}(x)} \right]
{\widehat}{P}( \omega,x,z) = 0 \, , \quad z \in \mathbb{R}, ~ x \in (0,X),$$ and either Dirichlet or mixed, Dirichlet and Neumann homogeneous boundary conditions. The operator $\partial^2_x + \om^2/ c^{2}(x)$ with either of these conditions is self-adjoint in $L^2(0,X)$, and its spectrum consists of an infinite number of discrete eigenvalues $\{\lambda_j(\om)\}_{j \ge 1}$, assumed sorted in descending order. There is a finite number $N(\omega)$ of positive eigenvalues and an infinite number of negative eigenvalues. The eigenfunctions $\phi_j(\omega,x)$ are real and form an orthonormal set $$\int_0^X dx \, { \phi_j(\omega,x)} \phi_l(\omega,x) = \delta_{jl}
\, , \quad j,l \ge 1,
\label{eq:orthog}$$ where $\delta_{jl}$ is the Kronecker delta symbol.
For example, in homogeneous waveguides with $c(x) = c_o$, and for the Dirichlet boundary conditions, the eigenfunctions and eigenvalues are $$\label{eq:1}
\phi_j(x) = \sqrt{\frac{2}{X}} \mbox{sin} \left( \frac{\pi j x}{X}
\right), \qquad \lambda_j(\om) = \left(\frac{\pi}{X}\right)^2 \left[
(kX/\pi)^2 - j^2 \right], \quad \quad j = 1, 2, \ldots$$ and the number of propagating modes is $ N(\om) = \left \lfloor
{k X}/{\pi} \right \rfloor$, where $\lfloor y \rfloor$ is the integer part of $y$ and $k = \om/c_o$ is the homogeneous wavenumber.
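As a quick numerical illustration (not needed for the analysis), the number of propagating modes and the modal wavenumbers $\beta_j(\om)=\sqrt{\lambda_j(\om)}$ of this homogeneous Dirichlet waveguide can be computed as follows; the values of $\om$, $c_o$ and $X$ in the example call are arbitrary.

```python
import numpy as np

def propagating_modes(omega, c0, X):
    """Number of propagating modes N = floor(k X / pi) and the modal
    wavenumbers beta_j for a homogeneous waveguide with Dirichlet
    boundaries, following lambda_j = (pi/X)^2 [(k X/pi)^2 - j^2]."""
    k = omega / c0
    N = int(np.floor(k * X / np.pi))
    j = np.arange(1, N + 1)
    lam = (np.pi / X) ** 2 * ((k * X / np.pi) ** 2 - j ** 2)
    return N, np.sqrt(lam)

# Arbitrary illustrative values: omega = 2*pi*100 rad/s, c0 = 1500 m/s, X = 200 m.
N, beta = propagating_modes(2 * np.pi * 100.0, 1500.0, 200.0)
print(N, beta[:3])
```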
To simplify the analysis, we assume that the source emits a pulse $f(t)$ with Fourier transform $${\widehat}{f}(\omega)= \int_{-\infty}^\infty dt \, e^{i \omega t} f(t)\,,$$ supported in a frequency band in which the number of positive eigenvalues is fixed, so we can set $N(\om) = N$. We also assume that there is no zero eigenvalue, and that the eigenvalues are simple. The positive eigenvalues define the modal wavenumbers $\beta_j(\omega)=\sqrt{\lambda_j(\omega)}$ of the forward and backward propagating modes $${\widehat}{P}_j(\omega,x,z) = \phi_j(\omega,x) e^{ \pm i
\beta_j(\omega) z}, \quad j = 1, \ldots, N.$$ The infinitely many remaining modes are evanescent $${\widehat}{P}_j(\omega,x,z) = \phi_j(\omega
,x) e^{ - \beta_j(\omega) |z|}, \quad j > N\,,$$ with wavenumber $\beta_j (\omega)= \sqrt{-\lambda_j(\omega)}\, $.
The wave field $p(t,x,z)$ due to the source located at $(x_0,0)$ is given by the superposition of ${\widehat}{P}_j(\omega,x,z)$, $$\begin{aligned}
{p}(t,x,z) = \int \frac{d \om}{2 \pi} e^{-i \om t} \left[ \sum_{j=1}^N
\frac{{\widehat}{a}_{j,o} (\omega)}{\sqrt{\beta_j(\omega)}} e^{i\beta_j
(\omega)z} \phi_j(\omega,x) + \sum_{j=N+1}^\infty \frac{{\widehat}{e}_{j,o}
(\omega)}{\sqrt{\beta_j(\omega)}} e^{- \beta_j(\omega) z}
\phi_j(\omega,x) \right] {\bf 1}_{(0,\infty)}(z) + \\ \int \frac{d
\om}{2 \pi} e^{-i \om t} \left[ \sum_{j=1}^N \frac{{\widehat}{a}_{j,o}^{\,
-} (\omega)}{\sqrt{\beta_j(\omega)}} e^{-i\beta_j (\omega) z}
\phi_j(\omega,x) + \sum_{j=N+1}^\infty \frac{{\widehat}{e}_{j,o}^{\, -}
(\omega)}{\sqrt{\beta_j(\omega)}} e^{ \beta_j(\omega) z}
\phi_j(\omega,x) \right] {\bf 1}_{(-\infty,0)}(z) \, .\end{aligned}$$ The first term is supported at positive range, and it consists of forward going modes with amplitudes ${\widehat}a_{j,o}/\sqrt{\beta_j}$ and evanescent modes with amplitudes ${\widehat}e_{j,o}/\sqrt{\beta_j}$. The second term is supported at negative range, and it consists of backward going and evanescent modes. The modes do not interact with each other and their amplitudes $$\begin{aligned}
{\widehat}{a}_{j,o}(\omega) &=& {\widehat}{a}_{j,o}^{\, -}(\omega) = \frac{{\widehat}f(\om)}{2i \sqrt{\beta_j(\omega)}} { \phi_j(\omega,x_0 )}\,, \quad j =
1, \ldots, N, \nonumber \\ {\widehat}{e}_{j,o}(\omega) &=&
{\widehat}{e}_{j,o}^{\,-}(\omega) = - \frac{{\widehat}f(\om)}{2
\sqrt{\beta_j(\omega)}} { \phi_j(\omega,x_0 )}\,, \quad j > N,
\label{eq:idealab} \end{aligned}$$ are determined by the source excitation (\[eq:source\]), which gives the jump conditions at $z=0$, $$\begin{aligned}
&&{\widehat}{p}(\omega, x,z=0^+)-{\widehat}{p}(\omega, x,z=0^-)= 0 \, , \nonumber\\ &&
{\partial_z {\widehat}{p}} (\omega,x, z=0^+) - {\partial_z {\widehat}{p}}
(\omega,x,z=0^-) = {\widehat}{f}(\omega) \delta( x - x_0 )\, .\end{aligned}$$
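For completeness, let us indicate how (\[eq:idealab\]) follows from these conditions. Projecting them on the eigenfunction $\phi_j(\om,x)$ gives ${\widehat}a_{j,o} = {\widehat}a_{j,o}^{\,-}$ from the continuity of the field at $z = 0$, and $$i \sqrt{\beta_j(\om)} \left[ {\widehat}a_{j,o}(\om) + {\widehat}a_{j,o}^{\,-}(\om) \right] = {\widehat}f(\om)\, \phi_j(\om,x_0)\,, \quad j = 1, \ldots, N,$$ from the jump of the range derivative, so that ${\widehat}a_{j,o}(\om) = {\widehat}f(\om) \phi_j(\om,x_0)/[2 i \sqrt{\beta_j(\om)}]$. The evanescent amplitudes in (\[eq:idealab\]) are obtained in the same way.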
We show next how to use the solution in the ideal waveguides as a reference for defining the wave field in the case of randomly perturbed boundaries.
Waveguides with randomly perturbed boundaries {#sect:rand}
=============================================
We consider a randomly perturbed section of an ideal waveguide, over the range interval $z\in [0,L/\eps^2]$. There are no perturbations for $z < 0$ and $z > L/\eps^2$. The domain of the perturbed section is denoted by $$\label{form:waveguide}
\Omega^\eps = \big\{ (x,z) \in \RR^2, ~ B(z) \leq x \leq T(z), ~ 0 <
z < L/\eps^2 \big\} \, ,$$ where $$B(z) = \eps X \mu(z)\,, \quad \quad T(z) =X[1 +\eps \nu(z)]\,, \qquad
\eps \ll 1.
\label{eq:defBT}$$ Here $\nu$ and $\mu$ are independent, zero-mean, stationary and ergodic random processes in $z$, with covariance function $$\cR_\nu(z) = \EE[ \nu(z+s) \nu(s) ] \quad \mbox{and} ~ ~
\cR_\mu(z) = \EE[ \mu(z+s) \mu(s) ].$$ We assume that $\nu(z)$ and $\mu(z)$ are bounded, at least twice differentiable with bounded derivatives, and have enough decorrelation[^1]. The covariance functions are normalized so that $\cR_\nu(0)$ and $\cR_\mu(0)$ are of order one, and the magnitude of the fluctuations is scaled by the small, dimensionless parameter $\eps$.
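Although such realizations are not needed for the analysis below, a simple way to generate numerically boundary fluctuations that are stationary, smooth and bounded is a superposition of cosines with random wavenumbers and phases (a sketch):

```python
import numpy as np

def smooth_stationary_fluctuation(z, corr_length, n_terms=64, seed=None):
    """Zero-mean, unit-variance, smooth stationary process: a random
    superposition of cosines whose covariance is approximately Gaussian
    with correlation length corr_length."""
    rng = np.random.default_rng(seed)
    kj = rng.normal(scale=1.0 / corr_length, size=n_terms)   # random wavenumbers
    ph = rng.uniform(0.0, 2.0 * np.pi, size=n_terms)         # random phases
    return np.sqrt(2.0 / n_terms) * np.cos(np.outer(z, kj) + ph).sum(axis=1)

# Boundaries B(z) = eps*X*mu(z), T(z) = X*(1 + eps*nu(z)) on a grid, with
# illustrative values eps = 0.05, X = 1.0 and correlation length 2.0.
z = np.linspace(0.0, 100.0, 2001)
eps, X = 0.05, 1.0
mu = smooth_stationary_fluctuation(z, corr_length=2.0, seed=1)
nu = smooth_stationary_fluctuation(z, corr_length=2.0, seed=2)
B, T = eps * X * mu, X * (1.0 + eps * nu)
```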
That the random fluctuations are confined to the range interval $z \in
(0,L/\eps^2)$, with $L$ an order one length scale, can be motivated as follows: By the hyperbolicity of the wave equation, we know that if we observe $p(t,x,z)$ over a finite time window $t \in (0,T^\eps)$, the wave field is affected only by the medium within a finite range $L^\eps$ from the source, directly proportional to the observation time $T^\eps$. We wish to choose $T^\eps$ large enough, in order to capture the cumulative long range effects of scattering from the randomly perturbed boundaries. It turns out that these effects become significant over time scales of order $1/\eps^2$, so we take $L^\eps =
L/\eps^2$. Furthermore, we are interested in the wave field to the right of the source, at positive range. We will see that the backscattered field is small and can be neglected when the conditions of the forward scattering approximation are satisfied (see Subsection \[secforward\]). Thus, the medium on the left of the source has negligible influence on $p(t,x,z)$ for $z > 0$, and we may suppose that the boundaries are unperturbed at negative range. The analysis can be carried out when the conditions of the forward scattering approximation are not satisfied, at considerable complication of the calculations, as was done in [@garnier_solna] for waveguides with internal inhomogeneities.
We assume here and in sections \[sect:diffusion\] and \[sect:comparisson\] the Dirichlet boundary conditions . The extensions to the mixed boundary conditions are presented in section \[sect:mixed\]. The main result of this section is a closed system of random differential equations for the propagating waveguide modes, which describes the cumulative effect of scattering of the wave field by the random boundaries. We derive it in the following subsections and we analyze its solution in the long range limit in section \[sect:diffusion\].
Change of coordinates
---------------------
We reformulate the problem in the randomly perturbed waveguide region $\Omega^\eps$ by changing coordinates that straighten the boundaries, $$\label{eq:6}
x = B(z) + \left[ T(z) - B(z) \right] \frac{\xi}{X}\,, \quad \xi \in
[0,X].$$ We take this coordinate change because it is simple, but we show later, in section \[sect:indc\], that the result is independent of the choice of the change of coordinates. In the new coordinate system, let $$u(t,\xi,z) = p\left(t,B(z) + \left[ T(z) - B(z) \right] \frac{\xi}{X},
z\right)\,, \qquad p(t,x,z) = u\left( t , \frac{(x-B(z))X}{T(z)-B(z)}
,z \right)\,.
\label{eq:defu}$$ We obtain using the chain rule that the Fourier transform ${\widehat}u(\om,\xi,z)$ satisfies the equation $$\begin{aligned}
\nonumber \partial_z^2 {\widehat}{u} + \frac{\left[1 + \left[ (X-\xi)B' +
\xi T'\right]^2 \right]}{(T-B)^2} X^2 \partial_\xi^2 {\widehat}{u} -
\frac{2[(X-\xi)B'+ \xi T']}{T-B}X \partial^2_{\xi z} {\widehat}{u} + \\
\nonumber \left\{ \frac{2 B'(T'-B') }{(T-B)^2} -\frac{B''}{T-B} + \frac{\xi}{X}
\left[ 2 \left(\frac{T'-B'}{T-B}\right)^2 - \frac{T''-B''}{T-B} \right]
\right\} X \partial_\xi {\widehat}{u} + \\ + \omega^2/
c^{2}\big(B(z)+(T(z)-B(z))\xi/X\big) {\widehat}{u} = 0\, ,
\label{eq:pertw1}\end{aligned}$$ for $ z \in (0,L/\eps^2)$ and $\xi \in (0,X)$. Here the prime stands for the $z$-derivative, and the boundary conditions at $\xi = 0$ and $X$ are $${\widehat}{u}(\omega,0,z) ={\widehat}{u}(\omega,X,z)=0\, .$$ Substituting definition (\[eq:defBT\]) of $B(z)$ and $T(z)$, and expanding the coefficients in (\[eq:pertw1\]) in series of $\eps$, we obtain that $$\label{eq:7}
\left( \cL_0 + \eps \cL_1 + \eps^2 \cL_2 + \ldots \right) {\widehat}u(\om,\xi,z) = 0\,,$$ where $$\label{eq:8}
\cL_0 = \partial_z^2 + \partial^2_\xi + {\om^2}/{c^2(\xi)}$$ is the unperturbed Helmholtz operator. The first and second order perturbation operators are given by $$\label{eq:9}
\cL_1 + \eps \cL_2 = q^\eps(\xi,z) \partial^2_{\xi z} +
\cM^\eps(\om,\xi,z)\,,$$ with coefficient $$\label{eq:10}
q^\eps(\xi,z) = -2 \left[ (X-\xi) \mu'(z) + \xi \nu'(z) \right]
\left[1 - \eps
\left(\nu(z)-\mu(z)\right) \right],$$ and differential operator $$\begin{aligned}
\cM^\eps(\om,\xi,z) &=& - \left\{ 2 \left(\nu-\mu\right) - 3 \eps
\left( \nu-\mu\right)^2 - \eps \left[ (X-\xi) \mu' + \xi \nu'
\right]^2 \right\} \partial^2_\xi - \nonumber \\ && \left\{ \left[
(X-\xi) \mu'' + \xi \nu'' \right] \left[1 -
\eps\left(\nu-\mu\right)\right] - 2 \eps \left(\nu' -\mu'\right)
\left[ (X-\xi) \mu' + \xi \nu' \right] \right\} \partial_\xi +
\nonumber \\ && \om^2 \left[ (X-\xi) \mu + \xi \nu \right]
\partial_\xi c^{-2}(\xi) + \frac{\eps \om^2}{2} \left[ (X-\xi) \mu
+ \xi \nu \right]^2 \partial^2_\xi c^{-2}(\xi)\,.
\label{eq:11}\end{aligned}$$ The higher order terms are denoted by the dots in (\[eq:7\]), and are negligible as $\eps \to 0$, over the long range scale $L/\eps^2$ considered here.
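For later reference, collecting the terms of order one in (\[eq:9\])-(\[eq:11\]) gives the explicit form of the first order perturbation operator, $$\cL_1 = -2 \left[ (X-\xi) \mu'(z) + \xi \nu'(z) \right] \partial^2_{\xi z} - 2 \left[ \nu(z)-\mu(z) \right] \partial^2_\xi - \left[ (X-\xi) \mu''(z) + \xi \nu''(z) \right] \partial_\xi + \om^2 \left[ (X-\xi) \mu(z) + \xi \nu(z) \right] \partial_\xi c^{-2}(\xi)\,,$$ which is the operator responsible for the leading order coupling coefficients computed below.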
Wave decomposition and mode coupling {#sect:wavedec}
------------------------------------
Equation is not separable, and its solution is not a superposition of independent waveguide modes, as was the case in ideal waveguides. However, we have a perturbation problem, and we can use the completeness of the set of eigenfunctions $\{ \phi_j(\om,\xi)\}_{j
\ge 1}$ in the ideal waveguide to decompose ${\widehat}u$ in its propagating and evanescent components, $$\label{eq:WD1}
{\widehat}u(\om,\xi,z) = \sum_{j=1}^N \phi_j(\om,\xi) {\widehat}u_j(\om,z) +
\sum_{j=N+1}^\infty \phi_j(\om,\xi) {\widehat}v_j(\om,z).$$ The propagating components ${\widehat}u_j$ are decomposed further in the forward and backward going parts, with amplitudes ${\widehat}a_j(\om,z)$ and ${\widehat}b_j(\om,z)$, $$\begin{aligned}
{\widehat}u_j = \frac{1}{\sqrt{\beta_j}} \left( {\widehat}a_j e^{i \beta_j z} +
{\widehat}b_j e^{-i \beta_j z} \right), \quad j = 1, \ldots, N.\end{aligned}$$ This does not define uniquely the complex valued ${\widehat}a_j$ and ${\widehat}b_j$, so we ask that they also satisfy $$\begin{aligned}
\partial_z{\widehat}u_j = i \sqrt{\beta_j} \left( {\widehat}a_j e^{i \beta_j z}
- {\widehat}b_j e^{-i \beta_j z} \right), \quad j = 1, \ldots, N.
\label{eq:WD2}\end{aligned}$$ This choice is motivated by the behavior of the solution in ideal waveguides, where the amplitudes are independent of range and completely determined by the source excitation. The expression of the wave field is similar to that in ideal waveguides, except that we have both forward and backward going modes, in addition to the evanescent modes, and the amplitudes of the modes are random functions of $z$.
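Note that the two relations above can be inverted explicitly, $${\widehat}a_j(\om,z) = \frac{1}{2} \left( \sqrt{\beta_j}\, {\widehat}u_j - \frac{i}{\sqrt{\beta_j}}\, \partial_z {\widehat}u_j \right) e^{-i \beta_j z}\,, \qquad {\widehat}b_j(\om,z) = \frac{1}{2} \left( \sqrt{\beta_j}\, {\widehat}u_j + \frac{i}{\sqrt{\beta_j}}\, \partial_z {\widehat}u_j \right) e^{i \beta_j z}\,,$$ so that ${\widehat}a_j$ and ${\widehat}b_j$ are uniquely determined by ${\widehat}u_j$ and $\partial_z {\widehat}u_j$.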
The modes are coupled due to scattering at the random boundaries, as described by the following system of random differential equations obtained by substituting the decomposition (\[eq:WD1\]) in (\[eq:7\]) and using the orthogonality relation of the eigenfunctions, $$\begin{aligned}
\partial_z {\widehat}a_j &=& i \eps \sum_{l = 1}^N \left[ C_{jl}^\eps \,
{\widehat}a_l e^{i (\beta_l-\beta_j)z} + \overline{C_{jl}^\eps} \, {\widehat}b_l
e^{-i (\beta_l+\beta_j)z}\right] + \frac{i \eps}{2 \sqrt{\beta_j}}
\sum_{l = N+1}^\infty \hspace{-0.1in} e^{-i \beta_j z} \left(
Q_{jl}^\eps \, \partial_z {\widehat}v_l + M_{jl}^\eps \, {\widehat}v_l \right) +
O(\eps^3)\,,\quad \label{eq:WD3} \\ \partial_z {\widehat}b_j &=& -i \eps
\sum_{l = 1}^N \left[C_{jl}^\eps \, {\widehat}a_l e^{i (\beta_l+\beta_j)z}
+ \overline{C_{jl}^\eps} \, {\widehat}b_l e^{-i (\beta_l-\beta_j)z}\right]-
\frac{i \eps}{2 \sqrt{\beta_j}} \sum_{l = N+1}^\infty \hspace{-0.1in}
e^{-i \beta_j z} \left( Q_{jl}^\eps \, \partial_z {\widehat}v_l +
M_{jl}^\eps \, {\widehat}v_l \right) + O(\eps^3)\,. \qquad \label{eq:WD4}\end{aligned}$$ The bar denotes complex conjugation, and the coefficients are defined below. The forward going amplitudes are determined at $z = 0$ by the source excitation (recall (\[eq:idealab\])) $${\widehat}a_j(\om,0) = {\widehat}a_{j,o}(\om)\,,\label{eq:WD5a} \quad j = 1,
\ldots, N,$$ and we set $${\widehat}b_j \left( \om,\frac{L}{\eps^2}\right) = 0\,,\quad j = 1,
\ldots, N,
\label{eq:WD5}$$ because there is no incoming wave at the end of the domain. The equations for the amplitudes of the evanescent modes indexed by $j >
N$ are $$\begin{aligned}
\left( \partial^2_z - \beta_j^2\right) {\widehat}v_j &=& - \eps \sum_{l =
1}^N 2 \sqrt{\beta_j}\left[ C_{jl}^\eps \, {\widehat}a_l e^{ i \beta_l
z} + \overline{C_{jl}^\eps} \, {\widehat}b_l e^{-i \beta_l z}
\right]- \eps \hspace{-0.05in}\sum_{l = N+1}^\infty \left(
Q_{jl}^\eps \, \partial_z {\widehat}v_l + M_{jl}^\eps \, {\widehat}v_l \right)
+ O(\eps^3)\,,
\label{eq:WD6}\end{aligned}$$ and we complement them with the decay condition at infinity $$\lim_{z \to \pm \infty} {\widehat}v_j (\om,z) = 0 \, , \quad j > N.
\label{eq:WD7}$$
The coefficients $$C_{jl}^\eps(\om,z) = C_{jl}^{(1)}(\om,z) + \eps C_{jl}^{(2)}(\om,z)\,,
\quad \mbox{for} ~ j \ge 1 ~ ~ \mbox{and}~ l = 1, \ldots, N,
\label{eq:WD4.q1}$$ are defined by $$\begin{aligned}
C_{jl}^{(1)}(\om,z) &=& \frac{1}{2 \sqrt{\beta_j(\om) \beta_l(\om)}}
\int_0^X d \xi \phi_j(\om,\xi) \cA_l(\om,\xi,z) \phi_l(\om,\xi)\,,
\label{eq:WD4C1} \\
C_{jl}^{(2)}(\om,z) &=& \frac{1}{2 \sqrt{\beta_j(\om) \beta_l(\om)}}
\int_0^X d \xi \phi_j(\om,\xi) \cB_l(\om,\xi,z) \phi_l(\om,\xi)\, ,
\label{eq:WD4C2} \end{aligned}$$ in terms of the linear differential operators $$\begin{aligned}
\cA_l = -2 ( \nu-\mu) \partial^2_\xi - 2 i \beta_l \left[ (X-\xi)
\mu'+\xi \nu' \right] \partial_\xi- \left[ (X-\xi) \mu''+\xi \nu''
\right] \partial_\xi + \nonumber \\
\om^2 \left[ (X-\xi) \mu + \xi \nu \right]
\partial_\xi c^{-2}(\xi)\,, \qquad
\label{eq:WD4A} \end{aligned}$$ and $$\begin{aligned}
\cB_l = \left\{3(\nu-\mu)^2 + \left[(X-\xi) \mu' + \xi \nu'\right]^2
\right\}\partial_\xi^2 + 2 i \beta_l (\nu-\mu)\left[(X-\xi) \mu' +
\xi \nu'\right] \partial_\xi + \nonumber \\
\left\{(\nu-\mu)\left[(X-\xi) \mu'' + \xi \nu''\right] + 2
(\nu'-\mu') \left[ (X-\xi)\mu'+ \xi \nu'\right] \right\}
\partial_\xi + \nonumber \\ \frac{\om^2}{2} \left[ (X-\xi) \mu + \xi
\nu \right]^2 \partial^2_\xi c^{-2}(\xi)\, .
\label{eq:WD4B}\end{aligned}$$ We also let for $j \ge 1$ and $l>N$ $$\begin{aligned}
Q_{jl}^\eps(\om,z) &=& \int_0^X d \xi q^\eps(\xi,z) \phi_j(\om,\xi)
\partial_\xi \phi_l(\om,\xi) = Q_{jl}^{(1)}(\om,z) + \eps
Q_{jl}^{(2)}(\om,z)\, , \nonumber \\ M_{jl}^\eps(\om,z) &=& \int_0^X d \xi
\phi_j(\om,\xi) \cM^\eps(\om,\xi,z) \phi_l(\om,\xi) =
M_{jl}^{(1)}(\om,z) + \eps M_{jl}^{(2)}(\om,z)\,.
\label{eq:QM}\end{aligned}$$
Analysis of the evanescent modes {#sect:elim_evanesc}
--------------------------------
We solve equations (\[eq:WD6\]) with radiation conditions (\[eq:WD7\]) in order to express the amplitude of the evanescent modes in terms of the amplitudes of the propagating modes. The substitution of this expression in - gives a closed system of equations for the amplitudes of the propagating modes, as obtained in the next section.
We begin by rewriting (\[eq:WD6\]) in short as $$\left( \partial^2_z - \beta_j^2\right) {\widehat}v_j +
\eps \hspace{-0.05in} \sum_{l = N+1}^\infty \left( Q_{jl}^\eps \,
\partial_z {\widehat}v_l + M_{jl}^\eps \, {\widehat}v_l \right) = -\eps
g_j^\eps\,, \quad \quad j >N ,
\label{eq:E3}$$ where $$g_j^\eps(\om,z) = g_j^{(1)}(\om,z) + \eps
g_j^{(2)}(\om,z) + O(\eps^3)\,, \quad \quad j >N ,
\label{eq:E1}$$ and $$g_j^{(r)} = 2 \sqrt{\beta_j} \sum_{l = 1}^N \left[ C_{jl}^{(r)}
\, {\widehat}a_l(\om,z) e^{ i \beta_l z} + \overline{C_{jl}^{(r)}} \, {\widehat}b_l e^{-i \beta_l z} \right], \quad r = 1,2 ~ ~ \mbox{and} ~ j > N.
\label{eq:E2}$$ Using the Green’s function $G_j = e^{-\beta_j |z|}/(2 \beta_j)$, satisfying $$\partial_z^2 G_j - \beta_j^2 G_j = - \delta(z)\,, \quad \lim_{|z| \to
\infty} G_j = 0\,, \quad \quad j >N ,
\label{eq:E4}$$ and integrating by parts, we get $$\left[( {\bf I} - \eps \Psi) {\widehat}{\itbf v}\right]_j(\om,z) = \frac{\eps}{2
\beta_j(\om)} \int_{-\infty}^\infty ds \, e^{-\beta_j(\om) |s|}
g_j^\eps(\om,z+s)\, , \quad \quad j >N . \label{eq:E5}$$ Here ${\bf I}$ is the identity and $\Psi$ is the linear integral operator $$\begin{aligned}
[\Psi {\widehat}{\itbf v}]_j(\om,z) &=& \frac{1}{2
\beta_j(\om)} \sum_{l= N+1}^\infty \int_{-\infty}^\infty ds \,
e^{-\beta_j(\om)|s|} \left( M_{jl}^\eps-\partial_z
Q_{jl}^\eps\right)(\om,z+s) {\widehat}v_l(\om,z+s) + \nonumber \\ &&
\frac{1}{2} \sum_{l= N+1}^\infty \int_{-\infty}^\infty ds \,
e^{-\beta_j(\om)|s|} \mbox{sgn}(s) Q_{jl}^\eps(\om,z+s) {\widehat}v_l(\om,z+s)\,,
\label{eq:E6}\end{aligned}$$ acting on the infinite vector $ {\widehat}{\itbf v} = \left( {\widehat}v_{N+1},
{\widehat}v_{N+2}, \ldots \right)$ and returning an infinite vector with entries indexed by $j$, for $j > N.$ The solvability of equation (\[eq:E5\]) follows from the following lemma, proved in appendix \[sect:Proof\].
\[lem.1\] Let ${\mathcal L}_N$ be the space of square summable sequences of $L^2(\mathbb{R})$ functions with linear weights, equipped with the norm $$\|{\widehat}{\itbf v} \|_{{\mathcal L}_N} = \sqrt{ \sum_{j=N+1}^\infty \left( j \|
{\widehat}v_j\|_{L^2(\mathbb{R})}\right)^2} \, .$$ The linear operator $\Psi: {\mathcal L}_N \to {\mathcal L}_N$ defined componentwise by (\[eq:E6\]) is bounded.
Thus, the inverse operator is $$(I-\eps \Psi)^{-1} = I + \eps \Psi + \ldots,$$ and the solution of (\[eq:E5\]) is given by $${\widehat}v_j(\om,z) = \frac{\eps}{2 \beta_j(\om)} \int_{-\infty}^\infty ds
\, e^{-\beta_j(\om)|s|} g_j^{(1)}(\om,z+s) + O(\eps^2)\, .
\label{eq:E7}$$ Using definition (\[eq:E2\]) and the fact that the $z$ derivatives of ${\widehat}a_l$ and ${\widehat}b_l$ are of order $\eps$, we get $$\begin{aligned}
{\widehat}v_j(\om,z) &=& \frac{\eps}{\sqrt{\beta_j(\om)}} \sum_{l=1}^N {\widehat}a_l(\om,z) e^{i \beta_l z} \int_{-\infty}^\infty ds \,
e^{-\beta_j(\om)|s|+ i \beta_l(\om) s} C_{jl}^{(1)}(\om,z+s) +
\nonumber \\ && \frac{\eps}{\sqrt{\beta_j(\om)}} \sum_{l=1}^N {\widehat}b_l(\om,z) e^{-i \beta_l z}\int_{-\infty}^\infty ds \,
e^{-\beta_j(\om)|s|- i \beta_l(\om) s}
\overline{C_{jl}^{(1)}(\om,z+s)} + O(\eps^2)\,.
\label{eq:E8}\end{aligned}$$
We also need $${\widehat}w_j(\om,z) = \partial_z {\widehat}v_j(\om,z)\,,
\label{eq:E9}$$ which we compute by taking a $z$ derivative in (\[eq:E3\]) and using the radiation condition ${\widehat}w_j(\om, z) \to 0$ as $|z| \to
\infty$. The resulting equation is similar to (\[eq:E5\]) $$\begin{aligned}
\left[ ({\bf I} - \eps \tilde \Psi) {\itbf w}
\right]_j\hspace{-0.05in}(\om,z) &=& \frac{\eps}{2}
\int_{-\infty}^\infty \hspace{-0.1in} ds \, e^{-\beta_j(\om) |s|}
\left[ \mbox{sgn}(s) g_j^\eps(\om,z+s) +
\hspace{-0.05in}
\sum_{l=N+1}^\infty
\hspace{-0.05in}
M_{jl}^\eps(\om,z+s)
{\widehat}v_l(\om,z+s)\right], \qquad
\label{eq:E10}\end{aligned}$$ where we integrated by parts and introduced the linear integral operator $$\begin{aligned}
[\tilde \Psi {\widehat}{\itbf w}]_j(\om,z) &=& \frac{1}{2}
\sum_{l= N+1}^\infty \int_{-\infty}^\infty ds \, e^{-\beta_j(\om)|s|}
\mbox{sgn}(s) Q_{jl}^\eps(\om,z+s) {\widehat}w_l(\om,z+s)\,.
\label{eq:E12}\end{aligned}$$ This operator is very similar to $\Psi$ and it is bounded, as follows from the proof in appendix \[sect:Proof\]. Moreover, substituting expression (\[eq:E8\]) of ${\widehat}v_l$ in (\[eq:E10\]) we obtain after a calculation that is similar to that in appendix \[sect:Proof\] that the series in the index $l$ is convergent. Therefore, the solution of (\[eq:E10\]) is $${\widehat}w_j(\om,z) = \frac{\eps}{2} \int_{-\infty}^\infty ds \,
e^{-\beta_j(\om) |s|} \mbox{sgn}(s) g_j^\eps(\om,z+s) + O(\eps^2)
\label{eq:E11}$$ and more explicitly, $$\begin{aligned}
\partial_z {\widehat}v_j(\om,z) &=& \eps \sqrt{\beta_j(\om)} \sum_{l=1}^N
{\widehat}a_l(\om,z) e^{i \beta_l z} \int_{-\infty}^\infty ds \,
e^{-\beta_j(\om)|s|+ i \beta_l(\om) s} \mbox{sgn}(s)
C_{jl}^{(1)}(\om,z+s) + \nonumber \\ && {\eps}{\sqrt{\beta_j(\om)}}
\sum_{l=1}^N {\widehat}b_l(\om,z) e^{-i \beta_l z}\int_{-\infty}^\infty ds
\, e^{-\beta_j(\om)|s|- i \beta_l(\om) s} \mbox{sgn}(s)
\overline{C_{jl}^{(1)}(\om,z+s)} + O(\eps^2)\,.
\label{eq:E14}\end{aligned}$$
The closed system of equations for the propagating modes {#sect:Closed}
--------------------------------------------------------
The substitution of equations (\[eq:E8\]) and (\[eq:E14\]) in (\[eq:WD3\]) and (\[eq:WD4\]) gives the main result of this section: a closed system of differential equations for the propagating mode amplitudes. We write it in compact form using the $2N$ vector $${\bX}_\om(z) = \left[ \begin{array}{c} {\widehat}{\itbf a}(\om,z) \\ {\widehat}{\itbf b}(\om,z)
\end{array} \right]\, ,
\label{eq:C1}$$ obtained by concatenating vectors ${\widehat}{\itbf a}(\om,z)$ and ${\widehat}{\itbf b}(\om,z)$ with components ${\widehat}a_j(\om,z)$ and ${\widehat}b_j(\om,z)$, for $j = 1, \ldots, N$. We have $$\partial_z {\bX}_\om(z) = \eps {\bf H}_\om(z){\bX}_\om(z) + \eps^2
{\bf G}_\om(z){\bX}_\om(z) + O(\eps^3)\,,
\label{eq:C2}$$ with $2N \times 2 N$ complex matrices given in block form by $$\label{eq:defH}
{\bf H}_\omega(z) =
\left[ \begin{array}{cc}
{\bf H}^{(a)}_\omega(z) & {\bf H}^{(b)}_\omega(z) \\
\overline{{\bf H}^{(b)}_\omega}(z) & \overline{{\bf H}^{(a)}_\omega}(z)\\
\end{array} \right] \, , \ \ \ \ \
{\bf G}_\omega(z) =
\left[ \begin{array}{cc}
{\bf G}^{(a)}_\omega(z) & {\bf G}^{(b)}_\omega(z) \\
\overline{{\bf G}^{(b)}_\omega}(z) & \overline{{\bf G}^{(a)}_\omega}(z)\\
\end{array} \right].$$ The entries of the blocks in ${\bf H}_\om$ are $$\begin{aligned}
\label{defHja}
H^{(a)}_{\omega,jl} (z) = i C_{jl}^{(1)}(\om,z) e^{ i
(\beta_l-\beta_j) z} \, ,\ \ \ \ \ \ H^{(b)}_{\omega,jl} (z) = i
C_{jl}^{(1)}(\om,z) e^{-i(\beta_l+\beta_j)z} \, ,
\label{eq:Hja}\end{aligned}$$ and the entries of the blocks in ${\bf G}_\om$ are $$\begin{aligned}
G^{(a)}_{\omega,jl} (z) &=& i e^{ i (\beta_l-\beta_j) z}
C_{jl}^{(2)}(\om,z) + i e^{ i (\beta_l-\beta_j) z} \hspace{-0.1in}
\sum_{l'=N+1}^\infty \frac{M_{jl'}^{(1)}(\om,z)}{ 2 \sqrt{\beta_j
\beta_{l'}}} \int_{-\infty}^\infty ds \, e^{-\beta_{l'} |s| + i
\beta_l s} C_{l'l}^{(1)}(\om,z+s) + \nonumber \\ && i e^{ i
(\beta_l-\beta_j) z}\hspace{-0.1in}\sum_{l'=N+1}^\infty
\frac{Q_{jl'}^{(1)}(\om,z)}{2 \sqrt{\beta_j \beta_{l'}}}
\int_{-\infty}^\infty ds \, e^{-\beta_{l'} |s| + i \beta_l s}
\beta_{l'} \, \mbox{sgn}(s) \, C_{l'l}^{(1)}(\om,z+s)\,, \label{eq:Gja}
\\ G^{(b)}_{\omega,jl} (z) &=& i e^{ -i (\beta_l+\beta_j) z}
C_{jl}^{(2)}(\om,z) - i e^{ -i (\beta_l+\beta_j)
z}\hspace{-0.1in}\sum_{l'=N+1}^\infty \frac{M_{jl'}^{(1)}(\om,z)}{ 2
\sqrt{\beta_j \beta_{l'}}} \int_{-\infty}^\infty ds \, e^{-\beta_{l'}
|s| - i \beta_l s} \overline{C_{l'l}^{(1)}(\om,z+s)} + \nonumber \\ &&
i e^{ -i (\beta_l+\beta_j) z}\hspace{-0.1in}\sum_{l'=N+1}^\infty
\frac{Q_{jl'}^{(1)}(\om,z)}{2 \sqrt{\beta_j \beta_{l'}}}
\int_{-\infty}^\infty ds \, e^{-\beta_{l'} |s| - i \beta_l s}
\beta_{l'}\, \mbox{sgn}(s) \, \overline{C_{l'l}^{(1)}(\om,z+s)} \, .
\label{eq:Gjb}\end{aligned}$$ The coefficients in (\[eq:Hja\])-(\[eq:Gjb\]) are defined in terms of the random functions $\nu(z)$, $\mu(z)$, their derivatives, and the following integrals, $$\begin{aligned}
c_{\nu,jl}(\om) &=& \frac{1}{2 \sqrt{\beta_j \beta_l}} \int_0^X d \xi \,
\phi_j(\xi) \left[ - 2 \partial_\xi^2 + \om^2 \xi \partial_\xi
c^{-2}(\xi)\right] \phi_l(\xi)\,,
\label{eq:cnu} \\
c_{\mu,jl}(\om) &=& \frac{1}{2 \sqrt{\beta_j \beta_l}} \int_0^X d \xi \,
\phi_j(\xi) \left[ 2 \partial_\xi^2 + \om^2 (X-\xi) \partial_\xi
c^{-2}(\xi)\right] \phi_l(\xi)\, ,
\label{eq:cmu} \\
d_{\nu,jl}(\om) &=& -\frac{1}{2 \sqrt{\beta_j \beta_l}} \int_0^X d \xi \,
\xi \, \phi_j(\xi) \partial_\xi \phi_l(\xi) \, ,
\label{eq:dnu} \\
d_{\mu,jl}(\om) &=& -\frac{1}{2 \sqrt{\beta_j \beta_l}} \int_0^X d \xi \,
(X-\xi)\, \phi_j(\xi) \partial_\xi \phi_l(\xi)\, ,
\label{eq:dmu} \end{aligned}$$ satisfying the symmetry relations $$\begin{aligned}
c_{\nu,jl}(\om) &=& c_{\nu,lj}(\om)\,, \nonumber \\ c_{\mu,jl}(\om) &=&
c_{\mu,lj}(\om)\,, \nonumber \\ d_{\nu,jl}(\om) + d_{\nu,lj}(\om) &=&
\frac{\delta_{jl}}{2 \sqrt{\beta_j(\om) \beta_l(\om)}}\,,\nonumber \\
d_{\mu,jl}(\om) + d_{\mu,lj}(\om) &=& -\frac{\delta_{jl}}{2
\sqrt{\beta_j(\om) \beta_l(\om)}}\,.
\label{eq:symmetries}\end{aligned}$$ We have from (\[eq:WD4C1\]) that $$\begin{aligned}
C_{jl}^{(1)}(\om,z) = \nu(z) c_{\nu,jl}(\om) + \left[ \nu''(z) + 2 i
\beta_l(\om) \nu'(z) \right] d_{\nu,jl}(\om) + \nonumber \\\mu(z)
c_{\mu,jl}(\om) + \left[ \mu''(z) + 2 i \beta_l(\om) \mu'(z) \right]
d_{\mu,jl}(\om)\,,
\label{eq:Cjl1}\end{aligned}$$ and from (\[eq:QM\]), (\[eq:10\]), (\[eq:11\]) that $$\begin{aligned}
\frac{Q_{jl'}^{(1)}(\om,z)}{2\sqrt{\beta_j(\om) \beta_{l'}(\om)}} &=&
2 \left[ \nu'(z) d_{\nu,jl'}(\om) + \mu'(z) d_{\mu,jl'}(\om)\right], \nonumber
\\ \frac{M_{jl'}^{(1)}(\om,z)}{2\sqrt{\beta_j(\om) \beta_{l'}(\om)}}
&=& \nu(z) c_{\nu,jl'}(\om) + \mu(z) c_{\mu,jl'}(\om)+ \nu''(z) d_{\nu,jl'}(\om) +
\mu''(z)d_{\mu,jl'}(\om)\,.\end{aligned}$$
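The coefficients (\[eq:cnu\])-(\[eq:dmu\]) and the symmetry relations (\[eq:symmetries\]) can be checked by direct numerical quadrature. The sketch below is an illustration only; it assumes a constant wave speed and the Dirichlet eigenfunctions $\phi_j(\xi) = \sqrt{2/X}\,\sin(\pi j \xi/X)$, which are not part of the general setting.

```python
import numpy as np
from scipy.integrate import trapezoid

# Illustrative check of the symmetry relations (eq:symmetries), assuming a constant
# wave speed and Dirichlet eigenfunctions phi_j = sqrt(2/X) sin(pi j xi / X).
X, k = 1.0, 20.5 * np.pi          # k X / pi = 20.5, so N = 20 propagating modes
N = int(np.floor(k * X / np.pi))
xi = np.linspace(0.0, X, 4001)
beta = np.array([np.sqrt(k**2 - (np.pi * j / X)**2) for j in range(1, N + 1)])

def phi(j):
    return np.sqrt(2.0 / X) * np.sin(np.pi * j * xi / X)

def dphi(j):
    return np.sqrt(2.0 / X) * (np.pi * j / X) * np.cos(np.pi * j * xi / X)

def d_nu(j, l):                   # eq:dnu
    return -trapezoid(xi * phi(j) * dphi(l), xi) / (2.0 * np.sqrt(beta[j-1] * beta[l-1]))

def d_mu(j, l):                   # eq:dmu
    return -trapezoid((X - xi) * phi(j) * dphi(l), xi) / (2.0 * np.sqrt(beta[j-1] * beta[l-1]))

for (j, l) in [(1, 1), (2, 5), (3, 3)]:
    rhs = (1.0 if j == l else 0.0) / (2.0 * np.sqrt(beta[j-1] * beta[l-1]))
    print(j, l, d_nu(j, l) + d_nu(l, j) - rhs,   # should vanish, third relation
          d_mu(j, l) + d_mu(l, j) + rhs)         # should vanish, fourth relation
```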
The long range limit {#sect:diffusion}
====================
In this section we use the system to quantify the cumulative scattering effects at the random boundaries. We begin with the long range scaling chosen so that these effects are significant. Then, we explain why the backward going amplitudes are small and can be neglected. This is the forward scattering approximation, which gives a closed system of random differential equations for the amplitudes $\{{\widehat}a_j\}_{j = 1, \ldots, N}$. We use this system to derive the main result of the section, which says that the amplitudes $\{{\widehat}a_j\}_{j = 1,\ldots, N}$ converge in distribution as $\eps \to 0$ to a diffusion Markov process, whose generator we compute explicitly. This allows us to calculate all the statistical moments of the wave field.
Long range scaling {#sec:chap21_propmatr}
------------------
It is clear from that since the right hand side is small, of order $\eps$, there is no net effect of scattering from the boundaries over ranges of order one. If we considered ranges of order $1/\eps$, the resulting equation would have an order one right hand side given by ${\bf H}_\om(z/\eps) {\bX}_\om(z/\eps)$, but this becomes negligible as well for $\eps \to 0$, because the expectation of ${\bf H}_\om(z/\eps)$ is zero [@book07 Chapter 6]. We need longer ranges, of order $1/\eps^2$ to see the effect of scattering from the randomly perturbed boundaries.
Let then ${\widehat}{a}_j^\eps$, ${\widehat}{b}_j^\eps$ be the rescaled amplitudes $${\widehat}{a}_j^\eps(\omega,z) = {\widehat}{a}_j \left( \omega,\frac{z}{ \eps^{2}} \right) ,
\hspace{0.3in} {\widehat}{b}_j^\eps(\omega,z) = {\widehat}{b}_j \left(\omega,
\frac{z}{ \eps^{2}} \right) , \quad j = 1, \ldots, N,$$ and obtain from that $\bX^\eps_\omega(z) =
\bX_\om(z/\eps^2)$ satisfies the equation $$\label{eq:PP1}
\frac{d\bX^\eps_\omega(z)}{dz}= \frac{1}{\eps} {\bf
H}_\omega\left(\frac{z}{\eps^2} \right) \bX^\eps_\omega(z) + {\bf
G}_\omega\left(\frac{z}{\eps^2} \right) \bX^\eps_\omega(z)\,, \quad 0 < z < L,$$ with boundary conditions $$\label{eq:PP2}
{\widehat}a_j^\eps (\om,0) = {\widehat}a_{j,o}, \quad
{\widehat}b_j^\eps (\om,L) = 0, \quad j = 1, \ldots, N.$$ We can solve it using the complex valued, random propagator matrix $\bP^\eps_\omega(z) \in \mathbb{C}^{2N \times 2N}$, the solution of the initial value problem $$\frac{d \bP^\eps_\omega(z)}{dz}= \frac{1}{\eps} {\bf H}_\omega
\left(\frac{z}{\eps^2} \right) \bP^\eps_\omega(z) + {\bf G}_\omega
\left(\frac{z}{\eps^2} \right) \bP^\eps_\omega(z) \, \quad \mbox{for }
z > 0, ~ ~ \mbox{and } \bP^\eps_\omega(0) = {\bf I}.
\label{eq:IVPP}$$ The solution is $$\bX^\eps_\omega(z) = \bP^\eps_\omega(z) \left[ \begin{array}{c}
{\widehat}{\itbf a}_0(\omega) \\ {\widehat}{\itbf b}^\eps(\omega,0)
\end{array} \right],$$ and ${\widehat}{\itbf b}^\eps(\omega,0)$ can be eliminated from the boundary identity $$\left[ \begin{array}{c}
{\widehat}{\itbf a}^\eps (\omega,L)\\
{\bf 0}
\end{array} \right]
=
\bP^\eps_\omega(L)
\left[ \begin{array}{c}
{\widehat}{\itbf a}_0(\omega) \\ {\widehat}{\itbf b}^\eps(\omega,0)
\end{array} \right] \, .$$ Furthermore, it follows from the symmetry relations (\[eq:defH\]) satisfied by the matrices ${\bf H}_\omega$ and ${\bf G}_\omega$ that the propagator has the block form $$\label{formpropagator}
\bP^\eps_\omega(z) = \left[ \begin{array}{cc}
{\bf P}^{\eps,a}_\omega(z) & {{\bf P}^{\eps,b}_\omega(z)}\\
\overline{{\bf P}^{\eps,b}_\omega(z)} & \overline{{\bf P}^{\eps,a}_\omega(z)}\\
\end{array} \right] \, ,$$ where ${\bf P}^{\eps,a}_\omega(z)$ and ${\bf P}^{\eps,b}_\omega(z)$ are $N \times N$ complex matrices. The first block ${\bf
P}^{\eps,a}_\omega$ describes the coupling between different forward going modes, while ${\bf P}^{\eps,b}_\omega$ describes the coupling between forward going and backward going modes.
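The elimination of ${\widehat}{\itbf b}^\eps(\omega,0)$ from the boundary identity amounts to solving the second block row for the backward amplitudes at $z=0$. A minimal numerical sketch follows; the blocks below are arbitrary placeholders with the symmetry (\[formpropagator\]), standing in for the random propagator blocks at $z=L$.

```python
import numpy as np

# Eliminate the backward amplitudes b(omega,0) using the block structure (formpropagator).
# Pa, Pb are placeholders for the random propagator blocks P^{eps,a}(L), P^{eps,b}(L).
rng = np.random.default_rng(0)
N = 5
Pa = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
Pb = 0.1 * (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
P = np.block([[Pa, Pb], [Pb.conj(), Pa.conj()]])             # propagator at z = L

a0 = rng.standard_normal(N) + 1j * rng.standard_normal(N)    # source amplitudes a_{j,o}
# second block row of the boundary identity: conj(Pb) a0 + conj(Pa) b0 = 0
b0 = -np.linalg.solve(Pa.conj(), Pb.conj() @ a0)
aL = Pa @ a0 + Pb @ b0                                        # transmitted amplitudes a(L)

residual = P @ np.concatenate([a0, b0]) - np.concatenate([aL, np.zeros(N)])
print(np.max(np.abs(residual)))                               # ~ machine precision
```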
The diffusion approximation {#sect:diffThm}
---------------------------
The limit of $\bP^\eps_\omega$ as $\eps \rightarrow 0$ can be obtained and identified as a multi-dimensional diffusion process, meaning that the entries of the limit matrix satisfy a system of linear stochastic differential equations. This follows from the application of the diffusion approximation theorem proved in [@kohler74], which applies to systems of the general form $$\label{eq:DIF1}
\frac{d {\mathbf{\mathcal X}}^\eps(z)}{dz} = \frac{1}{\eps}
{\mathbf{\mathcal F}}\left({\mathbf{\mathcal
X}}^\eps(z),{\mathbf{\mathcal Y}}\left(\frac{z}{\eps^2}
\right), \frac{z}{\eps^2} \right) + {\mathbf{\mathcal
G}}\left({\mathbf{\mathcal X}}^\eps(z),{\mathbf{\mathcal
Y}}\left(\frac{z}{\eps^2} \right), \frac{z}{\eps^2} \right)
\quad \mbox{for} ~ z > 0, \quad \mbox{and} ~ ~{\mathbf{\mathcal X}}^\eps(0) =
{\mathbf{\mathcal X}}_o,$$ for a vector or matrix ${\mathbf{\mathcal X}}^\eps(z)$ with real entries. The system is driven by a stationary, mean zero and mixing random process $ {\mathbf{\mathcal Y}}(z)$. The functions ${\mathbf{\mathcal F}}(\chi,y,\tau)$ and ${\mathbf{\mathcal
G}}(\chi,y,\tau)$ are assumed to be at most linearly growing and smooth in $\chi$, and the dependence on $\tau$ is periodic or almost periodic [@book07 Section 6.5]. The function ${\mathbf{\mathcal
F}}(\chi,y,\tau)$ must also be centered: For any fixed $\chi$ and $\tau$, $\EE[{\mathbf{\mathcal F}}(\chi,{\mathbf{\mathcal Y}}(0),\tau)] =
0$.
The diffusion approximation theorem states that as $\eps \to 0$, ${\mathbf{\mathcal X}}^\eps(z)$ converges in distribution to the diffusion Markov process ${\mathbf{\mathcal X}}(z)$ with generator $\mathcal L$, acting on sufficiently smooth functions $\varphi(\chi)$ as $$\begin{aligned}
\mathcal L \varphi(\chi) = \lim_{T\to \infty} \frac{1}{T} \int_0^T d
\tau \int_0^\infty dz \, \EE \left[ {\mathbf{\mathcal
F}}(\chi,{\mathbf{\mathcal Y}}(0),\tau) \cdot \nabla_\chi
\left[ {\mathbf{\mathcal F}}(\chi,{\mathbf{\mathcal
Y}}(z),\tau) \cdot \nabla_\chi \varphi(\chi) \right] \right] + \nonumber \\
\frac{1}{T} \int_0^T d \tau \, \EE \left[ {\mathbf{\mathcal
G}}(\chi,{\mathbf{\mathcal Y}}(0),\tau) \cdot \nabla_\chi
\varphi(\chi) \right] \, .\end{aligned}$$ To apply it to the initial value problem for the complex $2N \times 2N $ matrix ${\bf P}_\om^\eps(z)$, we let ${\mathbf{\mathcal X}}^\eps(z)$ be the matrix obtained by concatenating the absolute values and phases of the entries in ${\bf
P}_\om^\eps(z)$. The driving random process ${\mathbf{\mathcal Y}}$ is given by $\mu(z), \nu(z) $ and their derivatives, which are stationary, mean zero and mixing by assumption. The expression of functions ${\mathbf{\mathcal F}}$ and ${\mathbf{\mathcal G}}$ follows from and the chain rule. The dependence on the fast variable $\tau = z/\eps^2$ is in the arguments of $\cos$ and $\sin$ functions, the real and imaginary parts of the complex exponentials in -.
The forward scattering approximation {#secforward}
------------------------------------
When we use the diffusion-approximation theorem in [@kohler74], we obtain that the limit entries of ${\bf P}^{\eps,b}_\omega(z)$ are coupled to the limit entries of ${\bf P}^{\eps,a}_\omega(z)$ through the coefficients $${\widehat}\cR_\nu(\beta_j + \beta_l) = 2 \int_{0}^\infty dz \, \cR_\nu(z)
\cos[( \beta_j+\beta_l )z]\, , \ \ \ \
{\widehat}\cR_\mu(\beta_j + \beta_l) = 2 \int_{0}^\infty dz \, \cR_\mu(z)
\cos[( \beta_j+\beta_l )z]\, ,$$ for $j,l=1,\ldots, N$. Here ${\widehat}\cR_\nu$ and ${\widehat}\cR_\mu$ are the power spectral densities of the processes $\nu$ and $\mu$, the Fourier transform of their covariance functions. They are evaluated at the sum of the wavenumbers $\beta_j + \beta_l$ because the phase factors present in the matrix ${\bf H}^{(b)}_\omega(z)$ are $\pm(\beta_j+\beta_l)z$. The limit entries of ${\bf
P}^{\eps,a}_\omega(z)$ are coupled to each other through the power spectral densities evaluated at the difference of the wavenumbers, ${\widehat}\cR_\nu(\beta_j - \beta_l)$ and ${\widehat}\cR_\mu(\beta_j -
\beta_l)$, for $j,l=1,\ldots, N$, because the phase factors in the matrix ${\bf H}^{(a)}_\omega(z)$ are $\pm(\beta_j-\beta_l)z$. Thus, if we assume that the power spectral densities are small at large frequencies, we may make the approximation $$\label{validforward}
{\widehat}\cR_\nu(\beta_j + \beta_l) \approx 0\,, \qquad {\widehat}\cR_\mu(\beta_j + \beta_l) \approx 0 \,, \quad \mbox{for} ~ ~j,l = 1,
\ldots, N,$$ which implies that we can neglect coupling between the forward and backward propagating modes as $\eps \to 0$. The forward going modes remain coupled to each other, because at least some combinations of the indexes $j,l$, for instance those with $|j-l|=1$, give non-zero coupling coefficients ${\widehat}\cR_\nu(\beta_j-\beta_l)$ and ${\widehat}\cR_\mu( \beta_j-\beta_l)$.
Because the backward going mode amplitudes satisfy the homogeneous end condition ${\widehat}b_{j}^\eps (\om,L) = 0$, and because they are asymptotically uncoupled from $\{{\widehat}a_j^\eps\}_{j = 1, \ldots, N}$, we can set them to zero. This is the forward scattering approximation, where the forward propagating mode amplitudes satisfy the closed system $$\label{evola}
\frac{d {\widehat}{\itbf a}^\eps}{dz} = \frac{1}{\eps} {\bf H}^{(a)}_\omega
\left(\frac{z}{\eps^2} \right) {\widehat}{\itbf a}^\eps + {\bf
G}^{(a)}_\omega \left(\frac{z}{\eps^2} \right) {\widehat}{\itbf a}^\eps \,
\quad \mbox{for} ~ z > 0, ~ ~ \mbox{and} ~ {\widehat}{a}^\eps_j(\omega,z=0)=
{{\widehat}{a}_{j,o}}(\omega).$$
\[rem.1\] Note that the matrix ${\bf H}^{(a)}_\omega$ is not skew Hermitian, which implies that for a given $\eps$ there is no conservation of energy of the forward propagating modes, over the randomly perturbed region, $$\sum_{j=1}^N | {\widehat}{a}_j^\eps (L) |^2 \ne \sum_{j=1}^N | {\widehat}{a}_{j,o}
|^2.$$ This is due to the local exchange of energy between the propagating and evanescent modes. However, we will see that the energy of the forward propagating modes is conserved in the limit $\eps \to 0$.
The coupled mode diffusion process {#subseccoupledpower}
----------------------------------
We now apply the diffusion approximation theorem to the system (\[evola\]) and obtain, after a long calculation that we do not include for brevity, the main result of this section:
\[propdiff\]The complex mode amplitudes $\{{\widehat}{a}_j^\eps(\omega,z)
\}_{j=1,\ldots,N}$ converge in distribution as $\eps \rightarrow 0$ to a diffusion Markov process $\{{\widehat}{a}_j(\omega,z)
\}_{j=1,\ldots,N}$ with generator ${\cal L}$ given below.
Let us write the limit process as $${\widehat}{a}_j(\omega,z) = P_j(\omega,z)^{1/2} e^{i \theta_j(\omega,z)},
\quad j=1,\ldots,N,$$ in terms of the power $|{\widehat}a_j|^2 = P_j$ and the phase $\theta_j$. Then, we can express the infinitesimal generator ${\cal
L}$ of the limit diffusion as the sum of two operators $$\begin{aligned}
\label{gendiffa}
{\cal L} &=& {\cal L}_P + {\cal L}_\theta . \end{aligned}$$ The first is a partial differential operator in the powers $$\begin{aligned}
\label{gendiff2P}
{\cal L}_P =\sum_{{\scriptsize \begin{array}{c}j, l = 1 \\
j \ne l \end{array}} }^N
\Gamma_{jl}^{(c)}(\omega) \left[ P_l P_j \left(
\frac{\partial}{\partial P_j} -\frac{\partial}{\partial P_l} \right)
\frac{\partial}{\partial P_j} + (P_l-P_j) \frac{\partial}{\partial
P_j} \right] \, ,\end{aligned}$$ where the matrix $\bGamma^{(c)}(\om)$ of coefficients has non-negative off-diagonal entries and rows that sum to zero, $$\label{defgamma1b}
\Gamma_{jj}^{(c)}(\omega) = - \sum_{l \neq j}
\Gamma_{jl}^{(c)}(\omega)\, .$$ The off-diagonal entries are defined by the power spectral densities of the fluctuations $\nu$ and $\mu$, and the derivatives of the eigenfunctions at the boundaries, $$\begin{aligned}
\Gamma_{jl}^{(c)}(\omega) = \frac{X^2}{4 \beta_j(\om) \beta_l(\om)}
\left\{ \left[\partial_\xi \phi_j(\om,X) \partial_\xi
\phi_l(\om,X)\right]^2 {\widehat}\cR_\nu[\beta_j(\om)-\beta_l(\om)] +
\right. \nonumber \\ \left. \left[\partial_\xi \phi_j(\om,0)
\partial_\xi \phi_l(\om,0)\right]^2 {\widehat}\cR_\mu[\beta_j(\om)-\beta_l(\om)]\right\} \, \label{defgamma1} .\end{aligned}$$ The second partial differential operator is with respect to the phases $$\begin{aligned}
\nonumber
{\cal L}_\theta = \frac{1}{4} \sum_{{\scriptsize \begin{array}{c}j, l = 1 \\
j \ne l \end{array}}}^N
\Gamma_{jl}^{(c)}(\omega) \left[ \frac{P_j}{P_l} \frac{\partial^2}{
\partial \theta_l^2} + \frac{P_l}{P_j} \frac{\partial^2}{ \partial
\theta_j^2} + 2 \frac{\partial^2}{\partial \theta_j \partial
\theta_l} \right] + \frac{1}{2} \sum_{j , l=1}^N
\Gamma_{jl}^{(0)}(\omega) \frac{\partial^2}{\partial \theta_j \partial
\theta_l} + \\ \frac{1}{2} \sum_{{\scriptsize \begin{array}{c}j, l = 1 \\
j \ne l \end{array}} }^N
\Gamma_{jl}^{(s)}(\omega) \frac{\partial}{\partial \theta_j} + \sum_{j=1}^N
\kappa_j(\om) \frac{\partial}{\partial \theta_j} \,, \label{gendiff2T}\end{aligned}$$ with nonnegative coefficients $$\begin{aligned}
\Gamma_{jl}^{(0)}(\omega) = \frac{X^2}{4 \beta_j(\om) \beta_l(\om)}
\left\{ \left[\partial_\xi \phi_j(\om,X) \partial_\xi
\phi_l(\om,X)\right]^2 {\widehat}\cR_\nu(0) +
\right. \nonumber \\ \left. \left[\partial_\xi \phi_j(\om,0)
\partial_\xi \phi_l(\om,0)\right]^2 {\widehat}\cR_\mu(0)\right\} \, \label{defgamma10} ,\end{aligned}$$ and $$\begin{aligned}
\Gamma_{jl}^{(s)}(\omega) = \frac{X^2}{4 \beta_j(\om) \beta_l(\om)}
\left\{ \left[\partial_\xi \phi_j(\om,X) \partial_\xi
\phi_l(\om,X)\right]^2 \gamma_{\nu,jl}(\om) + \right. \nonumber
\\ \left. \left[\partial_\xi \phi_j(\om,0) \partial_\xi
\phi_l(\om,0)\right]^2 \gamma_{\mu,jl}(\om)\right\}
\, \label{defgamma1s} , \end{aligned}$$ for $j \ne l$, where $$\begin{aligned}
\label{eq:gammanumu}
\gamma_{\nu,jl} (\omega)&=& 2\int_{0}^\infty dz \, \sin \left[
(\beta_j(\omega)-\beta_l(\omega))z \right] \cR_\nu(z) \,
,\\ \gamma_{\mu,jl} (\omega)&=& 2\int_{0}^\infty dz \, \sin \left[
(\beta_j(\omega)-\beta_l(\omega))z \right] \cR_\mu(z) \, .\end{aligned}$$ The diagonal part of $\Gamma^{(s)}(\om)$ is defined by $$\label{eq:defgamma1s}
\Gamma_{jj}^{(s)}(\om) = - \sum_{l \ne j} \Gamma_{jl}^{(s)}(\om).$$ All the terms in the generator except for the last one in are due to the direct coupling of the propagating modes. The coefficient $\kappa_j$ in the last term is $$\kappa_j(\om) = \kappa_j^{(a)}(\om) + \kappa_j^{(e)}(\om),
\label{eq:kappa1}$$ with the first part due to the direct coupling of the propagating modes and given by $$\begin{aligned}
\nonumber \kappa_j^{(a)} &=& \cR_\nu(0) \left\{
\int_0^X \hspace{-0.03in} d \xi \left[ \frac{\om^2}{4 \beta_j} \xi^2
\phi_j^2 \, \partial_\xi^2 c^{-2} - \frac{3}{2 \beta_j}
(\partial_\xi \phi_j)^2 \right] +
\hspace{-0.03in} \sum_{l \ne j, l = 1}^N (\beta_l + \beta_j) \big[
d_{\nu,jl}^2 (\beta_l^2-\beta_j^2) +2 d_{\nu,jl}c_{\nu,jl}\big]
\right\} - \nonumber \\ && \cR_\nu''(0) \left\{ \frac{1}{4 \beta_j} -
\frac{1}{2 \beta_j} \int_0^X d \xi \, \xi^2 ( \partial_\xi \phi_j )^2
+
\hspace{-0.03in} \sum_{l \ne j, l = 1}^N (\beta_l-\beta_j) d_{\nu,jl}^2
\right\} ~ ~ + ~ ~ \mu \mbox{ terms, }
\label{eq:kappaa}\end{aligned}$$ with the abbreviation “$\mu$ terms” for the similar contribution of the $\mu$ process. The coupling via the evanescent modes determines the second term in (\[eq:kappa1\]), and it is given by $$\begin{aligned}
\kappa_j^{(e)} &=& \sum_{l=N+1}^\infty \frac{ X^2 \left[ \partial_\xi
\phi_j(X) \partial_\xi \phi_l(X)\right]^2}{2 \beta_j \beta_l
(\beta_j^2 + \beta_l^2)^2} \int_0^\infty ds \, e^{-\beta_l s}
\cR_\nu''(s) \left[ (\beta_l^2 - \beta_j^2)
\cos(\beta_j s) - 2 \beta_j \beta_l \sin(\beta_j s) \right] +
\nonumber \\ && \sum_{l=N+1}^\infty 2 \beta_l \left[-d_{\nu,lj}^2
\cR_\nu''(0) + \frac{c_{\nu,lj}^2}{\beta_j^2 +
\beta_l^2} \cR_\nu(0) \right] ~ ~ + ~ ~ \mu \mbox{ terms.}
\label{eq:kappa_e}\end{aligned}$$
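The coefficients (\[defgamma1\]), (\[defgamma1b\]) and (\[defgamma10\]) are easy to assemble numerically once the power spectral densities and the boundary values of the eigenfunction derivatives are specified. The sketch below is illustrative only: it assumes a constant wave speed, the Dirichlet eigenfunctions $\phi_j = \sqrt{2/X}\,\sin(\pi j\xi/X)$, and a common Gaussian power spectral density for $\nu$ and $\mu$.

```python
import numpy as np

# Assemble Gamma^{(c)} (defgamma1), (defgamma1b) and Gamma^{(0)} (defgamma10).
# Illustrative assumptions: constant wave speed, Dirichlet eigenfunctions, and a
# common Gaussian power spectral density for nu and mu with correlation length ell.
X, ell, N, alpha = 1.0, 0.05, 20, 0.5
j = np.arange(1, N + 1)
beta = (np.pi / X) * np.sqrt((N + alpha) ** 2 - j ** 2)
dphi_0 = np.sqrt(2.0 / X) * (np.pi * j / X)         # d phi_j / d xi at xi = 0
dphi_X = dphi_0 * (-1.0) ** j                       # d phi_j / d xi at xi = X

def R_hat(b):                                       # Gaussian power spectral density
    return np.sqrt(2.0 * np.pi) * ell * np.exp(-(b * ell) ** 2 / 2.0)

Gc = np.zeros((N, N)); G0 = np.zeros((N, N))
for a in range(N):
    for b in range(N):
        pref = X ** 2 / (4.0 * beta[a] * beta[b])
        G0[a, b] = pref * ((dphi_X[a] * dphi_X[b]) ** 2 + (dphi_0[a] * dphi_0[b]) ** 2) * R_hat(0.0)
        if a != b:
            Gc[a, b] = pref * ((dphi_X[a] * dphi_X[b]) ** 2
                               + (dphi_0[a] * dphi_0[b]) ** 2) * R_hat(beta[a] - beta[b])
np.fill_diagonal(Gc, -Gc.sum(axis=1))               # defgamma1b: rows sum to zero
print(np.allclose(Gc.sum(axis=1), 0.0), np.allclose(Gc, Gc.T))   # True True
```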
### Discussion {#sect:discuss}
We now describe some properties of the diffusion process ${\widehat}{\itbf
a}$:
1. Note that the coefficients of the partial derivatives in $P_j$ of the infinitesimal generator ${\cal L}$ depend only on $\{P_l\}_{l
=1,\ldots,N}$. This means that the mode powers $\{|{\widehat}{a}_j^\eps(\omega,z) |^2 \}_{j=1,\ldots,N}$ converge in distribution as $\eps \rightarrow 0$ to the diffusion Markov process $\{|{\widehat}{a}_j(\omega,z) |^2=P_j(\omega,z)\}_{j=1,\ldots,N}$, with generator ${\cal L}_P$.
2. As we remarked before, the evanescent modes influence only the coefficient $\kappa_j(\om)$ which appears in ${\cal L}_\theta$ but not in ${\cal L}_P$. This means that the evanescent modes do not change the energy of the propagating modes in the limit $\eps \to
0$. They also do not affect the coupling of the modes of the limit process, because $\kappa_j$ is in the diagonal part of . The only effect of the evanescent modes is a net dispersion (frequency dependent phase modulation) for each propagating mode.
3. The generator ${\cal L}$ can also be written in the equivalent form [@book07 Section 20.3] $$\begin{aligned}
\nonumber
{\cal L} &=&
\frac{1}{4} \sum_{{\scriptsize \begin{array}{c}j, l = 1 \\
j \ne l \end{array}}}^N {\Gamma}_{jl}^{(c)}(\omega) \left(
A_{jl} \overline{A_{jl}} +
\overline{A_{jl}} {A_{jl}} \right) +
\frac{1}{2} \sum_{j,l=1}^N \Gamma^{(0)}_{jl}(\omega) A_{jj} \overline{A_{ll}}
\\
&&
+ \frac{i}{4}
\sum_{{\scriptsize \begin{array}{c}j, l = 1 \\
j \ne l \end{array}}}^N \Gamma^{(s)}_{jl}(\omega) (A_{jj} - A_{ll})
+ i
\sum_{j=1 }^N \kappa_j(\om) A_{jj} \, ,
\label{gendiffabis} \end{aligned}$$ in terms of the differential operators $$\begin{aligned}
A_{jl}&=& {\widehat}{a}_j \frac{\partial}{\partial {\widehat}{a}_l}
-\overline{{\widehat}{a}_l} \frac{\partial}{\partial
\overline{{\widehat}{a}_j}} = - \overline{A_{lj}} \, .
\label{gendiffbbis}\end{aligned}$$ Here the complex derivatives are defined in the standard way: if $z=x+iy$, then $\partial_z=(1/2)(\partial_x-i\partial_y)$ and $\partial_{\overline{z}}=(1/2)(\partial_x+i\partial_y)$.
4. The coefficients of the second derivatives in (\[gendiffabis\]) are homogeneous of degree two, while the coefficients of the first derivatives are homogeneous of degree one. This implies that we can write closed ordinary differential equations in the limit $\eps \to 0$ for the moments of any order of $\{{\widehat}a_j^\eps\}_{j = 1, \ldots, N}$.
5. Because $$\label{eq:CONS}{\cal L} \left( \sum_{l=1}^N | {\widehat}{a}_l |^2 \right)=0,$$ we have conservation of energy of the limit diffusion process. More explicitly, the process is supported on the sphere in $\CC^N$ with center at zero and radius $R_o$ determined by the initial condition $$R_o^2 = \sum_{l=1}^N |{{\widehat}{a}_{l,o}}(\omega)|^2.$$ Since ${\cal L}$ is not self-adjoint on the sphere, the process is not reversible. But the uniform measure on the sphere is invariant, and the generator is strongly elliptic. From the theory of irreducible Markov processes with compact state space, we know that the process is ergodic and thus ${\widehat}{\itbf a}(z)$ converges for large $z$ to the uniform distribution over the sphere of radius $R_o$. This can be used to compute the limit distribution of the mode powers $(|{\widehat}{a}_j|^2)_{j=1,\ldots,N}$ for large $z$, which is the uniform distribution over the set $${\cal H}_N = \Big\{
\{P_j\}_{j=1,\ldots,N} , \, P_j \geq0, \, \sum_{j=1}^N P_j =R_o^2
\Big\} \, . \label{eq:HN}$$ We carry out a more detailed analysis that is valid for any $z$ in the next section.
### Independence of the change of coordinates that flatten the boundaries {#sect:indc}
The coefficients , and of the generator ${\cal L}$ have simple expressions and are determined only by the covariance functions of the fluctuations $\nu(z)$ and $\mu(z)$ and the boundary values of the derivatives of the eigenfunctions $\phi_j(\om,\xi)$ in the unperturbed waveguide. The dispersion coefficient $\kappa_j$ has a more complicated expression -, which involves integrals of products of the eigenfunctions and their derivatives with powers of $\xi$ or $X-\xi$. These factors in $\xi$ are present in our change of coordinates $$\ell^\eps(z,\xi) = B(z) + [T(z)-B(z)]\frac{\xi}{X} = \xi + \eps
\left[(X-\xi) \mu(z) + \xi \nu(z)\right],
\label{eq:ell}$$ so it is natural to ask if the generator ${\cal L}$ depends on the change of coordinates. We show here that this is not the case.
Let $F^\eps(z,\xi) \in C^1\left([0,\infty) \times [0,X]\right)$ be a general change of coordinates satisfying $$\label{as1}
F^{\eps}(z,\xi)=\left\{
\begin{array}{cll}
X(1+\eps\nu(z)) &\text{for} & \xi=X\\ \eps X \mu(z)
&\text{for} & \xi=0\;
\end{array}
\right.$$ for each $\eps > 0$, and converging uniformly to the identity mapping as $\eps \to 0$, $$\begin{aligned}
\label{as2}
\sup_{z\geq0}\sup_{\xi\in[0,X]}|F^{\eps}(z,\xi)-\xi| = O(\eps), \qquad
\sup_{z\geq0}\sup_{\xi\in[0,X]}|\partial_{z}F^{\eps}(z,\xi)| = O(\eps).\end{aligned}$$ Note that is not restrictive in our context since $(\mu(z),\nu(z))$ and their derivatives are uniformly bounded. Define the wavefield $${\widehat}w(\om,\xi,z) = {\widehat}p\left(\om,F^\eps(z,\xi),z\right),
\label{eq:w}$$ and decompose it into the waveguide modes, as we did for $ {\widehat}u(\om,\xi,z) = {\widehat}p\left(\om,\ell^\eps(z,\xi),z\right). $ We have the following result proved in appendix \[ap:coordc\].
\[thm.2\] The amplitudes of the propagating modes of the wave field converge in distribution as $\eps \to 0$ to the same limit diffusion as in Theorem \[propdiff\].
### The loss of coherence of the wave field {#subseccoupledpower2}
From Theorem \[propdiff\] and the expression of the generator we get by direct calculation the following result for the mean mode amplitudes.
\[prop.mean\] As $\eps \to 0$, $\EE[ {\widehat}a_j^\eps(\om,z) ]$ converges to the expectation of the limit diffusion ${\widehat}a_j(\om,z)$, given by $$\EE[{\widehat}a_j(\om,z)] = {\widehat}a_{j,o}(\om) \, \mbox{\em exp}\left \{
\Big[\frac{ \Gamma_{jj}^{(c)}(\om) - \Gamma_{jj}^{(0)}(\om)
}{2}\Big] z + i \Big[ \frac{\Gamma^{(s)}_{jj}(\om)}{2} + \kappa_j(\om)
\Big] z\right\}\, .
\label{eq:mean}$$
As we remarked before, $\Gamma_{jj}^{(c)} -
\Gamma_{jj}^{(0)}$ is negative, so the mean mode amplitudes decay exponentially with the range $z$. Furthermore, we see from and that $\Gamma_{jj}^{(c)} -
\Gamma_{jj}^{(0)}$ is the sum of terms proportional to $\left(
\partial_\xi \phi_j(X) \right)^2/\beta_j$ and $\left( \partial_\xi
\phi_j(0) \right)^2/\beta_j$. These terms increase with $j$, and they can be very large when $j \sim N$. Thus, the mean amplitudes of the high order modes decay faster in $z$ than the ones of the low order modes. We return to this point in section \[sect:comparisson\], where we estimate the net attenuation of the wave field in the high frequency regime $N \gg 1$.
That the mean field decays exponentially with range implies that the wave field loses its coherence, and energy is transferred to its incoherent part, the fluctuations. The incoherent part of the amplitude of the $j-$th mode is ${\widehat}a_j^\eps - \EE[{\widehat}a_j^\eps ]$, and its intensity is given by the variance $\EE[ |{\widehat}a_j^\eps|^2] -
\left|\EE[{\widehat}a_j^\eps]\right|^2$. The mode is incoherent if its mean amplitude is dominated by the fluctuations, that is if $$\left[\EE[ |{\widehat}a_j^\eps|^2] - \left|\EE[{\widehat}a_j^\eps]\right|^2\right]^{1/2} \gg \left|\EE[{\widehat}a_j^\eps]\right|.$$ We know that the right hand side converges to as $\eps
\to 0$. We calculate next the limit of the mean powers $\EE[ |{\widehat}a_j^\eps|^2]$.
### Coupled power equations and equipartition of energy
As we remarked in section \[sect:discuss\], the mode powers $|
{\widehat}{a}^\eps_j(\omega,z)|^2$, for $j = 1, \ldots, N$, converge in distribution as $\eps \rightarrow 0$ to the diffusion Markov process $(P_j(\omega,z))_{j=1,\ldots,N}$ supported in the set , and with infinitesimal generator ${\cal L}_P $. We use this result to calculate the limit of the mean mode powers $${P}^{(1)}_j (\omega,z) = \EE [ P_j(\omega,z)]= \lim_{\eps
\rightarrow 0} \EE [ | {\widehat}{a}_j^\eps(\omega,z)|^2] \, .$$
\[propmom20\] As $\eps \to 0$, $\EE[ | {\widehat}{a}_j^\eps(\omega,z)|^2 ]$ converges to ${P}^{(1)}_j (\omega,z)$, the solution of the coupled linear system $$\label{eqP1}
\frac{d {P}^{(1)}_j}{dz } = \sum_{n=1}^N \Gamma_{jn}^{(c)}(\omega) \left(
{P}^{(1)}_n- {P}^{(1)}_j \right) , \quad z >0\, ,$$ with initial condition ${P}^{(1)}_j(\omega,z=0)= |
{{\widehat}{a}_{j,o}}(\omega)|^2$, for $j=1,\ldots,N$.
The matrix $\bGamma^{(c)}(\omega)$ is symmetric, with rows summing to zero, by definition. Thus, we can rewrite in vector-matrix form $$\frac{d {\itbf P}^{(1)}(z)}{dz} = \bGamma^{(c)}(\om) {\itbf P}^{(1)}(z),
\quad z > 0, ~ ~\mbox{and} ~ ~ {\itbf P}^{(1)}(0) = {\itbf P}^{(1)}_o,$$ with ${\itbf P}^{(1)}(z) = \left({P}^{(1)}_1, \ldots,
{P}^{(1)}_N\right)^T$ and ${\itbf P}^{(1)}_o$ the vector with components $| {{\widehat}{a}_{j,o}}(\omega)|^2$, for $j = 1, \ldots, N$. The solution is given by the matrix exponential $${\itbf P}^{(1)}(z) = \mbox{exp} \left[ \bGamma^{(c)}(\om) z\right]
{\itbf P}^{(1)}_o.
\label{eq:P1EXP}$$ We know from that the off-diagonal entries in $\bGamma^{(c)}$ are not negative. If we assume that they are strictly positive, which is equivalent to asking that the power spectral densities of $\nu$ and $\mu$ do not vanish at the arguments $\beta_j-\beta_l$, for all $j,l = 1, \ldots, N$, we can apply the Perron-Frobenius theorem to conclude that zero is a simple eigenvalue of $\boldsymbol{\Gamma}^{(c)}(\omega)$, and that all the other eigenvalues are negative, $$\Lambda_{N(\omega)}(\omega) \leq \cdots \leq
\Lambda_2(\omega)<0.$$ This shows that as the range $z$ grows, the vector ${\itbf P}^{(1)}(z)$ tends to the null space of $\bGamma^{(c)}$, the span of the vector $(1,\ldots,1)^T$. That is to say, the mode powers converge to the uniform distribution in the set at exponential rate $$\label{eq:equip}
\sup_{j=1,\ldots,N(\omega)} \Big| {P}^{(1)}_j (\omega,z) -
\frac{R_o^2(\omega)}{N(\omega)} \Big| \leq C e^{-|\Lambda_2(\omega)| z}
\, .$$ As $z \to \infty$, we have equipartition of energy among the propagating modes.
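The matrix exponential solution (\[eq:P1EXP\]) and the equipartition estimate (\[eq:equip\]) can be explored numerically. The sketch below uses an arbitrary symmetric matrix with positive off-diagonal entries and zero row sums as a stand-in for $\bGamma^{(c)}(\om)$; it is an illustration only.

```python
import numpy as np
from scipy.linalg import expm

# Solve d P1/dz = Gamma^c P1 (eq:P1EXP) and check the equipartition estimate (eq:equip).
# Gc is a stand-in for Gamma^{(c)}: symmetric, positive off-diagonal, zero row sums.
rng = np.random.default_rng(1)
N = 8
Gc = rng.uniform(0.1, 1.0, size=(N, N)); Gc = 0.5 * (Gc + Gc.T)
np.fill_diagonal(Gc, 0.0); np.fill_diagonal(Gc, -Gc.sum(axis=1))

P0 = np.zeros(N); P0[0] = 1.0                    # all initial power in the first mode
Ro2 = P0.sum()
lam = np.linalg.eigvalsh(Gc)                     # ascending; lam[-1] ~ 0, lam[-2] = Lambda_2
Lambda2 = lam[-2]

for z in [0.5, 2.0, 10.0]:
    P1 = expm(Gc * z) @ P0                       # mean mode powers at range z
    err = np.max(np.abs(P1 - Ro2 / N))
    print(z, err, np.exp(Lambda2 * z))           # err decays at the rate |Lambda_2|
```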
### Fluctuations of the mode powers {#secfluc}
To estimate the fluctuations of the mode powers, we use again Theorem \[propdiff\] to compute the fourth order moments of the mode amplitudes: $${P}^{(2)}_{jl}(\omega,z) = \lim_{\eps \rightarrow 0} \EE \left[
|{\widehat}{a}_j^\eps(\omega,z) |^2 |{\widehat}{a}_l^\eps(\omega,z) |^2 \right]
=\EE [ P_j(\omega,z) P_l(\omega,z) ] \, .$$ Using the generator ${\cal L}_P$, we get the following coupled system of ordinary differential equations for the limit moments $$\begin{aligned}
\frac{d {P}^{(2)}_{jj}}{dz} &=& \sum_{{\scriptsize \begin{array}{c} n
= 1 \\ n \neq j \end{array} } }^N \Gamma_{jn}^{(c)} \left( 4
{P}^{(2)}_{jn}-2 {P}^{(2)}_{jj} \right) \, , \nonumber \\ \frac{d
{P}^{(2)}_{jl}}{dz} &=& - 2 \Gamma_{jl}^{(c)} {P}^{(2)}_{jl} +
\sum_{n=1}^N \Gamma_{ln}^{(c)} \left( {P}^{(2)}_{jn} -
{P}^{(2)}_{jl}\right) + \sum_{n= 1}^N \Gamma_{jn}^{(c)}
\left( {P}^{(2)}_{ln} - {P}^{(2)}_{jl}\right),
\ \ \ \ \ j\neq l \, , \quad z > 0, \label{eq:4thmom}\end{aligned}$$ with initial conditions $${P}^{(2)}_{jl}(0)=|{\widehat}{a}_{j,o}|^2|{\widehat}{a}_{l,o}|^2.$$ The solution of this system can be written again in terms of the exponential of the evolution matrix.
It is straightforward to check that the function ${P}_{jl}^{(2)}
\equiv 1 +\delta_{jl}$ is a stationary solution of . Using the positivity of $\Gamma_{jl}^{(c)}$ for $j\neq l$, we conclude that this stationary solution is asymptotically stable, meaning that the solution ${P}_{jl}^{(2)}(z)$ converges as $z \rightarrow \infty$ to $${P}^{(2)}_{jl}(z)
\stackrel{z \rightarrow \infty}{\longrightarrow}
\left\{ \begin{array}{ll}
\displaystyle \frac{1}{N(N+1)} R_o^4 & \mbox{ if } j \neq l \, , \\
\displaystyle \frac{2}{N(N+1)} R_o^4 & \mbox{ if } j = l \, ,
\end{array}
\right.$$ where $R_o^2 = \sum_{j=1}^N |{\widehat}{a}_{j,o}|^2$. This implies that the correlation of $P_j(z)$ and $P_l(z)$ converges to $-1/(N-1)$ if $j
\neq l$ and to $(N-1)/(N+1)$ if $j=l$ as $z \to \infty$. We see from the $j \neq l$ result that if, in addition, the number of modes $N$ becomes large, then the mode powers become uncorrelated. The $j=l$ result shows that, whatever the number of modes $N$, the mode powers $P_j$ are not statistically stable quantities in the limit $z \to
\infty$, since $$\frac{{\rm Var} ( P_j(\omega,z) )}{\EE [P_j(\omega,z)]^2}
\stackrel{z \rightarrow \infty}{\longrightarrow}
\frac{N-1}{N+1} \, .$$
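These limits, and the fourth-moment system (\[eq:4thmom\]) itself, can be checked numerically; the sketch below again uses an arbitrary stand-in for $\bGamma^{(c)}$ and is illustrative only.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate the fourth-moment system (eq:4thmom) and check its z -> infinity limits.
rng = np.random.default_rng(2)
N = 6
Gc = rng.uniform(0.1, 1.0, size=(N, N)); Gc = 0.5 * (Gc + Gc.T)
np.fill_diagonal(Gc, 0.0); np.fill_diagonal(Gc, -Gc.sum(axis=1))

a0 = np.zeros(N); a0[0] = 1.0
Ro2 = np.sum(np.abs(a0) ** 2)
P2_0 = np.outer(np.abs(a0) ** 2, np.abs(a0) ** 2)     # P2_jl(0) = |a_{j,o}|^2 |a_{l,o}|^2

def rhs(z, y):
    P2 = y.reshape(N, N)
    dP2 = np.zeros_like(P2)
    for j in range(N):
        for l in range(N):
            if j == l:
                dP2[j, j] = sum(Gc[j, n] * (4 * P2[j, n] - 2 * P2[j, j])
                                for n in range(N) if n != j)
            else:
                dP2[j, l] = (-2 * Gc[j, l] * P2[j, l]
                             + sum(Gc[l, n] * (P2[j, n] - P2[j, l]) for n in range(N))
                             + sum(Gc[j, n] * (P2[l, n] - P2[j, l]) for n in range(N)))
    return dP2.ravel()

sol = solve_ivp(rhs, [0.0, 50.0], P2_0.ravel(), rtol=1e-8, atol=1e-10)
P2_inf = sol.y[:, -1].reshape(N, N)
print(P2_inf[0, 1], Ro2 ** 2 / (N * (N + 1)))         # off-diagonal limit
print(P2_inf[0, 0], 2 * Ro2 ** 2 / (N * (N + 1)))     # diagonal limit
print((P2_inf[0, 0] - (Ro2 / N) ** 2) / (Ro2 / N) ** 2, (N - 1) / (N + 1))  # variance ratio
```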
Estimation of net diffusion {#sect:comparisson}
===========================
To illustrate the random boundary cumulative scattering effect over long ranges, we quantify in this section the diffusion coefficients $\Gamma_{jl}^{(c)}$ and $\Gamma_{jl}^{(0)}$ in the generator ${\mathcal L}$ of the limit process. In particular, we calculate the mode-dependent net attenuation rate $${\mathcal K}_j(\om) = \frac{\Gamma_{jj}^{(0)}(\om) -
\Gamma_{jj}^{(c)}(\om)}{2}\, ,
\label{eq:EE2}$$ that determines the coherent (mean) amplitudes as shown in . The attenuation rate gives the range scale over which the $j-$th mode becomes essentially incoherent, because equations and give $$\frac{\left|\EE\left[ {\widehat}a_j(\om,z) \right]\right|}{\sqrt{\EE\left[
\left|{\widehat}a_j(\om,z)\right|^2 \right] -\left|\EE\left[ {\widehat}a_j(\om,z) \right]\right|^2}} \ll 1 \qquad \mbox{if} ~
z \gg {\mathcal K}_j^{-1}.$$ The reciprocal of the attenuation rate can therefore be interpreted as a scattering mean free path. The scattering mean free path is classically defined as the propagation distance beyond which the wave loses its coherence [@rossum]. Here it is mode-dependent.
Note that the attenuation rate ${\mathcal K}_j(\om)$ is the sum of two terms. The first one involves the phase diffusion coefficient $\Gamma_{jj}^{(0)}$ in the generator ${\mathcal L}_\theta$, and determines the range scale over which the cumulative random phase of the amplitude ${\widehat}a_j$ becomes significant, thus giving exponential damping of the expected field $\EE[{\widehat}a_j]$. The second term is the mode-dependent energy exchange rate $${\mathcal J}_j(\om) = - \frac{
\Gamma_{jj}^{(c)}(\om)}{2}\, ,
\label{eq:EE2J}$$ given by the power diffusion coefficients in the generator ${\mathcal L}_P$. Each waveguide mode can be associated with a direction of incidence at the unperturbed boundary, and energy is exchanged between modes when they scatter, because of the fluctuation of the angles of incidence at the random boundaries. We can interpret the reciprocal of the energy exchange rate as a transport mean free path, which is classically defined as the distance beyond which the wave forgets its initial direction [@rossum].
The third important length scale is the equipartition distance $1/|\Lambda_2(\om)|$, defined in terms of the second largest eigenvalue of the matrix $\bGamma^{(c)}(\om)$. It is the distance over which the energy becomes uniformly distributed over the modes, independently of the initial excitation at the source, as shown in equation .
Estimates for a waveguide with constant wave speed
--------------------------------------------------
To give sharp estimates of ${\mathcal K}_j$ and ${\mathcal J}_j$ for $j = 1, \ldots, N$, we assume in this section a waveguide with constant wave speed $c(\xi) = c_o$ and a high frequency regime $N \gg
1$. Note from that the magnitude of $\Gamma_{jj}^{(c)}$ depends on the rate of decay of the power spectral densities ${\widehat}\cR_\nu(\beta)$ and ${\widehat}\cR_\mu(\beta)$ with respect to the argument $\beta$. We already made the assumption on the decay of the power spectral densities, in order to justify the forward scattering approximation. In particular, we assumed that ${\widehat}\cR_\nu(\beta)\simeq {\widehat}\cR_\mu(\beta) \simeq 0$ for all $\beta \geq 2 \beta_N$. Thus, for a given mode index $j$, we expect large terms in the sum in for indices $l$ satisfying $$|\beta_j - \beta_l| \lesssim 2 \beta_N = \frac{2 \pi}{X} \sqrt{2 \alpha N} ,
\label{eq:EE9}$$ where we used the definition $$\beta_j(\om) = \frac{\pi}{X} \sqrt{ (N + \alpha)^2 - j^2}, \quad j =1,
\ldots, N, \quad \mbox{and} \quad \frac{kX}{\pi} = N + \alpha, \quad
\mbox{for} ~~ \alpha \in (0,1)\, .
\label{eq:EE5}$$ Still, it is difficult to get a precise estimate of $\Gamma_{jj}^{(c)}$ given by , unless we make further assumptions on $\cR_\nu$ and $\cR_\mu$. For the calculations in this section we take the Gaussian covariance functions $$\cR_\nu(z) =
\mbox{exp}\left(-\frac{z^2}{2 \ell^2_\nu} \right) \quad \mbox{and} \quad
\cR_\mu(z) =
\mbox{exp}\left(-\frac{z^2}{2 \ell^2_\mu} \right) \, ,
\label{eq:Gaussian}$$ and we take for convenience equal correlation lengths $\ell_\nu = \ell_\mu = \ell\, .$ The power spectral densities are $${\widehat}\cR_\nu(\beta) = {\widehat}\cR_\mu(\beta) = \sqrt{2 \pi} \, \ell\, \mbox{exp} \left(
-\frac{\beta^2 \ell^2}{2} \right) \, ,
\label{eq:PSGaussian}$$ and they are negligible for $ \beta \geq {3}/{\ell}$. Since $N = \left \lfloor {k
X}/{\pi} \right \rfloor$, we see that becomes $$|\beta_j-\beta_l| \leq \frac{3}{\ell} \lesssim \frac{2 \pi}{X} \sqrt{2
\alpha N} \quad \mbox{or equivalently, } \quad k \ell \gtrsim \frac{3}{2
\sqrt{2 \alpha}} \sqrt{N} \gg 1\, .
\label{eq:kllarge}$$ Thus, assumption amounts to having correlation lengths that are larger than the wavelength. The attenuation and energy exchange rates and are estimated in detail in Appendix \[ap:estim\]. We summarize the results in the following proposition, in the case[^2] $$\sqrt{N} \lesssim k \ell \ll N.
\label{eq:assumekell}$$
\[prop.estim\] The attenuation rate ${\mathcal K}_j(\om)$ increases monotonically with the mode index $j$. The energy exchange rate ${\mathcal J}_j(\om)$ increases monotonically with the mode index $j$ up to the high modes of order $N$ where it can decay if $k\ell \gg \sqrt{N}$. For the low order modes we have $${\mathcal J}_j(\om) X
\approx
{\mathcal K}_j(\om) X \sim (k\ell)^{-1/2}
, \quad j \sim 1\, .
\label{eq:estAtt1}$$ For the intermediate modes we have $${\mathcal J}_j(\om) X
\approx
{\mathcal K}_j(\om) X \sim N^2 \frac{(j/N)^3}{\sqrt{1-(j/N)^2}}
, \quad 1 \ll j \ll N \, .
\label{eq:estAtt1med}$$ For the high order modes we have $${\mathcal J}_j(\om) X \sim \frac{N^3}{k \ell} , \quad \quad {\mathcal K}_j(\om) X \sim k \ell N^2\, , \quad j \sim N\, ,
\label{eq:estAttN}$$ for $k\ell \sim \sqrt{N}$, but when $k \ell \gg \sqrt{N}$, $${\mathcal J}_j(\om) X \ll {\mathcal K}_j(\om) X \sim k \ell N^2\, , \quad j \sim N\, .$$
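These scalings can be probed directly by evaluating the generator coefficients. The sketch below is illustrative only: it assumes a constant wave speed, Dirichlet eigenfunctions, the Gaussian spectra (\[eq:PSGaussian\]) with equal correlation lengths, and $k\ell \sim \sqrt{N}$ as in (\[eq:assumekell\]).

```python
import numpy as np

# Compute the attenuation rate K_j (eq:EE2) and the energy exchange rate J_j (eq:EE2J)
# for a constant-speed waveguide with Dirichlet boundary conditions and the Gaussian
# power spectral densities (eq:PSGaussian), with k*ell of order sqrt(N).
X, N, alpha = 1.0, 100, 0.5
k = np.pi * (N + alpha) / X
ell = np.sqrt(N) / k                                 # k * ell = sqrt(N), cf. eq:assumekell
j = np.arange(1, N + 1)
beta = (np.pi / X) * np.sqrt((N + alpha) ** 2 - j ** 2)
dphi = np.sqrt(2.0 / X) * (np.pi * j / X)            # |d phi_j / d xi| at either boundary

def R_hat(b):
    return np.sqrt(2.0 * np.pi) * ell * np.exp(-(b * ell) ** 2 / 2.0)

K = np.zeros(N); J = np.zeros(N)
for a in range(N):
    pref = X ** 2 / (4.0 * beta[a] * beta)
    coupl = 2.0 * (dphi[a] * dphi) ** 2              # equal contributions from both boundaries
    Gc_row = pref * coupl * R_hat(beta[a] - beta)    # off-diagonal entries of row j = a+1
    Gc_row[a] = 0.0
    G0_jj = pref[a] * coupl[a] * R_hat(0.0)          # defgamma10 with l = j
    J[a] = 0.5 * Gc_row.sum()                        # J_j = -Gamma^{(c)}_{jj}/2
    K[a] = 0.5 * (G0_jj + Gc_row.sum())              # K_j = (Gamma^{(0)}_{jj}-Gamma^{(c)}_{jj})/2
for a in [0, N // 2, N - 1]:
    print(a + 1, K[a] * X, J[a] * X)                 # compare with (eq:estAtt1)-(eq:estAttN)
```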
The results summarized in Proposition \[prop.estim\] show that scattering from the random boundaries has a much stronger effect on the high order modes than the low order ones. This is intuitive, because the modes with large index bounce more often from the boundaries. The damping rate ${\mathcal K}_j$ is very large, of order $N^2 k\ell$ for $j \sim N$, which means that the amplitudes of these modes become incoherent quickly, over scaled[^3] ranges $z \sim X N^{-2} (k\ell)^{-1} \ll X$. The modes with index $j
\sim 1$ keep their coherence over ranges $z = O(X)$, because their mean amplitudes are essentially undamped ${\mathcal K}_j X \ll 1$ for $j \sim 1$. However, the modes lose their coherence eventually, because the damping becomes visible at longer ranges $z > X (k\ell)^{1/2}$.
Note that the scattering mean free paths and the transport mean free paths are approximately the same for the low and intermediate index modes, but not for the high ones. The energy exchange rate for the high order modes may be much smaller than the attenuation rate in high frequency regimes with $k \ell \gg \sqrt{N}$. These modes reach the boundary many times over a correlation length, at almost the same angle of incidence, so the exchange of energy is not efficient and it occurs only between neighboring modes. There is however a significant cumulative random phase in ${\widehat}a_j$ for $j \sim N$, given by the addition of the correlated phases gathered over the multiple scattering events. This significant phase causes the loss of coherence of the amplitudes of the high order modes, the strong damping of $\EE[{\widehat}a_j]$.
Note also that a direct calculation[^4] of the second largest eigenvalue of $\Gamma^{(c)}(\om)$ gives that $$|\Lambda_2(\om)| \approx |\Gamma_{11}^{(c)}(\om)| \sim (k \ell)^{-1/2}.$$ Thus, the equipartition distance is similar to the scattering mean free path of the first mode. This mode can travel longer distances than the others before it loses its coherence, but once that happens, the waves have entered the equipartition regime, where the energy is uniformly distributed among all the modes. The waves forget the initial condition at the source.
Comparison with waveguides with internal random inhomogeneities
---------------------------------------------------------------
When we compare the results in Proposition \[prop.estim\] with those in [@book07 Chapter 20] for random waveguides with interior inhomogeneities but straight boundaries, we see that even though the random amplitudes of the propagating modes converge to a Markov diffusion process with the same form of the generator as , the net effects on coherence and energy exchange are different in terms of their dependence with respect to the modes. Let us look in detail at the attenuation rate that determines the range scale over which the amplitudes of the propagating modes lose coherence. To distinguish it from , we denote the attenuation rate by $\widetilde {\mathcal K}_j$ and the energy exchange rate by $\widetilde {\mathcal J}_j$, and recall from [@book07 Section 20.3.1] that they are given by $$\label{Ktilde}
\widetilde {\mathcal K}_j = \frac{k^4 {\widehat}\cR_{jj}(0)}{8 \beta_j^2} +\widetilde {\mathcal J}_j
\, , \quad \quad \quad
\widetilde {\mathcal J}_j =
\sum_{{\scriptsize \begin{array}{c}l = 1 \\ l \ne
j \end{array}} }^N
\hspace{-0.05in}\frac{k^4}{8 \beta_j \beta_l } {\widehat}\cR_{jl}\left(\beta_j-\beta_l\right)\, .$$ Here ${\widehat}\cR_{jl}(z)$ is the Fourier transform (power spectral density) of the covariance function $\cR_{jl}(z)$ of the stationary random processes $$C_{jl}(z) = \int_0^X d x \, \phi_j(x)\phi_l(x) \nu(x,z)\, ,$$ the projection on the eigenfunctions of the random fluctuations $\nu(x,z)$ of the wave speed.
For our comparison we assume isotropic, stationary fluctuations with mean zero and Gaussian covariance function $$\cR(x,z) = \EE\left[ \nu(x,z) \nu (0,0) \right] = e^{-\frac{x^2+z^2}{2 \ell^2}}\,,$$ so the power spectral densities are $${\widehat}\cR_{jl}(\beta) \approx \frac{\pi \ell^2}{X} e^{- \frac{(k
\ell)^2}{2}\left(\frac{X \beta}{\pi N}\right)^2} \left[e^{ -
\frac{(k \ell)^2}{2}\left(\frac{j}{N}-\frac{l}{N}\right)^2} + e^{
- \frac{(k \ell)^2}{2}\left(\frac{j}{N}+\frac{l}{N}\right)^2} +
\delta_{jl} \right] \, .$$ Thus, becomes $$\begin{aligned}
\widetilde {\mathcal K}_j &=& \frac{\pi (k \ell)^2}{8 X} \frac{2
+ e^{-2 (k \ell)^2 (j/N)^2}}{ \left(1+\alpha/N\right)^2 - (j/N)^2} +
\widetilde {\mathcal J}_j , \\
\widetilde {\mathcal J}_j &=& \frac{\pi (k \ell)^2}{8 X}
\sum_{{\scriptsize \begin{array}{c}l = 1 \\ l \ne
j \end{array}} }^N
\hspace{-0.05in} \frac{e^{-\frac{(k \ell)^2}{2} \left[
\sqrt{\left(1+\alpha/N\right)^2 - (j/N)^2} -
\sqrt{\left(1+\alpha/N\right)^2 - (l/N)^2}\right]^2}}{
\sqrt{\left[\left(1+\alpha/N\right)^2 -
(j/N)^2\right]\left[\left(1+\alpha/N\right)^2 -
(l/N)^2\right]}} \left[e^{ -
\frac{(k \ell)^2}{2}\left(\frac{j}{N}-\frac{l}{N}\right)^2} +
e^{ - \frac{(k
\ell)^2}{2}\left(\frac{j}{N}+\frac{l}{N}\right)^2}\right]
\, ,\end{aligned}$$ and their estimates can be obtained using the same techniques as in Appendix \[ap:estim\]. We give here the results when $k\ell$ satisfies (\[eq:assumekell\]). For the low order modes we have $$\begin{aligned}
\widetilde {\mathcal K}_j X & \approx & \frac{\pi (k \ell)^2}{8} \left[ 2
+ e^{-2(k\ell)^2/N^2} + \frac{N \sqrt{\pi/2}}{k \ell} \right] \sim
\left[ (k \ell)^2 + N \, k \ell
\right] \sim
N \, k \ell
\gtrsim N^{3/2} , \quad j \sim 1, \\
\widetilde {\mathcal J}_j X &\approx & \frac{\pi (k \ell)^2}{8} \frac{N \sqrt{\pi/2}}{k \ell} \sim
N \, k \ell
\gtrsim N^{3/2} , \quad j \sim 1, \end{aligned}$$ and for the high order modes we have $$\begin{aligned}
\widetilde {\mathcal K}_j X &\approx& \frac{\pi N (k \ell)^2}{8 \alpha}
\left[1 + \frac{\sqrt{\pi} N}{2 \sqrt{2} k \ell} \right] \sim
\left[ N (k \ell)^2 + N^2 k
\ell \right] \sim N^2 k\ell \gtrsim N^{5/2} , \quad j
\sim N, \\
\widetilde {\mathcal J}_j X &\approx& \frac{\pi N (k \ell)^2}{8 \alpha}
\frac{\sqrt{\pi} N}{2 \sqrt{2} k \ell} \sim
N^2 k
\ell \gtrsim N^{5/2}, \quad j
\sim N.\end{aligned}$$ Thus, we see that in waveguides with internal random inhomogeneities the low order modes lose coherence much faster than in waveguides with random boundaries. Explicitly, coherence is lost over scaled ranges $$z \lesssim X\, N^{-3/2} \ll X.$$ The high order modes, with index $j \sim N$, lose coherence over the range scale $$z \lesssim X \, N^{-5/2} \ll X.$$ Moreover, the main mechanism for the loss of coherence is the exchange of energy between neighboring modes. That is to say, the transport mean free path is equivalent to the scattering mean free path for all the modes in random waveguides with interior inhomogeneities. Finally, direct (numerical) calculation shows that $$O\left((k \ell)^{-2}\right) \leq \frac{|\Lambda_2|}{|\widetilde {\mathcal J}_1|} \leq O \left( (k \ell)^{-3/2}\right)\, ,$$ so the equipartition distance is larger by a factor of at least $O\left(N^{3/4}\right)$ than the scattering or transport mean free path.
Mixed boundary conditions {#sect:mixed}
=========================
Up to now we have described in detail the wave field in waveguides with random boundaries and Dirichlet boundary conditions . In this section we extend the results to the case of mixed boundary conditions , with Dirichlet condition at $x = B(z)$ and Neumann condition at $x = T(z)$. All permutations of Dirichlet/Neumann conditions are of course possible, and the results can be readily extended.
Similar to what we stated in section \[sect:homog\], the operator $\partial_x^2 + \omega^2 c^{-2}(x)$ acting on functions in $(0,X)$, with Dirichlet boundary condition at $x=0$ and Neumann boundary condition at $x=X$, is self-adjoint in $L^2(0,X)$. Its spectrum consists of an infinite number of discrete eigenvalues $\lambda_j(\omega)$, for $j=1,2,\dots$, which we sort in decreasing order. There is a finite number $N(\om)$ of positive eigenvalues and an infinite number of negative eigenvalues. We assume as in section \[sect:homog\] that $N(\om) = N$ is constant over the frequency band, and that the eigenvalues are simple. The modal wavenumbers are as before, $
\beta_j(\om) = \sqrt{|\lambda_j(\om)|}\, . $ The eigenfunctions $\phi_j(\omega,x)$ are real and form an orthonormal set.
For example, in the case of a constant wave speed $c(x) = c_o$, we have $$\lambda_j = k^2 - \left[ \frac{(j-1/2) \pi}{X}\right]^2, \qquad \phi_j(x) =
\sqrt{\frac{2}{X}} \sin \left( \frac{(j-1/2) \pi x}{X} \right), \qquad
j = 1, 2, \ldots\, ,$$ and the number of propagating modes is given by $N = \left \lfloor
\frac{k X}{\pi} + \frac{1}{2} \right \rfloor.$
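For concreteness, a short numerical check of these formulas (constant wave speed, as stated above):

```python
import numpy as np

# Eigenvalues, modal wavenumbers and number of propagating modes for the mixed
# Dirichlet/Neumann example with constant wave speed c(x) = c_o.
X, c_o, omega = 1.0, 1.0, 50.0
k = omega / c_o
N = int(np.floor(k * X / np.pi + 0.5))                  # number of propagating modes
j = np.arange(1, N + 1)
lam = k ** 2 - ((j - 0.5) * np.pi / X) ** 2             # the positive eigenvalues
beta = np.sqrt(lam)                                     # modal wavenumbers
x = np.linspace(0.0, X, 2001)
phi_N = np.sqrt(2.0 / X) * np.sin((N - 0.5) * np.pi * x / X)
print(N, bool(lam.min() > 0.0))                         # all N eigenvalues are positive
print(phi_N[0], np.cos((N - 0.5) * np.pi))              # Dirichlet at x = 0, Neumann at x = X
```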
Change of Coordinates
---------------------
We proceed as before and straighten the boundaries using a change of coordinates that is slightly more complicated than before, due to the Neumann condition at $x = T(z)$, where the normal is along the vector $(1,-T'(z))$. We let $$p(t,x,z) = u\big( t , \cX(x,z), \cZ(x,z) \big)\, ,$$ where $$\begin{aligned}
\cX(x,z) &=& X\frac{ x-B(z)}{T(z)-B(z)} \, ,\label{eq:NX}\\ \cZ(x,z) &=& z + x
T'(z) + Q(z)\, , \quad \quad Q(z)= - \int_0^z ds \, T(s) T''(s) \, .
\label{eq:NZ}\end{aligned}$$ In the new frame we get that $ \xi = \cX(x,z) \in [0,X]$, with Dirichlet condition at $\xi = 0$ $${u}(t,\xi=0,\zeta) =0 \, .$$ For the Neumann condition at $\xi = X$ we use the chain rule, and rewrite $$\partial_\nu p(t, x=T(z),z ) = \big[\partial_x -T'(z) \partial_z
\big] p(t,x=T(z),z )=0\, ,$$ as $$\begin{aligned}
&& \partial_\xi u (t,\xi=X,\zeta=\cZ(T(z),z) ) \big[ -\partial_x \cX +T'(z)
\partial_z \cX \big](x=T(z),z) + \\ && \partial_\zeta u
(t,\xi=X,\zeta=\cZ(T(z),z) ) \big[ -\partial_x \cZ +T'(z) \partial_z \cZ
\big](x=T(z),z) =0\, . \end{aligned}$$ This is the standard Neumann condition $$\partial_\xi {u}(t ,\xi=X,\zeta)=0,$$ because $$\big[ -\partial_x \cZ +T'(z) \partial_z \cZ \big](x=T(z),z) =
-T'(z) +T'(z) \big[ 1+T(z)T''(z) +Q'(z) \big]=0 \, ,$$ and $$\big[ -\partial_x \cX + T'(z) \partial_z \cX\big](x=T(z),z) = -
\frac{X \left(1 + \left[T'(z)\right]^2\right)}{T(z)-B(z)} \ne 0\, .$$
Now, the method of solution is as before. Using that $\eps$ is small, we obtain a perturbed wave equation for ${\widehat}{u}$, which we expand as $$\begin{aligned}
{\mathcal L}_0 {\widehat}{u} + \eps {\mathcal L}_1 {\widehat}{u}
+ \eps^2 {\mathcal L}_2 {\widehat}{u} =O(\eps^3) ,
\label{eq:pertw2n}\end{aligned}$$ with leading order operator $${\mathcal L}_0 = \partial_\zeta^2 + \partial_\xi^2 +\omega^2 /c^{2}
(\xi)\, ,$$ and perturbation $$\begin{aligned}
{\mathcal L}_1 = -2 (\nu-\mu) \partial_\xi^2 +2 (X- \xi) (\nu'-\mu')
\partial_{\zeta\xi} -2 X(X-\xi ) \nu'' \partial_\zeta^2 -X(X-\xi)
\nu''' \partial_\zeta - \\ \nonumber \big[X \mu'' + \xi
(\nu''-\mu'')\big] \partial_\xi + \omega^2 (\partial_\xi c^{-2}(\xi)
) \big[X\mu +(\nu-\mu) \xi \big] \, .\end{aligned}$$
Coupled Amplitude Equations {#sec:CAEn}
---------------------------
We proceed as in section \[sect:wavedec\]. We find that the complex mode amplitudes satisfy - with $\zeta$ instead of $z$, where the $\zeta$-dependent coupling coefficients are $$\begin{aligned}
\label{def:Cjln}
C_{jl}^\eps (\zeta) &=&
\eps C_{jl}^{(1)} (\zeta)
+
\eps^2 C_{jl}^{(2)} (\zeta)
+ O(\eps^3) \, ,
\\
\nonumber
C_{jl}^{(1)} (\zeta) &=&
c_{\nu,jl} \nu(\zeta) + i \beta_l d_{\nu,jl} \nu'(\zeta)+ e_{\nu,jl} \nu''(\zeta)
+ i \beta_l f_{\nu,jl} \nu'''(\zeta) \\
&&+
c_{\mu,jl} \mu(\zeta) + d_{\mu,jl} \big(2i \beta_l \mu'(\zeta) +\mu''(\zeta)\big)
\, , \end{aligned}$$ with $$\begin{aligned}
\label{def:cnun}
c_{\nu,jl} &=& \frac{1}{2 \sqrt{\beta_j \beta_l}} \Big[ \Big(
\frac{\omega^2}{c(X)^2}- \beta_l^2\Big) \phi_j(X) \phi_l(X)
+(\beta_j^2-\beta_l^2) \int_0^X d \xi \, \xi \phi_l \partial_\xi\phi_j
\Big] \, ,\\ d_{\nu,jl} &=& \frac{1}{2 \sqrt{\beta_j \beta_l}} \Big[ 2
\int_0^X d \xi \, (X- \xi) \phi_j \partial_\xi\phi_l \Big]\, ,\\
e_{\nu,jl} &=& \frac{1}{2 \sqrt{\beta_j \beta_l}} \Big[ - \int_0^X d
\xi \, (X- \xi) \phi_j \xi \partial_\xi\phi_l +2 \beta_l^2 \int_0^X d
\xi (X-\xi) \phi_j \phi_l \Big]\, ,\\ f_{\nu,jl} &=& \frac{1}{2
\sqrt{\beta_j \beta_l}} \Big[ - \int_0^X d \xi \, (X- \xi) \phi_j
\phi_l \Big]\, ,
\end{aligned}$$ and coefficients $c_{\mu,jl} $ and $d_{\mu,jl}$ defined by and . Similar formulas hold for $C^{(2)}_{jl}(\zeta)$.
In the following we neglect for simplicity the evanescent modes, which only add a dispersive (frequency dependent phase modulation) net effect in the problem. These modes can be included in the analysis using a similar method to that in section \[sect:elim\_evanesc\].
The Coupled Mode Diffusion Process {#subseccoupledpowern}
----------------------------------
As we have done in section \[sect:diffusion\], we study under the forward scattering approximation the long range limit of the forward propagating mode amplitudes.
First, we give a lemma which shows that the description of the wave field in the variables $(x,z)$ or $(\xi,\zeta)$ is asymptotically equivalent.
We have uniformly in $x$ $$\cX \left( x,\frac{z}{\eps^2}\right) - x \stackrel{\eps \to
0}{\longrightarrow} 0,\quad \quad \cZ\left( x,\frac{z}{\eps^2}\right)
-\frac{z}{\eps^2} - X^2 \EE[ \nu'(0)^2] z \stackrel{\eps \to
0}{\longrightarrow} 0 \mbox{ in probability} \, .$$
The convergence of $\cX$ to $x$ is evident from definitions and . Moreover, gives $$\begin{aligned}
\cZ\left( x,\frac{z}{\eps^2}\right) -\frac{z}{\eps^2} &=& x \eps X \nu'
\left(\frac{z}{\eps^2}\right) -\eps X^2\int_0^{\frac{z}{\eps^2}}
(1+\eps \nu(s) ) \nu''(s) ds\, , \end{aligned}$$ and integrating by parts and using the assumption that the fluctuations vanish at $z = 0$, we get $$\begin{aligned}
\cZ\left( x,\frac{z}{\eps^2}\right) -\frac{z}{\eps^2} &=& \eps X
\left[ (x-X) \nu' \left(\frac{z}{\eps^2}\right)- \eps X
\nu\left(\frac{z}{\eps^2}\right) \nu' \left(\frac{z}{\eps^2}\right)
\right] + \eps^2 X^2 \int_0^{\frac{z}{\eps^2}} \left[\nu'(s)\right]^2ds \, .\end{aligned}$$ The first term of the right-hand side is of order $\eps$ and the second term converges almost surely to $X^2 \EE[ \nu'(0)^2] z$, which gives the result.
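The ergodic averaging in the last step can be illustrated numerically with any stationary process with bounded derivatives; in the sketch below $\nu$ is a random superposition of cosines, chosen purely for illustration.

```python
import numpy as np

# Illustration of the averaging eps^2 * int_0^{z/eps^2} [nu'(s)]^2 ds -> E[nu'(0)^2] z.
# nu(s) = sum_k a_k cos(w_k s + p_k) with independent uniform phases is a stand-in for
# the stationary boundary fluctuations (it does not satisfy nu(0) = 0, which does not
# matter for this averaging step).
rng = np.random.default_rng(3)
K = 20
amps = rng.uniform(0.1, 1.0, K)
freqs = rng.uniform(0.5, 3.0, K)
phases = rng.uniform(0.0, 2.0 * np.pi, K)

def dnu(s):                                   # nu'(s)
    return -np.sum(amps[:, None] * freqs[:, None]
                   * np.sin(freqs[:, None] * s[None, :] + phases[:, None]), axis=0)

E_dnu2 = 0.5 * np.sum((amps * freqs) ** 2)    # E[nu'(0)^2] over the random phases
z = 1.0
for eps in [0.1, 0.03, 0.01]:
    s = np.linspace(0.0, z / eps ** 2, 200000)
    # eps^2 * integral is approximately z times the time average of [nu'(s)]^2
    print(eps, z * np.mean(dnu(s) ** 2), E_dnu2 * z)
```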
The diffusion limit is similar to that in section \[subseccoupledpower\], and the result is as follows.
The complex mode amplitudes $({\widehat}{a}_j^\eps(\omega,\zeta)
)_{j=1,\ldots,N}$ converge in distribution as $\eps \rightarrow 0$ to a diffusion Markov process $({\widehat}{a}_j(\omega,\zeta)
)_{j=1,\ldots,N}$. Writing $${\widehat}{a}_j(\omega,\zeta) = P_j(\omega,\zeta)^{1/2} e^{i
\theta_j(\omega,\zeta)}, \quad j=1,\ldots,N,$$ in terms of the power $P_j$ and the phase $\theta_j$, the infinitesimal generator of the limiting diffusion process $${\mathcal L} = {\mathcal L}_P + {\mathcal L}_\theta$$ is of the form (\[gendiffa\]), but with different expressions of the coefficients given below.
The coefficients $\Gamma^{(c)}_{jl}$ in ${\mathcal L}_P$ are given by $$\Gamma_{jl}^{(c)}(\omega) = {\widehat}\cR_\nu\left(\beta_j-\beta_l\right)
Q_{\nu,jl}^2 + {\widehat}\cR_\mu\left(\beta_j-\beta_l\right) Q_{\mu,jl}^2
\quad \mbox{ if } j \neq l \, ,$$ where $$\begin{aligned}
\nonumber Q_{\nu,jl} &=& c_{\nu,jl} + d_{\nu,jl}
\beta_l(\beta_l-\beta_j) -(\beta_l-\beta_j)^2 \big[ e_{\nu,jl} +
f_{\nu,jl} \beta_l(\beta_l-\beta_j) \big] \\ &=& \frac{X}{2
\sqrt{\beta_j \beta_l}}\left[ \frac{\omega^2}{c(X)^2}- \beta_l\beta_j
\right] \phi_j(X) \phi_l(X) \, , \\ \nonumber Q_{\mu,jl} &=&
c_{\mu,jl} +d_{\mu,jl} (\beta_l^2 -\beta_j^2) = \frac{X}{2
\sqrt{\beta_j \beta_l}} \partial_\xi \phi_j(0) \partial_\xi \phi_l(0)
\, .\end{aligned}$$ The coefficients in ${\mathcal L}_\theta$ are similar, $$\Gamma_{jl}^{(0)}(\omega) = {\widehat}\cR_\nu(0) Q_{\nu,jl}^2 + {\widehat}\cR_\mu(0) Q_{\mu,jl}^2 \quad \forall j, l \, ,$$ and $$\Gamma_{jl}^{(s)}(\omega) = \gamma_{\nu,jl} Q_{\nu,jl}^2 +
\gamma_{\mu,jl} Q_{\mu,jl}^2 \quad \mbox{ if } j \neq l \, ,$$ with $\gamma_{\nu,jl}$ and $\gamma_{\mu,jl}$ defined by .
We find again that these effective coupling coefficients depend only on the behaviors of the mode profiles close to the boundaries. In the case of Dirichlet boundary conditions, the mode coupling coefficient $\Gamma_{jl}^{(c)}(\omega)$ depends on the value of $ \partial_\xi
\phi_j \partial_\xi \phi_l$ at the boundaries. In the case of Neumann boundary conditions, the mode coupling coefficient $\Gamma_{jl}^{(c)}(\omega)$ depends on the value of $ \phi_j(X)
\phi_l(X)$.
Given the generator, the analysis of the loss of coherence, and of the mode powers is the same as in sections \[subseccoupledpower2\]-\[secfluc\].
Summary {#sect:summary}
=======
In this paper we obtain a rigorous quantitative analysis of wave propagation in two dimensional waveguides with random and stationary fluctuations of the boundaries, and either Dirichlet or Neumann boundary conditions. The fluctuations are small, of order $\eps$, but their effect becomes significant over long ranges $z/\eps^2$. We carry out the analysis in three main steps: First, we change coordinates to straighten the boundaries and obtain a wave equation with random coefficients. Second, we decompose the wave field into propagating and evanescent modes, with random complex amplitudes satisfying a random system of coupled differential equations. We analyze the evanescent modes and show how to obtain a closed system of differential equations for the amplitudes of the propagating modes. In the third step we analyze the amplitudes of the propagating modes in the long range limit, and show that the result is independent of the particular choice of the change of coordinates in the first step. The limit process is a Markov diffusion with coefficients in the infinitesimal generator given explicitly in terms of the covariance of the boundary fluctuations. Using this limit process, we quantify mode by mode the loss of coherence and the exchange (diffusion) of energy between modes induced by scattering at the random boundaries.
The long range diffusion limit is similar to that in random waveguides with interior inhomogeneities and straight boundaries, in the sense that the infinitesimal generators have the same form. However, the net scattering effects are very different. We quantify them explicitly in a high frequency regime, in the case of a constant wave speed, and compare the results with those in waveguides with interior random inhomogeneities. In particular, we estimate three important length scales: the scattering mean free path, the transport mean free path and the equipartition distance. The first two give the distances over which the waves lose their coherence and forget their direction, respectively. The last is the distance over which the cumulative scattering distributes the energy uniformly among the modes, independently of the initial conditions at the source.
We obtain that in waveguides with random boundaries the lower order modes have a longer scattering mean free path, which is comparable to the transport mean free path and, remarkably, to the equipartition distance. The high order modes lose coherence rapidly: they have a short scattering mean free path and do not exchange energy efficiently with the other modes. They also have a transport mean free path that exceeds the scattering mean free path. In contrast, in waveguides with interior random inhomogeneities, all the modes lose their coherence over much shorter distances than in waveguides with random boundaries. Moreover, the main mechanism of loss of coherence is the exchange of energy with the nearby modes, so the scattering mean free paths and the transport mean free paths are similar for all the modes. Finally, the equipartition distance is much longer than the distance over which all the modes lose their coherence.
These results are useful in applications such as imaging with remote sensor arrays. Understanding how the waves lose coherence is essential in imaging, because it allows the design of robust methodologies that produce reliable, statistically stable images in noisy environments that we model mathematically with random processes. An example of a statistically stable imaging approach guided by the theory in random waveguides with internal inhomogeneities is in [@borcea].
Acknowledgments {#acknowledgments .unnumbered}
===============
The work of R. Alonso was partially supported by the Office of Naval Research, grant N00014-09-1-0290 and by the National Science Foundation Supplemental Funding DMS-0439872 to UCLA-IPAM. The work of L. Borcea was partially supported by the Office of Naval Research, grant N00014-09-1-0290, and by the National Science Foundation, grants DMS-0907746, DMS-0934594.
Proof of Lemma \[lem.1\] {#sect:Proof}
========================
The proof given here relies on explicit estimates of the series in (\[eq:E6\]), obtained under the assumption that the background speed is constant $c(\xi) = c_o$. We rewrite (\[eq:E6\]) as $$\left[\Psi {\widehat}{\itbf v}\right](\om,z) = \left[\Psi_1 {\widehat}{\itbf
v}\right](\om,z) + \left[\Psi_2 {\widehat}{\itbf
v}\right](\om,z)
\label{eq:PE1}$$ with linear integral operators $\Psi_1$ and $\Psi_2$ defined component wise by $$\begin{aligned}
\big[ \Psi_1{\widehat}{\itbf v} \big]_{j}(\om,z) &=& \sum_{l= N+1}^\infty
\frac{1}{2\beta_{j}}\int^{\infty}_{-\infty}
(M^{\eps}_{jl}-\partial_{z}Q^{\eps}_{jl})(z+s){\widehat}{v}_{l}(\om,z+s)
e^{-\beta_{j}|s|}ds ,
\label{eq:Psi1} \\
\big[\Psi_2 {\widehat}{\itbf v} \big]_{j}(\om,z)
&=&
\sum_{l = N+1}^\infty
\frac{1}{2}\int^{\infty}_{-\infty} Q^{\eps}_{jl}(z+s)
{\widehat}{v}_{l}(\om, z+s)e^{-\beta_{j}|s|}ds.
\label{eq:Psi2}\end{aligned}$$ The coefficients have the explicit form $$\begin{aligned}
M_{jl}^\eps(z) &=& \left\{ 2 \left[\nu(z)-\mu(z)\right]
\left(\frac{\pi j}{X}\right)^2 + \frac{\nu''(z)-\mu''(z)}{2} \right\}
\delta_{jl} + (1-\delta_{jl}) \left[ \nu''(z)-\mu''(z)\right]
\frac{2lj}{j^2-l^2} - \nonumber \\ && (1-\delta_{jl}) \nu''(z)
\frac{2lj}{j^2-l^2} \left[ 1 - (-1)^{l+j} \right] + O(\eps) ,
\label{eq:PE2}\\
Q_{jl}^\eps(z) &=& \left[\nu'(z) - \mu'(z) \right] \delta_{jl} +
(1-\delta_{jl}) \left[ \nu'(z)-\mu'(z)\right]
\frac{4lj}{j^2-l^2} - \nonumber \\ && (1-\delta_{jl}) \nu'(z)
\frac{4lj}{j^2-l^2} \left[ 1 - (-1)^{l+j} \right] + O(\eps).
\label{eq:PE3}\end{aligned}$$
Let $\ell^{2}_{1}(\mathbb{Z};L^{2}(\mathbb{R}))$ be the space of square summable sequences of $L^{2}(\mathbb{R})$ functions with linear weights, equipped with the norm $$\|\textit{\textbf{v}}\|_{\ell_1^2}:=\Big[\sum_{j\in\mathbb{Z}}(j\;\|v_{j}
\|_{L^{2}(\mathbb{R})})^{2}\Big]^{1/2}.$$ We prove that $\Psi:\ell^{2}_{1}(\mathbb{Z};L^{2}(\mathbb{R}))\rightarrow
\ell^{2}_{1}(\mathbb{Z};L^{2}(\mathbb{R}))$ is bounded. The proof consists of three steps:\
**Step 1**: Let $T$ be an auxiliary operator acting on sequences $\textit{\textbf{v}}=\{v_{l}\}_{l\in\mathbb{Z}}$, defined component wise by $$[T \textit{\textbf{v}}]_{j}=\sum_{l \neq \pm j} \frac{j\;l}{j^{2}-l^{2}}\;v_{l} =
\sum_{l \neq \pm j} \left( \frac{l/2}{j+l}+\frac{l/2}{j-l} \right) \;v_{l} =
\frac{1}{2}\left((-l\;v_{-l})\ast
\frac{1}{l}+(l\;v_{l})\ast
\frac{1}{l}\right)_{j}+\frac{1}{4}(v_{-j}-v_{j}).$$ This operator is essentially the sum of two discrete Hilbert transforms, satisfying the sharp estimates [@Gr] $$\|\textit{\textbf{v}}\ast \frac{1}{l}\|_{\ell^2}\leq \pi
\|\textit{\textbf{v}}\|_{\ell^2}.$$ Therefore, the operator $T$ is bounded as $$\label{op1}
\|T\textit{\textbf{v}}\|_{\ell^2}\leq (1/2+\pi)\;\Big[\sum_{j \in
\mathbb{Z}} \big(j\,|v_{j}|\big)^{2}\Big]^{1/2}.$$\
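A quick numerical illustration of this bound, not part of the proof: truncating the index set to $|j|,|l|\le M$, the spectral norm of the weighted matrix of $T$ can be checked directly. The truncation size and the use of Python/NumPy below are our own choices.

```python
import numpy as np

# Sketch of a numerical check of Step 1 (assumed setup, not part of the proof):
# [T v]_j = sum_{l != +-j} j*l/(j^2 - l^2) v_l should satisfy
# ||T v||_{l^2} <= (1/2 + pi) * ||{j v_j}||_{l^2}.
M = 400
idx = np.array([j for j in range(-M, M + 1) if j != 0])   # the j = 0 entry plays no role
J, L = np.meshgrid(idx, idx, indexing="ij")
A = np.zeros_like(J, dtype=float)
mask = np.abs(J) != np.abs(L)
A[mask] = (J[mask] * L[mask]) / (J[mask] ** 2 - L[mask] ** 2)
B = A / L                      # compose T with the inverse weight diag(1/l)
print(np.linalg.norm(B, 2))    # truncated operator norm; should not exceed 1/2 + pi ~ 3.64
```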
**Step 2**: Let $\textit{\textbf{v}}(z)=\{v_{l}(z)\}_{l\in\mathbb{Z}}$ be a sequence of functions in $\mathbb{R}$ and define the operator $$Q:\ell^{2}_{1}(\mathbb{Z};L^{2}(\mathbb{R})) \to
\ell^{2}_{1}(\mathbb{Z};L^{2}(\mathbb{R})), \qquad
[Q\textit{\textbf{v}}]_{j}(z)=[T\textit{\textbf{v}}]_{j}\ast
e^{-\beta_{j}|s|}(z)\;1_{\{j>N\}},
\label{op01}$$ where $$\beta_{j}=\sqrt{\left(\frac{\pi
j}{X}\right)^2-\left(\frac{\omega}{c_0}\right)^2} \geq
\frac{j\;\pi}{X}\;\sqrt{1-\left(\frac{\omega X/(\pi
c_0)}{N+1}\right)^{2}}=: j\;C(\omega), \quad \mbox{for} ~ j > N.
\label{op02}$$ Using Young’s inequality $$\begin{aligned}
\|
[Q\textit{\textbf{v}}]_{j}\|_{L^2(\mathbb{R})}=\|[T\textit{\textbf{v}}]_{j}\ast
e^{-\beta_{j}|s|}\|_{L^2(\mathbb{R})} \leq
\|[T\textit{\textbf{v}}]_{j}\|_{L^{2}(\mathbb{R})}\|e^{-\beta_{j}|s|}\|_{L^{1}(\mathbb{R})}=
\frac{2}{\beta_{j}}\;\| [T\textit{\textbf{v}}]_{j}
\|_{L^{2}(\mathbb{R})},\label{op03}\end{aligned}$$ we obtain from - that $\|Q\| \le (1 + 2 \pi)/C(\om)$, because $$\begin{aligned}
\sum_{j\in\mathbb{Z}} \left(j\;\|
[Q\textit{\textbf{v}}]_{j}\|_{L^2(\mathbb{R})}\right)^{2}&\leq&
\frac{4}{C(\omega)^{2}}\;\sum_{j\in\mathbb{Z}}\|
[T\textit{\textbf{v}}]_{j}
\|^{2}_{L^{2}(\mathbb{R})}=\frac{4}{C(\omega)^{2}}\;\int_{\mathbb{R}}\;
\sum_{j\in\mathbb{Z}} |[T\textit{\textbf{v}}]_{j}(z) |^{2}
dz\nonumber \\ &\leq& \frac{4}{C(\omega)^{2}} (1/2+\pi)^{2}
\int_{\mathbb{R}}\; \sum_{j\in\mathbb{Z}} | j\;v_{j}(z)
|^{2}dz =\frac{4(1/2+\pi)^{2}}{C(\omega)^{2}}\sum_{j\in\mathbb{Z}}
\left( j \|v_{j}\|_{L^{2}(\mathbb{R})}\right)^2. \qquad
\label{est1}\end{aligned}$$
This estimate applies to the operator $\Psi_2$. Indeed, let us express $\Psi_2$ in terms of the operator $Q$ using and , $$\label{defT}
[\Psi_{2} \textit{\textbf{v}}]_{j}(z)=
\frac{1}{2}((\nu'-\mu')v_{j})\ast
e^{-\beta_{j}|s|}(z)1_{\{j>N\}} - 2[Q \mu'
\;v_{l}]_{j}(z) +2(-1)^{j}[Q \nu'(-1)^{l}\;v_{l}]_{j}(z).$$ That the sum in $\Psi_2$ is for $l> N$ is easily fixed by using the truncation $v_{l}={\widehat}{v}_{l}\;1_{\{l>N\}}$. Thus, using estimate for the last two terms, we obtain $$\| \Psi_{2} {\widehat}{\textit{\textbf{v}}} \|_{\ell^{2}_{1}}\leq
\frac{5+8\pi}{C(\omega)}\left(\|\mu\|_{W^{1,\infty}(\mathbb{R})}+
\|\nu\|_{W^{1,\infty}(\mathbb{R})}\right)\|{\widehat}{\textit{\textbf{v}}}\|_{\ell^{2}_{1}}.$$\
**Step 3**: It remains to show that the operator $\Psi_1$ is bounded. We see from (\[eq:Psi1\]), and that for any $j>N$ $$[\Psi_{1}{\widehat}{\textit{\textbf{v}}}]_{j}(z)=\frac{\pi^2 j^2}{\beta_{j}
X^{2}}((\nu-\mu){\widehat}{v}_{j})\ast
e^{-\beta_{j}|s|}(z)1_{\{j>N\}}-\frac{1}{\beta_{j}}[\tilde{\Psi}_{2}
{\widehat}{\textit{\textbf{v}}}]_{j}(z),$$ where $\tilde{\Psi}_{2}$ is just like the operator $\Psi_{2}$, with the driving process $(\nu', \mu')$ replaced by its derivative $(\nu'',\mu'')$. Using again Young’s inequality, we have $$\begin{aligned}
\| [\Psi_{1}{\widehat}{\textit{\textbf{v}}}]_{j}\|_{L^{2}(\mathbb{R})} &\leq
2\left(\frac{\pi}{X
C(\omega)}\right)^{2}\|(\nu-\mu){\widehat}{v}_{j}\|_{L^{2}(\mathbb{R})} +\frac{1}{j
C(\omega) }\| [\tilde{\Psi}_{2}
{\widehat}{\textit{\textbf{v}}}]_{j}\|_{L^{2}(\mathbb{R})}.\end{aligned}$$ Now multiply by $j$ and use the triangle inequality to obtain that $\Psi_1$ is bounded, $$\begin{aligned}
\|\Psi_{1}{\widehat}{\textit{\textbf{v}}}\|_{\ell^{2}_{1}}&\leq
\left[ \frac{2 \pi^2}{C^2(\om) X^2} \left( \|\nu\|_{L^\infty} +
\|\mu\|_{L^\infty}\right) + \frac{\left(5 +
8 \pi\right)}{C^2(\om)}\left( \|\nu\|_{W^{2,\infty}} +
\|\mu\|_{W^{2,\infty}}\right) \right]
\|{\widehat}{\textit{\textbf{v}}}\|_{\ell^{2}_{1}}.\end{aligned}$$
Independence of the change of coordinates {#ap:coordc}
=========================================
We begin the proof of Theorem \[thm.2\] with the observation that $${\widehat}{w}(\omega,\xi,z)={\widehat}{u}\left(
\omega,\ell^{\eps, -1}(z,F^{\eps}(z,\xi)),z\right),$$ where $\ell^{\eps, -1}$ is the inverse of $\ell^{\eps}$, meaning that ${\widehat}w$ and ${\widehat}u$ are related by composition of the change of coordinate mappings. Clearly, the composition inherits the uniform convergence property $$\label{et1}
\sup_{z\geq0}\sup_{\xi\in[0,X]}|\ell^{\eps, -1}(z,F^{\eps}(z,\xi))-\xi|=
O(\eps).$$
For the sake of simplicity we neglect the evanescent modes in the proof, but they can be added using the techniques described in section \[sect:elim\_evanesc\]. Using the propagating mode representation of ${\widehat}{u}(\omega,\xi,z)$, $$\begin{aligned}
\label{e1}
{\widehat}{w}(\omega,\xi,z)
&=\sum^{N}_{l=1}\phi_{l}(\omega,\xi)
{\widehat}{u}_{l}(\omega,z)+\sum^{N}_{l=1}\tilde{\phi}_{l}(\omega,\xi,z)
{\widehat}{u}_{l}(\omega,z),\end{aligned}$$ where we let $$\begin{aligned}
\tilde{\phi}_{l}(\omega,\xi,z) &=\phi_{l}
\left(\omega,\ell^{\eps, -1}(z,F^{\eps}(z,\xi))\right)-
\phi_{l}(\omega,\xi)\\ &=\int^{1}_{0}\left(\ell^{\eps, -1}
(z,F^{\eps}(z,\xi))-\xi\right)\partial_{\xi}\phi_{l}\left(\omega,s\;
\ell^{\eps, -1}(z,F^{\eps}(z,\xi)) +(1-s)\;\xi\right)\;ds.\end{aligned}$$ But we can also carry out the mode decomposition directly on ${\widehat}w$ and obtain $$\label{e2}
{\widehat}{w}(\omega,\xi,z)=\sum^{N}_{l=1}\phi_{l}(\omega,\xi){\widehat}{w}_{l}(\omega,z),$$ because the number of propagating modes $N$ and the eigenfunctions $\phi_j$ in the ideal waveguide are independent of the change of coordinates. Here ${\widehat}{w}_{l}(\omega,z)$ are the amplitudes of the propagating modes of ${\widehat}{w}$. Equating identities and , multiplying by $\phi_{j}(\omega,\xi)$ and integrating in $[0,X]$ we conclude that $$\label{e3}
{\widehat}{w}_{j}(\omega,z)={\widehat}{u}_{j}(\omega,z)+\sum^{N}_{l=1}
\tilde{c}_{lj}(\omega,z){\widehat}{u}_{l}(\omega,z),$$ where we introduced the random processes, $$\tilde{c}_{lj}(\omega,z)=
\int^{X}_{0}\phi_{j}(\omega,\xi)\int^{1}_{0}
\partial_{\xi}\phi_{l}\left(\omega,s\;\ell^{\eps, -1}(z,F^{\eps}(z,\xi))+(1-s)
\;\xi\right)\left(\ell^{\eps, -1}(z,F^{\eps}(z,\xi))-\xi\right) ds d\xi.$$ In addition, differentiating equation $\eqref{e3}$ in $z$, we have $$\label{e4}
\partial_{z}{\widehat}{w}_{j}(\omega,z)=\partial_{z}{\widehat}{u}_{j}(\omega,z)+
\sum^{N}_{l=1}\partial_{z}\tilde{c}_{lj}(\omega,z){\widehat}{u}_{l}(\omega,z)+
\tilde{c}_{lj}(\omega,z)\partial_{z}{\widehat}{u}_{l}(\omega,z).$$
Now, let us recall from the definition of the forward and backward propagating modes that $$i\beta_{j}{\widehat}{u}_{j}(\omega,z)+\partial_{z}{\widehat}{u}_{j}(\omega,z)=
2i\sqrt{\beta_{j}}\;{\widehat}{a}_{j}(\omega,z)e^{i\beta_{j}z}.$$ We conclude from and that $$\begin{gathered}
\label{e5}
{\widehat}{a}^{w}_{j}(\omega,z)={\widehat}{a}_{j}(\omega,z) +
\frac{1}{2}\sum^{N}_{l=1}\tilde{c}_{lj}(\omega,z)\left(\frac{\beta_{j}+
\beta_{l}}{\sqrt{\beta_j
\beta_{l}}}\;{\widehat}{a}_{l}(\omega,z)e^{-i(\beta_{j}-\beta_{l})z}+
\frac{\beta_{j}-\beta_{l}}{\sqrt{\beta_j\beta_{l}}}\;
{\widehat}{b}_{l}(\omega,z)e^{-i(\beta_{j}+\beta_{l})z}\right)\\ +
\frac{i}{2}\sum^{N}_{l=1}\frac{\partial_{z}\tilde{c}_{lj}(\omega,z)}{\sqrt{\beta_{j}
\beta_{l}} }
\left({\widehat}{a}_{l}(\omega,z)e^{-i(\beta_{j}-\beta_{l})z}+
{\widehat}{b}_{l}(\omega,z)e^{-i(\beta_{j}+\beta_{l})z}\right)\, ,\end{gathered}$$ where $\{{\widehat}{a}^{w}_{j}(\omega,z)\}_{j=1, \ldots, N}$ are the amplitudes of the forward propagating modes of ${\widehat}{w}(\omega,\xi,z)$. A similar equation holds for the backward propagating mode amplitudes $\{{\widehat}{b}^{w}_{j}(\omega,z)\}_{j=1, \ldots, N}$.
The processes $\tilde{c}_{lj}(\omega,z)$ can be bounded as $$\begin{aligned}
\max_{1\leq j,l\leq N}\{\sup_{z\geq0}|\tilde{c}_{lj}(\omega,z)|\}
\leq X \max_{1\leq j,l\leq N}\{ \sup_{\xi\in[0,X]}
|\phi_{j}(\omega,\xi) |\sup_{\xi\in[0,X]}
|\partial_{\xi}\phi_{l}(\omega,\xi)| \} \; \times \nonumber\\
\sup_{z\geq0}\sup_{\xi\in[0,X]}
|\ell^{\eps, -1}(z,F^{\eps}(z,\xi))-\xi| = O(\eps).\label{ct1}\end{aligned}$$ For the processes $\partial_{z}\tilde{c}_{lj}(\omega,z)$ we find a similar estimate. Indeed, note that $$\begin{aligned}
&&\partial_{z}\left[\partial_{\xi}\phi_{l}\left(\omega,s\; \ell^{\eps,
-1}(z,F^{\eps}(z,\xi))+(1-s)\;\xi\right) \left(\ell^{\eps,
-1}(z,F^{\eps}(z,\xi))-\xi\right)\right] =
\\&& \hspace{0.2in} -\lambda_{l}\;\phi_{l}(\omega,s\;\ell^{\eps, -1}
(z,F^{\eps}(z,\xi))+(1-s)\;\xi) \;s\;\partial_{z}[\ell^{\eps,
-1}(z,F^{\eps}(z,\xi))]\; (\ell^{\eps,
-1}(z,F^{\eps}(z,\xi))-\xi) + \\ && \hspace{1.9in}
\partial_{\xi}\phi_{l}(\omega,s\;\ell^{\eps,
-1}(z,F^{\eps}(z,\xi))+ (1-s)\;\xi)\;\partial_{z}[\ell^{\eps,
-1}(z,F^{\eps}(z,\xi))].\end{aligned}$$ A direct calculation shows that $$\begin{aligned}
\partial_{z} &
\left[\ell^{\eps, -1}(z,F^{\eps}(z,\xi))\right]=\partial_{z}\left[
\frac{ X( F^{\eps}(z,\xi)-\eps\mu(z) ) }{
X(1+\eps\nu(z))-\eps\mu(z) }
\right]\\ &=X\frac{(\partial_{z}F^{\eps}(z,\xi)-
\eps\mu'(z))(X(1+\eps\nu(z))-\eps\mu(z))-(
F^{\eps}(z,\xi)-\eps\mu(z) )\;\eps\;( \nu'(z)-\mu'(z)
)}{(X(1+\eps\nu(z))-\eps\mu(z))^{2}}.\end{aligned}$$ Hence, using condition for $\partial_{z}F^{\eps}(z,\xi)$ $$\sup_{z\geq0}\sup_{\xi\in[0,X]} \left|\partial_{z}\left
[\ell^{\eps, -1}(z,F^{\eps}(z,\xi))\right]\right|\leq
C(\|\nu\|_{W^{1,\infty}},\|\mu\|_{W^{1,\infty}})\;\eps.$$ Therefore, $$\begin{aligned}
\max_{1\leq j,l\leq
N}\{\sup_{z\geq0}|\partial_{z}\tilde{c}_{lj}(\omega,z)|\}\leq
X\max_{1\leq j,l\leq N}\{ \lambda_{l} \sup_{\xi\in[0,X]}
|\phi_{j}(\omega,\xi) |\sup_{\xi\in[0,X]} |\phi_{l}(\omega,\xi)|
\}\;O(\eps^{2}) + \nonumber\\ X\max_{1\leq j,l\leq N}\{
\sup_{\xi\in[0,X]} |\phi_{j}(\omega,\xi) |\sup_{\xi\in[0,X]}
|\partial_{\xi}\phi_{l}(\omega,\xi)| \}\;O(\eps).\label{ct2}\end{aligned}$$
Let ${\widehat}{\itbf a}^w(\omega,z)$ and ${\widehat}{\itbf b}^w(\omega,z)$ be the vectors containing the forward and backward propagating mode amplitudes and define the joint process of propagating mode amplitudes $\bX_\om^{
w}(z)=({\widehat}{\itbf a}^w(\omega,z),{\widehat}{\itbf b}^w(\omega,z))^{T}$. Let the long range scaled process be $\bX_\om^{\eps, w}(z) =
\bX_\om^{ w}(z/\eps^2).$ Equation implies that $$\label{e6}
\bX_\om^{\eps,w}(z)=
\bX_\om^{\eps}(z)+
\bold{M}_{\eps}\left(\omega,\bold{C}\Big(\omega,\frac{z}{\eps^2}\Big),
\partial_{z}\bold{C}\Big(\omega,\frac{z}{\eps^{2}}\Big),\frac{z}{\eps^{2}}\right)
\bX_\om^\eps(z),$$ where $\bold{C}(\omega,z):=(\tilde{c}_{lj}(\omega,z))_{j,l=1,\ldots,N}$ and $\partial_{z}\bold{C}(\omega,z):=(\partial_{z}\tilde{c}_{lj}(\omega,z))_{j,l=1,\ldots,N}$. The subscript $\eps$ in the matrix $\bold{M}_{\eps}(\cdot)$ denotes the fact that this matrix depends explicitly on $\eps$ and, due to estimates and , we have $$\label{e7}
\sup_{z\geq0}\|\bold{M}_{\eps}(\omega,\bold{C}(\omega,z),\partial_{z}
\bold{C}(\omega,z),z)\|_{\infty}=O(\eps).$$
Let us prove then, that the processes $\bX_\om^{\eps, w}(z)$ and $\bX_\om^\eps(z)$ converge in distribution to the same diffusion limit. Denote by $Q(\bX_{0},L)$ the $2N$-dimensional cube with center $\bX_0$ and side $L$. The probability that $\bX_\om^{\eps, w}(z)$ is in this cube can be calculated using , $$\begin{aligned}
\label{dl1}
\mathbb{P}[\bX^{\eps,w}_\om(z)\in
Q(\bX_0,L)]&=\int_{\{ \bx \in
Q(\bX_{0},L)\}}\;d\mathbb{P}^{w}\left(\bx,
\frac{z}{\eps^{2}}\right)\nonumber\\ &=\int_{\{\bx \in
({\bf I}+\bold{M}_{\eps}(\bold{C},\partial_{z}\bold{C},z))^{-1}
Q( \bx_{0},L) \}}\;d\mathbb{P}
\left(\bx,\bold{C},\partial_{z}\bold{C},\frac{z}{\eps^{2}}\right).\end{aligned}$$ Here $\mathbb{P}^{w}(\bx,z)$ is the probability distribution of the process $\bX^{w}_\om(z)$ and $\mathbb{P}\left(\bx,\bold{C},\partial_{z}\bold{C},z\right)$ is the joint probability distribution of the processes $(\bX_\om(z),\bold{C}(\omega,z),\partial_{z}\bold{C}(\omega,z))$. We can take the inverse of ${\bf I}+\bold{M}_{\eps}(\bold{C},\partial_{z}\bold{C},z)$ by . The same estimate also implies that for every $\delta>0$ there exists $\eps_{0}$ such that for $\eps\leq\eps_0$, $$\label{dl2}
\{\bx\in Q(\bx_{0},(1-\delta)L)\}\subseteq\{\bx\in
({\bf I}+\bold{M}_{\eps}(\bold{C},\partial_{z}\bold{C},z))^{-1}
Q(\bx_{0},L)\}\subseteq\{\bx\in
Q(\bx_{0},(1+\delta)L)\}.$$ Denote the diffusion limits by $$\begin{aligned}
\tilde{\bX}_\om (z)=\lim_{\eps\rightarrow0}
\bX^\eps_\omega(z), \qquad \tilde{\bX}^{w}_\omega(z)=\lim_{\eps\rightarrow0}
\bX^{\eps, w}_\omega(z).\end{aligned}$$ We conclude from and that for any $\delta>0$, $$\begin{aligned}
\mathbb{P}[\tilde{\bX}_\om(z)\in Q(\bX_0,(1-\delta)L)]\leq
\mathbb{P}[\tilde{\bX}_\om^{w}(z)\in Q(\bX_0,L)] \leq
\mathbb{P}[\tilde{\bX}_\om(z)\in Q(\bX_0,(1+\delta)L)].\end{aligned}$$ Sending $\delta\rightarrow 0$, we have that for any arbitrary cube $
Q(\bx_0,L)$ $$\mathbb{P}[\tilde{\bX}_\omega(z)\in
Q(\bX_0,L)]=\mathbb{P}[\tilde{\bX}^{w}_\omega(z)\in
Q(\bX_0,L)].$$ This proves that the limit processes have the same distribution and therefore, the same generator.
Proof of Proposition \[prop.estim\] {#ap:estim}
===================================
Recall the expression of the wavenumbers. The first term in follows from : $$\Gamma_{jj}^{(0)} = \left( \frac{\pi}{X} \right)^2 \left[ {\widehat}\cR_\nu(0) + {\widehat}\cR_\mu(0) \right] \frac{j^4}{(N+\alpha)^2 - j^2}
\approx \frac{(2 \pi)^{3/2}}{X} \frac{k \ell}{N}
\frac{j^4}{(N+\alpha)^2 - j^2} \,.
\label{eq:EE6}$$ It increases monotonically with $j$, with minimum value $$\begin{aligned}
\Gamma_{11}^{(0)} \approx \frac{(2 \pi)^{3/2}}{X} \frac{k \ell }{N^3}
\ll 1\,,
\label{eq:EE6_1}\end{aligned}$$ and maximum value $$\begin{aligned}
\Gamma_{NN}^{(0)} \approx \frac{(2 \pi)^{3/2}}{2 \alpha X} k \ell N^2 \gg 1 \, .
\label{eq:EE_2}\end{aligned}$$
The second term in , which is in (\[eq:EE2J\]), follows from , and , $$\begin{aligned}
- \Gamma_{jj}^{(c)}(\om) &\approx& \frac{(2 \pi)^{3/2} j^2 }{X
\sqrt{(N+\alpha)^2 - j^2}}\hspace{-0.05in} \sum_{{\scriptsize
\begin{array}{c}l = 1 \\ l \ne j \end{array}}}^N
\hspace{-0.05in}\frac{l^2 k \ell}{N\sqrt{ (N+\alpha)^2 - l^2}}
e^{ -\frac{(k \ell)^2}{2} \left( \sqrt{1 - j^2/(N+\alpha)^2} -
\sqrt{1-l^2/(N+\alpha)^2}\right)^2} \, . \quad
\label{eq:EE8}\end{aligned}$$
If $0< j/N <1$, then we can estimate (\[eq:EE8\]) by using the fact that the main contribution to the sum in $l$ comes from the terms with indices $l$ close to $j$, provided that $k \ell$ is larger than $N^{1/2}$ and smaller than $N$. We find after the change of index $l=j+q$: $$\begin{aligned}
- \Gamma_{jj}^{(c)}(\om) &\approx& \frac{(2 \pi)^{3/2} j^4 k \ell}{X
((N+\alpha)^2 - j^2) N} \sum_{q\neq 0 }
e^{ -\frac{(k \ell)^2}{2} \frac{j^2}{(N+\alpha)^2-j^2} \frac{q^2}{(N+\alpha)^2}} \end{aligned}$$ Interpreting this sum as the Riemann sum of a continuous integral, we get $$\begin{aligned}
- \Gamma_{jj}^{(c)}(\om) &\approx& \frac{(2 \pi)^{3/2} j^4 k \ell}{X
((N+\alpha)^2 - j^2)} \int_{-\infty}^\infty
e^{ -\frac{(k \ell)^2}{2} \frac{j^2}{(N+\alpha)^2-j^2} s^2 } ds =
\frac{(2 \pi)^2 j^3}{X \sqrt{(N+\alpha)^2 - j^2}} .
\label{eq:EE8b}\end{aligned}$$ By comparing with (\[eq:EE6\]) we find that the coefficient $- \Gamma_{jj}^{(c)}(\om)$ is larger than $\Gamma_{jj}^{(0)}$ when $k \ell$ satisfies $\sqrt{N}\ll k \ell \ll N$.\
To be complete, note that:\
- If $k \ell \sim N$, then $- \Gamma_{jj}^{(c)}(\om)$ is larger than $\Gamma_{jj}^{(0)}$ if and only if $j/N < (1+(k\ell /N)^2 ) ^{-1/2}$.\
- If $k \ell$ is larger than $N$, then the main contribution to the sum in $l$ comes only from one or two terms with indices $l=j\pm 1$, and it becomes exponentially small in $ (k \ell)^2/ N^2$. In these conditions $- \Gamma_{jj}^{(c)}(\om)$ becomes smaller than $\Gamma_{jj}^{(0)}$.\
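As a numerical illustration, the coupling sum in (\[eq:EE8\]) and the closed-form estimate (\[eq:EE8b\]) can be compared directly. The sketch below simply transcribes the two formulas; the sample values of $N$, $k\ell$, $\alpha$ and $X$ in the usage comment are placeholders, not values used in the text.

```python
import numpy as np

def gamma_c_sum(j, N, kl, alpha=0.5, X=1.0):
    """Direct evaluation of the mode-coupling sum for -Gamma_{jj}^{(c)}."""
    l = np.arange(1, N + 1, dtype=float)
    l = l[l != j]
    root_l = np.sqrt((N + alpha) ** 2 - l ** 2)
    gauss = np.exp(-0.5 * kl ** 2 * (np.sqrt(1 - j ** 2 / (N + alpha) ** 2)
                                     - np.sqrt(1 - l ** 2 / (N + alpha) ** 2)) ** 2)
    pref = (2 * np.pi) ** 1.5 * j ** 2 / (X * np.sqrt((N + alpha) ** 2 - j ** 2))
    return pref * np.sum(l ** 2 * kl / (N * root_l) * gauss)

def gamma_c_riemann(j, N, kl, alpha=0.5, X=1.0):
    """Closed-form estimate, valid for 0 < j/N < 1 and sqrt(N) << k*l << N."""
    return (2 * np.pi) ** 2 * j ** 3 / (X * np.sqrt((N + alpha) ** 2 - j ** 2))

# e.g. compare gamma_c_sum(50, 100, 30) with gamma_c_riemann(50, 100, 30)
```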
For $j \sim 1$ we can estimate again by interpreting the sum over $l$ as a Riemann sum approximation of an integral that we can estimate using the Laplace perturbation method. Explicitly, for $j=1$ we have $$\begin{aligned}
- \Gamma_{11}^{(c)}(\om) &\approx& \frac{(2 \pi)^{3/2}}{X} \frac{1}{N}
\sum_{l = 2}^N
\hspace{-0.05in}\frac{(l/N)^2 k \ell}{\sqrt{ (1+\alpha/N)^2 -
(l/N)^2}} e^{ -\frac{(k \ell)^2}{2} \left( 1-
\sqrt{1-(l/N)^2}\right)^2} \, \nonumber \\ &\approx & \frac{(2
\pi)^{3/2}k \ell }{X} \int_{0}^1 ds \frac{s^2}{\sqrt{1 -
s^2}} e^{-\frac{(k \ell)^2}{2} \left(1 - \sqrt{1-s^2}\right)^2}.
\label{eq:EE10}\end{aligned}$$ We approximate the integral with Watson’s lemma [@bender Section 6.4], after changing variables $ \zeta = (1-\sqrt{1-s^2})^2$ and obtaining that $$\int_{0}^1 ds \frac{s^2}{\sqrt{1 -
s^2}} e^{-\frac{(k \ell)^2}{2} \left(1 - \sqrt{1-s^2}\right)^2}
\approx \int_0^1 d \zeta \varphi(\zeta) e^{-\frac{(k \ell)^2}{2} \zeta}, \qquad
\varphi(\zeta) = \frac{\zeta^{-1/4}}{\sqrt{2}} + O(\zeta^{1/4})\, .$$ Watson’s lemma gives $$\int_{0}^1 ds \frac{s^2}{\sqrt{1 - s^2}} e^{-\frac{(k \ell)^2}{2}
\left(1 - \sqrt{1-s^2}\right)^2} \approx \frac{\Gamma(3/4)
2^{1/4}}{(k \ell)^{3/2}} \, ,$$ and therefore by and , $$\begin{aligned}
- \Gamma_{11}^{(c)}(\om) &\approx& \frac{(2 \pi)^{3/2} \Gamma(3/4)
2^{1/4}}{X (k \ell)^{1/2}} \, .\end{aligned}$$ By comparing with (\[eq:EE6\_1\]) we find that the coefficient $- \Gamma_{11}^{(c)}(\om)$ is larger than $\Gamma_{11}^{(0)}$.\
For $j\sim N$ only the terms with $l \sim N$ contribute to the sum in . If $k \ell \sim \sqrt{N}$, then we find that $$- \Gamma_{NN}^{(c)} (\omega) \approx \frac{(2\pi)^{3/2} N^2 k \ell}{2 \sqrt{\alpha} X} \sum_{q=1}^\infty \frac{1}{\sqrt{\alpha+q}}
e^{ -\frac{(k\ell)^2}{2 N} (\sqrt{q+\alpha}- \sqrt{\alpha} )^2 }
\sim \frac{(2\pi)^{3/2} N^3} {2 C(\alpha) k\ell X} ,$$ up to a constant $C(\alpha)$ that depends only on $\alpha$. By comparing with (\[eq:EE\_2\]) we can see that it is of the same order as $\Gamma_{NN}^{(0)}$. If $k \ell \gg \sqrt{N}$, then we find that $$- \Gamma_{NN}^{(c)} (\omega) \approx \frac{(2\pi)^{3/2} N^2 k \ell}{2 \sqrt{\alpha(1+\alpha)} X}
e^{ -\frac{(k\ell)^2}{2 N} (\sqrt{1+\alpha}- \sqrt{\alpha} )^2 } ,$$ which is very small because the exponential term is exponentially small in $(k\ell)^2/ N$. In these conditions $- \Gamma_{NN}^{(c)} (\omega) $ is smaller than $\Gamma_{NN}^{(0)}$.
[20]{}
M. Asch, W. Kohler, G. Papanicolaou, M. Postel, and B. White, Frequency content of randomly scattered signals, SIAM Rev. [**33**]{} (1991), 519-625.
C. Bender and S.A. Orszag, [*Advanced mathematical methods for scientists and engineers*]{}, McGraw-Hill, Inc., 1978.
L. Borcea, L. Issa, and C. Tsogka, Source localization in random acoustic waveguides, SIAM Multiscale Modeling Simulations, [**8**]{} (2010), 1981-2022.
L. B. Dozier and F. D. Tappert, Statistics of normal mode amplitudes in a random ocean, J. Acoust. Soc. Am. [**63**]{} (1978), 353-365; J. Acoust. Soc. Am. [**63**]{} (1978), 533-547.
J.-P. Fouque, J. Garnier, G. Papanicolaou, and K. Sølna, [*Wave propagation and time reversal in randomly layered media*]{}, Springer, New York, 2007.
J. Garnier, Energy distribution of the quantum harmonic oscillator under random time-dependent perturbations, Phys. Rev. E [**60**]{} (1999), 3676-3687.
J. Garnier, The role of evanescent modes in randomly perturbed single-mode waveguides, Discrete and Continuous Dynamical Systems B [**8**]{} (2007), 455-472.
J. Garnier and G. Papanicolaou, Pulse propagation and time reversal in random waveguides, SIAM J. Appl. Math. [**67**]{} (2007), 1718-1739.
J. Garnier and K. S[ø]{}lna, Effective transport equations and enhanced backscattering in random waveguides, SIAM J. Appl. Math., **68** (2008), 1574-1599.
C. Gomez, Wave propagation in shallow-water acoustic random waveguides, Commun. Math. Sci., **9** (2011), 81-125.
L. Grafakos, An elementary proof of the square summability of the discrete Hilbert transform, The American Mathematical Monthly, **101**, No. 5, 456–458 (May, 1994).
W. Kohler, Power reflection at the input of a randomly perturbed rectangular waveguide, SIAM J. Appl. Math. [ **32**]{} (1977), 521-533.
W. Kohler and G. Papanicolaou, Wave propagation in randomly inhomogeneous ocean, in Lecture Notes in Physics, Vol. 70, J. B. Keller and J. S. Papadakis, eds., Wave Propagation and Underwater Acoustics, Springer Verlag, Berlin, 1977.
W.A. Kuperman, W. S. Hodkiss, H. C. Song, T. Akal, C. Ferla, and D.R. Jackson, Experimental demonstration of an acoustic time-reversal mirror, Journal of the Acoustical Society of America, **103** (1998), 25-40.
H. J. Kushner, [*Approximation and weak convergence methods for random processes*]{}, MIT Press, Cambridge, 1984.
D. Marcuse, [*Theory of dielectric optical waveguides*]{}, Academic Press, New York, 1974.
A. Nachbin and G. Papanicolaou, Water waves in shallow channels of rapidly varying depth, J. Fluid Mech. [**241**]{} (1992), 311-332.
G. Papanicolaou and W. Kohler, Asymptotic theory of mixing stochastic differential equations, Commun. Pure Appl. Math. [**27**]{} (1974), 641-668.
G. Papanicolaou and W. Kohler, Asymptotic analysis of deterministic and stochastic equations with rapidly varying components, Comm. Math. Phys. [**45**]{} (1975), 217-232.
M. C. W. van Rossum and Th. M. Nieuwenhuizen, Multiple scattering of classical waves: microscopy, mesoscopy, and diffusion, Rev. Mod. Phys. [**71**]{} (1999), 313-371.
[^1]: Explicitly, they are $\varphi$-mixing processes, with $\varphi \in L^{1/2}(\RR^+)$, as stated in [@kushner 4.6.2].
[^2]: The case $k \ell \gtrsim N$ is also discussed in Appendix \[ap:estim\].
[^3]: Recall from section \[sec:chap21\_propmatr\] that the range is actually $z/
\eps^2$.
[^4]: By direct calculation we mean numerical calculation of the eigenvalue. We find that for $N \geq 20$ and for $k \ell \gtrsim \sqrt{N}$, $|\Lambda_2(\om)| \approx |\Gamma_{11}^{(c)}(\om)|$ with a relative error that is less than $1\%$.
---
abstract: 'This paper defines and discusses a set of rectangular all-sky projections which have no singular points, notably the Tessellated Octahedral Adaptive Spherical Transformation (or TOAST) developed initially for the WorldWide Telescope (WWT). These have proven to be useful as intermediate representations for imaging data where the application transforms dynamically from a standardized internal format to a specific format (projection, scaling, orientation, etc.) requested by the user. TOAST is strongly related to the Hierarchical Triangular Mesh (HTM) pixelization and is particularly well adapted to situations where one wishes to traverse a hierarchy of increasing resolution images. Since it can be recursively computed using a very simple algorithm, it is particularly adaptable to use by graphical processing units.'
author:
- Thomas McGlynn
- Jonathan Fay
- Curtis Wong
- Philip Rosenfield
bibliography:
- 'toast.bib'
title: 'Octahedron-Based Projections as Intermediate Representations for Computer Imaging: TOAST, TEA and More.'
---
Introduction {#sec:intro}
============
Hundreds of map projections have been developed over the course of many centuries [e.g., see @Snyder1993] in the attempt to represent a spherical surface despite the physical realities of publishing information on flat sheets of paper and monitors. Different projections are designed to meet different goals: the Mercator projection is designed to ease navigation; it accurately represents the directions between nearby points so that a sailor can steer a boat properly. For this purpose, the projection’s gross distortion of polar regions is a minor nit. The Mollweide and Aitoff projections give pleasing all-sky images with limited distortion. The tangent plane or gnomonic projection is frequently used in astronomy since it often approximates the behavior of real small scale astronomical images. Similarly the orthographic or sine projection arises naturally in images derived from interferometry. The Hierarchical Equal Area Pixelization’s (HEALPix, @Gorski2005) equal-area and iso-latitude characteristics make it very attractive for the computation of the spherical harmonics used in the analysis of the cosmic microwave background. Computers have made it much easier to both define and use projections. The widely used WCSLIB[^1] supports dozens of different projections. With modern computers, users can rapidly project and transform images from one projection and coordinate system to another. Computers can also provide access to very large images. Systems can cache images at multiple resolutions, with a low resolution image of a large area (perhaps the entire sky) split into multiple tiles of higher resolution. These tiles may themselves be split into higher resolution sub-tiles, forming a hierarchy of ever finer resolution.
The ability of software to rapidly transform data into any given output projection and coordinate system has established a new driver for projections: providing a flexible internal representation of the data that can be easily transformed into users’ desired outputs. It is possible to build systems which can transform directly from and to each of a set of known projections. However, the number of transformations required goes up as the square of the number of projections involved. A more practical approach can be to consider one fiducial representation and transform this to and from the other representations. The intermediate frame is not intended to be ‘seen’ by users. The form of this intermediate projection would have different goals from those that have driven the development of earlier sky projections.
This paper discusses a class of projections that have some particularly desirable characteristics for this purpose. The remainder of this introduction discusses the characteristics we would like for an ideal intermediate projection. In the next section we describe a set of projections based on transformation from the sphere to cubes. These were derived in a fashion similar to the projections we introduce below and can help us understand how our new projections relate to older approaches.
Section 3 shows a ‘topological’ framework for how we build our class of projections, which use octahedrons rather than cubes. This addresses how we cut the sphere to produce our image in the projection plane.
The fourth section discusses three specific realizations of this topology including the Tessellated Octahedral Adaptive Spherical Transformation (TOAST). Section 5 discusses specific aspects of the TOAST projection and why it is especially well adapted for use in GPU-based computations. Our conclusions note some of the areas where these projections are being used and are available and briefly explores the possibility of projections using other regular solids.
Consider images which we have stored in some standard intermediate projection where the data is going to be resampled for display into some other projection, scale, or coordinate system to meet a user’s particular goals. What makes a good intermediate projection? We suggest four basic criteria.
1. The projection should be able to represent the entire sky in a single image. If multiple images are required to represent the sky then software needs to worry about the cross-over points and which image to use for which point in the sky. The projection should be easily adaptable to all sizes of image from the entire sky to a tiny area around a point source.
2. The all-sky projection should be representable as a finite square or rectangle. Storage can be allocated efficiently and much of our software assumes this organization. If there are curvilinear boundaries then pixels along the border may be very difficult to deal with properly: part of the pixel in the projected region and part outside. If we wish to tile it is trivial to split a rectangle into sub-tiles so a hierarchy of tiles at various resolutions can be easily supported.
3. There should be no discontinuities or singularities in the projection. If there is a point at which the transformation between the sphere and the projection is singular it is going to be much more difficult to accurately transform the region around the singularity. One way to find discontinuities is looking at the Tissot Indicatrices for projections. These show how small circles on the sphere are rendered in the projection plane. If the area of the projected circles goes to 0 or infinity we have one class of singularity. We can also have a singularity when the circles become infinitely long and thin, even though they may preserve area.
4. The projection should be continuous across all boundaries – including the outer boundaries of an all sky image. E.g., consider a standard Cartesian projection: While an all sky image projects to a finite rectangle, one can trivially extend the east and west boundaries by repeating the data cyclically. Even if we happen to be near the eastern or western edges of a Cartesian map we can be assured that there is no real discontinuity at that edge.
These four simple criteria rule out the projections in common use. Table \[table:projections\] lists the projections in Table 13 from @Calabretta2002 and notes where they meet these criteria.
The orthographic/sine and gnomonic/tangent plane projections can only map half the sky. The Mercator projection cannot include the poles. Most of the pleasing all-sky projections, Aitoff, Mollweide and the like, are not rectangular. The Plate carree or Cartesian projection meets that criterion but is singular at the poles, as are the other cylindrical projections. A stereographic image does a little better: it is singular at only one pole, but it is not finite. The quadrilateralized cube projections nearly meet all of the criteria but map the sky to multiple rectangles, not a single one. None of the commonly used (and this table includes some uncommon ones) sky projections meet our criteria.
Due to its popularity we have added one projection to the list from @Calabretta2002, the Hierarchical Equal-Area Pixelization[^2] (HEALPix) suggested by @Gorski2005 [see also @Calabretta2007]. The HEALPix representation can represent the entire sky and has no singularities, but the standard rendering is not rectangular. The most compact representation is as a rectangle with serrated edges.
[lcccc]{}\[table:projections\] Zenithal Perspective (AZP) & Yes & No & No & No\
Slant Zenithal Perspective (SZP) & Yes & No & No & No\
Gnomonic or Tangent (TAN) & No & No & No & No\
Stereographic (STG) & No & No & No & No\
Slant Orthographic or Sine (SIN) & No & No & No & No\
Zenithal Equidistant (ARC) & Yes & No & No & Yes\
Zenithal Polynomial (ZPN) & Varies & No & No & Yes\
Airy (AIR) & No & No & Yes & Yes\
Cylindrical perspective (CYP) & Yes & Yes & No & Yes\
Cylindrical equal area (CEA) & Yes & Yes & No & Yes\
Plate carree or Cartesian (CAR) & Yes & Yes & No & Yes\
Mercator (MER) & No & No & No & No\
Mollweide (MOL) & Yes & No & Yes & Yes\
Hammer Aitoff (AIT) & Yes & No & Yes & Yes\
Conic Perspective (COP) & Yes & No & Yes & No\
Conic Equal Area (COE) & Yes & No & Yes & No\
Conic Orthomorphic (COO) & Yes & No & Yes & No\
Bonne’s Equal Area (BON) & Yes & No & Yes & No\
Polyconic (PCO) & Yes & No & Yes & No\
Tangential Spherical Cube (TSC) & Yes & No & Yes & No\
COBE Quadrilateralized Spherical Cube (CSC) & Yes & No & Yes & No\
Quadrilateralized Spherical Cube (QSC) & Yes & No & Yes & No\
HEALPix & Yes & No & Yes & No\
Lessons of cube-based projections
=================================
We noted that one set of projections that come close to meeting the requirements are projections where the sphere is projected onto the six facets of an enclosing cube. This kind of projection was popularized with the COBE Spherical Cube (CSC) projection. Several variants have been developed and three are shown in Table \[table:projections\]. The projections we discuss below have many similarities to the cubic projections, so we will discuss these in a bit more detail.
If we look at the cube projections as a class we can view them as having two elements, a common topological element where we determine which face to map a given element of the sky to, and a detailed transformation rule within the face which differs among the projections.
Determining the face that a given celestial position maps to is non-trivial when we work using coordinates, but can be done straightforwardly using the unit vectors on the sphere. The face a given position maps to is determined by the index and sign of the unit vector component with the largest magnitude. E.g., in Figure \[fig:cubic-qaud-proj\], the region in the sky where the z component of the unit vector is positive and larger than the other components maps to the uppermost facet. If the x component is largest and negative the point maps to the second facet from the left centered on the Galactic anticenter.
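A minimal code sketch of this rule follows; the coordinate convention and the returned encoding are our own illustration, not taken from the text.

```python
import numpy as np

def cube_facet(lon_deg, lat_deg):
    """Pick the cube facet for a sky position: the index and sign of the
    largest-magnitude component of the unit vector select the face."""
    a, d = np.radians(lon_deg), np.radians(lat_deg)
    u = np.array([np.cos(d) * np.cos(a),   # x
                  np.cos(d) * np.sin(a),   # y
                  np.sin(d)])              # z
    i = int(np.argmax(np.abs(u)))
    return i, int(np.sign(u[i]))           # e.g. (2, +1) is the facet around the north pole

# cube_facet(0.0, 80.0) -> (2, 1); cube_facet(180.0, 0.0) -> (0, -1)
```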
This approach enables us to determine the corner locations easily. These are just the points at which the unit vector components are of equal magnitude. Since their squares must sum to unity, each component must be $\pm \sqrt{3}/3$ . Thus the latitude of the corners is $\sin^{-1} \sqrt{1/3}$ or about $35^{\circ}$. The distance from the center of a tile (e.g., the pole) to a corner must be $\sin^{-1} \sqrt{2/3}$ or about $55^{\circ}$.
Once we have split the sky into face squares, they can be arranged in a variety of ways. The T shown in Figure \[fig:cubic-qaud-proj\] is one example, but one can ‘roll’ the polar facets to wherever is convenient. Figure \[fig:cubic-qaud-proj\] shows that while the transformation is continuous at the cube boundaries, there is a very significant discontinuity in the derivatives there. We see major kinks in the coordinate grid at facet edges.
Another aspect of the cube projections is that they cannot be naturally extended to fill the projection plane. Although we can extend either of the lines in the T indefinitely, we cannot fill in the space inside the corner of the T. To do so a tile would need to have two identical edges.
This discussion so far applies to all of the cube projections. A complete projection needs some algorithm which takes the coordinates in a sixth of the sky and maps it to a square.
A simple approach is to visualize embedding the unit sphere in a cube with the faces of the cube tangent to the sphere. If we draw a line from the center of the sphere it passes through the sphere and then the cube mapping the position on the sky to a specific location on a specified cube face. The sky is split into six tangent planes. This is the Tangent Spherical Cube (TSC).
The two other cube projections included in @Calabretta2002 use more complex transformation rules to reduce (CSC) or eliminate (QSC) variations in pixel area within the projection (see @Chan1975 [@Oneill1976; @1992ASPC...25..379W] for details). The approach we have taken below is quite similar to the cube projections, but is based upon the octahedron rather than the cube. In the next section we discuss the overall approach to the transformation. The following section describes three specific implementations of these: a tangent plane approach similar to the TSC projection, an equal area approach which has significant similarities to the HEALPix [@Gorski2005] projection and is similar in motivation to the QSC cube projection, and a projection based on recursive averaging of vectors. This last approach, the Tessellated Octahedral Adaptive Spherical Transformation or TOAST is particularly easy to compute on hierarchical tiles and is used in the WorldWide Telescope[^3] [@Rosenfield2018]. We expound upon the properties of the TOAST projection in Section 5 and discuss the differences between the TOAST projection and the TOAST pixelization.
A topological approach based on the octahedron
==============================================
While the cube projections did not meet our original criteria the cube is not the only solid we can embed a sphere inside. If we use an octahedron, a simple rescaling enables us to generate projections that meet all of our desired criteria.
Using octahedrons is not new. An early use of an octahedral decomposition of the globe was Cahill’s Butterfly World Map [@Cahill1909], a rounded version of the projection shown in Figure \[fig:CAH-proj\].
An octahedron is 8 equilateral triangles arranged as two pyramids pointing away from each other. Cahill noted that the octahedron can be unwrapped into an elegant butterfly shape which can be arranged for an Earth projection such that the only continent significantly chopped up is Antarctica. Even this may be less problematic than the omission of most of the continent in traditional Mercator maps. In Figure \[fig:CAH-proj\] we show a variant of Cahill’s projection where each of the triangles is a tangent plane projection of one octant of the sky (using the same survey as Figure 1). While this shape may be attractive, the non-convex nature violates one of our desired characteristics - that the map be a simple rectangle.
To get a rectangular map from an octahedral projection, we transform the equilateral triangles into right isosceles triangles. One simple transformation squashes each triangle down from the pole (or equivalently stretches it along the base at the equator). E.g., we can look down at the octahedron from directly above. The viewer will see the facets of the octahedron appear as right triangles forming a diamond or square as shown in Figure \[fig:transform-octah-sq\]. The northern facets conceal the southern facets below them. Given 8 right triangles there are many possible arrangements that we can make of the triangles to form a rectangle. We choose to use the equator lines as hinges, and flip out the southern facets to form a square centered on the North Pole with the South Pole split to each of the four corners of the square.
Alternatively, one can envisage squashing the triangles in Figure \[fig:CAH-proj\], pressing down from the poles. As the northern hemisphere triangles flatten, the gap that splits the north at 180$^\circ$ closes. When it closes we are left with the same shape as of Figure \[fig:transform-octah-sq\].
We now have a general prescription for building a projection that meets all of our criteria so long as the – as yet unspecified – transformation between an octant of the sphere and the facet of the octahedron is singularity free. Since we only need to transform an eighth of the sky for each facet avoiding singularities is not difficult.
Finding the facet that particular coordinates in the sky map to is easier when we use an octahedron compared to the cube projections. Since we are mapping octants, the equator divides facets in latitude, and the facets divide in longitude in 90 degree segments. If we deal with unit vectors then we can use just the signs of the unit vector components to map to the facet.
Since each triangle is stretched, we anticipate that our projection should be continuous at the boundaries where we joined tiles, but it is unlikely to be differentiable there, since the stretching of different facets is in a different direction. This is similar to what we saw with the cube projections.
While the sphere projects to a finite square in the projection plane in this approach, the entire projection plane can be covered by tiling the plane with replications of the central square. Adjacent tiles are rotated by 180 degrees. There are no hard boundaries to the projection. The projection plane can be covered with interleaved diamonds of the two hemispheres.
If we recall our impetus for these projections, to support computer processing, the octahedral-based approach has several significant advantages compared to the cube-based projections: our key gain is the ability to represent the sky in a single rectangle, but we also find the mapping between sky and facets is easier and that there are no real edges to the projection.
Specific projections
====================
In discussing the transformation from sky to plane, we need only discuss one octant. If we understand the projection in the prime octant where the longitude, $\alpha$, has $0 \le \alpha \le 90^\circ$ and the latitude, $\delta$, has $0\le \delta \le 90^\circ$, then by symmetry we can compute the projection for all of the other octants.
Suppose that we have some function $(x_0,y_0) = f(\alpha_0,\delta_0)=g(u_0,v_0,w_0)$ defined over coordinates, $(\alpha_0,\delta_0)$, or a unit vector, $(u_0,v_0,w_0)$, in the prime octant. Then anywhere on the sphere we may transform from $(\alpha,\delta)$ to $(\alpha_0,\delta_0)$ or from $(u,v,w)$ to $(u_0,v_0,w_0)$ and then transform the resulting $(x_0,y_0)$ back to $(x,y)$. Table \[table:transforms\] describes one set of consistent transformations where we scale the transformation such that we are filling the square in the projection plane with $|x|, |y| \le s$ with a total area in the plane of $4s^2$. The first column numbers the octants. The next two define the octant in terms of coordinate ranges or signs of the unit vector components. The fourth and fifth columns indicate how we transform the actual coordinates or unit vectors to the corresponding values in the prime octant. The last column indicates how we transform the projected position we get from our function to the actual location in the projection plane. It includes the scaling parameter, $s$, that depends upon the scaling we choose for the projection.
[llllll]{} 0 & $0 \le \alpha \le 90^\circ$ & $+ + +$ & $(\alpha, \delta)$ & $( u, v, w)$ & $(x_0, y_0)$\
& $0 \le \delta \le 90^\circ$ & & & &\
1 & $0 \le \alpha \le 90^\circ$ & $+ + -$ & $(\alpha, -\delta)$ & $( u, v,-w)$ & $(s-y_0, s-x_0)$\
& $-90 \le \delta \le 0^\circ$ & & & &\
2 & $90 \le \alpha \le 180^\circ$ & $- + +$ & $(\alpha-90, \delta)$ & $( v,-u, w)$ & $(-y_0,x_0)$\
& $0 \le \delta \le 90^\circ$ & & & &\
3 & $90 \le \alpha \le 180^\circ$ & $- + -$ & $(\alpha-90, -\delta)$ & $( v,-u,-w)$ & $(x_0-s,s-y_0)$\
& $-90 \le \delta \le 0^\circ$ & & & &\
4 & $180 \le \alpha \le 270^\circ$ & $- - +$ & $(\alpha-180, \delta)$ & $(-u,-v, w)$ & $(-x_0,-y_0)$\
& $0 \le \delta \le 90^\circ$ & & & &\
5 & $180 \le \alpha \le 270^\circ$ & $- - -$ & $(\alpha-180,-\delta)$ & $(-u,-v,-w)$ & $(y_0-s, x_0-s)$\
& $-90 \le \delta \le 0^\circ$ & & & &\
6 & $270 \le \alpha \le 360^\circ$ & $+ - +$ & $(\alpha-270, \delta)$ & $(-v, u, w)$ & $(y_0,-x_0)$\
& $0 \le \delta \le 90^\circ$ & & & &\
7 & $270 \le \alpha \le 360^\circ$ & $+ - -$ & $(\alpha-270,-\delta)$ & $(-v, u,-w)$ & $(s-x_0,y_0-s)$\
& $-90 \le \delta \le 0^\circ$ & & & &\
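As an illustration of the reduction in Table \[table:transforms\], the following sketch folds an arbitrary position into the prime octant and maps the resulting $(x_0,y_0)$ back to the full projection plane. The octant indexing and function names are ours, and $f$ stands for whichever prime-octant projection is in use.

```python
def split_octant(alpha, delta):
    """Fold (alpha, delta), in degrees, into the prime octant; return the
    octant index k of the table together with the prime-octant coordinates."""
    k = 2 * (int(alpha // 90) % 4) + (1 if delta < 0 else 0)
    return k, alpha % 90.0, abs(delta)

def place_in_plane(k, x0, y0, s):
    """Map the prime-octant projection (x0, y0) to the full projection plane
    (last column of the table); s is the scaling parameter of the projection."""
    return [( x0,     y0),     (s - y0, s - x0),
            (-y0,     x0),     (x0 - s, s - y0),
            (-x0,    -y0),     (y0 - s, x0 - s),
            ( y0,    -x0),     (s - x0, y0 - s)][k]

# usage: k, a0, d0 = split_octant(alpha, delta); x, y = place_in_plane(k, *f(a0, d0), s)
```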
Triangular octahedral tangent plane (TOT)
-----------------------------------------
One simple projection corresponds directly to TSC, the tangential spherical cube projection. We just embed the sphere inside the octahedron such that the sphere is tangent to each facet. We draw lines from the center of the sphere through the sphere and octahedron, mapping each position on the sphere to a unique facet and position in the polyhedron.
The tangent plane projection around an arbitrary point can be represented as $$\begin{aligned}
x &=& \frac{\cos \delta \sin(\alpha-\alpha_0 )} {\cos c} \\
y &=& \frac{ \cos\delta_0 \sin\delta - \sin\delta_0 \cos\delta \cos(\alpha-\alpha_0)}{\cos c}\end{aligned}$$ where $$\cos c = \sin\delta_0 \sin\delta+ \cos\delta_0\cos\delta\cos(\alpha-\alpha_0)$$ and $(x,y)$ is the point corresponding to the right ascension and declination $(\alpha, \delta)$. The tangent point of the projection is $(\alpha_0, \delta_0)$. Here $c$ is just the angular distance to the reference point for the projection.
The inverse projection from the tangent plane to the sphere is $$\begin{aligned}
\delta &=& \sin^{-1}\left(\cos c\,\sin\delta_0+ \frac{y \sin c\, \cos\delta_0}{p}\right) \\
\alpha &=& \tan^{-1}\frac{x \sin c}{p \cos\delta_0 \cos c-y \sin\delta_0 \sin c}\end{aligned}$$ where $$\begin{aligned}
p &=& \sqrt{x^2+y^2}, \\
c &=& \tan^{-1}p. \end{aligned}$$ Here $p$ is the distance in the projection plane from the reference location, while $c$ is again the distance on the celestial sphere.
If the reference point is set to the pole, i.e., $(\alpha_0, \delta_0)$= $(0,90^\circ)$ then we have a much simpler transformation: $$\begin{aligned}
x&=&\frac{\sin\alpha}{\tan\delta} \\
y&=&\frac{\cos\alpha}{\tan\delta}\end{aligned}$$ With the inverse $$\begin{aligned}
\alpha&=&\tan^{-1}(y,x) \\
\delta&=&\cot^{-1}\sqrt{x^2+y^2}\end{aligned}$$
In practice one can often rotate the center of the tile to the pole and use the simpler equations. This rotation may be combined with any rotation needed to move to the prime octant.
By symmetry the tangent point for each facet when we inscribe a sphere inside an octahedron must be equidistant from each of the three vertices of the facet. We can find this as the point of the intersection of the angle bisectors for the spherical triangle defining the octant.
If we have oriented the octahedron with vertices at the poles and coordinate origin as shown, then we might see the prime octant in Figure \[fig:center-octant\]. Here the blue lines represent the boundaries of the octant, a spherical triangle with vertices at (0,0), (90,0) and (0,90). The red lines are the angle bisectors. The center of the triangle is at the point where the bisectors meet. The angle bisectors also bisect the opposite sides. One of the bisectors is simply the meridian at $45^\circ$ longitude. This gives the longitude of the center, while the latitude is clearly just the angle, $a$, along that meridian.
If we look at the triangle with angles $(\alpha,\beta,\gamma)$ we note that $\alpha$ is the result of bisecting the 90 degree angle between the prime meridian and the equator, so $\alpha=45^\circ$. The angle $\beta$ is the angle between the meridian at $45^\circ$ and the equator, so $\beta=90^\circ$. Finally by symmetry all of the angles where the bisectors meet must be equal (since we started with an equilateral triangle), so $6\gamma = 360^\circ$, or $\gamma=60^\circ$. By the law of sines for spherical triangles we have $\frac{\sin a}{\sin\alpha} = \frac{\sin b}{\sin\beta} = \frac{\sin c}{\sin\gamma} $. We know all of the angles, and $c=45^\circ$. So $\frac{\sin a}{\sin 45^\circ} = \frac{\sin 45^\circ}{\sin 60^\circ}$. Hence $\sin a=1/\sqrt{3} $ or $a \approx 35.264389^\circ$.
The distance between the center of the facet and the corners, $b$ in Figure \[fig:center-octant\], has $\sin b=\sqrt{2/3}$ so $b \approx 55^\circ$. Despite there being two more facets than in the cube projections we have the same maximum distance as with the TSC projection. Squares are more efficient in packing data closer to the center. This may be more intuitive if we note that a cube has 8 vertices compared to the octahedron’s 6.
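As a quick numerical check of these values (an illustration only, not material from the text):

```python
import numpy as np
a = np.degrees(np.arcsin(1.0 / np.sqrt(3.0)))    # facet-centre latitude, ~35.264 deg
b = np.degrees(np.arcsin(np.sqrt(2.0 / 3.0)))    # centre-to-corner distance, ~54.736 deg
print(a, b)
```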
The tangent plane projection is radially distorted with projected radius growing as the tangent of the actual radius. The distortion, (the ratio of the apparent change in radius to the actual change) is just the derivative of the tangent. At the corners this is exactly 3. However, a relatively smaller fraction of the facets are in the regions of highest distortion for the octahedron compared to the cube.
In this projection (see Figure 5), straight lines correspond to great circles, but there can be a kink where we cross octant boundaries.
There are multiple scales which may be appropriate for the projection. When we project an octant onto the projection plane we get an equilateral triangle with sides of $\sqrt{6}$, or an area of $3 \sqrt{3}/2$. Thus the total projected area in all eight triangles is $12\sqrt{3}$. If we wish to preserve this area, then our projection should be a square with sides $2 (3^\frac{3}{4})$ in radians or about $261.21^\circ$. With this scaling the projection should conserve area near the center points of each tile. However the Tissot Indicatrices are still elliptical due to the squashing of the triangles.
Alternatively, we could include the squashing of the triangles as part of the scaling of the projection. The area of each triangle is reduced by a factor of $1/\sqrt{3}$ as we squash from an equilateral to an isosceles right triangle. This gives us a total area of 12 or a side of $2\sqrt{3}$ radians or about $198.478^\circ$. This area is very close to $4\pi$, so that the average areal distortion over the map is small, the expansion due to the projection almost exactly compensating for the squashing of the equilateral triangles. This does not make the map any less distorted: the projection shrinks some regions as it expands others.
Using this scaling we can calculate $f(\alpha_0,\delta_0)$ for the prime octant. First we project onto the plane using the tangent plane projection around the tile center. We shift the resulting equilateral triangle, squash it, and rotate it into position.
Let $f_p (\alpha,\delta)=(x_p,y_p )$ be the results of projecting a position in the prime octant to the tangent plane centered on the center of the octant. We have $$\begin{aligned}
x_p&=& \frac{\sqrt{3} \cos\delta \sin(\alpha-45^\circ)}{\sin\delta+\sqrt{2} \cos\delta \cos(\alpha-45^\circ)}\\
y_p&=&\frac{\sqrt{2} \sin\delta-\cos\delta \cos(\alpha-45^\circ)}{\sin\delta+\sqrt{2} \cos\delta \cos(\alpha-45^\circ)}\end{aligned}$$
When we apply these equations to project the prime octant we get an equilateral triangle centered on the origin with the pole at $(0,\sqrt{2})$. We must shift the pole to the origin, rescale the triangle to an isosceles right triangle with the right angle at the pole vertex and then rotate the plane coordinates to the appropriate orientation to match its location in Figure 3. So $$\mathbf{f}(\alpha,\delta)= \mathbf{R} \mathbf{S} [\mathbf{f_p} (\alpha,\delta)-\mathbf{T}]$$
Where $\mathbf{S}=\left[\begin{array}{cc}1&0\\0&(1/\sqrt{3})\end{array}\right]$ does the squashing, $\mathbf{T}= \left[\begin{array}{c}0 \\ \sqrt{2}\end{array}\right]$ is the translation vector and $\mathbf{R}=\left[\begin{array}{cc} -\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}}\end{array}\right]$ does a rotation by 135$^\circ$ to get it to the appropriate location in the coordinate tile illustrated in Figure 3. The formulae in Table \[table:transforms\] show how we can address the other octants with the scaling parameter $s=\sqrt{3}$, i.e. the entire sky will be represented in the projection plane with $x,y$ values between $\pm\sqrt{3}$.
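A minimal sketch of this composition for the prime octant, with our own function names and angles in degrees:

```python
import numpy as np

SQ2, SQ3 = np.sqrt(2.0), np.sqrt(3.0)
R = np.array([[-1.0, -1.0], [1.0, -1.0]]) / SQ2   # rotation by 135 degrees
S = np.diag([1.0, 1.0 / SQ3])                     # squash equilateral -> right isosceles
T = np.array([0.0, SQ2])                          # shift the pole vertex to the origin

def tot_prime(alpha_deg, delta_deg):
    """TOT projection of the prime octant: tangent-plane projection about the
    facet centre (45 deg, ~35.26 deg), then shift, squash and rotate."""
    a = np.radians(alpha_deg - 45.0)
    d = np.radians(delta_deg)
    den = np.sin(d) + SQ2 * np.cos(d) * np.cos(a)
    fp = np.array([SQ3 * np.cos(d) * np.sin(a),
                   SQ2 * np.sin(d) - np.cos(d) * np.cos(a)]) / den
    return R @ S @ (fp - T)

# tot_prime(0, 90) -> (0, 0); tot_prime(0, 0) -> (sqrt(3), 0); tot_prime(90, 0) -> (0, sqrt(3))
```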
An octahedral equal-area projection
-----------------------------------
It is straightforward to define an equal area projection for each tile. E.g., if we make the parallels of latitude lines in the projection plane, then the constraint that the area north of a given latitude must be preserved defines the projection. The transformation $f(\alpha,\delta)=$(x,y) for the prime octant is just $$\begin{aligned}
t&=& 2\sqrt{\frac{1-\sin\delta}{2}} \\
u&=& \frac{2}{\pi} t \alpha \\
x&=& \sqrt{\frac{\pi}{2}} (t-u) \\
y&=& \sqrt{\frac{\pi}{2}} u \end{aligned}$$ The intermediate values $(t,u)$ range from 0 to $\sqrt{2}$ (in the prime octant). This projection maps directly to a right isosceles triangle in the appropriate orientation. For other octants we use the transformations in Table \[table:transforms\] with a scale factor of $\sqrt{\pi}$. The inverse transformation is given by $$\begin{aligned}
t&=&\sqrt{\frac{2}{\pi}}(x+y)\\
u&=&\sqrt{\frac{2}{\pi}}y\\
\delta&=& \sin^{-1}\left(1-\frac{t^2}{2}\right)\\
\alpha&=& \frac{\pi}{2} \frac{u}{t} \end{aligned}$$
This is an exact, equal area projection which we have called the Triangular octahedral Equal Area or TEA projection. This is essentially the Collignon projection over the area of each facet [see @Calabretta2007].
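A minimal sketch of the forward and inverse TEA maps on the prime octant (our own helper names; angles in radians):

```python
import numpy as np

def tea_forward(alpha, delta):
    """Prime-octant TEA (Collignon-per-facet) projection; alpha, delta in radians."""
    t = np.sqrt(2.0 * (1.0 - np.sin(delta)))
    u = (2.0 / np.pi) * t * alpha
    return np.sqrt(np.pi / 2.0) * (t - u), np.sqrt(np.pi / 2.0) * u

def tea_inverse(x, y):
    """Inverse map; undefined exactly at the pole, where t = 0."""
    t = np.sqrt(2.0 / np.pi) * (x + y)
    u = np.sqrt(2.0 / np.pi) * y
    return (np.pi / 2.0) * u / t, np.arcsin(1.0 - 0.5 * t * t)

# round trip: tea_inverse(*tea_forward(0.3, 0.7)) ~ (0.3, 0.7)
```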
The TEA projection is closely related to the map projection for HEALPix [@Gorski2005]. Since we have constructed our projection to have the latitudes in straight lines, it is not surprising that the projection is equivalent to another equal area projection with latitudes defined to be constant in one dimension. The HEALPix projection can be rendered as a map with sawtooth triangles for latitudes $| \delta | \gtrsim 50^\circ$. These correspond to the tips of the triangles in the TEA projection. However, the TEA projection does not transition to a different projection at lower latitudes as HEALPix does. At lower latitudes, HEALPix transitions to a cylindrical equal area projection which leads to less distortion among the pixels but means that there are different classes of pixels in HEALPix.
Since the TEA projection (Figure \[fig:TEA\]) is an equal area projection, an appropriate scale for the projection is for the standard projection region to cover an area of $4\pi$ when we are working in radians. This gives an edge dimension of $2\sqrt{\pi}$ in radians. This corresponds to about 203.10825$^\circ$. The factors of $\sqrt{\pi/2}$ in the forward projection above set this scale.
A Collignon projection of the full sphere can be rendered with the hemispheres transformed to triangles sharing a base at the equator to create a single all-sky diamond-shaped tile. It is straightforward to scale the diamond to a square which then meets all of our requirements for a desirable projection: all-sky, rectangular, no singularities and extensible. To tile the projection plane, four copies of the all-sky tile meet at each pole. Near the poles there will be four points corresponding to the same point on the sky. In the TEA projection (and the other projections using this topology), there can be 2 nearby points corresponding to the same point in the sky where two diamonds representing the same hemisphere touch.
The tessellated octahedral adaptive spherical transformation
------------------------------------------------------------
The Tessellated Octahedral Adaptive Spherical Transformation (TOAST) does not use an analytic transformation between the celestial sphere and projection plane. Rather it uses a hierarchical transformation identical to those of the Hierarchical Triangular Mesh (HTM) pixelization popularized by the Sloan Digital Sky Survey [see @Kunszt2001; @Kazdhan2010].
Consider the vertices of one of the facets of the octahedron, which we now take as being inscribed inside the sphere, so that the vertices are on the surface of the sphere. Take the arcs between the three points and find the midpoint (on the sphere) between them. E.g., if we label the initial points A,B,C, we find the midpoints of AB, BC and CA (call them x, y and z respectively). Note that we used the arcs between the points so that x, y and z are also on the surface of the sphere. We may now subdivide the original triangle into four sub-triangles Axz, Byx, Czy and xyz. We can repeat this process recursively until we have as dense a mesh of points on the sphere as we desire.
Corresponding to the vertices on the sphere, we define points in the projection plane. We start with the eight right-triangles corresponding to the facets of the octahedron. As we subdivide the triangles on the sphere, we correspondingly subdivide the triangles on the plane and identify the corresponding new vertices. Figure \[fig:trans-TOAST\] shows this correspondence. We obtain a finer and finer mesh on the sphere and a corresponding mesh in the projection plane.
We may define the recursion more rigorously as:
1. Consider three points $\mathbf{p_0,p_x,p_y}$ in the projection plane at $(x,y)$, $(x+\delta, y)$, and $(x,y+\delta)$. These have already been defined as associated with three unit vectors $\mathbf{u_0,u_1,u_2}$ on unit sphere. We start the recursion with $(x,y) = (0,0), \delta=1, \mathbf{u_0}=(0,0,1), \mathbf{u_1}=(1,0,0), \mathbf{u_2}=(0,1,0)$.
2. Associate new points $\mathbf{p_{0x}}$, $\mathbf{p_{0y}}$ and $\mathbf{p_{xy}}$ with the unit vectors $\mathbf{u_{01}}$, $\mathbf{u_{02}}$, and $\mathbf{u_{12}}$ where $\mathbf{p_{0x}} = (x+\delta/2, y), \mathbf{p_{0y}}=(x,y+\delta/2)$, and $\mathbf{p_{xy}}=(x+ \delta/2, y+\delta/2)$ and $\mathbf{u_{01}}=\frac{\mathbf{u_0}+\mathbf{u_1}}{|\mathbf{u_0}+\mathbf{u_1}|}$ and similarly for $\mathbf{u_{02}}$ and $\mathbf{u_{12}}$.
3. If we need finer resolution than so far achieved, recurse in each of four subtriangles using ($\mathbf{p_{0y}}$, $\mathbf{p_{xy}}$, $\mathbf{p_y}$), ($\mathbf{p_0}$, $\mathbf{p_{0x}}$, $\mathbf{p_{0y}}$), ($\mathbf{p_{0x}}$, $\mathbf{p_x}$, $\mathbf{p_{xy}}$) and ($\mathbf{p_{xy}}$, $\mathbf{p_{0y}}$, $\mathbf{p_{0x}}$) as the entry points in step 1. At each level the magnitude of $\delta$ will halve. For the last triangle, the sign of $\delta$ is inverted.
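The recursion can be written compactly by carrying the plane points and unit vectors explicitly instead of $(x,y,\delta)$. The following sketch is our own illustration, not the WorldWide Telescope implementation; it generates the grid for the prime octant, with duplicate entries at shared vertices left in for simplicity.

```python
import numpy as np

def sphere_mid(u, v):
    """Midpoint on the sphere of the arc between two unit vectors."""
    m = u + v
    return m / np.linalg.norm(m)

def toast_recurse(p0, px, py, u0, u1, u2, level, out):
    """Subdivide the triangle with plane vertices (p0, px, py) associated with
    unit vectors (u0, u1, u2); collect (plane point, unit vector) pairs."""
    out += [(p0, u0), (px, u1), (py, u2)]
    if level == 0:
        return
    p0x, p0y, pxy = (p0 + px) / 2, (p0 + py) / 2, (px + py) / 2
    u01, u02, u12 = sphere_mid(u0, u1), sphere_mid(u0, u2), sphere_mid(u1, u2)
    toast_recurse(p0y, pxy, py,  u02, u12, u2,  level - 1, out)
    toast_recurse(p0,  p0x, p0y, u0,  u01, u02, level - 1, out)
    toast_recurse(p0x, px,  pxy, u01, u1,  u12, level - 1, out)
    toast_recurse(pxy, p0y, p0x, u12, u02, u01, level - 1, out)

# Prime octant: the triangle (0,0)-(1,0)-(0,1) paired with the pole and the
# two equatorial unit vectors, as in step 1 of the recursion.
grid = []
toast_recurse(np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0]),
              np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0]),
              np.array([0.0, 1.0, 0.0]), level=3, out=grid)
```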
To cover the entire sky we can use the transformation approach described in Table \[table:transforms\] to handle positions outside the prime octant, or we can start with 8 triangles directly using the appropriate points and unit vectors.
The TOAST projection is explicitly defined only at the grid points associated with the HTM recursion. These constitute a set of measure 0 in the projection plane (or on the celestial sphere). Looking only at these grid points the transformation is continuous: as we consider grid points that differ by smaller and smaller amounts, the values at the grid points converge. Since the grid can be made arbitrarily dense, we can extend the transformation to an arbitrary point by *defining* the transformation to be continuous. I.e., to find the value of the transformation at some arbitrary point not on the grid, we look at grid points sufficiently nearby to satisfy whatever precision requirements we have. Note that triangles on the sphere are not of identical size. Even though we start with equilateral triangles, the inner triangle (the last one in step 3) is different from the other three. However, the variation in the size of triangles is rigidly bounded. All triangles of a given level of the recursion in the projection plane are of the same size by construction. Thus TOAST is not an equal-area projection. In fact, since we create different-size triangles at each step, the transformation between the sky and the plane is not differentiable at the points where it is calculated, although it is continuous. At the largest scales – at the boundaries between the top-level triangles – these distortions can be easily visible, but they become less obvious at smaller scales (see Figure 9).
We can use the recursion to build grids of pixels in TOAST. The level 0 pixel is the entire sky. At level 1, each of the four pixels covers 90 degrees of longitude. At the next level, these longitude bands are each split into four pixels, two of which touch the poles and two of which straddle the equator. At each level we have a $2^n \times 2^n$ grid.
The computation of the TOAST/HTM boundaries is discussed in detail in @Szalay2005. Figure \[fig:toast-px\] is comparable to their Figure 3 and gives the distribution of pixel sizes as a fraction of the average size for a variety of values of n, where we represent the entire sky as a $2^n \times 2^n$ grid. One slight difference from Szalay et al. is that we give the sizes of pixels which comprise two adjacent triangles rather than individual triangles, but the essence of the figure is unchanged. All level 1 pixels have the same size. We see that level 2 pixels come in two distinct sizes. As we move towards higher values of $n$ the histogram stabilizes into a complex pattern. This form of the histogram takes a few levels to build up from the congruent level 1 pixels, so globally the TOAST projection will appear less distorted than the histograms suggest. At a given level pixels may differ in area by a maximum factor of about 2. When we work using unit vectors, the TOAST recursion is particularly simple. The initial conditions for the recursion are trivial and the recursion itself is very fast. The cost of this computation is essentially the computation of a single square root. This computation is simple enough that it may be done in graphics processors as well as in a computer’s main CPU.
In the typical case where we start with a top-level tile of at least 256x256 pixels, calculation of the square root can be simplified. The square root is used to get the length of the average of two unit vectors – which is just the cosine of half the angle between them. Except for the top-level tile we are generally dealing with angles of less than 1 degree, so that we only need square roots in the very small range 0.999 - 1, which can potentially be calculated using a simpler and more efficient algorithm than the general function over all positive real numbers.
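One way to exploit this narrow range – purely illustrative, and not necessarily how WWT or *SkyView* implement it – is a truncated Taylor expansion of the square root about 1, whose error over the quoted range is far below the positional precision needed:

```python
import math

def sqrt_near_one(s):
    """Approximate sqrt(s) for s close to 1 via sqrt(1+e) ~ 1 + e/2 - e^2/8.
    For s in [0.999, 1] the truncation error is below ~1e-10."""
    e = s - 1.0
    return 1.0 + 0.5 * e - 0.125 * e * e

# Worst-case error over the quoted range [0.999, 1].
print(max(abs(sqrt_near_one(1.0 - 1e-3 * k / 1000) - math.sqrt(1.0 - 1e-3 * k / 1000))
          for k in range(1001)))
```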
Since we start with the entire sky and recurse by factors of two into increasingly fine tiles, the TOAST projection provides a very rapid way to find the positions of hierarchical image tiles when we define the tiles to match the recursion, i.e., the tiles should comprise subsets of the $2^n \times 2^n$ all-sky grids. If we have divided the sky into a grid of pixels for some value of n, then a natural tiling is to divide the sky into a grid of $2^k \times 2^k$ tiles where each tile has $2^m \times 2^m$ pixels with $n=m+k$. The positions of pixel corners in these standard tiles can be computed at a cost of roughly a single square root per pixel. The small number of recursions needed to find the bounding box for the tile is amortized over the many pixels within the tile.
While the transformation is easy to calculate for grids that match the recursion, the TOAST projection is less straightforward generally. For an arbitrary position, we must use recursion to refine the projected position for a given set of coordinates. We transform our position to the prime octant as defined in Table \[table:transforms\]. Then we recursively split this triangle in sub-triangles always finding the triangle our desired position is in.
To determine this, we use a standard approach where we generate the three vectors which are the cross-products of pairs of the unit vectors to the vertices of the triangle. We take adjacent vertices in a counterclockwise direction. Since the cross-product is perpendicular to the plane containing the two unit vectors, it is perpendicular to the great circle connecting them. With vertices chosen counterclockwise, the cross-products point into the triangle. The dot product of our position unit vector with all three cross-products must be positive if the point is inside the triangle. If the position is outside the triangle at least one of the dot products will be negative. (This is not true in the general case where we would need to worry about spherical triangles with sides greater than 180 degrees, but our largest triangles are the octants whose sides are only 90 degrees.)
In practice roundoff errors can occasionally cause positions exactly on the boundary between two triangles to be seen as outside both of them. In the case where a point seems to be outside all four candidate sub-triangles, we use the triangle for which the minimum of the three dot products (which is negative, since the point was found to be apparently outside the triangle) has the smallest magnitude.
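A sketch of this containment test, including the round-off fallback just described, is given below. This is our own illustration; the vertex triples are assumed to be ordered counterclockwise as seen from outside the sphere.

```python
import numpy as np

def locate_subtriangle(p, subtriangles):
    """Pick which sub-triangle contains the unit vector p.
    subtriangles: list of (a, b, c) unit-vector triples, each ordered
    counterclockwise as seen from outside the sphere."""
    best_index, best_min_dot = None, -np.inf
    for i, (a, b, c) in enumerate(subtriangles):
        # Cross products of adjacent vertices point into the triangle.
        normals = (np.cross(a, b), np.cross(b, c), np.cross(c, a))
        dots = [float(np.dot(n, p)) for n in normals]
        m = min(dots)
        if m >= 0.0:
            return i                      # inside (or exactly on an edge)
        if m > best_min_dot:              # remember the least-bad candidate
            best_index, best_min_dot = i, m
    # Round-off case: the point appears (barely) outside all candidates.
    return best_index
```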
As we do the recursion on the sphere we update the position in the projection plane depending upon which subtriangle the position falls inside at this level of recursion. Since each level of recursion reduces the area of the triangles by a factor of four, it doubles the positional accuracy. To get a precision of $1\arcsec$ we can expect to need about 18 levels of recursion, or 28 levels of recursion to match within $0.\arcsec001$. This implies that we need to go much deeper in the recursion to get comparable accuracy for an arbitrary grid than for the natural tiles discussed above. E.g., when we use the standard tiles we can go down to level 18 and we immediately have pixel corners specified to the limits of our arithmetic precision; duplicating this for an arbitrary grid would require going to a recursion depth comparable to the number of bits in the mantissa. It is also more difficult to avoid recomputation of higher-level pixels for an arbitrary grid, since we cannot easily tell when we cross the boundaries of the natural pixels. Thus the computation of an arbitrary grid within the TOAST projection will be much slower, with each point requiring dozens of recursions rather than the single computation per pixel required for standard tiles.
If we have an arbitrary position in the projection plane for which we want the celestial coordinates, i.e., we need to de-project from the plane back to celestial coordinates, we invert the process. Since the triangular grid in the projection plane is regular, we can easily calculate at each level of the recursion which subtriangle will be needed, i.e., which contains the point whose coordinates we are trying to get. We start with the corner coordinates for the appropriate octant. Then we gradually refine the celestial coordinates by finding the vertices of ever smaller bounding triangles until one of the points in the current triangle in the projection plane is close enough to the requested point that we can terminate the recursion. We then use the corresponding celestial coordinates.
Since there is no analytic transformation, the scale of the TOAST transformation is arbitrary. Instead we consider the size of the box we transform into. Often it is convenient to fit the entire sky into the 2x2 square centered at the origin so that the bounds of the projection are $|x|,|y|\le1$. Alternatively, sides of length $\pi$ give an image in which the average scale over the image is close to 1 if we measure distances in radians; this corresponds to a square with sides of 180$^\circ$. There is no clear natural choice.
Characteristics of TOAST
========================
Pixelization versus Projection
------------------------------
It is useful to distinguish between the TOAST pixelization used in the WWT and elsewhere and the TOAST projection defined in this paper. A projection is a transformation of points on the sky into some projection plane, i.e., a continuous transformation between the sphere and a projection plane. A pixelization is a specific scheme for dividing the sky into pixels with defined boundaries, essentially an integer-valued function on the sphere or projection plane. Often one defines a pixelization by imposing a rectangular grid over a projection plane. This is not the case with TOAST. The projection is defined in terms of a particular pixelization process, which we extend beyond the grid points by assuming that the projection is continuous. Thus, while it is possible in principle to define a pixelization of the TOAST projection plane different from the defining HTM pixelization, it is unlikely ever to be useful since the computation of the pixel edges would involve orders of magnitude more computation than for the fiducial HTM boundaries.
The next section describes some of the characteristics of this particular pixelization that make it effective for the role of a machine intermediary representation of hierarchical resolution image data. For the standard TOAST pixelization, the borders of each pixel are segments of great circles so that the boundaries are easy to compute. However line segments other than pixel borders will not generally correspond to great circles, reflecting the special nature of the fiducial pixelization.
TOAST and GPUs
--------------
One of the motivations for building TOAST on the HTM pixelization is that it works efficiently with modern computers. There are some terms specific to 3D accelerated graphics systems that we refer to here. For those not familiar with the field we start with a brief glossary of those terms. (For more detailed discussion see texts on image rendering, e.g., @AkenineMoller2018.)
**Graphics Processing Unit (GPU)**. A GPU is a specialized math co-processor, usually directly connected to an output frame buffer. It takes as input small programs called shaders, lists of 2D or 3D graphics points and metadata known as vertex buffers, and graphics bitmaps known as textures, and produces a rendered bitmap image as output. The shader programs run the same code against many input vertices in parallel, and are thus much faster than graphics computed on the CPU. This image is often displayed directly from the internal frame buffer to the user’s display. GPUs are very common, and are often built into modern chips such as the Intel Core i5 and Core i7 processors used in Apple Mac and Windows PCs. They are also common on the System On Chip (SOC) components that make up tablets and cell phones.
**Texture**: A GPU-hosted image type made up of a pyramid of sub-sampled images. Textures are addressed by a set of normalized texture coordinates labeled U and V, collectively referred to as UV coordinates, with floating point values between 0 and 1. Textures are most often accessed through a texture sampler that uses a filter that may sample values in the surrounding region and at various levels of the pyramid to get the best representative value for the sample. Rarely does a sampler use just a single pixel value by itself. Samplers help prevent the image aliasing that would occur from naively taking the sample from the nearest neighbor to a UV coordinate. GPUs typically draw images by interpolating the corners of a triangle in screen space and sampling textures using interpolated UV coordinates.
**Vertex Buffer**: Vertex Buffers are a memory buffer hosted on the GPU that has a list of vertices and their associated metadata. By keeping this data in GPU memory, it is extremely fast to reuse and eliminates the need to specify the vertices to draw each frame.
**Index Buffer**: Index Buffers are a set of integer values that index into the Vertex Buffer to define geometry, usually triangles. When vertices are used more than once, such as the corner of a cube that is shared by at least 6 different triangles needed to draw the cube, it is wasteful to compute multiple copies of the vertex as it is transformed over and over. By using an index buffer the transformations can be computed once per vertex and then used multiple times in drawing the various triangles that include that vertex, by specifying geometry as a list of indices rather than repeating the vertex values.
TOAST has some attributes that make it well suited as an intermediate transmission format for large Internet-delivered tiled multi-resolution images rendered using GPUs. The raw TOAST pixels are of very little use when displayed naively, but when transformed into the spherically projected environment by GPU rendering, tiles can be rendered efficiently and correctly projected.
The key attribute is the continuity of pixel adjacency. For each tile level and across the TOAST projection, when UV texture maps are created with 3D meshes there are no discontinuities between adjacent pixels. GPUs use filters to sample pixels across multiple resolutions, so images with discontinuities cannot be used without the filter taking samples from unrelated areas of the sky, contaminating the output with unrelated data.
For example, some red pixels from a discontinuous part of the sky might be adjacent to blue pixels and the sampler would mix these two unrelated pixels to result in a violet color instead of red or blue. This would appear as visible artifacts at boundaries. Mitigating this would require more complex segmentation of images. Each discontinuous part of the sky would have to be broken up into different textures. This would reduce performance by requiring more separate draw operations on smaller vertex buffers and cause delays.
Segmentation can also cause problems when panning the image. Normally as a user pans across tile boundaries the program can defer to lower resolution tiles at some higher level in the tile hierarchy which will already be in memory since the antecedent lower resolution tiles are kept to hand. The user gets an approximate image until the appropriate high detail tiles are retrieved. However if the image has multiple root segments then there may be no information available on the region being panned into and the display may stutter until the new root and higher resolution data can be retrieved.
Because TOAST is continuous, image tiles from the top level down can simply be drawn as a batch with a single input image, with no need to segment them.
Since TOAST is a fully determined projection with no free parameters (as opposed to something like a tangent plane projection, where the tangent point may vary), the mesh for any specific tile address is a deterministic set of 3D coordinates and UV coordinates regardless of the image content. This allows pre-baked image meshes to be used either at tile creation time or from a mesh cache.
Additionally, as tiled multi-resolution tile pyramids are generally drawn from the top down, each descendant tile can be calculated from its parent tile by simple mathematical subdivision. This allows meshes to be computed either in the GPU or CPU with simple and efficient algorithms. In a segmented projection, this code would normally need to determine the segment being used to access the offsets and functions appropriate for the specific segment. TOAST’s recursion requires no such tracking, since the algorithm depends solely upon the corners of the pixels being subdivided.
In practice, it is a fairly trivial matter to output a particular TOAST tile using GPU vertex and index buffers in such 3D systems as Direct3D, OpenGL and WebGL. For efficiency, the tile pyramid is always evaluated from the root tile. This is a single tile comprising 8 triangles. For each triangle vertex there are coordinates in 3-space (on the sphere) and in the 2D image space. As we noted above, for each tile, the corners of the tile footprint and the directionality of the triangle bisecting the quad are determined from the parent tile, and the tile triangle is recursively subdivided a number of times. Each triangle is subdivided into four triangles, with the new points calculated as the midpoints of the edge segments.
The tile has four quadrants representing the coverage of that tile, and four vertex and index buffers. Each tile can draw its entire contents, any combination of its quadrants, or have one of its children draw itself in place of a quadrant. In three dimensions, these coordinates are treated as vectors and normalized so they are on the surface of a unit sphere. In two dimensions, image coordinates are the 2D midpoints of the segments in image space, where the units correspond to the UV texture coordinates for the image tile in the range 0-1.
Once the recursion has produced a mesh of sufficient density there is minimal error in the final projection of the texture image for that tile. Tiles are evaluated against two conditions: 1) are they visible in the view frustum, and 2) are they of sufficient size to represent image textures at approximately a 1:1 ratio. Tiles that are not in the view frustum are culled. Tiles that are not of sufficient image density have their children render their quadrants of the image. Having the root tile draw itself thus results in the entire visible image being drawn and projected onto the screen at the proper resolution.
Unless the entire image pyramid is already loaded, there is an intermediate step where a parent tile knows it needs its child tile to draw, but the image data is not yet available. The parent tile requests a tile cache management service to query a tile and draw the currently available lower resolution data until the correct resolution data can be downloaded and initialized by the tile cache manager.
The tile cache manager can handle background downloading, prioritizing the queue, and removing old tiles from memory when they have not been rendered for a long time.
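The traversal just described can be summarized in a few lines of Python-style pseudocode. The objects and method names (`tile`, `view`, `cache`, and so on) are hypothetical stand-ins, not the actual WWT API:

```python
def draw_tile(tile, view, cache):
    """Sketch of the top-down tile traversal described above."""
    if not view.intersects(tile.bounds):
        return                                  # culled: outside the view frustum
    if tile.screen_pixels_per_texel(view) <= 1.0 or tile.is_leaf():
        tile.draw()                             # roughly 1:1 texture sampling
        return
    for quadrant, child in enumerate(tile.children):
        if child is not None and cache.is_ready(child):
            draw_tile(child, view, cache)       # child renders its quadrant
        else:
            if child is not None:
                cache.request(child)            # queue a background download
            tile.draw_quadrant(quadrant)        # fall back to coarser data
```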
TOAST is not the only projection that meets the requirement for densely rendering the sky in a single rectangle. We have encountered several others in this paper and there are doubtless more. TOAST was used in the WWT for a combination of historical and algorithmic advantages. The availability of the SDSS catalog data in HTM and the clear existing definition of the recursive process were attractive at the time of WWT’s creation. In practice the recursive nature of the determination of pixel boundaries where the next generation is defined not in terms of some global function but in terms of the current set of pixel locations is very convenient. Having great circle boundaries for the pixels is also very helpful when assessing the edges of each pixel.
Discussion
==========
Figures \[fig:TOT\], \[fig:TEA\] and \[fig:TOA\] show images of the sky with a coordinate grid in three realizations of the octahedral projections. In all cases there are significant features at the boundaries of the original tiles, as our experience with the cubic projections led us to expect. The TOAST projection is more rounded towards the poles than either the TOT or TEA projections. In each there are discontinuities in the slopes of lines even at the pole, due to the squashing of the equilateral octahedron facets.
The unconventional placement of the pole at the center of the projection, and the features at the boundaries, may make these projections less suitable for direct display than more conventional all-sky projections. This has little bearing on their utility as intermediate representations.
The TOT and TEA projections are mathematically straightforward, have continuous derivatives except at the tile boundaries and are easily invertible. The TOAST projection is much more difficult to define in general, but it is particularly straightforward to calculate when a grid of $2^n \times 2^n$ pixels (or some contiguous subset thereof) is to be computed. For this special case – which happens to correspond precisely to the needs of hierarchical imaging – the TOAST projection is very easily computed. Essentially only a single square root needs to be evaluated for each point.
The WorldWide Telescope uses TOAST as its projection for storing survey data. The *SkyView* Virtual Telescope[^4] supports all three of the projections defined here.
The FITS WCS conventions [@Calabretta2002] use a three-letter abbreviation for each projection. We have suggested TOT for the Triangular Octahedral Tangent plane projection, TEA for the Triangular octahedral Equal Area projection, and TOA for the Tessellated Octahedral Adaptive spherical transformation (TOAST) projection. The modified Cahill projection shown in Figure \[fig:CAH-proj\] has also been implemented in *SkyView*. By analogy with the TSC projection we suggest the abbreviation TSO for Tangential Spherical Octahedron projection.
The utility of the cube and octahedron may suggest that we might wish to consider the other regular solids for projections. @Tegmark1996 has contemplated using the icosahedron as a basis for a partitioning into hexagonal pixels. However we have seen no obvious natural approach that would enable us to transform this or the other geometric solids into a projection that meets our original criteria.
We would like to thank Drs Aniruddha Thakar and Alex Szalay for their help in providing figures for HTM triangulation of the sphere and pointers to the Geomview package which remains a very useful tool despite the limited support it has had for many years. Conversations with Dr Gregory McGlynn were very helpful in understanding the differentiability of the TOAST projection. The *SkyView* system to illustrate the projections in the paper has been supported by a series of NASA Astrophysics Data Program and Astrophysics Applied Information Systems Research Program grants and is now hosted at NASA’s High Energy Astrophysics Science Archive Research Center (HEASARC). We gratefully acknowledge NASA’s support.
By convention in WorldWide Telescope and most other TOAST systems, TOAST image pyramids consist of multiple levels of tiles, each a 256x256 image. For sky images and panoramas in equatorial coordinates, the North celestial pole is in the center, with RA 0h on the right, 6h on top, 12h on the left, and 18h on the bottom. For planetary surfaces, 0 degrees is on the left, 90 degrees on the bottom, 180 degrees on the right and 270 degrees at the top.
Tiles form a quad-tree and are conventionally accessed either by a triple of level, x, and y, or by a quad-tree key. The first tile level is zero and consists of the root tile, with a coordinate of 0, 0. At each successive level the tile count doubles on each axis. A tile key is empty for level 0; then, for each new level of subdivision, the quadrant ID at that level is appended to the key. For example, for a tile with address level=2, x=3, y=3, the quad-tree key would be 33.
Tile trees are assumed to be accessible on the Internet through a URL. The URL access pattern can be provided to a TOAST browser application through a pattern string that allows substitution of the tile address, either as level, x, and y, as the tile key, or through entries in a pattern substitution table. The image at the resulting URL can then be downloaded. If the image does not exist, there is no further data available for that quadrant or below it in the pyramid.
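The addressing scheme is simple enough to sketch. The quadkey function below reproduces the documented example (level=2, x=3, y=3 gives "33") under the usual 0123 quadrant ordering; a different `QuadTreeMap` would permute the digits, and the bit ordering within each digit, as well as the `{Q}` placeholder for the tile key, are our own assumptions rather than a documented convention:

```python
def toast_quadkey(level, x, y):
    """Quad-tree key from (level, x, y); e.g. level=2, x=3, y=3 -> '33'."""
    key = ""
    for i in range(level, 0, -1):
        digit = ((y >> (i - 1)) & 1) * 2 + ((x >> (i - 1)) & 1)
        key += str(digit)
    return key

def tile_url(pattern, level, x, y):
    """Substitute {L}, {X}, {Y} (and a hypothetical {Q}) in a URL pattern."""
    return (pattern.replace("{L}", str(level))
                   .replace("{X}", str(x))
                   .replace("{Y}", str(y))
                   .replace("{Q}", toast_quadkey(level, x, y)))

print(toast_quadkey(2, 3, 3))   # "33"
print(tile_url("http://cdn.worldwidetelescope.org/wwtweb/dss.aspx?q={L},{X},{Y}", 2, 3, 3))
```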
<?xml version='1.0' encoding='UTF-8'?>
<Folder Name="WWT" Group="View">
<ImageSet
Generic="False"
DataSetType="Sky"
BandPass="Visible"
Name="Digitized Sky Survey (Color)"
Url="http://cdn.worldwidetelescope.org/wwtweb/dss.aspx?q={L},{X},{Y}"
BaseTileLevel="0"
TileLevels="12"
BaseDegreesPerTile="180"
FileType=".png"
BottomsUp="False"
Projection="Toast"
QuadTreeMap="0123"
CenterX="0" CenterY="0"
OffsetX="0" OffsetY="0"
Rotation="0"
Sparse="False"
ElevationModel="False"
StockSet="True">
<ThumbnailUrl>http://www.worldwidetelescope.org/thumbnails/DSS.png
</ThumbnailUrl>
<Credits>Copyright DSS Consortium</Credits>
<CreditsUrl>http://gsss.stsci.edu/Acknowledgements/DataCopyrights.htm</CreditsUrl>
</ImageSet>
</Folder>
WorldWide Telescope uses an XML format called WTML[^5] to define a dataset for TOAST (see Example 1) once a TOAST pyramid has been calculated and made available on the Internet. A description of the metadata can be communicated to a TOAST browser through a WTML ImageSet definition.
In this example XML, a 12-level-deep image pyramid (approximately 1 terapixel) of .png tiles is defined. Substitution parameters (enclosed in braces) can be placed anywhere in the URL to rewrite it to refer to the tile needed. This allows both statically mapped tiles in a filesystem and programmatically generated tiles to be used as sources.
[^1]: http://www.atnf.csiro.au/people/mcalabre/WCS/wcslib/index.html
[^2]: http://healpix.jpl.nasa.gov
[^3]: http://www.worldwidetelescope.org
[^4]: http://skyview.gsfc.nasa.gov
[^5]: A full definition of WTML is available at http://www.worldwidetelescope.org/docs/WorldWideTelescopeDataFilesReference.html.
---
abstract: 'Recently, gamma-ray emission at TeV energies has been detected from the starburst galaxies NGC253 (Acero et al., 2009) \[1\] and M82 (Acciari et al., 2009) \[2\]. It has been claimed that pion production by cosmic rays, accelerated in supernova remnants and interacting with the interstellar gas, is responsible for the observed gamma rays. Here, we show that the gamma-ray pulsar wind nebulae left behind by the supernovae contribute to the TeV luminosity in a major way. A single pulsar wind nebula produces about ten times the total luminosity of the Sun at energies above 1 TeV during a lifetime of $10^5$ years. The large number of $3\times 10^4$ pulsar wind nebulae expected in a typical starburst galaxy at a distance of 4 Mpc can readily produce the observed TeV gamma rays.'
address: |
Universität Würzburg\
Institut f[ü]{}r Theoretische Physik und Astrophysik, Campus Hubland Nord, Emil-Fischer-Str. 31, D-97084 W[ü]{}rzburg, Germany\
Corresponding author: Karl Mannheim (mannheim@astro.uni-wuerzburg.de)
author:
- 'Karl Mannheim, Dominik Els[ä]{}sser, and Omar Tibolla'
bibliography:
- '<your-bib-database>.bib'
title: 'Gamma-rays from pulsar wind nebulae in starburst galaxies'
---
gamma rays, pulsar wind nebulae, starburst galaxies
Introduction
============
Supernova ejecta plowing through the interstellar gas form shock waves which have long been suspected of being responsible for the acceleration of cosmic rays \[1-3\]. The energetic cosmic ray particles traveling through the interstellar medium in a random walk can tap the shock wave energy by repeated scatterings on both sides of the shock in the diffusive-shock acceleration process. This has prompted the interpretation that the observed gamma rays are due to cosmic rays interacting with local interstellar gas and radiation \[4\]. However, a closer look at our Milky Way galaxy shows the importance of sources of a different origin for the total gamma ray luminosity at very high energies, and this poses the question of their contribution to the observed gamma ray emission from starburst galaxies.\
A scan of the inner Galaxy performed with the H.E.S.S. array at TeV energies \[5\] revealed the striking dominance of pulsar wind nebulae (PWNe), although some faint diffuse emission could also be detected. Studies of the total (i.e. due to sources and diffuse) gamma ray emission from the Galaxy with Fermi-LAT show a flattening above 10 GeV energies due to the increasing contribution from unresolved sources with rather hard spectra \[6\]. Late-phase PWNe show weak X-ray emission but bright TeV emission up to ages of $10^5$ years, as determined from the spin-down power of the pulsars \[7\]. The X-ray emitting electrons have shorter lifetimes than the electrons producing the TeV gamma rays by inverse-Compton scattering. The gamma-ray lifetime is finally terminated by adiabatic losses, breakup and diffusion into the interstellar medium \[8\]. This [*ancient PWN*]{} paradigm provides the most elegant solution of the riddle of TeV sources lacking X-ray counterparts such as HESS J1507-622, HESS J1427-608 and HESS J1708-410. A PWN toy model has been shown to explain the salient observational features of the off-plane source HESS J1507-622 \[9\]. Many unidentified sources have already been identified as PWNe after their discovery (such as HESS J1857+026 or HESS J1303-631); many other unidentified sources are considered to be very likely PWNe (such as HESS J1702-420); and also in sources that have several plausible counterparts, the PWNe contribution cannot be avoided (such as hot spot B in HESS J1745-303 or HESS J1841-055).\
In the following, we compare the gamma-ray luminosities associated with PWNe and cosmic rays, respectively, in star-forming galaxies.
PWN luminosity at TeV energies
===============================
Prior to the Fermi era, a total of 60 galactic PWNe were known in X-rays and TeV gamma rays; according to \[10\] 33 of them have measured TeV fluxes. Future observations will have to confirm some of the more controversial identifications reported in \[10\]. Significant progress for a number of sources has recently been achieved by the MILAGRO, VERITAS, MAGIC, and HESS collaborations and is ongoing. Moreover, new TeV PWNe surrounding known pulsars have been discovered \[11\]. Fermi is now probing deeper into the population of galactic sources at GeV energies, confirming pulsars in some of the suspected PWNe. In those cases where the pulsar is not found, it could already have spun down or emit preferentially into a solid angle off the line of sight, still in line with a PWN association. The properties of the subsample of 28 PWNe from \[10\] which are younger than $$t_{\rm cool}({\rm 1~TeV}) = 1.3\times 10^5\ (B/10~\mu{\rm G})^{-2}~{\rm years}$$ have been considered here as representative of the putative PWN population in starburst galaxies. Their average luminosity is given by $\bar L = 2.75\times 10^{34}~\rm erg~s^{-1}$ and their average differential photon index by $\Gamma=2.3$ ($dN/dE\propto E^{-\Gamma}$) as shown in Fig. 1. If the decrease of the counts at low luminosities is due to the limited flux sensitivity of the TeV observations, $\bar L$ would be overestimated only within factors of order unity for reasonable assumptions. Observationally, only pulsars with a spin-down power of $\dot E_{\rm c}>4\times 10^{36}$ erg s$^{-1}$ develop detectable PWNe \[12\]. The spin-down power [*at birth*]{} is higher than $\dot E_{\rm c}$ for canonical rotation energies of $10^{49}$ erg \[13\]. The high PWN luminosities are a natural consequence of this scenario.
![image](Figure1.eps){width="14cm"}
The dominance of the PWN population on the TeV sky relates to the fact that they show hard spectra and long TeV lifetimes compared with shell-type supernova remnants. As long as the pulsar winds remain enclosed by the preceding supernova bubble, their properties will be largely independent of the interstellar medium. For high kick velocities, the PWN evolution into the interstellar medium in starburst galaxies, with their higher density and stronger magnetic field, might be somewhat altered, but we ignore this aspect here and leave the details of this problem to future work and further observational constraints.\
The number of PWNe can be coarsely estimated from the core-collapse supernova rate. Measurements of $^{26}$Al in the MeV range are consistent with a supernova rate of $R=0.02$ per year in the Milky Way galaxy, assuming a Kroupa-Scalo initial mass function \[14\]. The maximum number of TeV-emitting PWNe in the Milky Way galaxy is given by $$N_{\rm PWN} = 2.6\times 10^3 \left(R/0.02~\rm year^{-1}\right)$$ if we neglect final states that do not develop a PWN, such as black holes or low-magnetic-field neutron stars. Some neutron stars might produce a PWN with a delay given by the rise time of a magnetic field that was initially submerged under the neutron star surface. The corresponding number of shell-type supernova remnants, assuming a lifetime of $10^4$ years for them, is $N_{\rm SNR}=200$, which compares with the eight SNRs that have been detected at TeV energies. If the same discovery fraction of $1:25$ is applied to the PWNe, ignoring the somewhat different observational biases for the two types of sources for the sake of simplicity, we expect about 100, which is actually close to the number of known TeV-emitting PWNe plus the unidentified TeV sources. Although these estimates must be treated with extreme caution, they show that the scenario is at least plausible.\
Estimating the PWN luminosity of starburst galaxies is now straightforward. Since the starburst ages are typically of the order of $10^6 - 10^7$ years \[15\] and thus much longer than the cooling time of the TeV electrons, the number of PWNe contributing to the TeV luminosity can be obtained from the current supernova rate (assuming steady-state activity). Multiplying the rate by the cooling age, the luminosity $\bar L$ per PWN, and the differential spectrum with $\Gamma=2.3$, the total luminosity above 1 TeV is given by $$L_{\rm PWN} \left(>E\right) = 7\times 10^{38}\left(R\over 0.2~{\rm year^{-1}}\right)\left(E\over 1~{\rm TeV}\right)^{-0.3}~{\rm erg~s^{-1}}$$ As shown in Fig. 2, the TeV luminosities evaluated for the starburst galaxies with the above formula are in fair agreement with the observed values. At lower energies ($E<1$ TeV), the spectral index changes by $\Delta\Gamma= -0.5$ at the cooling break, and this effect can be seen in the synchrotron radiation component \[16\]. The transition from the harder to the softer spectrum will be rather broad for a realistic distribution of magnetic field strengths. Since the observed gamma-ray spectra of the starburst galaxies require a continuation of the spectrum with $\Gamma\approx 2.3$ towards lower energies, additional sources such as cosmic rays seem to be required to explain their GeV luminosities.
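As a quick numerical check of Eq.(3) – assuming, as above, $B=10~\mu$G so that the TeV-emitting lifetime is $1.3\times 10^5$ years – the steady-state estimate can be reproduced in a few lines:

```python
R      = 0.2          # supernova rate (per year), typical of a starburst
t_cool = 1.3e5        # TeV-emitting lifetime of a PWN in years (B = 10 muG)
L_bar  = 2.75e34      # average PWN luminosity above 1 TeV (erg/s)
Gamma  = 2.3          # average differential photon index

N_pwn = R * t_cool    # ~2.6e4 nebulae shining at any given time

def L_pwn(E_TeV):
    """Total PWN luminosity above E_TeV in erg/s (steady-state estimate)."""
    return N_pwn * L_bar * E_TeV ** -(Gamma - 2.0)

print(f"N_PWN      ~ {N_pwn:.2e}")             # consistent with ~3e4 in the abstract
print(f"L_PWN(>1 TeV) ~ {L_pwn(1.0):.2e} erg/s")  # ~7e38 erg/s, as in Eq.(3)
```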
Comparison with cosmic-ray induced gamma ray emission
=====================================================
The total cosmic ray luminosity from shell-powered supernova remnants is given by $$L_{\rm CR} = 6\times 10^{41}\left(R\over 0.2~{\rm year^{-1}}\right) \left(E_{\rm SNR}\over 10^{51}~{\rm erg} \right)
\left(\varepsilon\over 0.1\right)~{\rm erg~s}^{-1}$$ where $E_{\rm SNR}$ denotes the kinetic energy and $\varepsilon$ the acceleration efficiency. If the cosmic rays are efficiently stored, they lose their energy by inelastic interactions with the interstellar medium, and the calorimetric gamma ray (and neutrino) luminosity becomes a significant fraction of the luminosity in Eq.(4) \[26,27\]. In fact, cosmic ray heating of the dense star-forming clouds in starburst galaxies has been directly observed \[28\], and the observed gamma ray spectra at GeV energies are indeed quite flat. In the absence of other loss processes, the fraction of the cosmic ray luminosity that ends up in GeV gamma rays can be determined numerically to be $\sim0.25$ \[29\], and so we expect $L_{\rm CR,\gamma}({\rm GeV})\simeq 1.5\times 10^{41}(R/0.2~{\rm year^{-1}})(\varepsilon/0.1)~{\rm erg~s}^{-1}$ in the calorimetric limit. With the mean distances from the NED of 3.8 Mpc for M82 and 3.1 Mpc for NGC253, the observed luminosities are $L_{\gamma,\rm M82}({\rm GeV})=2.2\times 10^{40}\ {\rm erg\ s^{-1}}$ and $L_{\gamma,\rm NGC253}({\rm GeV})=5.6\times 10^{39}\ {\rm erg\ s^{-1}}$, implying an efficiency of $\varepsilon\simeq 0.01$ for $R_{\rm M82}=0.25$ year$^{-1}$ and $R_{\rm NGC253}=0.07$ year$^{-1}$ (see caption of Fig.2 for references). These values are lower than the value of $\varepsilon\simeq 0.1$ in the Ginzburg-Syrovatskii scenario for the origin of cosmic rays in the Milky Way galaxy \[3\], perhaps due to adiabatic losses in the overpressured starburst region or the hotter interstellar medium compared to the Milky Way.
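For illustration, the implied efficiencies can be read off by inverting the calorimetric estimate above (a rough sketch; the numbers are simply those quoted in the text):

```python
def epsilon_from_L(L_obs, R):
    """Acceleration efficiency implied by an observed GeV luminosity (erg/s)
    and a supernova rate R (per year), in the calorimetric limit."""
    return 0.1 * L_obs / (1.5e41 * (R / 0.2))

print(epsilon_from_L(2.2e40, 0.25))   # M82:    ~0.01
print(epsilon_from_L(5.6e39, 0.07))   # NGC253: ~0.01
```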
![image](Figure2.eps){width="15cm"}
Pion production energy losses further compete with advection and diffusion. Since the diffusive escape time $t_{\rm diff}$ decreases with increasing energy, there is an energy $E_{\rm diff}$ at which the pion production time scale $t_\pi=2.5\times 10^5 (n/200~{\rm cm^{-3}})^{-1}$ years scaled with the mean density of the interstellar gas $n$ becomes larger than $t_{\rm diff}$. The gamma ray luminosity in the optically thin range, i.e. at $E>E_{\rm diff}$, is given by $$L_{\rm CR,\gamma}(E)={L_{\rm CR}t_{\rm diff}(E)\over t_\pi}$$ In the Milky Way Galaxy, $t_{\rm diff}$ can be determined from the $^{10}$Be/$^9$Be isotope ratio in cosmic rays, showing the energy dependence $t_{\rm diff}\propto E^{-0.5}$ above 1 GeV. Since the energy dependence results from universal properties of turbulent transport, we can assume the same energy dependence for the escape time in starburst galaxies. A similar energy dependence in starburst galaxies would mean that the cosmic ray particles must diffuse out of the high-density star forming region and enter the low-density wind zone at sufficiently high energies. Here, advective transport becomes dominant which further reduces the efficiency of the cosmic rays to produce TeV gamma rays. The diffusion coefficient inferred from radio measurements of starburst galaxies in the wind zone has a value of $D=2\times 10^{29}$ cm$^2$ s$^{-1}$ at GeV energies \[30\], and even larger values might be appropriate for a hot, turbulent, and highly magnetized star-forming region. The transition from diffusive to advective propagation occurs at a scale height \[31\] of $$z = 650~{D/2\times 10^{29}~\rm cm^2~s^{-1}\over v/1000~\rm km~s^{-1}} ~{\rm pc }$$ considering that the wind speed reaches $\sim 1000$ km s$^{-1}$ \[32-34\]. The corresponding conservative estimate of the diffusive escape time at GeV energies is thus given by $$t_{\rm diff} = 6\times 10^5 {( z/650~{\rm pc})^2\over D/2\times 10^{29}~\rm cm^2~s^{-1}}\left(E\over {\rm GeV}\right)^{-0.5} ~{\rm years.}$$ The diffusion time scale becomes shorter than $t_\pi$ above $E_{\rm diff}\approx 10$ GeV. The steady-state spectrum due to cosmic ray interactions displays the energy dependence of the injection spectrum $\Gamma_{\rm i}=2.2$ for $E< 10$ GeV $$\begin{aligned}
&L_{\rm CR,\gamma}(E)=\nonumber \\
&1.5\times 10^{41}\left(R\over 0.2~{\rm year^{-1}}\right)\left(\epsilon\over0.1\right) \left(E\over 1~{\rm GeV}\right)^{-0.2}~{\rm erg~s}^{-1}\end{aligned}$$ Above $E_{\rm diff} \approx 10$ GeV, the spectrum steepens according to Eq.(5) $$\begin{aligned}
&L_{\rm CR,\gamma}(E)=\nonumber\\
& 9\times 10^{40}\left(R\over 0.2~{\rm year^{-1}}\right)\left(\epsilon\over0.1\right) \left(E\over 10~{\rm GeV}\right)^{-2.7}~{\rm erg~s}^{-1}\end{aligned}$$ Due to this steepening, the PWNe with their flatter spectrum $\bar\Gamma=2.3$ come into play towards higher energies. Combining Eq.(3) and Eq.(8) and adopting $\varepsilon=0.01$ we obtain $$\begin{aligned}
&\Gamma({\rm GeV-TeV})=\nonumber\\
&2+{1\over 3}\left(\log\left[L_{\rm CR,\gamma}({\rm GeV})\right]-\log\left[L_{\rm PWN}({\rm TeV})\right]\right)\simeq2.4\end{aligned}$$ in fair agreement with the observations given the crude assumptions. The agreement becomes somewhat better by including the cosmic ray component from Eq.(9) at TeV energies. The result is robust, i.e. independent of the supernova rate. Currently, the observations are too sparse at 10 GeV to 1 TeV to show the shallow dip that might emerge due to the steepening of the cosmic-ray induced component and the onset of the PWN component.
Discussion and conclusions
==========================
Starburst galaxies such as M82 and NGC253 show a much harder GeV gamma-ray spectrum than the Milky Way galaxy, and this is in line with the high gas density in the star-forming regions and a cosmic-ray origin of the gamma rays. However, diffusive-advective escape of the cosmic rays from the star-forming regions should lead to a steepening of the cosmic-ray induced gamma-ray spectrum above $\approx 10$ GeV, in which case the cosmic rays would fall short of explaining the high TeV luminosities. The effects of the escaping cosmic rays can be seen on larger scales in the radio emission associated with the fast superwinds. The PWNe associated with core-collapse supernovae in star-forming regions can readily explain the observed high TeV luminosities. Since the injection spectrum of cosmic rays, which determines the slope of the observed spectrum between 100 MeV and 10 GeV, has the slope $\Gamma=2.2-2.3$ and the PWNe also have a spectrum with $\Gamma=2.3$ above $\approx 100$ GeV, the total spectrum shows little change across the wide band from GeV to TeV. Measurements of the TeV luminosities of galaxies such as the LMC or Arp220 will be important to better understand the relative contributions of the gamma-ray emitting components in starburst galaxies.\
Acknowledgements {#acknowledgements .unnumbered}
================
We are indebted to the memory of Okkie de Jager, who encouraged us to put forward this research paper. O.T. acknowledges support by the BMBF under contract 05A08WW1.
[00]{}
\[1\] Acero, F., et al. (H.E.S.S.) [*Detection of Gamma Rays from a Starburst Galaxy*]{}, Science [**326**]{}, 1080-1082 (2009)

\[2\] Acciari, V.A., et al. (VERITAS) [*A connection between star formation activity and cosmic rays in the starburst galaxy M82*]{}, Nature [**462**]{}, 770-772 (2009)

\[3\] Ginzburg, V. L. & Syrovatskii, S. I. [*The Origin of Cosmic Rays*]{}, New York: Macmillan (1964)

\[4\] Persic, M., Rephaeli, Y., Arieli, Y. [*Very-high-energy emission from M82*]{}, Astron. & Astrophys. [**486**]{}, 143-149 (2008)

\[5\] Aharonian, F., et al. (H.E.S.S.) [*A New Population of Very High Energy Gamma-Ray Sources in the Milky Way*]{}, Science [**307**]{}, 1938-1942 (2005)

\[6\] Strong, A., et al. (Fermi) [*Contributions of source populations to the Galactic diffuse emission*]{}, Fermi Symposium, P4-139, 2-5 Nov. 2009, Washington, USA (2009)

\[7\] Mattana, F., Falanga, M., Götz, D., Terrier, R., Esposito, P., Pellizzoni, A., De Luca, A., Marandon, V., Goldwurm, A., and Caraveo, P. A. [*The Evolution of the γ- and X-Ray Luminosities of Pulsar Wind Nebulae*]{}, Astrophys. J. [**694**]{}, 12-17 (2009)

\[8\] de Jager, O. C., et al. [*Unidentified Gamma-Ray Sources as Ancient Pulsar Wind Nebulae*]{}, 31$^{\rm st}$ International Cosmic Ray Conference, Lodz, Poland, 2009 (astro-ph/0906.2644)

\[9\] Acero, F., et al. [*Discovery and follow-up studies of the extended, off-plane, VHE gamma-ray source HESS J1507-622*]{}, Astron. & Astrophys. [**525**]{}, 45 (2011)

\[10\] Kargaltsev, O., Pavlov, G. [*Pulsar Wind Nebulae in X-rays and TeV gamma-rays*]{}, in: Proc. of the "X-ray Astronomy 2009" conference, Bologna, Italy, September 2009, published by AIP (astro-ph/1002.0885)

\[11\] Abramowski, A., et al. [*Detection of very-high-energy gamma-ray emission from the vicinity of PSR B1706−44 and G343.1−2.3 with H.E.S.S.*]{}, Astron. & Astrophys., in press (2011) (astro-ph/1102.0773)

\[12\] Gotthelf, E.V. [*A Spin-down Power Threshold for Pulsar Wind Nebula Generation?*]{}, in: Proc. of "Young Neutron Stars and Their Environments", IAU Symposium [**218**]{}, eds. F. Camilo and B.M. Gaensler, pp. 225-228, San Francisco: Astron. Soc. Pac. (2004)

\[13\] Gaensler, B.M., Slane, P.O. [*The Evolution and Structure of Pulsar Wind Nebulae*]{}, Annu. Rev. Astro. Astrophys. [**44**]{}, 17-47 (2006)

\[14\] Diehl, R., et al. (INTEGRAL) [*Radioactive 26Al from massive stars in the Galaxy*]{}, Nature [**439**]{}, 45-47 (2006)

\[15\] Thornley, M.D., et al. [*Massive Star Formation and Evolution in Starburst Galaxies: Mid-infrared Spectroscopy with the ISO Short Wavelength Spectrometer*]{}, Astrophys. J. [**539**]{}, 641-657 (2000)

\[16\] de Jager, O., et al. [*Estimating the Birth Period of Pulsars through GLAST LAT Observations of Their Wind Nebulae*]{}, Astrophys. J. Lett. [**678**]{}, L113-L116 (2008)

\[17\] Colina, L., Perez-Olea, D. [*On the origin of the radio emission in IRAS galaxies with high and ultrahigh luminosity - The starburst-AGN controversy*]{}, MNRAS [**259**]{}, 709-724 (1992)

\[18\] van Buren, D., & Greenhouse, M. A. [*A more direct measure of supernova rates in starburst galaxies*]{}, Astrophys. J. [**431**]{}, 640-644 (1994)

\[19\] Paglione, T., et al. [*Diffuse Gamma-Ray Emission from the Starburst Galaxy NGC 253*]{}, Astrophys. J. [**460**]{}, 295-302 (1996)

\[20\] Ulvestad, J. & Antonucci, R. [*VLA Observations of NGC 253: Supernova Remnants and H II Regions at 1 Parsec Resolution*]{}, Astrophys. J. [**488**]{}, 621 (1997)

\[21\] Huang, Z. P., Thuan, T. X., Chevalier, R. A., Condon, J. J., Yin, Q. F. [*Compact radio sources in the starburst galaxy M82 and the Sigma-D relation for supernova remnants*]{}, Astrophys. J. [**424**]{}, 114-125 (1994)

\[22\] Kronberg, P. P., Biermann, P., Schwab, F. R. [*The nucleus of M82 at radio and X-ray bands - Discovery of a new radio population of supernova candidates*]{}, Astrophys. J. [**291**]{}, 693-707 (1985)

\[23\] Lonsdale, C. J., et al. [*VLBI Images of 49 Radio Supernovae in Arp 220*]{}, Astrophys. J. [**647**]{}, 185-193 (2006)

\[24\] Albert, J., et al. (MAGIC) [*First Bounds on the Very High Energy γ-Ray Emission from Arp 220*]{}, Astrophys. J. [**658**]{}, 245-248 (2007)

\[25\] Abdo, A.A., et al. (Fermi Collaboration) [*Observations of the Large Magellanic Cloud with Fermi*]{}, Astron. & Astrophys. [**512**]{}, A7 (2010)

\[26\] de Cea del Pozo, E., Torres, D.F., Rodríguez, A.Y. [*Model analysis of the very high energy detections of the starburst galaxies M82 and NGC253*]{}, Fermi Symposium, Washington, D.C., Nov. 2-5 (2009)

\[27\] Abdo, A.A., et al. (Fermi) [*Detection of Gamma-Ray Emission from the Starburst Galaxies M82 and NGC 253 with the Large Area Telescope on Fermi*]{}, Astrophys. J. [**709**]{}, L152-L157 (2010)

\[28\] Bradford, C.M., Nikola, T., Stacey, G.J., Bolatto, A.D., Jackson, J.M., Savage, M.L., Davidson, J.A., Higdon, S.J. [*CO (J=7-6) Observations of NGC253: Cosmic-ray-heated Warm Molecular Gas*]{}, Astrophys. J. [**586**]{}, 891-901 (2003)

\[29\] Lacki, B.C., et al., Astrophys. J. [**734**]{}, 107 (2010)

\[30\] Heesen, V., Beck, R., Krause, M., Dettmar, R.-J. [*Cosmic rays and the magnetic field in the nearby starburst galaxy NGC 253. I. The distribution and transport of cosmic rays*]{}, Astron. & Astrophys. [**494**]{}, 563-577 (2009)

\[31\] Breitschwerdt, D., Dogiel, V. A., and Völk, H. J. [*The gradient of diffuse gamma-ray emission in the Galaxy*]{}, Astron. & Astrophys. [**385**]{}, 216-238 (2002)

\[32\] Westmoquette, M.S., Gallagher, J.S., Smith, L.J., Trancho, G., Bastian, N., and Konstantopoulos, I.S. [*The optical structure of the starburst galaxy M82. II. Nebular properties of the disk and inner-wind*]{}, Astrophys. J. [**706**]{}, 1571-1587 (2009)

\[33\] Bauer, M., Pietsch, W., Trinchieri, G., Breitschwerdt, D., Ehle, M., Freyberg, M. J., Read, A. M. [*XMM-Newton observations of the diffuse X-ray emission in the starburst galaxy NGC 253*]{}, Astron. & Astrophys. [**489**]{}, 1029-1046 (2008)

\[34\] Strickland, D.K. & Heckman, T.M. [*Supernova Feedback Efficiency and Mass Loading in the Starburst and Galactic Superwind Exemplar M82*]{}, Astrophys. J. [**697**]{}, 2030-2056 (2009)
---
abstract: 'Theoretical results for giant resonances in the three doubly magic exotic nuclei $^{78}$Ni, $^{100}$Sn and $^{132}$Sn are obtained from Hartree-Fock (HF) plus Random Phase Approximation (RPA) calculations using the D1S parametrization of the Gogny two-body effective interaction. Special attention is paid to full consistency between the HF field and the RPA particle-hole residual interaction. The results for the exotic nuclei, on average, appear similar to those of stable ones, especially for quadrupole and octupole states. More exotic systems have to be studied in order to confirm such a trend. The low energy of the monopole resonance in $^{78}$Ni suggests that the compression modulus in this neutron-rich nucleus is lower than that of stable nuclei.'
author:
- |
S. Péru${^1}{^*}$, J.F. Berger$^1$, and P.F. Bortignon$^2$\
$^*$corresponding author.\
[*e-mail address :*]{} sophie.peru-desenfants@cea.fr.\
title: |
Giant resonances in exotic spherical nuclei\
within the RPA approach with the Gogny force
---
PACS: 21.10.Re, 21.60.Jz, 23.20.Lv
Introduction
============
Giant multipole resonances (GR) are collective excitations of nuclei that lie at excitation energies above the nucleon separation energy (8-10 MeV), have different multipolarities and carry different spin-isospin quantum numbers. They have been observed for stable nuclei throughout the mass table with large cross sections, close to the maximum allowed by sum rule arguments, implying that a large number of nucleons participate in a very collective nuclear motion [@book1; @book2]. It is a challenge both to experimentalists and theorists to study the properties of these states for nuclei far from the valley of stability. Not too much has been done from the experimental side yet: let us just mention the two measurements of the electric dipole GR (GDR) made in neutron-rich oxygen isotopes [@GSI; @MSU]. Besides GR, there are also low-lying collective excitations, in particular quadrupole and octupole states, which reflect the details of shell structure much more than the GR do. More experimental data are available for such states [@q2] in the case of unstable nuclei, giving us information on the modifications of the shell structure far from stability.
From the theoretical side, more and more calculations of GR and low-lying states are performed nowadays in the framework of microscopic HF+RPA or HFB+QRPA approaches. The effective nucleon-nucleon interactions used are taken as non-relativistic effective two-body potentials [@hiro1] or relativistic Lagrangians for meson exchange [@Vre]. Such microscopic approaches, although less accurate than more phenomenological ones, usually describe reasonably well the properties of these states in stable nuclei.
Among the effective forces used in the non-relativistic approaches, the Gogny force [@ref4; @ref5] is one of those which has been extensively employed for the description of GR and low-lying states in doubly closed shell nuclei with the RPA method [@ref2; @ref6; @ref3]. Recently, this force has been used for the first time in full Quasi-Particle RPA (QRPA) calculations. Chains of isotopes in the oxygen, nickel and tin regions have been studied in order to derive the properties of low-lying states [@milan].
The purpose of this paper is to present the results of calculations performed in three spherical exotic nuclei: $^{78}$Ni, $^{100}$Sn and $^{132}$Sn, and to compare them with those obtained in stable nuclei. More precisely, GR and low-lying states in these nuclei will be analyzed and comparisons will be made with systematics and with analogous quantities in the well-known $^{208}$Pb. The latter nucleus will serve as a reference and, for this reason, results for $^{208}$Pb will be displayed along with those of the three exotic nuclei in most Tables and Figures. Let us point out that the results presented here for $^{208}$Pb are new. They have been derived with the D1S parameterization of the Gogny force which is the one currently used now. They slightly differ from those of Ref. [@ref2] where the older parameterization D1 was employed.
A point we pay special attention to in the present work is the effect of the full consistency of the residual particle-hole (p-h) interaction with the mean field produced by the same force, as allowed by the use of consistently combined HF and RPA approaches. In order to analyze this effect, we present results where different components of the residual p-h interaction, such as those generated by the spin-orbit or the Coulomb force, are switched off. As will be seen, the influence of these often omitted components is far from negligible.
In the following Section, details concerning the parameters of the two-body force and the numerical methods used for solving the RPA equations are briefly recalled, along with a few useful formulas. Results are presented and discussed in Section 3. The main conclusions of this work are summarized in Section 4. Let us mention that a preliminary account of the present results has appeared in the workshop Proceedings of Ref. [@curie].
The HF+RPA approach with the Gogny force
========================================
The RPA approach employed here is described in Refs. [@ref6; @ref2; @ref3]. The effective force D1S proposed by Gogny [@ref4; @ref5] is used. This finite-range density-dependent interaction describes the mean field of the nucleus, and the residual interaction in the RPA calculations is obtained via the functional second derivative of the mean field with respect to the one-body density matrix. We want to stress that all the terms of the effective force are considered in the HF mean-field and in the residual p-h interaction, including the spin–spin component, the Coulomb force and the terms produced by the two-body spin-orbit interaction. Only the two-body terms coming from the two-body center of mass correction are not included in the RPA matrix elements. Therefore, they have been also left out from the mean field calculations. In order to get equivalent binding energies and radii, the coefficient of the spin-orbit component of D1S has been reduced from 130 MeV to 115 MeV. Such a procedure was previously employed in calculations with the D1 force, as explained in Ref. [@ref4]. The Gogny force D1S including this change of the spin-orbit strength will be called D1S’.
In the results presented here, spherical symmetry is imposed. Consequently nuclear states can be characterized by their angular momentum J and their parity $\pi$. The individual Hartree-Fock wave functions are expanded on finite sets of spherical harmonic oscillator (HO) wave-functions containing 15 major shells for all nuclei. For each nucleus, the value of the parameter $\hbar\omega$ of the HO basis is taken as the one minimizing the HF total nuclear energy.
The RPA equations are solved in matrix form in the p-h representation. RPA energies do not appear very sensitive to the value adopted for the HO parameter of the basis. For instance, by changing the optimal HF value $\hbar\omega=8.7$ MeV in $^{208}$Pb by $10\%$, the variation of the ISGMR energy ($13.46$ MeV) is less than $.5\%$ and the energy of the first $2^+$ at $4.609$ MeV is changed by less than $5$ keV.
Electric transition operators are defined according to: $$\widehat{Q}_{JM} = \frac{e}{2} \sum_{i=1}^{A} \left( 1 -\tau_z( i ) \right) j_J(q r_i) \, Y_{JM} (\theta_i,\phi_i),$$ where $j_J$ is a spherical Bessel function of order $J$, $q$ a transferred momentum, $\tau_z$ the third component of the nucleon isospin and $Y_{JM}$ the usual spherical harmonics.
The degree of collectivity of the excited states is measured from their contribution to the Energy Weighted Sum Rule (EWSR) $$M_1( \widehat{Q}_{JM} ) = \sum_N ( E_N -E_0 ) \vert \langle N \vert \widehat{Q}_{JM}\vert
0 \rangle \vert ^2
\label{e4}$$ where $\vert 0 \rangle$ and $\vert N \rangle$ are the RPA correlated ground state and excited states, respectively, and $E_N -E_0$ their excitation energies. Eq.(\[e4\]) can also be expressed as the average in the HF ground state $\vert HF \rangle$ of a double commutator [@Lipparini]: $$M_1( \widehat{Q}_{JM} )=\frac{1}{2}\langle HF \vert \left[ \widehat{Q}_{JM}, \left[ \widehat{H}, \widehat{Q}_{JM} \right] \right]\vert HF \rangle.
\label{e3}$$
Therefore, exact values of $M_1( \widehat{Q}_{JM} )$ can be computed from expression (\[e3\]) whereas smaller values will be obtained from (\[e4\]), reflecting the finiteness of the particle-hole space used in the RPA calculations.
A comparison between the values calculated from (\[e4\]) and (\[e3\]) is shown in Figure \[EWSR78Ni\] for $^{78}$Ni as an example. As can be seen, with the 15 major shell basis employed, RPA calculations are able to describe with a reasonable accuracy the nuclear response for $J^\pi=0^+$, $2^+$, $3^-$, $4^+$ and $5^-$ up to transferred momenta $q$=1.5 fm$^{-1}$.
Results
=======
First, we will discuss the validity of the doubly-magic nature of these exotic nuclei. The single-particle neutron spectra obtained in $^{78}$Ni, $^{100}$Sn and $^{132}$Sn are shown in Figure \[nivneutron\]. The N=50 gap in $^{78}$Ni and $^{100}$Sn and the N=82 one in $^{132}$Sn are of the order of 5 MeV, which is less than 20$\%$ smaller than the gaps obtained for stable spherical nuclei with the same neutron numbers. The same is true for the proton gaps at Z=28 in $^{78}$Ni and at Z=50 in the tin isotopes. That is, no significant reduction of the magic gaps is observed in these nuclei. Therefore, the three exotic nuclei are still doubly magic and the HF+RPA method is applicable to them.
In what follows, results for states with multipolarities $0^+$, $2^+$, $1^-$ and $3^-$ are presented for the four nuclei $^{78}$Ni, $^{100}$Sn, $^{132}$Sn and $^{208}$Pb, the latter nucleus being included as a reference.
The strengths shown in the Figures are given in percentage of the EWSR calculated in the long wavelength limit $q \rightarrow 0$. The relevant formulas to be used in this limit for the different values of $J$ are given in the appendix of Ref. [@ref2].
In the present calculations the continuum spectrum of the HF Hamiltonian is approximated by a discrete one. As a consequence, the RPA strength functions appear in the form of discrete peaks. In order to make comparisons with experiments more meaningful, energy centroids will be defined in terms of the moments $$M_k\left( \widehat{Q}_{JM} \right) = \sum_N ( E_N -E_0 )^k \vert
\langle N \vert \widehat{Q}_{JM}\vert 0 \rangle \vert ^2 .
\label{eb4}$$ of the strength function. Two of these centroids will be used in the following: the mean value of the energy $M_1/
M_0$, and the so-called “hydrodynamic” energy $\sqrt{M_1/ M_{-1}}$ for isoscalar monopole resonances.
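As a concrete illustration of how these centroids are evaluated from a discrete RPA strength distribution, the following minimal Python sketch (with hypothetical arrays `energies` and `strengths` holding the excitation energies $E_N-E_0$ and the squared matrix elements $\vert \langle N \vert \widehat{Q}_{JM}\vert 0 \rangle \vert ^2$) computes $M_k$ and the two centroids defined above:

```python
import numpy as np

def moment(energies, strengths, k):
    """Moment M_k of a discrete strength distribution (Eq. eb4)."""
    return np.sum(energies**k * strengths)

# Hypothetical discrete RPA output: excitation energies (MeV) and strengths.
energies = np.array([12.8, 13.5, 14.1, 17.0])
strengths = np.array([0.05, 0.80, 0.10, 0.05])

m_minus1 = moment(energies, strengths, -1)
m0 = moment(energies, strengths, 0)
m1 = moment(energies, strengths, 1)

mean_energy = m1 / m0                   # centroid M_1/M_0
hydro_energy = np.sqrt(m1 / m_minus1)   # "hydrodynamic" energy
print(mean_energy, hydro_energy)
```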
As experimental data on GR energies are scarce in exotic nuclei, comparisons will often be made with the systematic $A^{-1/3}$ empirical laws approximately verified in stable nuclei [@book2]. Values from these systematics as well as available experimental data are given in the Tables.
Monopole states
---------------
Figure \[J0\] and Table \[tabd1sp\] display the results obtained for the Isoscalar Giant Monopole Resonance (ISGMR).
As is well known, the excitation energies of this resonance strongly depend on the compression modulus $K_{nm}$ calculated in infinite nuclear matter [@blaizot]. One observes in Table \[tabd1sp\] that the theoretical energies in $^{208}$Pb, although in good agreement with the empirical $ 80
A^{-1/3}$ law, are 5% lower than the experimental value of Ref. [@youn]. This difference is consistent with the compression modulus found in infinite nuclear matter with D1S’, $K_{nm}$=209 MeV, which is slightly outside the interval 220-235 MeV that explains the bulk of experimental data within non-relativistic approaches [@colo].
Concerning the three exotic nuclei, we note that resonance energies significantly differ from the empirical law only in $^{78}$Ni. It must be noted that, of all three nuclei, $^{78}$Ni is the one where the squared neutron-proton asymmetry $\left(\left(N-Z\right)/A\right)^2$ most differs from that of the stable isotopes: $\left(\left(N-Z\right)/A\right)^2-\left(\left(N-Z\right)/A\right)_{stable}^2$=0.78, 0.36 and -0.23 in $^{78}$Ni, $^{132}$Sn and $^{100}$Sn, respectively. It is therefore tempting to correlate the $\simeq$ 1.5 MeV lowering of the ISGMR found in $^{78}$Ni with this large neutron excess, the contribution of the symmetry term $K_{sym}$ to the finite nucleus incompressibility $K_A$ being negative [@colo1; @hiro].
The strengths displayed in Figure \[J0\] show that the major part of the EWSR is concentrated in a single peak in all four nuclei. This feature explains why the two sets of theoretical energies listed in Table \[tabd1sp\] are very close to each other. One notes that the fragmentation of the strength is almost zero in the $N$=$Z$ nucleus $^{100}$Sn, whereas it is slightly larger in the other three nuclei, which have neutron-proton asymmetry $(N-Z)/A$ in the range 0.21–0.28.
In Table \[tab2\], we show the values of the mean monopole energies $M_1/M_0$ obtained when different terms of the residual particle-hole (p-h) interaction are left out of the RPA calculation. Columns $(1)$, $(2)$ and $(3)$ refer to the mean energies calculated by leaving out the spin-orbit and the Coulomb terms, the Coulomb term and the spin-orbit term, respectively.
One observes that the spin-orbit part of the residual interaction gives a contribution to ISGMR energies ranging from 8% in $^{78}$Ni to 5% in $^{208}$Pb. In contrast, the Coulomb contribution is larger in Pb (3%) and almost negligible in Ni. These results are consistent with those discussed in Ref. [@colo] where $^{40}$Ca, $^{90}$Zr and $^{208}$Pb were analyzed with the SLy4 interaction. In the latter work, the inclusion in the constrained HF (CHF) of the Coulomb force and of the spin-orbit component of the Skyrme interaction was proved to be essential in order to reconcile the value of $K_{nm}$ obtained with the Skyrme and Gogny forces.
Quadrupole states
-----------------
Figure \[J2\] and Tables \[tab2D1Sp\], \[tab2D1Spp\] and \[tab2p\] display the results obtained for isoscalar quadrupole states. Figure \[J2\] shows that in all four nuclei the quadrupole strength is divided essentially between two states: the isoscalar Giant Quadrupole Resonance (ISGQR) exhausting $\simeq$ 80% of the EWSR with an energy in the range 12-16 MeV and a lower-lying state at $\simeq$ 3-5 MeV carrying $\simeq$ 10%-15% of the quadrupole strength. We will label the latter $2^+_1$.
The theoretical ISGQR energies are calculated using $M_1/M_0$ excluding the $2^+_1$ state. The results shown in Table \[tab2D1Sp\] are seen to be higher than the $A^{-1/3}$ systematics by 1.0–1.5 MeV. As the latter agrees well with the experimental value in $^{208}$Pb, it is difficult to draw definite conclusions concerning the behaviour of our results in the three exotic nuclei. Let us mention that such large ISGQR energies can be understood from a too large spreading of the particle-hole spectrum in the $2^+$ channel at high energies. Such spreading is a consequence of the value of the effective mass of the D1S’ interaction (m$^*$/m = 0.7) which is the one giving correct single-particle properties in mean-field calculations. As is well known, taking into account the coupling of RPA configurations to 2-particle–2-hole (2p-2h) states would reduce this disagreement [@npa371; @rmp]. Clearly, such a coupling should be introduced in the present calculations before reliable predictions for the ISGQR in exotic nuclei can be made [@Ghi]. Let us mention that the same is true for the other giant resonances, with some dependence on the mode quantum numbers [@rmp]. Nevertheless, few results have been obtained up to now with such a coupling and it is difficult to foresee the magnitude of energy shifts, except for quadrupole and dipole states.
Our theoretical results for low-lying $2^+_1$ states are presented in Table \[tab2D1Spp\]. For these states, experimental data exist both for $^{208}$Pb [@Zie] and $^{132}$Sn [@OR]. As can be seen, a fair agreement between experiment and theory is found in $^{208}$Pb and an even better one in $^{132}$Sn, with B(E2) values being of the same order of magnitude as experimental ones. Let us point out that QRPA calculations applied to quadrupole states have been made recently with the D1S interaction for a series of tin isotopes including $^{132}$Sn [@milan]. In these calculations, the spin-orbit part and the Coulomb part of the residual interaction were omitted for simplicity. The $2^+$ energies were found larger than the experimental ones by 400 keV in $^{102}$Sn and 1 MeV in $^{132}$Sn. The corresponding theoretical B(E2) values were lower than experimental ones by at least a factor of two.
These results are consistent with those shown in Table \[tab2p\] where the same quantities as those of Table \[tab2D1Spp\] are displayed. They have been calculated by leaving out from the D1S’ p-h interaction the spin-orbit and the Coulomb terms, the Coulomb term, the spin-orbit term and no term, respectively. One observes that, as previously for monopole vibrations, taking into account the spin-orbit part of the residual interaction is essential to get results consistent with experimental data.
Going back to Table \[tab2D1Spp\], $2^+_1$ energies are similar in $^{100}$Sn and $^{132}$Sn, whereas a comparatively low value is predicted in $^{78}$Ni. Let us note that the $2^+_1$ state in $^{78}$Ni is still higher than the one in $^{56}$Ni, the other doubly magic Ni isotope, where the experimental value of the $2_1^+$ state is 2.7 MeV and the RPA calculated one is 2.42 MeV with D1S’.
The collectivity of this $2^+$ state appears larger in $^{100}$Sn than in $^{132}$Sn and rather weak in $^{78}$Ni. Figure \[denstr\] displays the transition density $\rho_{TR}$ of this first $2^+_1$ state in $^{78}$Ni. The definition of the transition density is the same as the one given in the appendix of Ref. [@ref2]. One observes that the neutron and proton transition densities are in phase and that the neutron transition density is higher than the proton one and displaced to a larger radius. This mode can therefore be interpreted as an isoscalar surface mode dominated by neutron excitation.
Dipole states
-------------
Results for the isovector dipole resonance (IVGDR) are presented in Figure \[J1\] and Table \[tabDip\]. $^{100}$Sn is the nucleus where the giant dipole mode is the least fragmented, with 70% of the strength concentrated into two peaks. The dipole responses of $^{208}$Pb and $^{132}$Sn, and to a lesser extent of $^{78}$Ni, also appear concentrated into two main energy regions. It is expected that the fragmentation is somewhat reduced by the coupling of the RPA modes to 2p–2h states, producing smoother strength functions, as in Refs. [@colo2; @colo3] where Skyrme forces were used.
In $^{100}$Sn the mean value $M_1 / M_0 =$ 19.98 MeV is 3 MeV larger than the systematic 79$A^{-1/3}$ law (17.02 MeV). The EWSR value given in Thomas-Reiche-Kuhn (TRK) units is 1.59, which is large compared to typical experimental values [@refexp]. The IVGDR in $^{132}$Sn is more fragmented than in $^{100}$Sn. As in $^{100}$Sn, the mean energy value, 18.33 MeV, is much larger than systematics (79$A^{-1/3}=$ 15.52 MeV) and the EWSR value is 1.58. In the case of $^{78}$Ni, the IVGDR is quite fragmented with one major peak and smaller ones at higher energy. The mean energy value, 20.31 MeV, remains higher than systematics (79$A^{-1/3}=$ 18.49 MeV) and the EWSR in TRK units is 1.57.
It must be said that IVGDR excitation energies calculated with the Gogny force usually overestimate experimental data. In the case of $^{208}$Pb, the calculated mean value is 16.50 MeV, which is quite large compared to experiment (13.43 MeV [@refexp]), but smaller than the result of Ref. [@ref2]. Let us note that ignoring the higher part of the IVGDR response, i.e. keeping only the strength around the main lower energy peak, considerably improves the agreement with systematic estimations: mean energy values become 19.28 MeV, 18.16 MeV, 16.81 MeV and 14.99 MeV in $^{78}$Ni, $^{100}$Sn, $^{132}$Sn and $^{208}$Pb, respectively.
In fact, calculated IVGDR energies and EWSR appear quite sensitive to the energy interval considered and also to the components of the effective interaction included in the p-h residual interaction. This is shown in Table \[newtable\] where mean IVGDR energies and EWSR in $^{208}$Pb are listed for three energy integration intervals and for RPA calculations where Coulomb and/or spin-orbit terms are not included in the RPA matrix elements. One can see that the overestimation obtained with the Gogny force decreases by $\simeq$ 700 keV when the Coulomb and the spin-orbit forces are ignored, which is usually done in RPA calculations employing Skyrme forces, see however Ref. [@Jun]. By taking all the terms of the Gogny force and considering the largest energy interval, the calculated EWSR is 1.59 in TRK units. This value is higher than the experimental one obtained for a 10-20 MeV energy interval (1.37) [@refexp] but lower than the one obtained for an energy interval going up to 140 MeV (1.78) [@refexp26]. In this case, however, another mechanism, the “quasideuteron effect”, is expected to play a major role in the photon absorption [@refexp26].
It is of great interest, beyond nuclear physics itself, to study the amount of excited low-lying dipole strength, that is, the so-called “pygmy” resonances. In terms of EWSR, we obtain much less than 1% strength below 10 MeV in Ni and Sn nuclei, and about that amount in $^{208}$Pb. The result for Pb is in agreement with the data of Ref. [@Rich]. The absence of collective states in the low-lying region is at variance with the results of relativistic RPA calculations [@Vre], but agrees with the arguments of Ref. [@hiro1]. There, it is pointed out that the soft dipole strength should decrease in nuclei displaying a neutron skin, compared to that in light halo nuclei, because of a more efficient coupling to the IVGDR. On the other hand, the coupling to 2p–2h can significantly increase the amount of low-lying strength [@colo2; @colo3].
By introducing a very small renormalization factor (1.01-1.03) of the residual interaction, the isoscalar spurious mode can be made to appear at zero frequency. This factor is introduced only in the $J^{\pi}=1^-$ subspace. In Table \[tab1\], the values of the energy of this state are shown as calculated with or without different parts of the D1S’ p-h interaction. For each nucleus the same renormalization factor is used in the four cases. The symbol $\in \Im$ means that the RPA eigenvalue is imaginary. These results show, as expected, that the consistency between the HF field and the residual interaction is important for the treatment of the spurious states.
Octupole states
---------------
As shown in Figure \[J3\], the $J^\pi=3^-$ states belong to two well-separated energy regions. Only the component at energies larger than $\simeq$15 MeV can be considered as a genuine giant resonance, the High Energy Octupole Resonance (HEOR). Keeping only high energy regions (19-35 MeV for $^{100}$Sn, 22-31 MeV for $^{132}$Sn, 22-44 MeV for $^{78}$Ni and 13-28 MeV for $^{208}$Pb), the mean calculated HEOR energies are 28.16 MeV, 26.06 MeV, 29.51 MeV and 23.20 MeV, respectively. These values give systematics $E_0 A^{-1/3}$, with $E_0 =$ 130, 132, 126, and 137 in the four nuclei, to be compared with the usual estimate $110 A^{-1/3}$ [@refexp2]. Previous studies in stable nuclei [@ref2] gave values between $130 A^{-1/3}$ and $140 A^{-1/3}$ for heavy nuclei and around $120 A^{-1/3}$ in lighter ones. We therefore do not observe a strongly different behaviour of HEOR energies in exotic nuclei compared to the one previously obtained along the valley of stability.
The characteristics of the low energy $3^-$ states are reported in Table \[tab3D1Sp\]. The influence of the different components of the D1S’ force included in the p-h interaction is also shown. The effect of the spin-orbit term appears to be smaller than for the quadrupole states in Table \[tab2p\], especially for $^{78}$Ni.
Isovector strength
------------------
In Figures \[JV0\]–\[JV3\], the fractions of the isovector EWSR carried by the $J^\pi=$ $0^+$, $2^+$, $3^-$ states are drawn. In this case, systematics for stable nuclei are not yet well known [@book2] and are therefore not reported. Note that only the transition operator is changed compared to the isoscalar case in Figures \[J0\], \[J2\] and \[J3\]. From the comparison between the two sets of figures, a much larger fragmentation of the strength is found in the isovector case, and a mixed (isoscalar-isovector) character of several states appears, as expected, in particular in $^{78}$Ni.
Conclusion
==========
To summarize, we have presented the results obtained for different giant resonances in three doubly magic exotic nuclei, using the HF+RPA approach and the Gogny force. The largest difference with usual doubly magic nuclei inside the valley of stability occurs in $^{78}$Ni where the ISGMR appears significantly lower than systematics. This seems to be due to the large proton-neutron asymmetry of this nucleus.
The fragmentation of the isovector dipole strength has to be explored further in order to see whether or not it correlates with proton-neutron radius differences. In particular, the nature of the double peaks obtained in tin isotopes remains to be determined.
Results obtained in the three exotic nuclei for the ISGQR and HEOR resonances are similar to those of $^{208}$Pb, but more exotic systems have to be studied to confirm such a trend.
Low energy states and B(E2) values appear to be well reproduced within the present approach, in particular the first $2^+$ in $^{132}$Sn.
From a more general point of view, we have found that the spin-orbit component of the p-h residual interaction plays a very important role in the structure of the low-lying quadrupole and octupole states, as it strongly influences both excitation energies and transition probabilities. Similarly, our results show that including the Coulomb force in the RPA p-h matrix elements significantly affects IVGDR energies and EWSR.
Acknowledgments
===============
The authors want to thank D. Gogny for his interest in this work and useful comments. P.F.B. acknowledges the Service de Physique Nucléaire, CEA/DAM–Ile–de–France at Bruyères–le–Châtel for financial support and warm hospitality during the periods in which parts of this work were performed.
[99]{}
P.F. Bortignon, A. Bracco, R.A. Broglia, Giant Resonances. Nuclear Structure at Finite Temperature, Harwood Ac. Publ., New York, 1998.
M.N. Harakeh, A. van der Woude, Giant Resonances: Fundamental High-energy Modes of Nuclear Excitation, Oxford Un. Press, Oxford, 2001.
A. Leistenschneider et al., Phys. Rev. Lett. [**86**]{} (2001) 5442.
E. Tryggestad et al., Phys. Rev. C [**67**]{} (2003) 064309.
Cf., e.g., O. Sorlin et al., in the Proc. of the Int. Conf. on the Labyrinth in Nuclear Structure, AIP Conference Proceedings 701 (2004) 31.
H. Sagawa, H. Esbensen, Nucl. Phys. A [**693**]{} (2001) 448.
D. Vretenar, N. Paar, P. Ring, G.A. Lalazissis, Nucl. Phys. A [**692**]{} (2001) 496.
J. Dechargé and D. Gogny, Phys. Rev. C [**21**]{} (1980) 1568.
J.F. Berger, M. Girod, and D. Gogny, Comp. Phys. Comm. [**63**]{} (1991) 365.
J. Dechargé and L. Sips, Nucl. Phys. A [**407**]{} (1983) 1.
J.P. Blaizot and D. Gogny, Nucl. Phys. A [**284**]{} (1977) 429.
D. Gogny and J. Dechargé, Journal de Physique [**C4**]{} (1984) 221.
G. Giambrone et al., Nucl. Phys. A [**726**]{} (2003) 3.
S. Péru and J.F. Berger, Int. J. Mod. Phys. E [**13**]{} (2004) 175.
E. Lipparini and S. Stringari, Physics Reports, [**175**]{} (1989) 103.
J.P. Blaizot, J.F. Berger, J. Dechargé, M. Girod, Nucl. Phys. A [**591**]{} (1995) 435.
D. H. Youngblood, H. L. Clark, Y.-W. Lui, Phys. Rev. Lett. [**82**]{} (1999) 691.
G. Colò, Nguyen Van Giai, Nucl. Phys. A [**731**]{} (2004) 15.
G. Colò et al., Phys. Rev. C [**70**]{} (2004) 024307.
I. Hamamoto, H. Sagawa, X.Z. Zhang, Phys. Rev. C [**56**]{} (1997) 3121; H. Sagawa, I. Hamamoto, X.Z. Zhang, J. Phys. G [**24**]{} (1998) 1445.
P.F. Bortignon, R.A. Broglia, Nucl. Phys. A [**371**]{} (1981) 405.
G.F. Bertsch, P.F. Bortignon, R.A. Broglia, Rev. Mod. Phys. [**55**]{} (1983) 287.
F. Ghielmetti, G. Colò, P.F. Bortignon, R.A. Broglia, E. Vigezzi, Phys. Rev. C [**54**]{} (1996) R2143.
J.F. Ziegler, G. A. Peterson, Phys. Rev. [**165**]{} (1968) 1337; W.J. Vermeer et al., Aust. J. Phys. [**37**]{} (1984) 123.
J.R. Beene et al., Nucl. Phys. A [**746**]{} (2004) 471c.

G. Colò, P.F. Bortignon, Nucl. Phys. A [**696**]{} (2001) 427.

D. Sarchi, P.F. Bortignon, G. Colò, Phys. Lett. B [**601**]{} (2004) 27.
B.L. Berman, S.C. Fultz, Rev. Mod. Phys. [**47**]{} (1975) 713.
J. Terasaki et al., Phys. Rev. C [**71**]{} (2005) 034310.

A. Leprêtre et al., Nucl. Phys. A [**367**]{} (1981) 237.
N. Ryezayeva et al., Phys. Rev. Lett. [**89**]{} (2002) 272502.
F.E. Bertrand, Nucl. Phys. A [**354**]{} (1981) 129c.
R.H. Spear et al., Phys. Lett. B [**128**]{} (1983) 29.
$0^+$ $T$=0 $\dspt \frac{M_1}{M_0}$ $\dspt\sqrt{\frac{M_1}{M_{-1}}}$ 80 A$^{-1/3}$ Exp
------------- ------------------------- ---------------------------------- --------------- -----------------
$^{78}$Ni 17.17 17.07 18.72
$^{100}$Sn 17.22 17.18 17.23
$^{132}$Sn 15.29 15.22 15.72
$^{208}$Pb 13.46 13.42 13.50 14.17$\pm$ 0.28
: ISGMR ($0^+$, $T$=0) energies in MeV calculated with D1S’: the mean energy $M_1/M_0$, the “hydrodynamic” energy $\sqrt{M_1/M_{-1}}$, the empirical $80 A^{-1/3}$ law, and the experimental value of Ref. [@youn].[]{data-label="tabd1sp"}
$M_1/M_0$ (1) (2) (3) (tot)
------------ ------- ------- ------- -------
$^{78}$Ni 18.55 17.10 18.59 17.17
$^{100}$Sn 18.19 16.81 18.54 17.22
$^{132}$Sn 16.07 15.06 16.26 15.29
$^{208}$Pb 13.73 13.05 14.10 13.46
: Mean ISGMR energies in MeV obtained by leaving out from the D1S’ p-h interaction: (1) the spin-orbit and the Coulomb terms, (2) the Coulomb term, (3) the spin-orbit term, (tot) no term.[]{data-label="tab2"}
ISGQR D1S’ 64$A^{-1/3}$ Exp.
------------ ------- -------------- -------
$^{78}$Ni 15.94 14.98
$^{100}$Sn 15.13 13.79
$^{132}$Sn 13.79 12.57
$^{208}$Pb 11.98 10.80 10.60
: ISGQR energies in MeV calculated with D1S’, compared with the empirical $64 A^{-1/3}$ law and with experiment.[]{data-label="tab2D1Sp"}
------------ ------- ------- ------------ ------------
Experiment
$2^+_1$ E B(E2) E(MeV) B(E2)
$^{78}$Ni 2.73 466
$^{100}$Sn 3.84 1431
$^{132}$Sn 3.97 1134 4.041 1400 (600)
$^{208}$Pb 4.609 2781 4.08 3180 (160)
------------ ------- ------- ------------ ------------
: Energies in MeV and B(E2) of the first $2^+$ states calculated with D1S’, compared with experimental data ([@Zie] for $^{208}$Pb and [@OR] for $^{132}$Sn).[]{data-label="tab2D1Spp"}
------------ ------ ------- ------ ------- ------ ------- ------- -------
$2^+_1$ (1) (2) (3) (tot)
E B(E2) E B(E2) E B(E2) E B(E2)
$^{78}$Ni 3.53 257 2.84 456 3.43 271 2.73 466
$^{100}$Sn 4.64 1103 3.95 1552 4.48 1041 3.84 1431
$^{132}$Sn 4.61 775 4.04 1182 4.53 770 3.97 1134
$^{208}$Pb 5.15 2305 4.65 3145 5.09 2123 4.61 2781
------------ ------ ------- ------ ------- ------ ------- ------- -------
: Energies in MeV and B(E2) of $2^+_1$ states obtained by leaving out from the D1S’ p-h interaction: (1) the spin-orbit and the Coulomb terms, (2) the Coulomb term, (3) the spin-orbit term, (tot) no term.[]{data-label="tab2p"}
IVGDR D1S’ 79$A^{-1/3}$ Exp.
------------ ------- -------------- -------
$^{78}$Ni 20.31 18.49
$^{100}$Sn 19.98 17.02
$^{132}$Sn 18.33 15.52
$^{208}$Pb 16.50 13.33 13.43
: Mean IVGDR energies in MeV calculated with D1S’, compared with the empirical $79 A^{-1/3}$ law and with experiment [@refexp].[]{data-label="tabDip"}
------------------------ ------- ------ ------- ------ ------- ------ ------- ------ ------
$^{208}Pb$ (1) (2) (3) (tot) Exp.
$<E>$ EWSR $<E>$ EWSR $<E>$ EWSR $<E>$ EWSR EWSR
$\left[ 0- 140\right]$ 15.88 1.63 15.70 1.62 16.71 1.59 16.50 1.59 1.78
$\left[ 0- 20\right]$ 15.10 1.41 15.31 1.47 15.83 1.33 15.86 1.42
$\left[ 10- 20\right]$ 15.20 1.39 15.17 1.49 15.90 1.32 15.95 1.41 1.37
------------------------ ------- ------ ------- ------ ------- ------ ------- ------ ------
: Mean IVGDR energies $<E>$ in MeV and EWSR in TRK units in $^{208}$Pb for three energy integration intervals (in MeV), calculated by leaving out from the D1S’ p-h interaction: (1) the spin-orbit and the Coulomb terms, (2) the Coulomb term, (3) the spin-orbit term, (tot) no term. Experimental EWSR values are from Refs. [@refexp] and [@refexp26].[]{data-label="newtable"}
$1^-_{sp}$ T=0 (1) (2) (3) (tot)
---------------- ------------ ----------- --------- -------
$^{132}$Sn $\in \Im$ $\in \Im$ 2205.78 4.26
$^{208}$Pb $\in \Im $ $\in \Im$ 1605.19 2.29
: Energies (in keV) of the isoscalar spurious $1^-$ state obtained by leaving out from the D1S’ p-h interaction: (1) the spin-orbit and the Coulomb terms, (2) the Coulomb term, (3) the spin-orbit term, (tot) no term. The symbol $\in \Im$ indicates an imaginary RPA eigenvalue.[]{data-label="tab1"}
------------ ------ ------- ------ ------- ------ ------- ------- ------- ----- -------------
$3^-_1$ (1) (2) (3) (tot) Exp
E B(E3) E B(E3) E B(E3) E B(E3) E B(E3)
$^{78}$Ni 7.95 0.170 7.80 0.221 7.87 0.181 7.70 0.231
$^{100}$Sn 7.26 0.130 6.95 0.149 7.13 0.128 6.82 0.147
$^{132}$Sn 5.78 0.123 5.60 0.139 5.72 0.124 5.53 0.140
$^{208}$Pb 3.55 0.725 3.38 0.782 3.57 0.677 3.39 0.727 2.6 0.611 (120)
------------ ------ ------- ------ ------- ------ ------- ------- ------- ----- -------------
: Energies in MeV of the first $3^-$ state and corresponding B(E3) in $10^{6}e^2 fm^6$ calculated by leaving out from the D1S’ p-h interaction: (1) the spin-orbit and the Coulomb terms, (2) the Coulomb term, (3) the spin-orbit term, (tot) no term. Experimental data from Ref. [@spear] is also listed.[]{data-label="tab3D1Sp"}
---
abstract: 'We present Clusterrank, a new algorithm for identifying dispersed astrophysical pulses. Such pulses are commonly detected from Galactic pulsars and rotating radio transients (RRATs), which are neutron stars with sporadic radio emission. [ More recently, isolated, highly dispersed pulses dubbed fast radio bursts (FRBs) have been identified as the potential signature of an extragalactic cataclysmic radio source distinct from pulsars and RRATs.]{} Clusterrank helped us discover 14 pulsars and 8 RRATs in data from the Arecibo 327 MHz Drift Pulsar Survey (AO327). The new RRATs have DMs in the range $23.5 - 86.6$ pc cm$^{-3}$ and periods in the range $0.172 - 3.901$ s. The new pulsars have DMs in the range $23.6 - 133.3$ pc cm$^{-3}$ and periods in the range $1.249 - 5.012$ s, and include two nullers and a mode-switching object. We estimate an upper limit on the all-sky FRB rate of $10^5$ day$^{-1}$ for bursts with a width of 10 ms and flux density $\gtrsim 83$ mJy. The DMs of all new discoveries are consistent with a Galactic origin. In comparing statistics of the new RRATs with sources from the RRATalog, we find that both sets are drawn from the same period distribution. In contrast, we find that the period distribution of the new pulsars is different from the period distributions of canonical pulsars in the ATNF catalog or pulsars found in AO327 data by a periodicity search. This indicates that Clusterrank is a powerful complement to periodicity searches and uncovers a subset of the pulsar population that has so far been underrepresented in survey results and therefore in Galactic pulsar population models.'
author:
- 'J. S. Deneva$^{1,*}$, K. Stovall$^{2}$, M. A. McLaughlin$^{3}$, M. Bagchi$^{3,4}$, S. D. Bates$^{4}$, P. C. C. Freire$^{5}$, J. G. Martinez$^{5,6}$, F. Jenet$^{6}$, N. Garver-Daniels$^{3}$'
title: New Discoveries from the Arecibo 327 MHz Drift Pulsar Survey Radio Transient Search
---
Introduction
============
The field of fast radio transient detection as a means of discovering new radio sources first came to the forefront when [@McLaughlin06] found 11 such transients in archival Parkes Multibeam Survey data. They were called Rotating Radio Transients (RRATs) as the differences between pulse arrival times for each object were found to be multiples of one interval, the rotation period. RRAT rotation periods are on the order of a few hundreds to a few thousands of milliseconds, consistent with rotating neutron stars. The average RRAT rotation period is larger than the average normal pulsar rotation period. However, for some RRATs detected in only one or two observations the published period may be a multiple of the actual period because of the small number of pulses detected. Furthermore, there are observational selection effects which result in pulsars with longer periods being detected with higher signal-to-noise in single-pulse searches [@McLaughlin03].
Unlike normal pulsars, RRATs appear not to emit a pulse on every rotation, as evidenced by the fact that these objects were missed by Fast Fourier Transform-based periodicity searches. As more RRATs were discovered (see the RRATalog[^1]) and more follow-up observations accumulated, the diversity in emission patterns has made it increasingly likely that different processes are responsible for the intermittency of what initially appeared as one new class of radio sources.
Some objects discovered by single-pulse searches are slow pulsars selected against in surveys with short integration times where there are not enough pulses for a detection to be made via periodicity search. Some RRATs discovered e.g. at 1.4 GHz appear as normal pulsars when observed at a lower frequency [@Deneva09]. This is consistent with the explanation of [@Weltevrede06] that in such cases the intermittency is due to a pulse intensity distribution with a long high-flux tail, such that as the pulsar flux density drops off at higher frequencies, only the brightest pulses remain detectable. RRATs emitting short sequences of pulses on consecutive rotations may be extreme nullers and/or old pulsars close to the death line, where the radio emission mechanism gradually begins to turn off (@Zhang07, @Burke10). In other cases, a single pulse or a single short sequence of pulses is detected and the RRAT is never seen again despite many follow-up observations [@Deneva09]. These detections are consistent with cataclysmic events or a mechanism which generates rare conditions in an otherwise quiescent neutron star magnetosphere. [@Cordes08] argue that this emission pattern can be explained by sporadic accretion of debris from a circumpulsar asteroid belt.
All RRATs known to date have dispersion measures (DMs, the integrated column density of ionized gas along the line of sight) consistent with a Galactic origin. [@Lorimer07] reported a 1.4 GHz Parkes detection of a fast radio transient outside of the Galactic plane with a DM significantly exceeding the estimated contribution of Galactic ionized gas along the line of sight. More Parkes detections of transients with similar properties were made by [@Thornton13], [@Petroff15], and [@Ravi15], the latter from a targeted observation of the Carina dwarf spheroidal galaxy. [@Spitler14] detected a transient with similar properties using Arecibo, also at 1.4 GHz. [Apart from their high DMs, most of these fast radio bursts (FRBs) differ from RRATs in that they are only detected with one pulse in their discovery observations. Despite many follow-up observations, so far repeat pulses have been definitively detected only from the Arecibo FRB [@Spitler16].]{} The combination of seemingly extragalactic origin and, until recently, the lack of repeat bursts has suggested cataclysmic events producing a single coherent radio pulse detectable to Gpc distances, such as coalescing neutron stars [@Hansen01], evaporating black holes [@Rees77], or collapsing supramassive neutron stars [@Falcke14]. Dissenting views have attributed FRBs to Galactic flaring stars (@Loeb14, @Maoz15) and atmospheric phenomena [@Kulkarni14].
Understanding the nature of fast radio transients is important in figuring out how their progenitors fit into the evolution of our Galaxy and galaxies in general. They may even provide an independent test of various evolutionary scenarios. For example, if RRATs are assumed to be intermittent since formation and comprise a neutron star population separate from normal pulsars, the Galactic core-collapse supernova rate is too low to account for both populations [@kk08]. This is not the case if RRATs represent a stage in the evolution of pulsars, even though they may outnumber other pulsar types since their sporadic pulses make them less likely to be discovered by pulsar surveys. All FRBs known to date have been found at 1.4 GHz even though surveys conducted at 350 MHz with the Green Bank telescope search DMs up to 1000 pc cm$^{-3}$ [@Karako15]. It is still unknown whether that is due only to selection effects or has intrinsic causes as well.
In this paper we report the results of running Clusterrank, a new algorithm for identifying astrophysical radio transients, on data collected by the Arecibo 327 MHz Drift Pulsar Survey (AO327). Section \[sec\_obs\] describes the AO327 survey setup and observations, Section \[sec\_sp\] gives details on the single-pulse search code whose output Clusterrank operates on, and Section \[sec\_clusterrank\] focuses on Clusterrank implementation and performance. Sections \[sec\_psrs\] and \[sec\_rrats\] present new pulsars and RRATs, respectively, and Section \[sec\_stats\] analyzes the statistics of both types of discoveries. Finally, Section \[sec\_frblimits\] places limits on the FRB population.
AO327 Survey Observations {#sec_obs}
=========================
The AO327 drift survey has been running since 2010 during Arecibo telescope downtime or unassigned time. It aims to search the entire Arecibo sky (declinations from $-1{\ifmmode^{\circ}\else$^{\circ}$\fi}$ to $38{\ifmmode^{\circ}\else$^{\circ}$\fi}$) for pulsars and transients at 327 MHz. Phase I of the survey covers declinations from $-1{\ifmmode^{\circ}\else$^{\circ}$\fi}$ to $28{\ifmmode^{\circ}\else$^{\circ}$\fi}$, and Phase II will cover the remainder of the sky accessible to Arecibo. Under normal operating conditions, AO327 does not get observing time within $\pm 5{\ifmmode^{\circ}\else$^{\circ}$\fi}$ of the Galactic plane. Frequencies higher than 327 MHz are more suitable for pulsar and transient searches within the Galactic plane because of significant dispersion and scattering due to Galactic ionized gas. However, telescope time occasionally becomes available on short notice due to technical problems that render regularly scheduled projects unable to observe. AO327 is a filler project in such cases, and some of its discoveries were made during unscheduled encroachments on the Galactic plane.
[In this paper we present single-pulse search results from analyzing 882 h of data taken with the Arecibo 327 MHz receiver and the Mock spectrometer backend, up until March 2014, when AO327 began using the newer PUPPI backend. An analysis of PUPPI data will be presented in a future paper.]{} The effective integration time is $T_{\rm obs} = 60$ s for AO327 observations, corresponding to the drift time through the beam at 327 MHz. For Mock observations, the number of channels is $N_{\rm ch} = 1024$, the bandwidth is $\Delta\nu = 57$ MHz, the sampling time is $dt = 125~\mu$s, the receiver temperature is $T_{\rm rec} = 115$ K, and the gain is $G = 11$ K/Jy.
Figure \[fig\_smin\] shows the intrinsic minimum detectable flux density $S_{\rm int,min}$ vs. DM for AO327 using the Mock spectrometer, for the Green Bank North Celestial Cap survey (GBNCC, @Stovall14), and for the GBT350 drift survey [@Lynch13]. According to the radiometer equation [applied to single pulse detection [@Cordes03]]{} $$S_{\rm int,min} = \frac{SNR_{\rm min}\left(T_{\rm rec} + T_{\rm sky}\right)}{G\sqrt{N_{\rm pol}\,\Delta\nu\,W_{\rm obs}}}\left(\frac{W_{\rm obs}}{W_{\rm int}}\right), \label{eqn_smin}$$ where $SNR_{\rm min} = 6$ is the detection threshold, the sky temperature $T_{\rm sky} = 50$ K [@Haslam82] (appropriate for a source out of the Galactic plane), $N_{\rm pol} = 2$ is the number of summed polarizations, $W_{\rm int}$ is the intrinsic pulse width, and $W_{\rm obs}$ is the observed broadened pulse width. The two pulse width quantities are related by $$W_{\rm obs} = \left(W_{\rm int}^2 + dt^2 + \tau_s^2 + \Delta t_{\rm DM,1ch}^2\right)^{1/2},$$ where $\tau_s$ is the scattering broadening estimated from Eqn. 7 in [@Bhat04], and $\Delta t_{\rm DM,1ch}$ is the dispersion delay across a channel width. [@Deneva13] present a more detailed discussion of AO327 search volume, sensitivity to periodic sources, and comparisons with other pulsar surveys as well as between the different backends that have been used in AO327 observations.
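As a rough illustration of how these two relations combine, the following Python sketch (not the survey's actual pipeline; the scattering time `tau_s` is treated as an input rather than evaluated from the fit of [@Bhat04]) evaluates the observed width and the minimum detectable intrinsic flux density for the Mock setup quoted above:

```python
import numpy as np

# AO327/Mock parameters quoted in the text
G = 11.0            # gain, K/Jy
T_rec = 115.0       # receiver temperature, K
T_sky = 50.0        # sky temperature, K
bw = 57e6           # bandwidth, Hz
dt = 125e-6         # sampling time, s
snr_min = 6.0       # detection threshold
n_pol = 2           # summed polarizations (assumed)

def s_int_min(w_int, tau_s, dt_dm_1ch):
    """Minimum detectable intrinsic flux density (Jy) for a single pulse."""
    w_obs = np.sqrt(w_int**2 + dt**2 + tau_s**2 + dt_dm_1ch**2)
    return (snr_min * (T_rec + T_sky) / (G * np.sqrt(n_pol * bw * w_obs))
            * (w_obs / w_int))

# Example: 5 ms intrinsic pulse, 10 ms scattering, 1 ms intra-channel smearing
print(s_int_min(5e-3, 10e-3, 1e-3))
```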
![The minimum detectable single-pulse flux density $S_{\rm min}$ vs. DM for AO327 using the Mock spectrometer (red), GBNCC (green), and GBT350 (blue), for several intrinsic pulse widths $W_{\rm int}$. The inflection point in each curve corresponds to the transition from dispersion-limited to scattering-limited detection regime. \[fig\_smin\]](fig1.eps){width="\textwidth"}
Single-Pulse Search {#sec_sp}
===================
Data are dedispersed with 6358 trial DMs in the range $0 - 1095$ pc cm$^{-3}$. The spacing between successive trial DMs increases from 0.02 to 1.0 pc cm$^{-3}$ such that at high DMs the smearing due to a pulsar’s actual DM being halfway between two trial DMs is much smaller than the scattering broadening estimated from the empirical fit of [@Bhat04]. Because scattering broadening dwarfs the sampling time even at moderate DMs, during dedispersion data are downsampled by a factor that increases as the trial DM increases (Table \[tab\_dedisp\]).
---------------- ---------------- ---------------- --------------------
Low DM High DM DM step $N_{\rm downsamp}$
(pc cm$^{-3}$) (pc cm$^{-3}$) (pc cm$^{-3}$)
0.00 36.94 0.02 1
36.96 58.35 0.03 2
58.38 99.13 0.05 4
99.18 201.08 0.10 8
201.18 482.88 0.30 16
483.18 890.68 0.50 32
891.18 1095.18 1.00 64
---------------- ---------------- ---------------- --------------------
: The step between successive trial DMs and the downsampling factors used for different DM ranges in processing AO327 Mock data. As the trial DM increases, uncorrectable scattering broadening begins to dominate sensitivity. The progressively increasing DM spacing and downsampling factor are chosen such that computational efficiency is maximized for each DM range while the increased dispersion smearing is still negligible compared to scattering broadening. \[tab\_dedisp\]
[We use the PRESTO[^2] tool [single\_pulse\_search.py]{}]{} to search each dedispersed time series for pulses. Each radio pulse, astrophysical or terrestrial, is typically detected as a cluster of events above a signal-to-noise threshold at multiple closely spaced trial DMs. We use the word “event” to refer to such a detection at a single trial DM. The one-dimensional time series are flattened with a piecewise linear fit where each piece is 1000 bins long. Then the time series are convolved with a set of boxcar functions with widths ranging from 1 to 300 bins. Because the time series may already have been downsampled during dedispersion, the same boxcar function may correspond to different absolute widths in seconds for different time series. We cap the width of boxcars such that they do not exceed 0.1 s for any time series. This corresponds to the maximum pulse duration detectable in our search. Observed RRAT and FRB pulse widths range from a fraction of a millisecond to a few tens of milliseconds[^3]$^{,}$[^4]. We construct a list of events with a signal-to-noise ratio ($SNR$) $\geq 5$ for each time series, where $$SNR = \frac{\sum_{i}\left(S_i - S_0\right)}{\sigma\sqrt{W_{\rm box}}}.$$ The sum is over successive bins $S_i$ covered by the boxcar function, $S_0 \approx 0$ is the baseline level after flattening, $\sigma \approx 1$ is the root-mean-square noise after normalization, and $W_{\rm box}$ is the boxcar width in number of bins. This definition of $SNR$ has the advantage that it gives approximately the same result regardless of the downsampling factor used for the time series, as long as the pulse is still resolved. If there are several events with $SNR > 5$ detected with boxcars of different widths from the same portion of data, only the event with the highest $SNR$ is retained in the final list.
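A minimal Python sketch of this boxcar matched-filter search (a simplified stand-in for what [single\_pulse\_search.py]{} does, assuming the time series `ts` has already been flattened and normalized to zero baseline and unit rms):

```python
import numpy as np

def boxcar_events(ts, widths=(1, 2, 4, 8, 16, 32, 64, 150), threshold=5.0):
    """Return (bin, width, snr) for the best-width detection at each bin."""
    best_snr = np.full(ts.size, -np.inf)
    best_width = np.zeros(ts.size, dtype=int)
    for w in widths:
        kernel = np.ones(w)
        # Matched-filter SNR: boxcar sum normalized by sqrt(width) (rms = 1).
        snr = np.convolve(ts, kernel, mode="same") / np.sqrt(w)
        better = snr > best_snr
        best_snr[better] = snr[better]
        best_width[better] = w
    hits = np.where(best_snr >= threshold)[0]
    return [(i, best_width[i], best_snr[i]) for i in hits]

# Example: a weak 8-bin pulse injected into Gaussian noise
rng = np.random.default_rng(0)
ts = rng.standard_normal(4096)
ts[2000:2008] += 2.0
print(boxcar_events(ts)[:5])
```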
The scattering broadening observed for most known FRBs (DMs $\sim 500 - 1000$ pc cm$^{-3}$) is $\lesssim 1$ ms at 1400 MHz. Assuming a Kolmogorov scattering spectrum such that $\tau_s \propto f^{-4}$, this corresponds to a scattering time of $\sim 300$ ms at 327 MHz. The widest boxcar template PRESTO uses for event detection in dedispersed time series is 150 bins. At the maximum downsampling factor of 64, with our sampling time of $81.92~\mu$s, this corresponds to a template width of $\sim 800$ ms. Assuming an intrinsic pulse width of 5 ms, a 6-$\sigma$ detection of a pulse with $W_{\rm obs} = 800$ ms corresponds to $S_{\rm int} = 1.5$ Jy, and a similar detection of a pulse with $W_{\rm obs} = 300$ ms corresponds to $S_{\rm int} = 0.9$ Jy.
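The arithmetic behind the scattering estimate and the widest effective template can be checked in a couple of lines (plain Python, using only values quoted in this paragraph):

```python
tau_1400 = 1e-3                        # s, scattering time at 1400 MHz
tau_327 = tau_1400 * (1400 / 327)**4   # nu^-4 scaling assumed in the text
widest_template = 150 * 64 * 81.92e-6  # widest boxcar at max downsampling, s
print(tau_327, widest_template)        # ~0.34 s and ~0.79 s
```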
The event list produced by PRESTO is used to make plots like Figure \[fig\_0156spplot\], which are then inspected by eye [to look for clusters of events indicative of dispersed pulses. Because the spacing between trial DMs changes significantly within the full range of DMs used in the search (Table \[tab\_dedisp\]), single-pulse search plots are made for four subsets of the full range of trial DMs.]{} AO327 data are processed in 1-minute “beams”, corresponding to the maximum transit time through the Arecibo beam at 327 MHz. To date, we have processed $\sim 882$ h of Mock drift data, resulting in a total of 423360 single-pulse search plots. Because AO327 is a blind all-sky survey, the vast majority of these plots contain only events due to Gaussian noise or radio frequency interference (RFI). Since human inspection of all single-pulse search plots would require an excessive amount of time, ideally we want this task to be reliably accomplished by an algorithm able to distinguish astrophysical dispersed pulses from terrestrial RFI or noise, in a constantly changing RFI environment. Below we describe such an algorithm, called Clusterrank, that enabled us to quickly discover 22 new pulsars and RRATs.
![Single-pulse search plot of the discovery observation of RRAT J0156+04. Top: Histograms of the number of events vs. DM (left) and event $SNR$ vs. DM (right). Bottom: Events are plotted vs. DM and time. Larger marker sizes correspond to higher $SNR$. Events belonging to clusters identified by Clusterrank are shown in red if the cluster $R^2 > 0.8$, [magenta if $0.7 < R^2 \leq 0.8$, cyan if $0.6 < R^2 \leq 0.7$, green if $0.5 < R^2 \leq 0.6$, and blue if $R^2 \leq 0.5$. (There are no clusters with $0.7 < R^2 \leq 0.8$ or $0.5 < R^2 \leq 0.6$ in this case.)]{} The two clusters of events shown in red correspond to the two superimposed peaks in the $SNR$ vs. DM histogram on upper right. The plot title identifies that the cluster whose $SNR$ vs. DM signature most closely matches Eqn. \[eqn\_lsqfit\] has $R^2 = 0.97$, the arrival time of the highest-$SNR$ event within the cluster is $t = 13.07$ s since the start of the data span shown, and the best-fit DM is 27.46 pc cm$^{-3}$. \[fig\_0156spplot\]](fig2.eps){width="\textwidth"}
Clusterrank {#sec_clusterrank}
===========
Clusterrank[^5] operates on the event lists produced by the PRESTO single-pulse search for a 1-minute span of AO327 drift data. Events are sorted by DM and time and clusters of events are identified such that the DM and time gaps between sorted events do not exceed a threshold. We have set the maximum acceptable DM gap to 1 pc cm$^{-3}$, the largest spacing in our trial DM list. We use a maximum acceptable time gap corresponding to the product of the raw data time resolution and the largest boxcar function width used in the single-pulse search: 0.125 ms $\times$ 150 samples $\approx 19$ ms. The minimum number of events per cluster that would trigger further processing is set to 50. The DM gap, time gap, and minimum events per cluster are tunable parameters and the values chosen for processing the AO327 Mock data set strike a balance between detecting as many clusters likely to be caused by astrophysical pulses as possible and avoiding further processing of the excessive number of smaller clusters occurring randomly due to Gaussian noise fluctuations. Similar to PRESTO’s [single\_pulse\_search.py]{}, Clusterrank considers and plots four separate DM ranges: $0 - 40$ pc cm$^{-3}$, $30 - 120$ pc cm$^{-3}$, $100 - 500$ pc cm$^{-3}$, and $500 - 1000$ pc cm$^{-3}$.
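A minimal sketch of this clustering step is given below (hypothetical helper, not the released Clusterrank code, and only one possible reading of the grouping rule; events are assumed to be (DM, time, SNR) tuples taken from the PRESTO event lists):

```python
def cluster_events(events, max_dm_gap=1.0, max_t_gap=0.019, min_events=50):
    """Group events into clusters separated by gaps in DM and time.

    events: list of (dm, t, snr) tuples; gaps in pc cm^-3 and seconds.
    """
    events = sorted(events)              # sort by DM, then time
    clusters, current = [], []
    for ev in events:
        if current and (ev[0] - current[-1][0] > max_dm_gap or
                        abs(ev[1] - current[-1][1]) > max_t_gap):
            clusters.append(current)
            current = []
        current.append(ev)
    if current:
        clusters.append(current)
    # Only clusters with enough events are passed on to the fitting stage.
    return [c for c in clusters if len(c) >= min_events]
```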
The determination of how likely a cluster is to indicate the presence of a dispersed pulse in the data hinges on the analytical expression describing how a pulse’s amplitude in the dedispersed time series changes as the trial DM varies with respect to the actual pulsar DM. [@Cordes03] derive the ratio of the peak flux density of a Gaussian pulse dedispersed with a DM error $\delta$DM to the peak flux density if the same pulse is dedispersed with no DM error. [We substitute event $SNR$ for the peak flux density. The DM error $\delta{\rm DM}_{\rm i} = {\rm DM}_{\rm i} - {\rm DM}_{\rm psr}$, where the index $i$ refers to an event in the cluster. If $SNR_{\rm psr}$ is the pulse SNR for $\delta{\rm DM}=0$, the resulting equations are $$\frac{SNR(\delta{\rm DM}_{\rm i})}{SNR_{\rm psr}} = \frac{\sqrt{\pi}}{2}\,\zeta^{-1}\,{\rm Erf}(\zeta), \label{eqn_lsqfit}$$ where $$\zeta = 6.91\times 10^{-3}\,\delta{\rm DM}_{\rm i}\,\frac{\Delta\nu_{\rm MHz}}{W_{\rm ms}\,\nu_{\rm GHz}^3}. \label{eqn_zeta}$$ Here $\Delta\nu_{\rm MHz}$ is the bandwidth in MHz, $W_{\rm ms}$ is the observed pulse width in ms, and $\nu_{\rm GHz}$ is the center observing frequency in GHz. We perform least-squares fitting using the [Optimize.leastsq]{} module of SciPy and the recorded $SNR$s and DMs of the events in a cluster. The free parameters in the fit are $W_{\rm ms}$, ${\rm DM_{psr}}$, and $SNR_{\rm psr}$. ]{}The initial guess values passed to the least-squares fitting function are 10 ms as the width, and the DM and $SNR$ of the event with the highest $SNR$ in the cluster.
Due to pulse substructure, noise, and the imperfect selection of a best-width boxcar filter for pulses with low $SNR$, a cluster of events often contains outliers that deviate significantly from the $SNR$ vs. trial DM dependence predicted by Eqn. \[eqn\_lsqfit\] and Eqn. \[eqn\_zeta\]. We perform three iterations of identifying outliers, removing them from the cluster, and redoing the least-squares fit with the remaining events. [An event is rejected as an outlier if $|SNR(\delta {\rm DM_i}) - SNR_{\rm i}| > |SNR(\delta {\rm DM_i}) - 5|/2$. The baseline SNR difference of 5 was chosen to correspond to the minimum SNR for which events are recorded. In absolute terms, the rejection criterion is more stringent for events further away in DM from the peak in SNR vs. DM space. This effectively rejects the flat tails at SNR = 5 exhibited by many clusters.]{} Figures \[fig\_0156pulse1\] and \[fig\_0544pulse1\] show the resulting improvement in the final fit for two clusters of events containing outliers and the effect outliers can have on the quality of the initial fit. [The bottom panels of the two figures show the sloping signature of the event clusters in time-DM space. This is due to dispersion under- or overcorrection away from the actual RRAT DM smearing the pulse and shifting its peak in the dedispersed time series to a later or earlier time, respectively.]{}
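A condensed sketch of this fit-and-reject loop is given below (hypothetical code, not the released Clusterrank implementation; it uses the modern `scipy.optimize.least_squares` interface rather than [Optimize.leastsq]{}, and `dms`, `snrs` are arrays holding the events of one cluster):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.special import erf

BW_MHZ, FREQ_GHZ = 57.0, 0.327   # AO327 Mock bandwidth and center frequency

def model_snr(params, dms):
    """Predicted SNR vs. trial DM (Cordes & McLaughlin 2003)."""
    w_ms, dm_psr, snr_psr = params
    zeta = 6.91e-3 * (dms - dm_psr) * BW_MHZ / (w_ms * FREQ_GHZ**3)
    zeta = np.where(np.abs(zeta) < 1e-12, 1e-12, zeta)   # avoid 0/0 at peak
    return snr_psr * np.sqrt(np.pi) / 2.0 * erf(zeta) / zeta

def fit_cluster(dms, snrs, n_iter=3):
    """Least-squares fit with iterative outlier rejection."""
    peak = np.argmax(snrs)
    params = np.array([10.0, dms[peak], snrs[peak]])      # initial guess
    keep = np.ones(dms.size, dtype=bool)
    for _ in range(n_iter + 1):
        res = least_squares(lambda p: model_snr(p, dms[keep]) - snrs[keep],
                            params)
        params = res.x
        pred = model_snr(params, dms)
        # Reject events far from the fit; more stringent away from the peak.
        keep = np.abs(pred - snrs) <= np.abs(pred - 5.0) / 2.0
    return params, keep
```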
![Clusterrank fit for the brighter pulse of RRAT J0156+04 from the discovery observation shown in Figure \[fig\_0156spplot\]. Top: a dashed curve shows the best fit of $SNR$ vs. DM without outlier rejection. A solid curve shows the best fit after three iterations of identifying outliers, removing them from the cluster, and redoing the fit. Good points used in the final fit are shown in red, and outliers are shown in black. A vertical line is drawn through the event with the highest $SNR$, whose $SNR$ and DM are used as seeds for the initial least-squares fit. There are several outliers at low $SNR$ close to the best-fit DM, indicating a two-peak pulse shape, with one component significantly weaker than the other. [Bottom: the structure of the cluster is shown in DM-time space.]{} \[fig\_0156pulse1\]](fig3.eps){width="70.00000%"}
![Clusterrank fit for the single pulse in the discovery observation of RRAT J0544+20. Top: a dashed curve shows the best fit of $SNR$ vs. DM without outlier rejection. A solid curve shows the best fit after three iterations of identifying outliers, removing them from the cluster, and redoing the fit. Good points used in the final fit are shown in red, and outliers are shown in black. A vertical line is drawn through the event with the highest $SNR$, whose $SNR$ and DM are used as seeds for the initial least-squares fit. There are several outliers at low $SNR$ close to the best-fit DM, indicating a two-peak pulse shape, with one component significantly weaker than the other. Bottom: the structure of the cluster is shown in DM-time space. \[fig\_0544pulse1\]](fig4.eps){width="70.00000%"}
Test Statistic {#sec_ts}
--------------
As a measure of the goodness of fit for each cluster we use the coefficient of determination of the final fit of the cluster events’ $SNR$ vs. DM, $$R^2 = 1 - \frac{\sum_{i}\left(SNR_i - \widehat{SNR}_i\right)^2}{\sum_{i}\left(SNR_i - \overline{SNR}\right)^2},$$ where $SNR_i$ is the $i$-th event’s $SNR$, $\widehat{SNR}_i$ is the $i$-th event’s predicted $SNR$ based on Eqn. \[eqn\_lsqfit\], and $\overline{SNR}$ is the mean $SNR$ for the cluster. The number of events can vary widely from one cluster to another and we find that in this situation $R^2$ is a better test statistic than the reduced $\chi^2$ or the root-mean-square residual from the least-squares fit. Hereafter we use the term “score” to refer to the $R^2$ value of a cluster.
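Continuing the sketch above, the score of a fitted cluster could be computed as follows (again hypothetical code; `pred` is the model $SNR$ evaluated at the events kept in the final fit):

```python
import numpy as np

def cluster_score(snrs, pred):
    """Coefficient of determination R^2 of the final SNR-vs-DM fit."""
    ss_res = np.sum((snrs - pred) ** 2)
    ss_tot = np.sum((snrs - np.mean(snrs)) ** 2)
    return 1.0 - ss_res / ss_tot

# A cluster is flagged for human inspection when its score exceeds 0.8.
```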
After all clusters in one of the considered DM ranges are fitted, the highest score for that DM range and beam is recorded. Plots are viewed in decreasing order of the recorded best score values. The range of possible values for $R^2$ is from zero (no correlation between cluster events and fit) to unity (perfect correlation). We find that pulses from known pulsars that would be unambiguously identified as such on visual inspection when viewed in isolation from other pulses of the same pulsar are almost always fitted with $R^2 > 0.9$, and the remainder are fitted with $0.8 < R^2 < 0.9$. We therefore adopt $R^2 > 0.8$ as the threshold for visual inspection of plots.
We note that the score is independent of the DM span of the cluster or the magnitude of event $SNR$s in the cluster. A weak pulse conforming well to Eqn. \[eqn\_lsqfit\] will have a better score than a bright pulse that does not. The score is also independent of the number of events in the cluster, as long as it is above the minimum required for the cluster to be fitted. Nor does the score depend on a cluster containing all events generated by the same pulse, as long as the cluster is well fitted by Eqn. \[eqn\_lsqfit\]. This is the most significant difference between Clusterrank and codes like RRATtrap [@Karako15], which [rely on the event with the highest SNR in a cluster to be present near the middle of the DM span of the cluster]{}. Figures \[fig\_0630spplot\], \[fig\_0630pulse1\], and \[fig\_0630pulse2\] show the discovery of PSR J0630+19 made via fitting of two separate clusters corresponding to the two shoulders of a pulse in $SNR$ vs. DM space.
The PRESTO single-pulse search, which constructs the event lists that serve as input to Clusterrank, by default does not search blocks in the dedispersed time series containing very bright, broad pulses typical of RFI. This approach is very effective in reducing the number of recorded events due to terrestrial sources, which can be overwhelming in some beams. However, this RFI excision scheme sometimes has an unintended effect on bright astrophysical pulses such that the resulting signature in $SNR$ vs. DM space is two shoulders with a missing peak in-between. Unlike RRATtrap, the ability of Clusterrank to detect dispersed pulses is unaffected by this, [even if the gap in DM is large enough that the two shoulders are processed as separate clusters.]{}
![Single-pulse search plot of the discovery observation of PSR J0630+19. Top: Histograms of the number of events vs. DM (left) and event $SNR$ vs. DM (right). Bottom: Events are plotted vs. DM and time. Larger marker sizes correspond to higher $SNR$. Events belonging to clusters identified by Clusterrank are shown in red if the cluster $R^2 > 0.8$. [In this case, the pulse yields three clusters of events at $t \sim 35$ s, with the cluster corresponding to the peak in $SNR$ vs. DM space consisting of only two events and therefore not fitted.]{} The discovery of this pulsar was made based on the fits of the two shoulders of the $SNR$ vs. DM signature of the pulse, detected as two separate clusters with scores of 0.96 and 0.81 (Figure \[fig\_0630pulse1\] and Figure \[fig\_0630pulse2\]). [The clusters at $DM \sim 53$ pc cm$^{-3}$, $t \sim 13$ and 52 s were not fitted because they contain too few events. They are unlikely to be pulses from PSR J0630+19 since their DM deviates significantly from the pulsar DM of 48 pc cm$^{-3}$.]{} \[fig\_0630spplot\]](fig5.eps){width="\textwidth"}
![Clusterrank fit resulting in the discovery of PSR J0630+19. Top: $SNR$ vs. DM of the cluster with initial fit and final fit after removal of outliers. Bottom: the highly irregular structure of the cluster in DM-time space. In this case, the two shoulders of the $SNR$ vs. DM signature of the pulse resulted in two separate clusters of events. \[fig\_0630pulse1\]](fig6.eps){width="\textwidth"}
![Clusterrank fit resulting in the discovery of PSR J0630+19. Top: $SNR$ vs. DM of the cluster with initial fit and final fit after removal of outliers. Bottom: the highly irregular structure of the cluster in DM-time space. In this case, the two shoulders of the $SNR$ vs. DM signature of the pulse resulted in two separate clusters of events. \[fig\_0630pulse2\]](fig7.eps){width="\textwidth"}
RFI Rejection {#sec_rfi}
-------------
PRESTO attempts to identify and remove RFI before making the event lists that Clusterrank operates on. Narrow-band and impulsive non-dispersed wideband signals are identified in the raw data and a time-frequency mask is constructed by PRESTO’s tool [rfifind]{}. During dedispersion, values of data points covered by the mask are replaced by a local average for that frequency channel. The PRESTO [single\_pulse\_search.py]{} ignores blocks in the dedispersed time series containing bright, broad pulses, as described above. However, even after these RFI excision steps, there is still a significant number of events due to RFI in many of the event lists that serve as input to Clusterrank. We identify RFI in several ways. First, if the final fit to a cluster yields a negative best-fit DM or $W_{\rm ms}$, the score for that cluster is set to zero. Second, if a cluster is not fit by a negative DM or $W_{\rm ms}$ but the best-fit DM is less than 1 pc cm$^{-3}$, the score for that cluster is set to zero. Figures \[fig\_rfipulse1\] and \[fig\_rfipulse2\] show two typical clusters with best-fit scores of 0.96 and 0.87 which are identified as RFI by these conditions.
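These two vetoes amount to a simple filter on the best-fit parameters, sketched below (hypothetical helper following the rules just described):

```python
def apply_rfi_veto(score, best_dm, best_width_ms):
    """Zero the score of clusters whose best fit is unphysical or at DM ~ 0."""
    if best_dm < 0 or best_width_ms < 0:      # non-physical fit
        return 0.0
    if best_dm < 1.0:                         # consistent with terrestrial RFI
        return 0.0
    return score
```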
![Clusterrank fit of a cluster due to terrestrial RFI. Top: $SNR$ vs. DM of the cluster with initial fit and final fit after removal of outliers. Bottom: the structure of the cluster in DM-time space. The score of this cluster calculated from the fit is 0.95 and would have caused the single-pulse search plots for this beam to be selected for human inspection. However, the best-fit DM is negative, a non-physical result, and the score is set to zero. \[fig\_rfipulse1\]](fig8.eps){width="\textwidth"}
![Clusterrank fit of a cluster due to terrestrial RFI. Top: $SNR$ vs. DM of the cluster with initial fit and final fit after removal of outliers. Bottom: the structure of the cluster in DM-time space. The score of this cluster calculated from the fit is 0.87 and would have caused the single-pulse search plots for this beam to be selected for human inspection. However, the best-fit DM is $< 1$ pc cm$^{-3}$ and the score is set to zero. \[fig\_rfipulse2\]](fig9.eps){width="\textwidth"}
A different problem is presented by beams that are so contaminated by RFI that there are tens to hundreds of clusters in one of the four DM ranges considered by Clusterrank. Since each cluster fit is essentially an independent hypothesis test, a large number of tests done for events in the same DM range means that the likelihood of at least one false positive (RFI cluster with score $> 0.8$) for that beam is high. In order to mitigate this, we use a modification of the Bonferroni correction to the familywise error rate [@Bonferroni36]. We divide the best cluster score for each of the four DM ranges considered per beam by the base-10 logarithm, rounded to the nearest integer, of the number of clusters in that DM range. This means that 32 or more clusters in a single DM range would trigger the correction for that range. A known pulsar with 32 or more pulses within the AO327 integration time of one minute that are moreover bright enough to be detected in a single-pulse search would be detected in our concurrent periodicity search. [RFI pulses are often bright enough to cover a large range of DMs, and one genuine dispersed pulse plotted alongside 32 or more RFI pulses may be difficult to distinguish visually even if all single pulse plots were subjected to human inspection.]{} Since Clusterrank is geared towards detecting individual pulses, we consider the Bonferroni correction a good tradeoff for identifying this type of RFI contamination and excluding plots suffering from it from human inspection. [On average, 2.7% of plots originally had $R^2 > 0.8$ but were excluded from human inspection after the Bonferroni correction was applied to their scores.]{}
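A sketch of this per-DM-range correction (hypothetical code; the guard against a rounded logarithm of zero or one for small numbers of clusters is our assumption, since the correction only matters once the rounded logarithm exceeds unity):

```python
import math

def corrected_best_score(best_score, n_clusters):
    """Divide the best score in a DM range by round(log10(n_clusters))."""
    factor = round(math.log10(n_clusters)) if n_clusters > 0 else 1
    return best_score / max(factor, 1)       # no correction below 32 clusters

# Example: 40 clusters in one DM range -> round(log10(40)) = 2
print(corrected_best_score(0.92, 40))        # 0.46, below the 0.8 threshold
```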
Performance
-----------
In order to evaluate the performance of Clusterrank, we need to estimate the false positive and false negative rates, as well as what fraction of the total single-pulse search plots are selected for human inspection. Table \[tab\_performance\] shows that overall for the Mock portion of AO327, 1.2% of plots have score $> 0.9$ and 5.9% have $0.8 < {\rm score} < 0.9$. However, the percentage of plots with score $> 0.9$ decreases from 1.9% to 1.1% between 2010 and 2011, and holds at 1.1% for $2011 - 2013$. This is due to the fact that in early 2011, two sources of RFI in the 327 MHz band were identified on-site at Arecibo: cameras inside the Gregorian dome, and the rotation motors of the ALFA multibeam receiver. Subsequently, these were always disabled before the start of AO327 observing sessions. This decrease is evident for plots with scores of $0.8 - 0.9$ as well.
Table \[tab\_performance\] also shows what percentage of plots with score $> 0.8$ contain a detection of a known or new pulsar or RRAT. While the fraction of plots with a high score is driven by RFI, the fraction of high-ranked plots containing a detection is highly dependent on what part of the sky AO327 was observing in any year. As a low-frequency drift survey, AO327 does not typically get observing time near the oversubscribed inner Galactic plane. However, AO327 is often the only project that can take advantage of the telescope when it must be stationary for repairs or maintenance. During such times AO327 accumulates data and known pulsar detections in that region. The increased rate of detections in 2013 can be attributed to a lengthy painting job at the telescope platform during daytime in January - March 2013, coinciding with the time when the inner Galactic plane is above the horizon at Arecibo.
### False Positives
[While the ideal way to compare the false positive rates of Clusterrank and the RRATtrap code of [@Karako15] is to do it per pulse, the published false positive rate of RRATtrap is on a per-plot basis. In addition, the efficiency of both codes in reducing the number of plots for human inspection is based on a per-plot score. Therefore, we proceed by comparing per-plot rates between the two codes.]{}
RRATtrap scores 10% of plots as “excellent” and selects them for human inspection. Ninety per cent of these plots contain false positives, resulting in an overall false positive rate of 9%. If we consider Clusterrank plots with score $> 0.9$, the corresponding false positive rate is 1% (Table \[tab\_performance\]). If we consider Clusterrank plots with score $> 0.8$, the overall false positive rate is 7%. Clusterrank results are from AO327 and RRATtrap results are from GBNCC and GBT350. The RFI environment is more challenging at Arecibo, where the radio-quiet zone around the telescope is smaller. The fact that Clusterrank nevertheless has a smaller false positive rate indicates that it is very effective in distinguishing RFI from astrophysical pulses.
While 10% of excellent RRATtrap plots contain a detection, 5% of Clusterrank plots with score $> 0.9$ do. [The AO327 effective integration time and beam area are 1 minute and 0.049 deg$^2$, respectively. The integration times of GBNCC and GBT350 are 2 and 2.3 minutes, respectively, while the beam area is 0.28 deg$^2$ for both GBT surveys. The remaining factor is the volume per unit solid angle searched, which depends on telescope sensitivity. Adapting the survey volume comparison in [@Deneva13] for single-pulse detections by using Eqn. \[eqn\_smin\], we find that for a 5 ms pulse $V_{\rm AO327,Mock}/V_{\rm GBNCC} \approx 4.5$ and $V_{\rm AO327,Mock}/V_{\rm GBT350} \approx 7.5$. Assuming that the 10% RRATtrap detection rate is the same for GBT350 and GBNCC data and normalizing by the product of beam area, integration time, and volume per unit solid angle, we find that Clusterrank makes one detection in 1.14 times the volume per RRATtrap detection in GBT350 data, and in 0.78 times the volume per RRATtrap detection in GBNCC data.]{}
---------------------------------------------------- ------ ------ ------ ------ ---------
Year                                                   2010   2011   2012   2013   Overall
                                                        (%)    (%)    (%)    (%)       (%)
Plots with $R^2 > 0.9$:                                 1.9    1.1    1.1    1.1       1.2
Plots with $0.8 \leq R^2 \leq 0.9$:                     7.3    5.5    6.1    5.4       5.9
% of $R^2 > 0.9$ plots with detection:                  1.6    4.6    3.1    8.5       4.9
% of $0.8 \leq R^2 \leq 0.9$ plots with detection:      0.6    0.6    0.3    0.5       0.4
---------------------------------------------------- ------ ------ ------ ------ ---------

: Clusterrank performance on the Mock portion of AO327: the percentage of single-pulse search plots in each score range, and the percentage of those plots containing a known or new pulsar or RRAT detection, by year. \[tab\_performance\]
### False Negatives
Clusterrank can produce three types of false negatives. Two are at the level of individual astrophysical pulses: (1) a pulse resulting in a cluster with $< 50$ events which is not fitted and (2) a pulse fitted with a score $< 0.8$. The third type of false negative is due to the Bonferroni correction described in Section \[sec\_rfi\] and is at the level of the best cluster score recorded per DM range per beam which determines whether the respective plot is selected for human inspection. [Precisely determining the rate for the latter type of false negative would require inspecting the plots that triggered the Bonferroni correction, which comprise $\sim 20\%$ of all plots. By inspecting a random subset of these plots, we estimate that 0.02% of all plots contain astrophysical pulses with $R^2 > 0.8$ but the best cluster score recorded for the plot was decreased to $< 0.8$ due to the Bonferroni correction. The false negatives in the inspected subset of plots were known pulsars whose high number of pulses within the beam triggered the correction.]{}
Determining the rates for the first two types of false negatives precisely is not possible without visually inspecting all single-pulse search plots, which is what Clusterrank allows us to avoid. However, from results for a random set of beams containing known and new pulsar and RRAT detections we calculate that 2% of astrophysical pulses result in clusters with $< 50$ events, which are not fitted by Clusterrank, and 27% of astrophysical pulses have a best fit with $R^2 < 0.8$, which in the absence of other pulses would not select the plot for visual inspection. Using the same method, [@Karako15] estimate that $20\%$ of astrophysical pulses are not scored as “excellent” (but may still be marked as “good”) by RRATtrap, at the expense of also producing more false positives than Clusterrank.
We note that in the case of known pulsars, Clusterrank false negatives tend to occur as the pulsar enters and exits the beam, or if it traverses only the edge of the beam. As the pulsar moves away from the beam center, pulse $SNR$ decreases. While $R^2$ does not directly depend on $SNR$, the $SNR$ vs. DM shape that Clusterrank is fitting gradually becomes less pronounced and the pulse is detected at fewer trial DMs.
### FRB Considerations {#sec_frbcons}
An isolated highly dispersed pulse may be very difficult to distinguish from a noise cluster of events either algorithmically or visually. Pulsar surveys typically use trial DM lists with the interval between successive DMs increasing as the trial DM value increases (Table \[tab\_dedisp\]). This is done to maximize computing efficiency: the detectability of pulsars with high DMs is limited by uncorrectable scattering broadening, not dispersion. However, it also means that an isolated, highly dispersed FRB pulse that is not bright enough to be detected at a large range of widely spaced DMs would be difficult or impossible to identify visually or by algorithms like Clusterrank and RRATtrap, which rely on the $SNR$ vs. DM shape of the pulse. Figure \[fig\_double\] shows a simultaneous detection of the known pulsars J1914$+$0219 (DM = 233.8 pc cm$^{-3}$) and J1915$+$0227 (DM = 192.6 pc cm$^{-3}$). Most of the two pulsars’ pulses are not recognized as clusters and would be fitted poorly because they are detected at too few DMs. In isolation, each of those pulses would be difficult to distinguish from clumps of noise events elsewhere on the plot.
The spacing between successive trial DMs in the scattering-limited detection regime is typically informed by the fit of scattering time vs. DM made by [@Bhat04], which is based on observations of Galactic sources. Unlike Galactic pulsars with DM $\gtrsim 500$ pc cm$^{-3}$, FRBs exhibit little to no scattering at 1.4 GHz. This can be explained by the fact that for FRBs, which are seen outside of the plane of our Galaxy, the bulk of the scattering material is in the host galaxy. For a scattering screen of the same size, the subtended angle as seen from Earth would be much smaller for the extragalactic source, essentially at the limit of the scattering screen being a point source. Therefore the difference in travel time for unscattered photons vs. photons scattered by the edges of the screen would be much smaller for an extragalactic than for a Galactic source. Correspondingly, the exponential scattering tail of the observed pulse caused by the spread of photon travel times due to scattering would be less prominent or absent for the extragalactic source. For these reasons, in order to maximize the chance of detecting highly dispersed, non-repeating FRBs, surveys should deliberately oversample the DM search space at high DMs.
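As a concrete illustration of this recommendation, the snippet below contrasts a Galactic-source-optimized grid, whose DM step grows with DM, with one that keeps a fine step above 500 pc cm$^{-3}$. The step sizes and boundaries here are arbitrary examples for illustration, not the AO327 dedispersion plan of Table \[tab\_dedisp\].

```python
import numpy as np

# Illustrative trial DM grids (steps and boundaries are arbitrary examples).
galactic_grid = np.concatenate([np.arange(0, 100, 0.3),
                                np.arange(100, 500, 1.0),
                                np.arange(500, 1000, 3.0)])   # coarse at high DM
frb_grid = np.concatenate([np.arange(0, 100, 0.3),
                           np.arange(100, 500, 1.0),
                           np.arange(500, 1000, 1.0)])        # oversampled at high DM
# The FRB-oriented grid samples the DM > 500 pc cm^-3 range ~3x more densely.
print(len(galactic_grid), len(frb_grid))
```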
### Clusterrank at High Frequencies
In order to evaluate how well Clusterrank performs on data taken at a higher frequency commonly used in pulsar searching, we located PRESTO single-pulse search output files for the discovery observations of eight RRATs[^6] and one FRB [@Spitler14] discovered by the PALFA survey at 1.4 GHz. [They were found by human inspection alone or facilitated by RRATtrap.]{} We ran Clusterrank on each set of files with no change in the algorithm or parameter values described above while appropriately specifying the PALFA observing frequency and bandwidth. The FRB received a score of 0.94. [Six of the RRATs received scores of $0.88 - 0.99$]{}. One RRAT was a false negative: its sole pulse resulted in fewer than 50 recorded events and therefore it was not fitted. The latter was the only data set from the older WAPP backend, which was used by PALFA until 2009. The lower sensitivity of WAPP vs. Mock PALFA observations means that there are fewer DMs, with larger spacings, in the trial DM list used to process WAPP data, than is the case for Mock data. Therefore, single-pulse search output from WAPP data has fewer events per pulse on average, and in that case an adjustment in the minimum number of events per cluster would improve the performance of Clusterrank.
[The observation of one RRAT was severely contaminated by RFI, resulting in $> 1000$ clusters in the DM range containing the RRAT pulse. While the RRAT pulse received a score of 0.89, some RFI clusters received a score of 0.99. The high number of RFI clusters triggered the Bonferroni correction step in our algorithm (Section \[sec\_rfi\]), which yielded an overall score of 0.33 for the DM range containing the RRAT pulse.]{} RFI in the PALFA bandwidth is dominated by several radars emitting pulses chirped at a variable rate. We find that in this situation the ability of Clusterrank to identify dispersed pulses based only on their shoulder shape at 327 MHz (Section \[sec\_ts\]) becomes a liability at 1400 MHz. This is due to the fact that for the same DM range and pulse width, this shape becomes more linear with increasing frequency and therefore less likely to be uniquely identified with Eqn. \[eqn\_lsqfit\] (@Cordes03, Figure 4). This can be remedied by rejecting pulses whose best-fit DM is outside the DM range spanned by the cluster.
![Detection of the known pulsars J1914$+$0219 (DM = 233.8 pc cm$^{-3}$) and J1915$+$0227 (DM = 192.6 pc cm$^{-3}$) in the same beam. Most pulses are detected at very few DMs and result in clusters with $< 50$ events, which excludes them from being fitted by Clusterrank. However, each of these pulses taken in isolation is difficult to distinguish from noise clusters elsewhere in the DM vs. time panel, either visually or algorithmically. [Events belonging to clusters identified by Clusterrank are shown in red if the cluster $R^2 > 0.8$, magenta if $0.7 < R^2 \leq 0.8$, cyan if $0.6 < R^2 \leq 0.7$, green if $0.5 < R^2 \leq 0.6$, and blue if $R^2 \leq 0.5$. (There are no clusters with $0.5 < R^2 \leq 0.6$ or $R^2 \leq 0.5$ in this case.)]{} \[fig\_double\]](fig10.eps){width="\textwidth"}
New Pulsars {#sec_psrs}
===========
![Average pulse profile (top) and subintegration vs. pulse phase (bottom) for a confirmation observation of PSR J1941+01 at 327 MHz. The pulsar switches between two modes with distinct pulse profiles at different phases. The dark areas across most of the period at $t < 150$ s are due to RFI.\[fig\_1941\]](fig11.eps)
Clusterrank has facilitated the discovery of 22 new objects to date. Confirmation observations for all candidates use the 327 MHz receiver with the PUPPI backend and a $5 - 10$ minute integration time. Each confirmation observation is dedispersed with a range of DMs corresponding to the range over which pulses were detected in the discovery. If a period can be derived from single pulses either in the discovery or confirmation, the time series at the DM for which the pulse $SNR$ peaks is searched for periodic emission within a narrow range around that period. If a period cannot be derived or if the narrow search does not detect periodic emission, we perform a blind acceleration search of the time series. Periodicity searches of confirmation observations revealed that 14 of our 22 single-pulse discoveries are long-period pulsars, and Table \[tab\_psrs\] summarizes their properties. In order to derive the peak flux density, we use a sky temperature $T_{\rm sky} = 50$ K [@Haslam82]. The PUPPI backend provides a bandwidth of 68 MHz and 2816 channels. The receiver temperature $T_{\rm rec} = 115$ K and gain $G = 11$ K/Jy are the same as for Mock observations. The peak flux density is $$S_{\rm pk} = \frac{SNR_{\rm prof}\,(T_{\rm rec} + T_{\rm sky})}{G\,\sqrt{n_{\rm p}\,\Delta\nu\, t_{\rm obs}/N_{\rm bin}}},$$ where $N_{\rm bin}$ is the number of bins in the averaged pulse profile, $SNR_{\rm prof}$ is the peak $SNR$ of that profile, $n_{\rm p}$ is the number of summed polarizations, $\Delta\nu$ is the bandwidth, and $t_{\rm obs}$ is the integration time.
The periods of the new pulsars are in the range $1.2 - 5.0$ s, with an average of 2.2 s. Surveys using short integration times and Fast Fourier Transform (FFT) periodicity search algorithms select against slow pulsars because there are few pulses per observation [[@Lazarus15]]{}, slow pulsars typically have duty cycles on the order of only $1 - 5\%$ [[@kgm04]]{}, and the pulses occur within a phase window that may be significantly wider than the width of an individual pulse. A Fast Folding periodicity search is more effective than an FFT search when there are few rotation periods within an observation (@Staelin69, @Kondratiev09). We plan to reprocess AO327 survey data with a Fast Folding search, which will be sensitive to slow periodic emitters missed by the PRESTO FFT-based periodicity search that moreover do not emit pulses bright enough to be detected by a single-pulse search. Exhaustive searches for pulsars that are selected against by most widely used search algorithms are important for constructing a more complete picture of the period and age distribution of the Galactic pulsar population and relating these statistics to independent measures of pulsar formation such as the Galactic supernova rate.
Two more narrowly defined subsets of slow pulsars that are selected against in FFT-based periodicity searches are also represented in Table \[tab\_psrs\] and overrepresented among Clusterrank discoveries compared to the general pulsar population. PSRs J1749+16 and J1750+07 null for tens of seconds at a time. The individual pulses of PSR J1750+07 have a peak flux density of up to $\sim 150$ mJy and are bright enough to trigger PRESTO’s “bad block” flagging (Section \[sec\_rfi\]), yet its integrated profile peak flux density is only 15.5 mJy because its nulling fraction is $> 50\%$.
PSR J1941+01 has the highest DM of all AO327 discoveries to date, 133 pc cm$^{-3}$. It is a mode-switching pulsar and alternates in a quasi-periodic manner between two states with distinct pulse shapes and phase windows (Figure \[fig\_1941\]). In addition, one of the modes exhibits subpulse drifting. The emission of pulsars with similar properties has been explained by the carousel model, where emission sub-beams circulate around the magnetic field axis, giving rise to emission patterns that repeat on time scales of many pulse periods as the observer’s line of sight crosses different sub-beam configurations (e.g. @Rankin08). We are pursuing multi-frequency polarimetric observations of J1941+01 in order to map its emission region cone in altitude as well as cross-section and defer a more detailed analysis to a separate paper.
-------------- ---------------- --------- --------- ---------------- ------------ ---------- --------- ------- --
Name RA DEC $P$ DM $W_{prof}$ $S_{pk}$ $N_{p}$ $R^2$
(hh:mm:ss)$^a$ (dd:mm) (ms) (pc cm$^{-3}$) (ms) (mJy)
J0011+08 00:11:34 08:10 2552.87 24.9 28 12.3 7 0.91
J0050+03 00:50:31 03:48 1366.56 26.5 33 15.2 7 0.87
J0611+04 06:11:18 04:06 1674.43 69.9 81 3.5 2 0.94
J0630+19 06:30:04 19:37 1248.55 48.1 35 3.6 1 0.96
J1656+00 16:56:41 00:26 1497.85 46.9 34 11.4 1 0.95
J1738+04 17:38:25 04:20 1391.79 23.6 28 14.1 8 0.91
J1743+05 17:43:16 05:29 1473.63 56.1 55 5.9 3 0.90
J1749+16 17:49:29 16:24 2311.65 59.6 61 7.3 6 0.81
J1750+07 17:50:40 07:33 1908.81 55.4 60 15.5 3 0.94
J1938+14 19:38:19 14:42 2902.51 74.2 95 5.2 4 0.85
J1941+01$^b$ 19:41:58 01:46 1404.73 133.3 40 18.4 6 0.95
J1946+14 19:46:52 14:42 2282.44 50.3 50 11.4 3 0.90
J1956+07 19:56:35 07:16 5012.48 61.3 125 3.6 3,2$^c$ 0.96
J2105+07 21:05:27 07:57 3746.63 52.6 126 35.1 5 0.97
-------------- ---------------- --------- --------- ---------------- ------------ ---------- --------- ------- --
: New pulsars discovered by AO327 via a single-pulse search. All objects were discovered via single-pulse search and identified by the Clusterrank code described in this paper. $R^2$ is the value for the highest-ranked pulse in the discovery Mock observation. Confirmation observations with the more sensitive PUPPI backend yielded periodic detections and many pulses for all objects. $W_{prof}$ is the full-width, half-maximum width of the folded pulse profile and $S_{pk}$ is the peak flux density derived from it. [$N_{p}$ is the number of pulses in the discovery observation.]{} \[tab\_psrs\]
$^a$ RA and DEC are given in the J2000 coordinate system. The uncertainties in both coordinates are 7.5$^{\prime}$, the 327 MHz beam radius, unless otherwise indicated.\
$^b$ J1941+01 is a mode-switching pulsar and exhibits two distinct pulse profiles corresponding to two modes. $W_{prof}$ and $S_{pk}$ given here refer to the state with the brighter peak.\
$^c$ J1956+07 was identified by Clusterrank in two 1-minute data spans from observations taken on different days.
New RRATs {#sec_rrats}
=========
Eight of the objects discovered with the help of Clusterrank do not exhibit periodic emission in follow-up observations and therefore we provisionally classify them as RRATs (Table \[tab\_rrats\]). We were able to estimate the rotation periods for four RRATs based on the intervals between pulses detected within one observation. In the case of J1603+18, we detected three pulses emitted on consecutive rotations, separated by intervals of $\sim 0.503$ s. For the remaining four RRATs, the intervals between detected pulses are uneven and significantly longer, and therefore the estimated period may be an integer multiple of the actual rotation period.
We calculate the peak flux density of the brightest pulse in the discovery observation of each RRAT from the radiometer equation for single pulses: $$S_{\rm pk} = \frac{SNR_{\rm pk}\,(T_{\rm rec} + T_{\rm sky})}{G\,\sqrt{n_{\rm p}\,\Delta\nu\, W}},$$ where $SNR_{\rm pk}$ is the peak signal-to-noise ratio of the brightest pulse, and $W$ is its full width at half maximum.
[RRAT candidate J0156+04 (Table \[tab\_rrats\]) remains unconfirmed. Six confirmation attempts of 10 minutes each were made using the 327 MHz Arecibo receiver and the PUPPI backend. J0156+04 exhibits the typical signature of a dispersed pulse with one peak in $SNR$ vs. DM that is easily fitted by Clusterrank and recognized visually (Figure \[fig\_0156spplot\], Figure \[fig\_0156pulse1\]). However, even the brighter of the two detected pulses is too weak for the dispersion sweep to be visible in a plot of the raw data in time-frequency space around the pulse arrival time.]{}
The properties of J0156+04, two or more pulses in close succession and consistent non-detections in multiple follow-up observations, are shared by a small subset of radio transients detected by virtually every pulsar survey using a single-pulse search. Two recent examples are J1928+15 [@Deneva09] and J1336$-$20 [@Karako15]. This type of transient emission may indicate an object that is dormant or not beamed toward the Earth and whose magnetosphere is perturbed sporadically by accretion of debris from an asteroid belt [@Cordes08].
---------- ---------------- --------- ------ ---------------- ------ ---------- --------- ------- ------------- ------
Name RA DEC P DM $W$ $S_{pk}$ $N_{p}$ $R^2$ Rate Conf
(hh:mm:ss)$^a$ (dd:mm) (ms) (pc cm$^{-3}$) (ms) (Jy) (hr$^{-1}$)
J0156+04 01:56:01 04:02 - 27.5 3.8 0.3 2 0.97 $\leq 2$
J0544+20 05:44:12 20:50 - 56.9 2.3 0.3 1 0.95 4 Y
J0550+09 05:50:28 09:51 1745 86.6 22.5 0.1 3 0.93 47 Y
J1433+00 14:33:30 00:28 - 23.5 3.8 0.3 1 0.94 2 Y
J1554+18 15:54:17 18:04 - 24.0 7.6 0.2 1 0.89 11 Y
J1603+18 16:03:34 18:51 503 29.7 8.8 0.2 1 0.94 4 Y
J1717+03 17:17:56 03:11 3901 25.6 8.4 0.2 1 0.91 8 Y
J1720+00 17:20:55 00:40 3357 46.2 7.2 0.2 1 0.97 33 Y
---------- ---------------- --------- ------ ---------------- ------ ---------- --------- ------- ------------- ------
: New RRATs discovered by the AO327 drift survey. All objects were discovered via single-pulse search and identified by the Clusterrank code described in this paper. $R^2$ is the value for the highest-ranked pulse in the discovery Mock observation. $W$ is the full-width at half-maximum of the brightest detected pulse, and $S_{pk}$ is its peak flux density. [$N_{p}$ is the number of pulses in the discovery observation.]{} Also listed is the average pulse rate, defined as the ratio of the total number of pulses detected to the total observation time. For objects that have been detected in only one observation we take this to be an upper limit. The last column lists if an object has had a successful confirmation detection after the discovery. \[tab\_rrats\]
$^a$ RA and DEC are given in the J2000 coordinate system. The uncertainties in both coordinates are 7.5$^{\prime}$, the 327 MHz beam radius, unless otherwise indicated.
Perytons {#sec_per}
========
Two bright signals assigned high scores by Clusterrank were also not detected in follow-up observations. Further inspection revealed that their sweep in time-frequency space does not completely conform to the cold-plasma dispersion relation. We classify them as perytons, terrestrial RFI mimicking a dispersed signal [@Burke11].
The total follow-up observation time is one hour for peryton P1907+06, and 1.6 h for peryton P1017+02. P1017+02 (Figure \[fig\_1017spplot\], Figure \[fig\_1017pulse\]) and P1907+06 (Figure \[fig\_1907spplot\], Figure \[fig\_1907pulse\]) are qualitatively different from J0156+04, as well as from any other single-pulse source detected by AO327. They are readily recognized visually since the corresponding clusters of events have a limited extent in DM and a definite peak in $SNR$ vs. DM that is consistent between pulses from the same source. However, they exhibit complex substructure and secondary peaks in $SNR$ vs. DM space. This corresponds to a similarly complex resolved substructure in each pulse, such that different components are aligned and summed at different trial DMs. P1017+02 and P1907+06 also differ from the RRATs in Table \[tab\_rrats\] in that they are brighter by an order of magnitude.
Figure \[fig\_peryton\] shows the time-frequency structure of the brightest peryton pulse, also shown in Figures \[fig\_1017spplot\] and \[fig\_1017pulse\]. All pulses from P1017+02 and P1907+06 exhibit the same tickmark-like signature, indicating a chirped signal with an abrupt sign reversal of the chirp rate. Such signals have also been detected by the GBT350 survey (C. Karako-Argaman, private communication[^7]) and anecdotal accounts suggest that they may be generated in the process of shutting down the transmitters of some aircraft.
![Single-pulse search plot of the detection of peryton P1017+02. Top: Histograms of the number of events vs. DM (left) and event $SNR$ vs. DM (right). Bottom: Events are plotted vs. DM and time. Larger marker sizes correspond to higher $SNR$. Events belonging to clusters identified by Clusterrank are shown in red if the cluster $R^2 > 0.8$, magenta if $0.7 < R^2 \leq 0.8$, and blue if $R^2 \leq 0.5$. The multi-peaked DM vs. $SNR$ signatures of the two pulses at $t \sim 39$ s and $t \sim 56$ s present a challenge for Clusterrank and indicate a multi-peaked pulse profile. \[fig\_1017spplot\]](fig12.eps){width="\textwidth"}
![Clusterrank fit for the pulse with the highest score in the observation of peryton P1017+02. Top: $SNR$ vs. DM for events in the cluster corresponding to this pulse, along with initial and final fits. Bottom: the structure of the cluster is shown in DM-time space. \[fig\_1017pulse\]](fig13.eps){width="\textwidth"}
![A time-frequency plot of the raw data around the arrival time of the brighter pulse of peryton P1017+02 ($t \sim 56$ s in Figure \[fig\_1017spplot\]; Figure \[fig\_1017pulse\]). The red line shows the dispersion sweep for the best-fit DM of this pulse, 21.97 pc cm$^{-3}$ according to the cold-plasma dispersion relation. Darker vertical lines are caused by wide-band, non-dispersed terrestrial RFI. \[fig\_peryton\]](fig14.eps){width="\textwidth"}
![Single-pulse search plot of the detection of peryton P1907+06. Top: Histograms of the number of events vs. DM (left) and event $SNR$ vs. DM (right). Bottom: Events are plotted vs. DM and time. Larger marker sizes correspond to higher $SNR$. Events belonging to clusters identified by Clusterrank are shown in red if the cluster $R^2 > 0.8$, magenta if $0.7 < R^2 \leq 0.8$, green if $0.5 < R^2 \leq 0.6$, and blue if $R^2 \leq 0.5$. The multi-peaked DM vs. $SNR$ signatures of the four pulses at $DM \sim 24$ pc cm$^{-3}$ present a challenge for Clusterrank and indicate a multi-peaked pulse profile. \[fig\_1907spplot\]](fig15.eps){width="\textwidth"}
![Clusterrank fit for the pulse with the highest score in the observation of peryton P1907+06. Top: $SNR$ vs. DM for events in the cluster corresponding to this pulse, along with initial and final Clusterrank fits. Bottom: the structure of the cluster is shown in DM-time space. \[fig\_1907pulse\]](fig16.eps){width="\textwidth"}
Distributions and Populations {#sec_stats}
=============================
[Following the analysis presented by [@Karako15], we compare the properties of Clusterrank discoveries with those of known pulsars and RRATs.]{} The average period for pulsars found by Clusterrank is 2.2 s. This is larger than the 0.9 s average period for non-MSP ($P > 0.02$ s) pulsars from the ATNF catalog[^8], but similar to the 2.3 s average period of RRATs with measured periods in the RRATalog[^9]. However, a Kolmogorov-Smirnov (K-S) two-sample test between Clusterrank pulsar discoveries and RRATalog objects yields a p-value of 0.017, suggesting that the two sets are not drawn from the same period distribution. [In contrast, the same type of test between Clusterrank RRAT discoveries with period estimates and RRATalog objects yields a p-value of 0.93, indicating consistency with the null hypothesis of the same underlying period distribution for both sets.]{} Of the 115 objects currently in the RRATalog, 85 were discovered by surveys operating at 1400 MHz and using significantly longer integration times than AO327. This suggests that neither the observing frequency nor the number of rotations within a standard survey observation result in strong selection effects when we consider the period distribution of RRAT discoveries. The latter may be explained by the fact that follow-up observations of newly discovered RRATs are typically longer than the discovery observation and thus slow pulsars discovered by single-pulse search are likely to be identified as periodic emitters when reobserved. The weak dependence on observing frequency is due to the fact that dispersion ($\propto \nu^{-2}$) and scattering broadening ($\propto \nu^{-4}$) do not significantly select against detecting RRATs at 327 vs. 1400 MHz due to their long periods.
Since the DM and spatial distribution of a set of pulsar discoveries depends on what region of the sky survey observations are targeting, we can meaningfully compare Clusterrank discoveries only with AO327 periodicity search discoveries. Performing two-sample K-S tests between these two sets we obtain a p-value of 0.60 for their DM distributions, and p-values of 0.87 and 0.98 for the Galactic latitude and longitude, respectively. Therefore we can conclude that Clusterrank discoveries are drawn from the same spatial and DM distribution as AO327 periodicity search discoveries.
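These comparisons use the standard two-sample K-S machinery; the sketch below shows the form of such a test with `scipy.stats.ks_2samp`. The Clusterrank pulsar periods are taken from Table \[tab\_psrs\]; the second sample is a synthetic stand-in, since the RRATalog values are external to this paper. This is illustrative only, not the analysis code used for the quoted p-values.

```python
import numpy as np
from scipy.stats import ks_2samp

# Periods (s) of the 14 Clusterrank pulsar discoveries (Table [tab_psrs]).
clusterrank_periods = np.array([2.553, 1.367, 1.674, 1.249, 1.498, 1.392,
                                1.474, 2.312, 1.909, 2.903, 1.405, 2.282,
                                5.012, 3.747])
# Stand-in for the RRATalog periods; replace with the actual catalog values.
rratalog_periods = np.random.default_rng(0).uniform(0.1, 7.0, size=85)

stat, pvalue = ks_2samp(clusterrank_periods, rratalog_periods)
print(f"K-S statistic = {stat:.2f}, p-value = {pvalue:.3f}")
# A small p-value (0.017 for the real catalogs) argues against a common
# period distribution; a large one (0.93) is consistent with it.
```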
FRB Population Limits {#sec_frblimits}
=====================
We did not find any FRBs in the AO327-Mock data set presented in this paper. However, as we outline in Section \[sec\_frbcons\], the methods usually employed for calculating the optimal trial DM list for a pulsar search assume a relationship between dispersion and scattering typical of Galactic sources that does not hold for known FRBs. We plan to reprocess all AO327 survey data with a trial DM list optimized for detecting highly dispersed but not significantly scattered FRB pulses.
The first upper limits on the all-sky FRB rate are from Parkes pulsar surveys and their FRB detections. For FRBs with a flux density $S \gtrsim 3$ Jy at 1.4 GHz, [@Thornton13] estimate a rate of $1.0^{+0.6}_{-0.5} \times 10^4$ sky$^{-1}$ day$^{-1}$ from high-latitude data, and [@Burke14] estimate a rate of $\sim 2 \times 10^3$ sky$^{-1}$ day$^{-1}$ from intermediate- and low-latitude data. [@Burke14] argue that the difference is statistically significant but acknowledge that the estimates use assumptions whose validity about FRBs is unknown. [@Rane15] derive a limit of $3.3^{+5.0}_{-2.5} \times 10^3$ day$^{-1}$ sky$^{-1}$ for bursts with a flux density $> 0.1$ Jy at latitudes $|b| < 60{\ifmmode^{\circ}\else$^{\circ}$\fi}$ and argue that this is consistent with rates from other Parkes surveys. [@Karako15], who did not find any FRBs in GBT350 drift data, derive a limit on the rate of bursts with $S \gtrsim 260$ mJy and widths $\sim 10$ ms at 350 MHz and obtain $\sim 1 \times 10^4$ sky$^{-1}$ day$^{-1}$.
We assume that FRBs follow Poisson statistics in order to calculate a similar rate limit from AO327-Mock sky coverage. The Poisson probability of detecting exactly $k$ FRBs in a survey of total duration $T$ is $$P(X=k) = \frac{\left(r \theta T\right)^{k}\, e^{-r \theta T}}{k!},$$ where $\theta$ is the beam area and $r$ is the burst rate. The probability of detecting at least one FRB is $P(X>0) = 1 - P(X=0) = 1 - e^{-\left(r \theta T\right)}$. We calculate that for a 99% chance of detecting at least one FRB in the 882 h of AO327-Mock data, the rate is $\sim 1 \times 10^5$ sky$^{-1}$ day$^{-1}$ for 10 ms bursts with $S \gtrsim 83$ mJy. The limit derived from AO327-Mock is less stringent than the limits from GBT350 or the Parkes surveys, because AO327-Mock has significantly less total on-sky time and a smaller beam size. Since the spectral indices of FRBs are on the whole unknown, we can meaningfully compare the AO327-Mock limit only with the GBT350 limit. AO327-Mock searches 5 times more volume per unit time than GBT350 (@Deneva13, Figure 4). Therefore, the AO327-Mock FRB rate limit normalized to the GBT350 search volume is $\sim 2 \times 10^4$ sky$^{-1}$ day$^{-1}$. This estimate will improve when results from AO327-PUPPI are included, in addition to results from reprocessing AO327-Mock data with a DM list tailored for FRB detection.
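The 99% limit follows directly from inverting the expression above; a minimal check of the quoted number (taking 41253 deg$^2$ for the full sky):

```python
import numpy as np

# Rate r at which a survey of duration T and beam area theta has a 99%
# chance of catching at least one FRB: 1 - exp(-r * theta * T) = 0.99.
theta_sky = 0.049 / 41253.0          # Mock beam area as a fraction of the sky
T_days = 882.0 / 24.0                # 882 h of AO327-Mock data
rate = -np.log(1.0 - 0.99) / (theta_sky * T_days)
print(f"{rate:.1e} FRBs sky^-1 day^-1")   # ~1e5, as quoted in the text
```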
Summary
=======
We have developed Clusterrank, a new algorithm to automatically rank clusters of events recorded by single-pulse searches based on each cluster’s likelihood of being generated by a dispersed astrophysical pulse. Clusterrank enabled us to quickly identify 8 RRATs and 14 slow pulsars missed by an FFT-based periodicity search in AO327 drift survey data. The new RRATs have DMs in the range $22.5 - 86.6$ pc cm$^{-3}$. Five of these sources have period estimates from pulse arrival times; their periods are in the range $0.172 - 3.901$ s. The new pulsars have DMs in the range $23.6 - 133.3$ pc cm$^{-3}$ and periods in the range $1.249 - 5.012$ s.
We find that the periods of RRATs found by Clusterrank are drawn from the same distribution as the periods of sources in the RRATalog, and that the periods of pulsars and RRATs discovered by Clusterrank are consistent with having the same underlying distribution. We also find that there is no significant difference between the underlying DM or spatial distributions of new sources found by AO327 via periodicity search vs. new sources found via using Clusterrank on PRESTO single-pulse search output.
Although we search AO327 data with DMs up to 1000 pc cm$^{-3}$, we have not yet found any highly dispersed pulses indicative of FRBs. We identify a common optimization in constructing trial DM lists for pulsar surveys that likely hinders the identification of such pulses either visually or algorithmically and recommend that the DM search space be deliberately oversampled for DM $\gtrsim 500$ pc cm$^{-3}$ compared to what is optimal for Galactic sources.
We thank Ben Arthur and Chen Karako-Argaman for useful discussions. J.S.D. was supported by the NASA Fermi Guest Investigator program and the Chief of Naval Research. M.A.M., M.B., and S.D.B. were supported by NSF award numbers 0968296 and 1327526. The Arecibo Observatory is operated by SRI International under a cooperative agreement with the National Science Foundation (AST-1100968), and in alliance with Ana G. Méndez-Universidad Metropolitana, and the Universities Space Research Association.
Bhat, N. D. R., Cordes, J. M., Camilo, F., Nice, D. J. & Lorimer, D. R. 2004, ApJ, 605, 759
Bonferroni, C. E. 1936, Pubblicazioni del R Istituto Superiore di Scienze Economiche e Commerciali di Firenze
Burke-Spolaor, S., & Bailes, M. 2010, MNRAS, 402, 855
Burke-Spolaor, S. & Bannister, K. W. 2014, ApJ, 792, 19
Burke-Spolaor, S., Bailes, M., Ekers, R., Macquart, J.-P., & Crawford, F. 2011, ApJ, 727, 18
Cordes, J. M. & McLaughlin, M. A. 2003, ApJ, 596, 1142
Cordes, J. M., & Shannon, R. M. 2008, ApJ, 682, 1152
Deneva, J. S. et al. 2009, ApJ, 703, 2259
Deneva, J. S. et al. 2013, ApJ, 775, 51
Falcke, H. & Rezzolla, L. 2014, A&A, 562, 137
Hansen, B. M. S. & Lyutikov, M. 2001, MNRAS, 322, 695
Haslam, C. G. T, Salter, C. J., Stoffel, H. & Wilson, W. E. 1982, A&AS, 47, 1
Karako-Argaman, C. et al. 2015, ApJ, 809, 67
Keane, E. F. & Kramer, M. 2008, MNRAS, 391, 2009
Kolonko, M., Gil, J. & Maciesiak, K. 2004, A&A, 428, 943
Kondratiev, V. I. et al. 2009, ApJ, 702, 692
Kulkarni, S. R., Ofek, E. O., Neill, J. D., Zheng, Z. & Juric, M. 2014, ApJ, 797, 70
Lazarus, P. et al. 2015, ApJ, 812, 81
Loeb, A., Shvartzvald, Y. & Maoz, D. 2014, MNRAS, 439, 46L
Lorimer, D. R. et al. 2007, Science, 318, 777
Lynch, R. S., Boyles, J. et al. 2013, ApJ, 763, 81
Maoz, D. et al. 2015, arXiv:1507.01002
McLaughlin, M. A. & Cordes, J. M. 2003, ApJ, 596, 982
McLaughlin, M. A. et al. 2006, Nature, 439, 817
Petroff, E. et al. 2015, MNRAS, 447, 246
Rane, A. et al. 2015, arXiv:1505.00834
Rankin, J. M. & Wright, G. A. E. 2008, MNRAS, 385, 1923
Ravi, V., Shannon, R. M. & Jameson, A. 2015, ApJ, 799L, 5
Rees, M. J. 1977, Nature, 266, 333
Spitler, L. G. et al. 2014, ApJ, 790, 101
Spitler, L. G. et al. 2016, Nature, DOI:10.1038/nature17168
Staelin, D. H. 1969, Proc. of the IEEE, 57, 724
Stovall, K., Lynch, R. S., Ransom, S. M., et al. 2014, ApJ, 791, 67
Thornton, D. et al. 2013, Science, 341, 53
Weltevrede, P. et al. 2006, ApJ, 645, L149
Zhang, B., Gil, J., & Dyks, J. 2007, MNRAS, 374, 1103
[^1]: http://astro.phys.wvu.edu/rratalog
[^2]: http://www.cv.nrao.edu/\~sransom/presto
[^3]: http://astro.phys.wvu.edu/rratalog
[^4]: http://astro.phys.wvu.edu/FRBs/FRBs.txt
[^5]: http://github.com/juliadeneva/clusterrank
[^6]: http://www2.naic.edu/\~palfa/newpulsars
[^7]: http://www.physics.mcgill.ca/\~karakoc/waterfall\_0526-1908.png
[^8]: http://www.atnf.csiro.au/research/pulsar/psrcat
[^9]: http://astro.phys.wvu.edu/rratalog
|
---
abstract: 'Finite mixture models are among the most popular statistical models used in different data science disciplines. Despite their broad applicability, inference under these models typically leads to computationally challenging non-convex problems. While the Expectation-Maximization (EM) algorithm is the most popular approach for solving these non-convex problems, the behavior of this algorithm is not well understood. In this work, we focus on the case of mixtures of Laplacian (or Gaussian) distributions. We start by analyzing a simple equally weighted mixture of two single-dimensional Laplacian distributions and show that every local optimum of the population maximum likelihood estimation problem is globally optimal. Then, we prove that the EM algorithm converges to the ground truth parameters almost surely with random initialization. Our result extends the existing results for the Gaussian distribution to the Laplacian distribution. We then numerically study the behavior of mixture models with more than two components. Motivated by our extensive numerical experiments, we propose a novel stochastic method for estimating the means of the components of a mixture model. Our numerical experiments show that our algorithm outperforms the naïve EM algorithm in almost all scenarios.'
address: |
$^{\star \dagger}$University of Southern California\
Email : {$^{\star}$ barazand, $^{\dagger}$ razaviya}@usc.edu
bibliography:
- 'mybib.bib'
title: 'On the Behavior of the Expectation-Maximization Algorithm for Mixture Models'
---
Finite mixture model, Gaussian/Laplacian mixture model, EM algorithm, non-convex optimization
Introduction
============
The ability of finite mixture distributions [@pearson1894contributions] to model the presence of subpopulations within an overall population has made them popular across almost all engineering and scientific disciplines [@melnykov2010finite; @zhang2015finite; @titterington1985statistical; @mclachlan2004finite]. While statistical identifiability for various mixture models has been widely studied [@teicher1963identifiability; @allman2009identifiability], the Gaussian mixture model (GMM) has drawn more attention due to its wide applicability [@day1969estimating; @wolfe1970pattern]. Starting with Dasgupta [@dasgupta1999learning], there have been multiple efforts to find algorithms with polynomial sample/time complexity for estimating GMM parameters [@vempala2004spectral; @arora2005learning; @chaudhuri2008learning; @dasgupta2007probabilistic; @moitra2010settling; @hsu2013learning; @belkin2010polynomial]. Despite statistical guarantees, these methods are not computationally efficient enough for many large-scale problems. Moreover, these results assume that the data is generated from an exact generative model, which never happens in reality. In contrast, methods based on solving the maximum likelihood estimation (MLE) problem are very popular due to their computational efficiency and the robustness of MLE against perturbations of the generative model [@donoho1988automatic]. Although MLE-based methods are popular in practice, the theory behind their optimization algorithms (such as the EM method) is little understood. Most existing algorithms with theoretical performance guarantees are not scalable to modern applications of massive size. This is mainly due to the combinatorial and non-convex nature of the underlying optimization problems. Recent advances in the field of non-convex optimization have led to a better understanding of mixture model inference algorithms such as the EM algorithm. For example, [@balakrishnan2017statistical] proves that under proper initialization, the EM algorithm converges exponentially fast to the ground truth parameters. However, no computationally efficient initialization approach is provided. [@xu2016global] globally analyzes the EM algorithm applied to the mixture of two equally weighted Gaussian distributions. While [@daskalakis2016ten] provides global convergence guarantees for the EM algorithm, [@jin2016local] studies the landscape of the GMM likelihood function with more than 3 components and shows that there might be spurious local optima even for the simple case of the equally weighted GMM.
In this work, we revisit the EM algorithm under the Laplacian mixture model and the Gaussian mixture model. We first show that, similar to the Gaussian case, the maximum likelihood estimation objective has no spurious local optima in the symmetric Laplacian mixture model (LMM) with $K=2$ components. This Laplacian mixture structure has a wide range of applications in medical image denoising, video retrieval and blind source separation [@bhowmick2006laplace; @klein2014fisher; @mitianoudis2005overcomplete; @amin2007application; @rabbani2009wavelet]. For the case of mixture models with $K \geq 3$ components, we propose a stochastic algorithm which utilizes the likelihood function as well as moment information of the mixture model distribution. Our numerical experiments show that our algorithm outperforms the naïve EM algorithm in almost all scenarios.
Problem Formulation
===================
The general mixture model distribution is defined as $$P(\textbf{x}; \textbf{w}, K, \boldsymbol{\theta}) = \sum_{k = 1}^{K} w_{k}f(\textbf{x}; \boldsymbol{\theta}_{k})$$ where $K$ is the number of mixture components; $\textbf{w} = (w_{1}, w_{2},..., w_{K})$ is the vector of non-negative mixing weights with $\sum_{k = 1}^{ K} w_{k} = 1$ and $\boldsymbol{\theta}$ = $(\boldsymbol{\theta}_{1}, \boldsymbol{\theta}_{2},..., \boldsymbol{\theta}_{K})$ is the distribution’s parameter vector. Estimating the parameters of the mixture models $(\textbf{w}, \boldsymbol{\theta}, K)$ is central in many applications. This estimation is typically done by solving the MLE problem due to its intuitive justification and its robust behavior [@donoho1988automatic]. The focus of our work is on the population likelihood maximization, i.e., when the number of samples is very large. When parameters $\textbf{w}$ and $K$ are known, using the law of large numbers, the MLE problem leads to the following *population risk* optimization problem [@xu2016global; @daskalakis2016ten; @jin2016local]: $$\label{E_Eq30}
\boldsymbol{\theta^{*}} = \arg\max_{\boldsymbol{\theta}} \;\;\mathbb{E} \Bigg[\log \
\Big(\sum _{k = 1} ^{ K } w_{k} f(\textbf{x};\boldsymbol{\theta}_{k})\Big)\Bigg]$$ In this paper, we focus on the case of equally weighted mixture components, i.e., $w_k = 1/K, \;\forall k$ [@xu2016global; @daskalakis2016ten; @jin2016local; @srebro2007there]. We also restrict ourselves to two widely-used Gaussian mixture models and Laplacian mixture models [@bhowmick2006laplace; @klein2014fisher; @mitianoudis2005overcomplete; @amin2007application; @rabbani2009wavelet; @vempala2004spectral; @arora2005learning; @chaudhuri2008learning; @dasgupta2007probabilistic]. It is worth mentioning that even in these restricted scenarios, the above MLE problem is non-convex and highly challenging to solve.
EM for the case of $K=2$
========================
Recently, it has been shown that the EM algorithm recovers the ground truth distributions for equally weighted Gaussian mixture model with $K=2$ components [@xu2016global; @daskalakis2016ten]. Here we extend this result to single dimensional Laplacian mixture models.
Define the Laplacian distribution with the probability density function $L(x;\mu,b) = \frac{1}{2b}e^{-\frac{|x-\mu|}{b}}$ where $\mu$ and $b$ control the mean and variance of the distribution. Thus, the equally weighted Laplacian mixture model with two components has probability density function: $$P(x;\mu_{1},\mu_{2},b) = \frac{1}{2} L(x;\mu_{1},b) + \frac{1}{2} L(x;\mu_{2},b).$$ In the population level estimation, the overall mean of the data, i.e., $\frac{\mu_1 + \mu_2}{2}$ can be estimated accurately. Hence, without loss of generality, we only need to estimate the normalized difference of the two means, i.e., $\mu^* \triangleq \frac{\mu_1 - \mu_2}{2}$. Under this generic assumption, our observations are drawn from the distribution $$P(x;\mu^*,b) = \frac{1}{2} L(x;\mu^*,b) + \frac{1}{2} L(x;-\mu^*,b).$$ Our goal is to estimate the parameter $\mu^*$ from observations $x$ at the population level. Without loss of generality, and for simplicity of the presentation, we set $b=1$, define $p_{\mu}(x) \triangleq P(x;\mu,1)$ and $L(x;\mu) \triangleq L(x;\mu,1)$. Thus, the $t$-th step of the EM algorithm for estimating the ground truth parameter $\mu^*$ is: $$\begin{aligned}\label{EMLaplacianIterate}
\lambda^{t+1} = \frac{E_{x \sim p_{\mu^*}} \left[x\frac{0.5 L(x;\lambda^{t})}{p_{\lambda^{t}}(x)}\right]}{E_{x \sim p_{\mu^*}} \left[\frac{0.5 L(x;\lambda^{t})}{p_{\lambda^{t}}(x)}\right]},
\end{aligned}$$ where $\lambda^{t}$ is the estimation of $\mu^*$ in $t$-th iteration; see [@daskalakis2016ten; @xu2016global; @jin2016local] for the similar Gaussian case. In the rest of the paper, without loss of generality, we assume that $ \lambda^0,\mu^*> 0$. Further, to simplify our analysis, we define the mapping $$M(\lambda, \mu) \triangleq \frac{E_{x \sim p_{\mu}} \left[x\frac{0.5 L(x;\lambda)}{p_{\lambda}(x)}\right]}{E_{x \sim p_{\mu}} \left[\frac{0.5 L(x;\lambda)}{p_{\lambda}(x)}\right]}.$$ It is easy to verify that $M(\mu^*,\mu^*) = \mu^*$, $M(-\mu^*,-\mu^*) = -\mu^*$, $M(0,0) = 0$, and $\lambda^{t+1} = M(\lambda^t,\mu^*)$. In other words, $\lambda \in \{ \mu^*,-\mu^*, 0\}$ are the fixed points of the EM algorithm. Using symmetry, we can simplify $M(\cdot, \cdot)$ as $$\begin{aligned}
M(\lambda, \mu)
& = E_{x \sim L(x;\mu)}\left[ x\frac{ L(x;\lambda ) - L(x;-\lambda )}{ L(x;\lambda ) + L(x;-\lambda )} \right]
\end{aligned}$$ Let us first establish a few lemmas on the behavior of the mapping $M(\cdot,\cdot)$.
\[lemma220\] The derivative of the mapping $M(\cdot)$ with respect to $\lambda$ is positive, i.e., $0 < \frac{\partial }{\partial \lambda} M(\lambda,\mu)$.
First notice that $\frac{\partial}{\partial \lambda} M(\lambda,\mu)$ is equal to $$E_{x \sim L(x;\mu)} \left[2 x \frac{\left( \textrm{sign}(x - \lambda) + \textrm{sign}(x + \lambda) \right) e^{-|x-\lambda| - |x+\lambda|} }{\left( e^{-|x-\lambda|} + e^{-|x+\lambda|} \right)^{2}} \right].$$ We prove the lemma for the following two different cases separately:
Case 1) $\mu < \lambda$: $$\nonumber
\begin{aligned}
& \frac{\partial M}{\partial \lambda}
= \frac{2}{(e^{\lambda}+e^{-\lambda})^{2}} \Big[-e^{-\mu}\int_{-\infty}^{-\lambda} xe^{x} {\mathop{}\!\mathrm{d}}x +
e^{\mu} \int_{\lambda}^{\infty} xe^{-x} {\mathop{}\!\mathrm{d}}x \Big]\\
=& 2 \frac{(\lambda + 1) e^{-\lambda} (e^{-\mu} + e^{\mu}) }{(e^{-\lambda} + e^{\lambda})^{2}} = \frac{(\lambda + 1) e^{-\lambda} (\cosh(\mu)) }{\cosh(\lambda)^{2}} > 0. \\
\end{aligned}$$
Case 2) $\mu > \lambda$ $$\nonumber
\begin{aligned}
& \frac{\partial M}{\partial \lambda}
= \frac{2e^{-\mu}\Big[-\int_{-\infty}^{-\lambda} xe^{x} {\mathop{}\!\mathrm{d}}x + \int_{\lambda}^{\mu} xe^{x} {\mathop{}\!\mathrm{d}}x + e^{2\mu} \int_{\mu}^{\infty} xe^{-x} {\mathop{}\!\mathrm{d}}x \Big]}{(e^{\lambda}+e^{-\lambda})^{2}} \\
= & \frac{2}{(e^{\lambda}+e^{-\lambda})^{2}} \Big[e^{-\mu}\Big((\lambda+1)e^{-\lambda}-(\lambda-1)e^{\lambda}\Big) +2\mu\Big] \\
\geq & \frac{2}{(e^{\lambda}+e^{-\lambda})^{2}} \Big[e^{-\mu}\Big((\mu+1)e^{-\mu}-(\mu-1)e^{\mu}\Big) +2\mu\Big] >0.\; {\hfill\ensuremath{\blacksquare}}\end{aligned}$$
\[lemma2\] For $ 0<\lambda < \eta $, we have $$\nonumber
\begin{aligned}
\frac{\partial}{\partial \eta} M(\lambda,\eta) = 1- 2\frac{e^{-\eta}\lambda + e^{-\lambda}}{e^{\lambda}+e^{-\lambda}} > 1 - 2\frac{e^{-\lambda}\lambda + e^{-\lambda}}{e^{\lambda}+e^{-\lambda}} >0
\end{aligned}$$
When $ \eta> \lambda$, it is not hard to show that $
M(\lambda, \eta) = \frac{1}{2} e^{-\eta}\Big\{ \tanh(\lambda)(\lambda + 1)e^{-\lambda} + (\lambda - 1)e^{\lambda} + (\lambda + 1)e^{-\lambda} - (\lambda - 1)e^{\lambda} \tanh(\lambda) \Big\}+ \tanh(\lambda)\eta.$ Hence, $$\nonumber
\begin{aligned}
& \frac{\partial M}{\partial \eta}
= -e^{-\eta}\left\{\frac{\lambda + 1}{e^{\lambda}+e^{-\lambda}}
+\frac{\lambda - 1}{e^{\lambda}+e^{-\lambda}} \right\} + \tanh(\lambda) \\
= \;& \frac{-2e^{-\eta}\lambda + e^{\lambda}-2e^{-\lambda} + e^{-\lambda} }{e^{\lambda}+e^{-\lambda}}
> 1 - 2 \frac{\lambda e^{-\lambda} + e^{-\lambda}}{e^{\lambda} + e^{-\lambda}} > 0,
\end{aligned}$$ where the last two inequalities are due to the facts that $\lambda <\eta$ and $\frac{\lambda e^{-\lambda} + e^{-\lambda}}{e^{\lambda} + e^{-\lambda}} <1/2$. [$\blacksquare$]{}
\[TH1\] Without loss of generality, assume that $\lambda^0,\mu^*>0$. Then the EM iterate defined in \[EMLaplacianIterate\] is a contraction, i.e., $
\bigg|\frac{\lambda^{t+1}-\mu^*}{\lambda^{t}-\mu^* } \bigg|< \kappa < 1, \;\forall t,
$ where $\kappa = \max \ \{\kappa_1,\kappa_{2}\}$, $\kappa_{1} = \frac{(\mu^* + 1) e^{-\mu^*} }{cosh(\mu^*)} $, and $\kappa_{2} = 2 \frac{\lambda^{0} e^{-\lambda^{0}} + e^{-\lambda^{0}}}{e^{\lambda^{0}} + e^{-\lambda^{0}}}$.
Theorem \[TH1\] shows that the EM iterates converge to the ground truth parameter which is the global optimum of the MLE.
First of all, according to the Mean Value Theorem, $\exists \; \xi$ between $\lambda^{t}$ and $\mu^*$ such that:
$$\nonumber
\begin{aligned}
\frac{\lambda^{t+1} - \mu^*}{\lambda^{t} - \mu^*} = \frac{M(\lambda^{t}, \mu^*) - M(\mu^*, \mu^*)}{\lambda^{t}- \mu^*} = \frac{\partial }{\partial \lambda} M(\lambda,\mu^*)\Big|_{\lambda = \xi} > 0, \\
\end{aligned}$$
where the inequality is due to lemma \[lemma220\]. Thus, $\lambda^{t}$ does not change sign during the algorithm. Consider two different regions: $\mu^* > \lambda$ and $\mu^* < \lambda$. When $\mu^* < \lambda$, case 1 in Lemma \[lemma220\] implies that
$$\nonumber
\begin{aligned}
\frac{\partial M}{\partial \lambda} \Big|_{\lambda = \xi} & = \frac{(\xi + 1) e^{-\xi} (\cosh(\mu^*)) }{\cosh(\xi)^{2}}\leq \frac{(\mu^* + 1) e^{-\mu^*} }{\cosh(\mu^*)} = \kappa_{1} < 1.
\end{aligned}$$
The last two inequalities are due to the fact that $\mu^*<\xi<\lambda $, and the fact that $f(\xi) = \frac{(\xi + 1) e^{-\xi} }{cosh(\xi)^{2}}$ is a positive and decreasing function in $\mathbb{R}^{+}$ with $f(0) = 1$. On the other hand, when $\mu^* > \lambda$, the Mean Value Theorem implies that
$$\nonumber
\begin{aligned}
&\frac{\lambda^{t+1} - \mu^* }{\lambda^{t} - \mu^*} = \frac{\lambda^{t+1} - \lambda^{t} }{\lambda^{t} - \mu^*}+1
= \frac{M(\lambda^{t},\mu^*) - M(\lambda^{t},\lambda^{t})}{\lambda^{t} - \mu^*} + 1 \\
= \;&1 - \frac{\partial }{\partial \mu^*} M(\lambda^{t},\mu^*)\Big|_{\mu^* = \eta} \leq 2 \frac{\lambda^{t} e^{-\lambda^{t}} + e^{-\lambda^{t}}}{e^{\lambda^{t}} + e^{-\lambda^{t}}} \\
\leq \;& 2 \frac{\lambda^{0} e^{-\lambda^{0}} + e^{-\lambda^{0}}}{e^{\lambda^{0}} + e^{-\lambda^{0}}} =\kappa_{2} < 1,
\end{aligned}$$
where the last two inequalities are due to lemma \[lemma2\] and the facts that 1) $\lambda^{t}$ does not change sign and 2) the function $f(\lambda) = 2 \frac{\lambda e^{-\lambda} + e^{-\lambda}}{e^{\lambda} + e^{-\lambda}} $ is positive and decreasing in $\mathbb{R}^{+}$ with $f(0) = 1$. Hence, $ \frac{\lambda^{t+1} - \mu^* }{\lambda^{t} - \mu^*} < \kappa_{2} < 1.
$ Combining the above two cases will complete the proof. [$\blacksquare$]{}
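As a numerical sanity check of Theorem \[TH1\], the following small simulation runs the sample-based analogue of the iterate \[EMLaplacianIterate\] on data drawn from $p_{\mu^*}$. It is an illustrative sketch of our own (with arbitrary choices $\mu^* = 1.5$, $b=1$, $\lambda^0 = 0.1$), not part of the analysis above.

```python
import numpy as np

rng = np.random.default_rng(0)
mu_star, n = 1.5, 200_000

# Draw x ~ p_{mu*}: an equally weighted mixture of Laplace(+mu*, 1) and Laplace(-mu*, 1).
signs = rng.choice([-1.0, 1.0], size=n)
x = rng.laplace(loc=signs * mu_star, scale=1.0)

def laplace_pdf(x, mu):
    return 0.5 * np.exp(-np.abs(x - mu))   # L(x; mu) with b = 1

lam = 0.1                                  # positive initialization, as in the theorem
for t in range(50):
    w = laplace_pdf(x, lam) / (laplace_pdf(x, lam) + laplace_pdf(x, -lam))
    lam = np.sum(w * x) / np.sum(w)        # sample version of the EM map M(lam, mu*)
print(lam)                                 # approaches mu* = 1.5 up to finite-sample error
```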
Modified EM for the case of $K\geq 3$ {#Section 4}
=====================================
In [@srebro2007there] it is conjectured that the local optima of the population level MLE problem for any equally weighted GMM are globally optimal. Recently, [@jin2016local] has rejected this conjecture by providing a counterexample with $K = 3$ components. Moreover, they have shown that the local optima could be arbitrarily far from the ground truth parameters and that there is a positive probability for the EM algorithm with random initialization to converge to these spurious local optima. Motivated by [@jin2016local], we numerically study the performance of the EM algorithm in both GMMs and LMMs.
![Naïve EM fails to recover the ground truth parameter.[]{data-label="fig:Naive"}](Fig4.png){width=".8\linewidth" height=".6\linewidth"}
**Numerical Experiment 1**: Figure \[fig:Naive\] presents the convergence plots of the EM algorithm with four different initializations. Two of these initializations converge to the global optima, while the other two fails to recover the ground truth parameters and they are trapped in spurious local optima. To understand the performance of the EM algorithm with random initialization, we ran the EM algorithm for different number of components $K$ and dimensions $d$. First we generate the $d$-dimensional mean vectors ${\boldsymbol{\mu}}_k\sim N(\textbf{0}, 5 \mathbf{I})$, $k=1, \ldots,K$. These vectors are the mean values of different Gaussian components. For each Gaussian component, the variance is set to $1$. Thus the vectors ${\boldsymbol{\mu}}_1,{\boldsymbol{\mu}}_2,\ldots, {\boldsymbol{\mu}}_K$ will completely characterize the distribution of the GMM. Then, $30,000$ samples are randomly drawn from the generated GMM and the EM algorithm is run with 1000 different initializations, each for $3000$ iterations. The table in Figure \[fig:Naive\] shows the percentage of the times that the EM algorithm converges to the ground truth parameters (global optimal point) for different values of $K$ and $d$. As can be seen in this table, EM fails dramatically especially for larger values of $K$. By examining the spurious local optima in the previous numerical experiment, we have noticed that many of these local optima fail to satisfy the first moment condition. More specifically, we know that any global optimum of the MLE problem should recover the ground truth parameter – up to permutations [@teicher1963identifiability; @allman2009identifiability]. Hence, any global optimum $\hat{{\boldsymbol{\mu}}} = (\hat{{\boldsymbol{\mu}}}_1,\ldots,\hat{{\boldsymbol{\mu}}}_K)$ has to satisfy the first moment condition $
\mathbb{E}(\textbf{x}) = \sum_{k = 1}^{K}\frac{1}{K}\hat{{\boldsymbol{\mu}}}_{k}.
$ Without loss of generality and by shifting all data points, we can assume that $\mathbb{E}(\textbf{x}) = 0$. Thus, $\hat{{\boldsymbol{\mu}}}$ must satisfy the condition $$\label{EqMC}
\sum_{k = 1}^{K}\hat{{\boldsymbol{\mu}}}_{k} = 0.$$ However, according to our numerical experiments, many spurious local optima fail to satisfy \[EqMC\]. To enforce condition \[EqMC\], one can regularize the MLE cost function with the first-order moment condition and solve $$\label{eq:ModifiedGMMK3}
\begin{aligned}
\max_{\boldsymbol{\mu}} \;
\mathbb{E}_{{\boldsymbol{\mu}}^*}\left[ \log
\left( \sum_{k = 1} ^{K} \frac{1}{K} f(\textbf{x};\boldsymbol{\mu}_{k}) \right)\right] - \frac{M}{2} {\left\lVert\sum_{k = 1}^{K} \boldsymbol{\mu}_{k}\right\rVert}_{2}^2,
\end{aligned}$$ where $M >0$ is the regularization coefficient. To solve \[eq:ModifiedGMMK3\], we propose the following iterative algorithm: $$\label{eq:ModifiedEMAlgo}
\begin{aligned}
&\boldsymbol{{\mu}}_k^{t+1} = \frac{\mathbb{E}_{{\boldsymbol{\mu}}^*} [\textbf{x} w^t_k(\textbf{x})] + M K {\boldsymbol{\mu}}_{k}^{t}- M\sum_{j = 1}^{K} \boldsymbol{\mu}_j^t}{ M K+ \mathbb{E}_{\boldsymbol{\mu^*}}[w_k^t(\textbf{x})]},\;\forall k,
\end{aligned}$$ where $w_k^t(\textbf{x}) \triangleq \frac{ f(\textbf{x}; \boldsymbol{\mu}_{k}^{t})}{\sum_{j=1}^K f(\textbf{x}; \boldsymbol{\mu}_{j}^{t}) }, \forall k = 1,\ldots,K$. This algorithm is based on the successive upper-bound minimization framework [@razaviyayn2014parallel; @razaviyayn2013unified; @hong2016unified]. Notice that if we set $M=0$ in \[eq:ModifiedEMAlgo\], we obtain the naïve EM algorithm. The following theorem establishes the convergence of the iterates in \[eq:ModifiedEMAlgo\].
Any limit point of the iterates generated by \[eq:ModifiedEMAlgo\] is a stationary point of \[eq:ModifiedGMMK3\].
**Proof Sketch** Let $g({\boldsymbol{\mu}})$ be the objective function of . Using Cauchy-Schwarz and Jensen’s inequality, one can show that $g({\boldsymbol{\mu}}) \geq \widehat{g}({\boldsymbol{\mu}}, {\boldsymbol{\mu}}^t) \triangleq\mathbb{E}_{{\boldsymbol{\mu}}^*}\left[ \sum_k\left(w_{k}^t(\textbf{x}) \log
\left(\frac{f(\textbf{x};\boldsymbol{\mu}_{k})}{ f({\mathbf{x}};{\boldsymbol{\mu}}_k^t)}\right) \right)\right] - \frac{M}{2} K \sum_k{\left\lVert \boldsymbol{\mu}_{k} - \boldsymbol{\mu}_{k}^{t} \right\rVert}_{2}^2 - M \langle \sum_k({\boldsymbol{\mu}}_k - {\boldsymbol{\mu}}_k^t),\sum_k{\boldsymbol{\mu}}_{k}^{t}\rangle + g({\boldsymbol{\mu}}^t) $. Moreover, $\widehat{g}({\boldsymbol{\mu}}^t,{\boldsymbol{\mu}}^t) = g({\boldsymbol{\mu}}^t)$. Thus the assumptions of [@razaviyayn2013unified Proposition 1] are satisfied. Furthermore, notice that the iterate is obtained based on the update rule ${\boldsymbol{\mu}}^{t+1} = \arg\min_{{\boldsymbol{\mu}}} \widehat{g}({\boldsymbol{\mu}}, {\boldsymbol{\mu}}^t)$. Therefore, [@razaviyayn2013unified Theorem 1] implies that every limit point of the algorithm is a stationary point. [$\blacksquare$]{}\
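For concreteness, the following sketch implements one update of \[eq:ModifiedEMAlgo\] on a finite sample, with the population expectations replaced by sample averages and with the unit-variance Gaussian components used in our experiments. The function name and structure are our own; setting $M=0$ recovers the naïve EM step.

```python
import numpy as np

def em_step_regularized(X, mus, M):
    """One sample-based update of the rule in [eq:ModifiedEMAlgo] for an
    equally weighted, unit-variance Gaussian mixture.  X is (n, d), mus is
    (K, d).  Setting M = 0 recovers the naive EM step."""
    K = mus.shape[0]
    # E-step: responsibilities w_k(x_i) for isotropic unit-variance Gaussians.
    sq = ((X[:, None, :] - mus[None, :, :]) ** 2).sum(axis=2)   # (n, K)
    logf = -0.5 * sq
    logf -= logf.max(axis=1, keepdims=True)                     # numerical stability
    w = np.exp(logf)
    w /= w.sum(axis=1, keepdims=True)
    # M-step: sample averages replace the population expectations.
    Exw = (w[:, :, None] * X[:, None, :]).mean(axis=0)          # (K, d) ~ E[x w_k(x)]
    Ew = w.mean(axis=0)                                         # (K,)   ~ E[w_k(x)]
    return (Exw + M * K * mus - M * mus.sum(axis=0)) / (M * K + Ew)[:, None]
```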
**Numerical Experiment 2**: To evaluate the performance of the algorithm defined in , we repeat Numerical Experiment 1 with the proposed iterative method . Figure \[fig:NaiveVSModified\] shows the performance of the proposed iterative method . As can be seen from the table, the regularized method still fails to recover the ground truth parameters in many scenarios. More specifically, although the regularization term enforces , it changes the likelihood landscape and hence introduces new spurious local optima.\
![Modified EM based on regularized MLE .[]{data-label="fig:NaiveVSModified"}](Fig5.png){width=".8\linewidth" height=".25\linewidth"}
In our numerical experiment 2, we observed that many of the spurious local optima are tied to a fixed value of $M$. In other words, after getting stuck in a spurious local optimum point, changing the value of $M$ helps us escape from that local optimum. Notice that the global optimal parameter ${\boldsymbol{\mu}}^*$ is the solution of for any value of $M$. Motivated by this observation, we consider the following objective function:
$$\label{EQS}
\begin{aligned}
\max_{\boldsymbol{\mu}} \; \mathbb{E}_{\lambda \sim \Lambda} \Bigg[\mathbb{E}_{{\boldsymbol{\mu}}^*}\left[ \log
\left( \sum_{k = 1} ^{K} \frac{1}{K} f(\textbf{x};\boldsymbol{\mu}_{k}) \right)\right]
-\frac{\lambda}{2} {\left\lVert\sum_{k = 1}^{K}\boldsymbol{\mu}_{k}\right\rVert}_{2}^2 \Bigg],
\end{aligned}$$ where $\Lambda$ is some continuous distribution defined over $\lambda$. The idea behind this objective is that each sampled value of $\lambda$ leads to a different set of spurious local optima. However, if a point $\widehat{{\boldsymbol{\mu}}}$ is a fixed point of the EM algorithm for every value of $\lambda$, it must be a stationary point of the MLE function and it should also satisfy the first moment condition . Based on this objective function, we propose Algorithm \[alg:StochasticBSUMAlg\] for estimating the ground truth parameter.
**Input:** [Number of iterations: $N_{Itr}$, distribution $\Lambda$, Initial estimate: $\boldsymbol{\mu^{0}}$ ]{}\
**Output:** [$\boldsymbol{\hat{\mu}}$]{}\
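Since the step-by-step body of Algorithm \[alg:StochasticBSUMAlg\] is not spelled out here, the following code is only our reading of the procedure described above: at every iteration a fresh weight $\lambda$ is drawn from $\Lambda$ and a single regularized update step is performed with $M=\lambda$. It reuses `regularized_em_step` from the earlier sketch; the choice of $\Lambda$ and all names are ours.

```python
import numpy as np

def stochastic_multiobjective_em(X, mu0, n_iter, sample_lambda, rng):
    """Sketch of the stochastic multi-objective EM: at iteration t a weight
    lambda_t ~ Lambda is drawn and one regularized EM step with M = lambda_t
    is applied, so no single regularized landscape is optimized throughout."""
    mu = mu0.copy()
    for _ in range(n_iter):
        lam = sample_lambda(rng)   # e.g. lambda r: r.uniform(0.0, 1.0)
        mu = regularized_em_step(X, mu, lam)
    return mu

# Hypothetical usage:
# rng = np.random.default_rng(0)
# mu_hat = stochastic_multiobjective_em(X, mu_init, 3000,
#                                       lambda r: r.uniform(0.0, 1.0), rng)
```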
**Numerical Experiment 3**: To evaluate the performance of Algorithm 1, we repeat the data generating procedure in Numerical Experiment 1. Then we run Algorithm 1 on the generated data. Figure \[fig:Multi\] shows the performance of this algorithm. As can be seen from this figure, the proposed method significantly improves the percentage of times that a random initialization converges to the ground truth parameter. For example, the proposed method converges to the global optimal parameter $70\%$ of the times for $K=9, d = 3$, while the naïve EM converges for $19\%$ of the initializations (comparing Fig. \[fig:Naive\] and Fig. \[fig:Multi\]).\
**Remark:** While the results in this section are only presented for the GMM model, we have observed similar results for the LMM model. These results are omitted due to lack of space.
![Performance of Stochastic multi-objective EM. []{data-label="fig:Multi"}](Fig6.png){width=".8\linewidth" height=".25\linewidth"}
Conclusion
==========
In this paper, first the convergence behavior of the EM algorithm for the equally weighted Laplacian mixture model with two components is studied. It is shown that the EM algorithm with random initialization converges to the ground truth distribution with probability one. Moreover, the landscape of equally weighted mixture models with more than two components is revisited. Based on our numerical experiments, we proposed a modified EM approach which significantly improves the probability of recovering the ground truth parameters.
|
---
abstract: 'We construct entire functions with hyperbolic and simply parabolic Baker domains on which the functions are not univalent. The Riemann maps from the unit disk to these Baker domains extend continuously to certain arcs on the unit circle. The results answer questions posed by Fagella and Henriksen, Baker and Domínguez, and others.'
address:
- 'Mathematisches Seminar, Christian–Albrechts–Universität zu Kiel, Ludewig–Meyn–Str. 4, D–24098 Kiel, Germany'
- 'Department of Mathematical Sciences, Tsinghua University, P. R. China'
author:
- Walter Bergweiler
- 'Jian-Hua Zheng'
title: Some examples of Baker domains
---
Introduction and main results {#intr}
=============================
The *Fatou set* $\FF(f)$ of an entire function $f$ is the subset of the complex plane $\C$ where the iterates $f^n$ of $f$ form a normal family. Its complement $\JJ(f)=\C\setminus\FF(f)$ is the *Julia set*. The connected components of $\FF(f)$ are called *Fatou components*. As in the case of rational functions there exists a classification of periodic Fatou components (cf. [@Bergweiler1993 Section 4.2]), the new feature for transcendental functions being Baker domains. By definition, a periodic Fatou component $U$ is called a *Baker domain* if $f^n|_U\to\infty$ as $n\to\infty$. The first example of a Baker domain was already given by Fatou [@Fatou1926 Exemple I] who proved that $f(z)=z+1+e^{-z}$ has a Baker domain $U$ containing the right halfplane. Since then many further examples have been given; see [@Rippon2008] for a survey.
By a result of Baker [@Baker1975], the domains today named after him are simply connected. A result of Cowen [@Cowen1981] then leads to a classification of Baker domains, which has turned out to be very useful. We introduce this classification following Fagella and Henriksen [@FagellaHenriksen], but note that there are a number of equivalent ways to state it; see section \[classification\] for a detailed discussion. For simplicity, and without loss of generality, we consider only the case of an invariant Baker domain; that is, we assume that $f(U)\subset U$. We define an equivalence relation on $U$ by saying that $u,v\in U$ are equivalent if there exist $m,n\in\N$ such that $f^m (u)=f^n(v)$ and we denote by $U/f$ the set of equivalence classes. The result of Fagella and Henriksen is the following [@FagellaHenriksen Proposition 1].
Let $U$ be an invariant Baker domain of the entire function $f$. Then $U/f$ is a Riemann surface conformally equivalent to exactly one of the following cylinders:
- $\{z\in\C: -s<\im z< s\}/\Z$, for some $s>0$;
- $\{z\in\C: \im z> 0\}/\Z$;
- $\C/\Z$.
We call $U$ *hyperbolic* in case (a), *simply parabolic* in case (b) and *doubly parabolic* in case (c). More generally, the above classification holds when $U$ is a simply connected domain, $U\neq\C$, and $f:U\to U$ is holomorphic without fixed point in $U$.
While we defer a detailed discussion of this classification to section \[classification\], we indicate where the names for the different types of Baker domains come from. Since $U$ is simply connected, there exists a conformal map $\phi:\D\to U$, where $\D$ is the unit disk. Then $g=\phi^{-1}\circ f\circ \phi$ maps $\D$ to $\D$. (In fact, it can be shown that $g$ is an inner function.) By the Denjoy-Wolff-Theorem, there exists $\xi\in\overline{\D}$ such that $g^n\to\xi$ as $n\to\infty$. As $f$ has no fixed point in $U$ and thus $g$ has no fixed point in $\D$, we actually find that $\xi\in\partial \D$. Now suppose that $g$ extends analytically to a neighborhood of $\xi$. Then $g(\xi)=\xi$ and $U$ is hyperbolic if $\xi$ is an attracting fixed point, $U$ is simply parabolic if $\xi$ is a parabolic point with one petal and $U$ is doubly parabolic if $\xi$ is a parabolic point with two petals.
We note that the Baker domain $U$ of Fatou’s example $f(z)=z+1+e^{-z}$ mentioned above is doubly parabolic. For example, this follows directly from Lemma \[lemma2.2\] below. It is known (see the remark after Lemma \[lemma2.3\] below) that if $U$ is a doubly parabolic Baker domain, then $f|_U$ is not univalent. Equivalently, $U$ contains a singularity of the inverse function of $f:U\to U$. (For Fatou’s example it can be checked directly that $f:U\to U$ is not univalent, as $U$ contains the critical points $2\pi i k$, with $k\in\Z$.) On the other hand, $f|_U$ may be univalent in a hyperbolic or simply parabolic Baker domain. The first examples of Baker domains where the function is univalent were given by Herman [@Herman1985 p. 609] and Eremenko and Lyubich [@EremenkoLyubich1987 Example 3]. We mention some further examples of Baker domains:
- simply parabolic Baker domains in which the function is univalent ([@BakerWeinreich Theorem 3], [@BaranskiFagella Section 5.3] and [@FagellaHenriksen Section 4, Example 2]);
- hyperbolic Baker domains in which the function is univalent ([@BaranskiFagella Sections 5.1 and 5.2], [@Bergweiler1995 Theorem 1] and [@FagellaHenriksen Section 4, Example 1]);
- hyperbolic Baker domains in which the function is not univalent ([@Rippon Theorem 3] and [@RipponStallard Theorems 2 and 3]).
We note that there are no examples of simply parabolic Baker domains in which the function is not univalent. Therefore it was asked in [@FagellaHenriksen Section 4] and [@Zheng1 p. 203] whether such domains actually exist. Our first result says that this is in fact the case.
\[thm1\] There exists an entire function $f$ with a simply parabolic Baker domain in which $f$ is not univalent.
It turns out that the function constructed also provides an answer to a question about the boundary of Baker domains which arises from the work of Baker and Weinreich [@BakerWeinreich], Baker and Domínguez [@BakerDominguez], Bargmann [@Bargmann] and Kisaka [@Kisaka1997; @Kisaka1998].
For a conformal map $\phi:\D\to U$ let $\Xi$ be the set of $\xi\in\partial \D$ such that $\infty$ is contained in the cluster set of $\phi$ at $\xi$ and let $\Theta$ be the set of $\xi\in\partial \D$ such that $\lim_{r\to 1} \phi(r\xi)=\infty$. Clearly, $\overline{\Theta}\subset \Xi$. The sets $\Theta$ and $\Xi$ depend on the choice of the conformal map $\phi$, but we will only be concerned with the question whether $\Xi$ is equal to $\partial \D$ or $\Theta$ is dense in $\partial \D$, and these statements are independent of $\phi$.
Devaney and Goldberg [@DevaneyGoldberg] showed that if $\lambda\in\C\setminus\{0\}$ is such that $f(z)=\lambda e^z$ has an attracting fixed point and $U$ denotes its attracting basin, then $\Theta$ is dense in $\partial \D$. Baker and Weinreich [@BakerWeinreich Theorem 1] proved that if $f$ is an arbitrary transcendental entire function and $U$ is an unbounded invariant Fatou component of $f$ which is not a Baker domain and which thus – by the classification of periodic Fatou components – is an attracting or parabolic basin or a Siegel disk, then $\Xi=\partial \D$. Baker and Domínguez [@BakerDominguez Theorem 1.1] showed, under the same hypothesis, that if $\Theta\neq \emptyset$, then $\overline{\Theta}=\partial \D$. Under additional hypotheses this had been proved before by Kisaka [@Kisaka1997; @Kisaka1998].
As shown by Baker and Weinreich [@BakerWeinreich Theorem 3], the above results need not hold for Baker domains: they gave an example of a Baker domain bounded by a Jordan curve on the sphere. Clearly, for this example the sets $\Xi$ and $\Theta$ consist of only one point. On the other hand, Baker and Weinreich [@BakerWeinreich Theorem 4] showed that if a Baker domain $U$ is bounded by a Jordan curve on the sphere, then $f$ is univalent in $U$.
If $U$ is a Baker domain, then $\infty$ is accessible in $U$ and thus we always have $\Theta\neq \emptyset$ for Baker domains. Baker and Domínguez [@BakerDominguez Theorem 1.2] showed that if $U$ is a Baker domain where $f$ is not univalent, then $\overline{\Theta}$ contains a perfect subset of $\partial \D$. Again, this had been proved before by Kisaka [@Kisaka1997; @Kisaka1998] under additional hypotheses.
Baker and Domínguez [@BakerDominguez p. 440] asked whether even $\overline{\Theta}=\partial \D$ if $U$ is a Baker domain where $f$ is not univalent. It was shown by Bargmann [@Bargmann Theorem 3.1] that this is in fact the case for doubly parabolic Baker domains. With $\overline{\Theta}$ replaced by $\Xi$ this result appears in [@Kisaka1997]. The question whether these results also hold for hyperbolic and simply parabolic Baker domains was left open in these papers.
It turns out that the Baker domain constructed in Theorem \[thm1\] can be chosen such that $$\label{Xi}
\Xi\neq \partial \D.$$ In particular, this implies that $\overline{\Theta}\neq \partial \D$. A modification of the method also yields an example of a hyperbolic Baker domain with this property. We thus have the following result.
\[thm2\] There exists an entire function $f_1$ with a simply parabolic Baker domain $U_1$ such that $f_1|_{U_1}$ is not univalent and the set $\Xi$ defined above satisfies .
There also exists an entire function $f_2$ with a hyperbolic Baker domain $U_2$ satisfying such that $f_2|_{U_2}$ is not univalent.
Classification of Baker domains {#classification}
===============================
We describe the classification given by Cowen [@Cowen1981] following König [@Koenig1999] and Bargmann [@Bargmann]; see also [@Rippon2008 Section 5] for a discussion of this classification. Let $U$ be a domain and let $f:U\to U$ be holomorphic. We say that a subdomain $V$ of $U$ is *absorbing* for $f$, if $V$ is simply connected, $f(V)\subset V$ and for any compact subset $K$ of $U$ there exists $n=n(K)$ such that $f^{n}(K)\subset V$. (Cowen used the term *fundamental* instead of absorbing.) Let $\H=\{z\in\C:\re z>0\}$ be the right halfplane.
\[def2.1\] Let $f:U\to U$ be holomorphic. Then $(V, \varphi, T, \Omega)$ is called an *eventual conjugacy* of $f$ in $U$, if the following statements hold:
- $V$ is absorbing for $f$;
- $\varphi:U\rightarrow \Omega\in\{\mathbb{H},\mathbb{C}\}$ is holomorphic and $\varphi$ is univalent in $V$;
- $T$ is a Möbius transformation mapping $\Omega$ onto itself and $\varphi(V)$ is absorbing for $T$;
- $\varphi(f(z))=T(\varphi(z))$ for $z\in U$.
König [@Koenig1999] used the term *conformal* conjugacy. With the terminology *eventual* conjugacy we have followed Bargmann [@Bargmann].
If $U$ is the basin of an attracting (but not superattracting) fixed point $\xi$, then an eventual conjugacy is given by the solution of the Schröder-Kœnigs functional equation $\varphi(f(z))=\lambda \varphi(z)$, with $\lambda=f'(\xi)\neq 0$. Similarly, eventual conjugacies in parabolic domains are given by the solutions of Abel’s functional equation $\varphi(f(z))=\varphi(z)+1$.
In what follows, we assume that $f$ has no fixed points in $U$. Clearly, this is the case for Baker domains $U$. The result of Cowen can now be stated as follows.
\[lemma2.1\] Let $U\neq\C$ be a simply connected domain and $f:U\to U$ a holomorphic function without fixed point in $U$. Then $f$ has an eventual conjugacy $(V, \varphi, T, \Omega)$. Moreover, $T$ and $\Omega$ may be chosen as exactly one of the following possibilities:
- $\Omega=\H$ and $T(z)=\lambda z$, where $\lambda >1$;
- $\Omega=\H$ and $T(z)=z+i$ or $T(z)=z-i$;
- $\Omega=\C$ and $T(z)=z+1$.
It turns out (cf. [@FagellaHenriksen]) that the cases listed in Lemma \[lemma2.1\] correspond precisely to the cases of Theorem A. Thus $U$ is hyperbolic in case (a), simply parabolic in case (b) and doubly parabolic in case (c).
König [@Koenig1999 Theorem 3] has given the following useful characterization of the different cases.
\[lemma2.2\] Let $U\neq\C$ be an unbounded simply connected domain and $f:U\to U$ a holomorphic function such that $f^n |_U\to \infty$ as $n\to\infty$. For $w_0\in U$ put $$w_n=f^n(w_0)
\quad\text{and}\quad
d_n=\dist(w_n,\partial U).$$ Then
- $U$ is hyperbolic if there exists $\beta>0$ such that $|w_{n+1}-w_n|/d_n\geq \beta$ for all $w_0\in U$ and all $n\geq 0$;
- $U$ is simply parabolic if $ \liminf_{n\to\infty}|w_{n+1}-w_n|/d_n>0$ for all $w_0\in U$, but $$\inf_{w_0\in U} \limsup_{n\to\infty}\frac{|w_{n+1}-w_n|}{d_n}=0 ;$$
- $U$ is doubly parabolic if $ \lim_{n\to\infty}|w_{n+1}-w_n|/d_n=0$ for all $w_0\in U$.
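To illustrate how the criterion is used (the following computation is ours and only considers orbits starting in the right half-plane, which is contained in $U$): for Fatou’s example $f(z)=z+1+e^{-z}$ and $w_0$ with $\re w_0>0$ one has $\re w_n\to\infty$, hence $$|w_{n+1}-w_n|=\left|1+e^{-w_n}\right|\to 1
\quad\text{while}\quad
d_n=\dist(w_n,\partial U)\geq \re w_n\to\infty,$$ so that $|w_{n+1}-w_n|/d_n\to 0$ along such orbits, in accordance with case (c) and with the remark in the introduction that this Baker domain is doubly parabolic.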
Denote by $\rho_U(\cdot,\cdot)$ the hyperbolic metric in a hyperbolic domain $U$, using the normalization where the density $\lambda_\D$ in the unit disk is given by $\lambda_\D(z)=2/(1-|z|^2)$. Considering the hyperbolic metric instead of the Euclidean metric in Lemma \[lemma2.2\] leads to the following result; see [@Bargmann Lemma 2.6] or [@Zheng1 Theorem 2.2.11].
\[lemma2.3\] Let $U\neq\C$ be a simply connected domain and $f:U\to U$ a holomorphic function without fixed point in $U$. For $z\in U$ put $$\ell(z)=\lim_{n\to\infty}\rho_U(f^{n+1}(z),f^n(z)).$$ Then
- $U$ is hyperbolic if $\inf_{z\in U}\ell(z)>0$;
- $U$ is simply parabolic if $\ell(z)>0$ for all $z\in U$, but $\inf_{z\in U}\ell(z)=0$;
- $U$ is doubly parabolic if $\ell(z)=0$ for all $z\in U$.
Note that the sequence $\left( \rho_U(f^{n+1}(z),f^n(z))\right)_{n\in\N}$ is non-increasing by the Schwarz-Pick Lemma (Lemma \[lemmaSP\] below). Thus the limit defining $\ell(z)$ exists.
Let now $U$ be an invariant Baker domain of the entire function $f$. It is not difficult to see that if $f|_U$ is univalent, then $f(U)=U$. Also, the Schwarz-Pick Lemma says that if $f|_U$ is univalent, then $$\rho_U(f^{n+1}(z),f^n(z))=\rho_U(f(z),z)$$ for all $n\in\N$ and $z\in U$. It now follows from Lemma \[lemma2.3\], as already mentioned in the introduction, that $U$ cannot be doubly parabolic if $f|_U$ is univalent.
\[lemma2.5\] Let $f: U\to U$ be as in Lemma and let $U_0$ be an absorbing domain for $f$. Then $f:U_0\to U_0$ and $f: U\to U$ are of the same type according to the classification given in Theorem A or Lemma .
Let $(V,\varphi,T,\Omega)$ be an eventual conjugacy of $f$ in $U$. Since, by the definition of an absorbing domain, $V$ and $U_0$ are simply connected, the components of $V\cap U_0$ are also simply connected. It was shown by Cowen [@Cowen1981 p. 79-80] (see also [@Bargmann Lemma 2.3]) that there exists a component $W$ of $V\cap U_0$ which is absorbing for $f$ in $U$. Moreover, $\varphi(W)$ is absorbing for $T$ in $\Omega$. Thus $(W,\varphi|_W,T,\Omega)$ is an eventual conjugacy of both $f:U\to U$ and $f:U_0\to U_0$. The conclusion follows.
We note that a somewhat different approach to classifying holomorphic self-maps of $\H$, based on the sequences $(\rho_\H(f^{n+1}(z),f^n(z)))_{n\in\N}$, was developed by Baker and Pommerenke [@BakerPommerenke; @Pommerenke1979]; see also [@Bonfert]. As shown by König [@Koenig1999 Lemma 3], this leads to the same classification as above.
We mention that Baker domains may also be defined for functions meromorphic in the plane. In general, Baker domains of meromorphic functions are multiply connected. König [@Koenig1999 Theorem 1] has shown that if $U$ is a Baker domain of a meromorphic function with only finitely many poles, then an eventual conjugacy in $U$ exists and the conclusion of Lemma \[lemma2.2\] holds. On the other hand, eventual conjugacies need not exist for Baker domains of meromorphic functions with infinitely many poles [@Koenig1999 Theorem 2]. For further results on Baker domains of meromorphic functions we refer to [@Zheng1]. In particular, we note that if $U$ is a multiply connected Baker domain of a meromorphic function $f$ such that there exists an eventual conjugacy in $U$, then $U$ contains at least two singularities of $f^{-1}$; see [@Zheng1 p. 200]. Thus $f|_U$ is not univalent in a multiply connected Baker domain $U$ with an eventual conjugacy.
More generally, one may also consider functions meromorphic outside a small set. For such functions the classification of Baker domains is discussed in [@Zheng Section 4].
The classification of Baker domains mentioned above appears in various other questions related to Baker domains. Besides the papers already cited we mention [@Bergweiler2001; @BergweilerDrasinLangley; @BuffRueckert; @Lauber].
Preliminary lemmas {#prelims}
==================
The following result, already used in section \[classification\], is known as the Schwarz-Pick Lemma; see, e.g., [@Steinmetz p.12].
\[lemmaSP\] Let $U$ and $V$ be simply connected hyperbolic domains and $f:U\to V$ be holomorphic. Then $\rho_V(f(z_1), f(z_2))\leq
\rho_U(z_1, z_2)$ for all $z_1,z_2\in U$. If there exist $z_1,z_2\in U$ with $z_1\neq z_2$ such that $\rho_V(f(z_1), f(z_2))=\rho_U(z_1, z_2)$, then $f$ is bijective.
If $U\subset V$, then we may apply the Schwarz-Pick Lemma to $f(z)=z$ and obtain $\rho_V(z_1, z_2)\leq
\rho_U(z_1, z_2)$ for all $z_1,z_2\in U$.
The following result is the analogue of the Schwarz-Pick Lemma for quasiconformal mappings [@LehtoVir Section II.3.3]. Note that a different normalization of the hyperbolic metric is used in [@LehtoVir].
\[lem1.1\] Let $U$ and $V$ be simply connected hyperbolic domains and $f:U\to V$ be a $K$-quasiconformal mapping. Then $$\label{hp}
\rho_V(f(z_1), f(z_2))\leq
M_K(\rho_U(z_1, z_2))$$ for $z_1,z_2\in U$, with $$\label{hp2}
M_K(x)=2\operatorname{arctanh}\left(\varphi_K
\left( \tanh\tfrac12 x\right)\right).$$ Here $\varphi_K$ is the Hersch-Pfluger distortion function.
The function $\arctanh\circ\; \varphi_K\circ \tanh$ appearing on the right hand side of has been studied in detail in a number of papers. For example, it was shown in [@QVV Theorem 1.6] that this function is strictly increasing and concave. Various estimates of this function in terms of elementary functions are known. We only mention [@Vuorinen88 Theorem 11.2] where it was shown that the conclusion of Lemma \[lem1.1\] holds with replaced by $\rho_V(f(z_1), f(z_2))\leq K\left(\rho_U(z_1, z_2)+\log 4\right)$. We do not need any explicit estimate for $M_K$, but the fact that holds for some non-decreasing function $M_K:[0,\infty)\to [0,\infty)$ suffices.
The following result [@Shishikura1987 Lemma 1] is the fundamental lemma for quasiconformal surgery. Here $\overline{\C}=\C\cup\{\infty\}$ denotes the Riemann sphere.
\[lem1.3\] Let $g:\overline{\C}\to\overline{\C}$ be a quasiregular mapping. Suppose that there are disjoint open subsets $E_1,\dots,E_m$ of $\overline{\C}$, quasiconformal mappings $\Phi_i:E_i\to E_i'\subset \overline{\C}$, for $i=1,\dots,m$, and an integer $N\geq 0$ satisfying the following conditions:
- $g(E)\subset E$ where $E=E_1\cup\dots\cup E_m$;
- $\Phi\circ g\circ \Phi_i^{-1}$ is analytic in $E_i'=\Phi_i(E_i)$, where $\Phi:E\to\overline{\C}$ is defined by $\Phi|_{E_i}=\Phi_i$;
- $g_{\overline{z}}=0$ a.e. on $\overline{\C}\setminus g^{-N}(E)$.
Then there exists a quasiconformal mapping $\psi:\overline{\C}\to\overline{\C}$ such that $\psi\circ g\circ \psi^{-1}$ is a rational function. Moreover, $\psi\circ \Phi_i^{-1}$ is conformal in $E_i'$ and $\psi_{\overline{z}}=0$ a.e. on $\overline{\C}\setminus \bigcup_{n\geq 0} g^{-n}(E)$.
Shishikura stated the result in [@Shishikura1987] for rational functions, but it holds for entire functions as well. A stronger result, stated for entire functions, can be found in [@KisakaShi Theorem 3.1].
Proof of Theorem \[thm1\] {#proof1}
=========================
Put $g(z)=e^{2\pi i\alpha}ze^z$, where $\alpha\in[0,1]\setminus\Q$ is chosen such that $g$ has a Siegel disk $S$ at $0$, and put $$h(z)=2\pi i(\alpha +m)+z+e^z$$ where $ m\in\mathbb{Z}$. Using $g(e^z)=\exp h(z)$ it can be shown that $h$ has a Baker domain $V=\log S$ on which $h$ is univalent. This example, with $m=0$, is due to Herman [@Herman1985 p. 609]; see also [@BakerWeinreich; @Bergweiler1995; @Bergweiler1995a]. We shall assume, however, that $m\geq 3$. It is not difficult to see that $V$ is simply parabolic.
There exists $r_0\in (0,1)$ and a $g$-invariant domain $S_0\subset S$ such that $$D(0,r_0)\subset S_0\subset D(0,1).$$ Here and in the following $D(a,r)$ denotes the open disk of radius $r$ around a point $a$. With $x_0=\log r_0$ we thus see that $V$ contains $H_0=\{z\in\mathbb{C}:\re z<x_0\}$. Moreover, if $z\in H_0$, then $$\label{Imhn}
\im h^n(z)\geq 2\pi(\alpha+m)+\im h^{n-1}(z)-1> \im z+2n\pi
\quad\text{and}\quad
\re h^n (z)<0$$ for $n\in\N$.
For $x_1< x_0-\pi$ we define $$S(x_1)=\{z\in\mathbb{C}: \re z<x_1, \;|\im z|< 2\pi\}\cup D(x_1,2\pi)$$ and $$T(x_1)=\{z\in\mathbb{C}: \re z<x_1, \;|\im z|< \pi\}\cup D(x_1,\pi)$$ We also put $$k(z)=2\pi i(\alpha+m)+z+\exp(e^{-z}-L),$$ for a large constant $L$ to be determined later.
Now we define $F:\mathbb{C}\rightarrow\mathbb{C}$ as follows. For $z\in\mathbb{C}\setminus S(x_1)$ we put $F(z)=h(z)$, for $z\in
\overline{T(x_1)}$ we put $F(z)=k(z)$ and for $z\in S(x_1)\setminus \overline{T(x_1)}$ we define $F(z)$ by interpolation. Thus for $x\leq x_1$ and $\pi\leq y\leq 2\pi$ we put $$F(x+iy)=\frac{y-\pi}{\pi}h(x+2\pi i)+\frac{2\pi-y}{\pi}k(x+\pi i),$$ for $x\leq x_1$ and $-2\pi\leq y\leq -\pi$ we put $$F(x+iy)=\frac{-y-\pi}{\pi}h(x-2\pi i)+\frac{2\pi+y}{\pi}k(x-\pi i),$$ and for $\pi\leq r\leq 2\pi$ and $-\pi/2\leq \varphi\leq \pi/2$ we put $$F(x_1+re^{i\varphi})= \frac{r-\pi}{\pi}h(x_1+2\pi e^{i\varphi})
+\frac{2\pi-r}{\pi}k(x_1+\pi e^{i\varphi}).$$ We claim that $F$ is quasiregular. By definition, $F$ is holomorphic in $T(x_1)$ and in $\C\setminus\overline{S(x_1)}$. So it suffices to consider the dilatation in $S(x_1)\setminus \overline{T(x_1)}$. We first consider the subregion $A
=\{x+iy: x\leq x_1, \;\pi<y<2\pi\}$ of $S(x_1)\setminus \overline{T(x_1)}$. For $z=x+iy\in A$ we have $$\label{FinOmega1}
F(z)=2\pi i(\alpha+m)+z + P(z)$$ with $$\label{RinOmega1}
P(x+iy)=\frac{y-\pi}{\pi}e^x+\frac{2\pi-y}{\pi}\exp(-e^{-x}-L).$$ It is easy to see that if $|x_1|$ is large enough, then $|P_x(z)|\leq 1/4$ and $|P_y(z)|\leq 1/4$ for $z\in A$. Thus $|P_z(z)|\leq 1/4$ and $|P_{\overline{z}}(z)|\leq 1/4$ and hence $|F_z(z)|\geq 3/4$ and $|F_{\overline{z}}(z)|\leq 1/4$ for $z\in A$. It follows that $F$ is quasiregular in $ A$. The argument for the domain $\overline{A}=\{x+iy: x\leq x_1, \; -2\pi<y<-\pi\}$ is analogous.
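As a quick consistency check (this computation is ours and not part of the original argument), the affine parts of $h$ and $k$ recombine to $2\pi i(\alpha+m)+z$ in the interpolation: for $z=x+iy\in A$ we have $$\frac{y-\pi}{\pi}\,(x+2\pi i)+\frac{2\pi-y}{\pi}\,(x+\pi i)=x+i\bigl(2(y-\pi)+(2\pi-y)\bigr)=x+iy=z,$$ so that only the exponential terms remain, which is exactly the representation displayed above with remainder $P$.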
Now we consider the region $B=
\{x_1+re^{i\varphi}: \pi\leq r\leq 2\pi,\; -\pi/2\leq \varphi\leq \pi/2\}$. For $z=x_1+re^{i\varphi}\in B$ we have $$\label{FinOmega2}
F(z)=2\pi i(\alpha+m)+z + Q(z)$$ with $$\label{RinOmega2}
\begin{aligned}
Q(x_1+re^{i\varphi})
=&\ \frac{r-\pi}{\pi}\exp\left(x_1+2\pi e^{i\varphi}\right)\\
&\ +\frac{2\pi-r}{\pi}\exp\left(-\exp\left(x_1+\pi e^{i\varphi}\right)-L\right).
\end{aligned}$$ The computation of the partial derivatives of $Q$ is more cumbersome than for $P$, but again it follows $|Q_z(z)|\leq 1/4$ and $|Q_{\overline{z}}(z)|\leq 1/4$ for $z\in B$ if $|x_1|$ and $L=L(x_1)$ are large enough. As before this implies that $F$ is quasiregular in $ B$.
It follows from , , and , together with the corresponding representation in $\overline{A}$, that $$\label{2.1}
\im F(z)>\im z+2\pi(\alpha+m)-1 >3\pi
\quad\text{and}\quad \re F(z)<x_0$$ for $z\in \overline{S(x_1)}\setminus T(x_1)$, provided $|x_1|$ and $L$ are large enough. Together with this implies that if $x_1$ and $L$ are suitably chosen, then every orbit passes through $\overline{S(x_1)}\setminus T(x_1)$, which is the set where $F$ is not holomorphic, at most once.
It now follows from Lemma \[lem1.3\], applied with $g=F$, $N=1$, $m=1$, $$E_1=\bigcup_{n=1}^\infty F^n
\left(S(x_1)\setminus\overline{T(x_1)}\right)$$ and $\Phi_1=\text{id}_{E_1}$, that there exists a quasiconformal map $\psi:\C\to\C$ such that $f=\psi\circ F\circ\psi^{-1}$ is an entire function.
It is easy to see that $f$ is transcendental. For example, let $a\in\mathcal{J}(h)$ such that $h^{-1}(a)$ is infinite. The complete invariance of $\JJ(h)$ yields that $h^{-1}(a)\cap S(x_1)=\emptyset$ so that $h^{-1}(a)\subset
F^{-1}(a)=\psi^{-1}(f^{-1}(\psi(a)))$. Therefore $f^{-1}(\psi(a))$ is infinite, which implies that $f$ is transcendental.
In the sequel, we will apply the concepts of the Fatou-Julia theory also to the quasiregular function $F$. For example, we can define $\JJ(F)$ as the set where the iterates of $F$ are not normal and find that $\JJ(f)=\psi(\JJ(F))$.
It follows from and that $$\im F^n(z)\rightarrow\infty\quad\text{for}\
z\in V\setminus\bigcup_{k=0}^\infty h^{-k}(T(x_1))$$ as $n\rightarrow\infty$. With $W_0=\{z\in\C: \re z< x_0,\; \im z> 2 \pi\}$ we have $$V\setminus\bigcup_{k=0}^\infty h^{-k}(T(x_1))\supset
W_0$$ and thus find that $F$ has a Baker domain $W$ containing $W_0$. Using we see that $\overline{S(x_1)}\setminus T(x_1)
\subset W$.
We now show that $T(x_1)\cap \JJ(F)\neq\emptyset$. In order to do so we note that if $x\in\R$ is sufficiently large, then $$\left| F\left(-x+i\frac{\pi}{2}\right)\right|
=\left| k\left(-x+i\frac{\pi}{2}\right)\right|
=\left|2\pi i(\alpha+m)-x+i\frac{\pi}{2}
+\exp\left(i e^x-L\right)\right|
\leq 2x$$ while $$\left| F'\left(-x+i\frac{\pi}{2}\right)\right|
=\left| k'\left(-x+i\frac{\pi}{2}\right)\right|
=\left| 1-ie^x \exp\left(i e^x-L\right)\right|
\geq e^{x-L}-1
\geq e^{x/2}.$$ Given $a_1,a_2\in \JJ(F)$ it now follows from Landau’s theorem that if $x$ is large enough, then there exists $j\in\{1,2\}$ and $z\in D(-x+i\pi/2,1)$ such that $F(z)=a_j$. By the complete invariance of $\JJ(F)$ we thus have $D(-x+i\pi/2,1)\cap \JJ(F)\neq\emptyset$. In particular, $T(x_1)\cap \JJ(F)\neq\emptyset$.
Since $F$ has the unbounded Fatou component $W$, a result of Baker [@Baker1975] yields that $F$ has no multiply connected Fatou components. Since $\overline{S(x_1)}\setminus T(x_1)\subset W\subset\FF(F)$, this implies that $T(x_1)$ contains an unbounded component $\Gamma$ of $\JJ(F)$.
For large $x\in\R$ we consider $w_1=-x-2\pi i,\ w_2=F(w_1)$ and $w_3=F(w_2)$. Obviously, $w_j\in W$ for $j=1,2,3$. We note that $w_1$ is “below” the strip $T(x_1)$ while $w_2$ and $w_3$ are “above” this strip by . In fact, we have $$w_2=2\pi i(\alpha+m-1)-x+e^{w_1}\in W_0$$ and $$w_3=2\pi i(\alpha+m)+w_2+e^{w_2}=2\pi i(2\alpha+2m-1)-x+e^{w_1}
+e^{w_2}\in W_0.$$ We choose $\delta>0$ such that $r_1=2\pi(\alpha+m)+\delta<r_2=2\pi(2\alpha+2m-3)-\delta$ and find that $$w_2\in D(w_3,r_1)\subset
D(w_3,r_2 )$$ for sufficiently large $x$. This implies that $$\rho_{D(w_3,r_2 )}(w_2,w_3)= \rho_{\D}((w_2-w_3)/r_2,0)
\leq 2\arctanh (r_1/r_2).$$ As $D(w_3,r_2 )\subset W_0\subset W$, the Schwarz-Pick Lemma now yields that $$\label{w2w3}
\rho_W(w_2,w_3)\leq
\rho_{W_0}(w_2,w_3)\leq
\rho_{D(w_3,r_2 )}(w_2,w_3)\leq 2\arctanh (r_1/r_2)$$ for large $x$.
Next we show that $$\label{w1w2}
\rho_W(w_1,w_2)
\rightarrow \infty\quad\text{as } x\rightarrow \infty.$$ In order to do so, we note that the preimage of $\Gamma$ under $F$ contains an unbounded continuum $\Gamma'$ which is contained in $\{z\in\C: \re z< x_1,\; y_1<\im z< y_2\}$ for suitable $y_1,y_2$ satisfying $y_1<y_2<-2\pi$. Let now $\gamma$ be a curve connecting $w_1$ and $w_2$ in $W$. It follows that there exists $x_2\leq x_1$ such that if $t\leq x_2$, then there exists $z\in\gamma$, $\zeta\in\Gamma$ and $\zeta'\in\Gamma'$ such that $\re \zeta=\re \zeta'=\re z=t$ and $y_1<\im \zeta'<\im z< \im \zeta<\pi$. This implies that the density $\lambda_W$ of the hyperbolic metric in $W$ satisfies $$\lambda_W(z)\geq \frac{1}{2\dist(z,\partial W)}\geq
\frac{1}{2\min \{|z-\zeta|,|z-\zeta'|\}}
\geq \frac{1}{\pi-y_1}.$$ From this we can deduce that if $x<x_2$, then $$\int_\gamma \lambda_W(z)|dz| \geq \frac{|x-x_2|}{\pi-y_1}.$$ As this holds for all curves $\gamma$ connecting $w_1$ and $w_2$, we obtain $$\rho_W(w_1,w_2)\geq \frac{|x-x_2|}{\pi-y_1},$$ from which follows.
Put $U=\psi(W)$. Since $f=\psi\circ F\circ\psi^{-1}$ we find that $U$ is a Baker domain of $f$. Let $v_j=\psi(w_j)$, for $j=1,2,3$, and denote by $K$ the dilatation of $\psi$. It follows from Lemma \[lem1.1\] and that $$\rho_{{U}}(v_2,v_3)\leq M_K(2\arctanh (r_1/r_2)).$$
Suppose now that $f:{U}\rightarrow {U}$ is univalent. The Schwarz-Pick Lemma yields that $\rho_{{U}}(v_2,v_3)=\rho_{{U}}(f(v_1),f(v_2))=\rho_{{U}}(v_1,v_2)$. Noting that $\psi^{-1}$ is also $K$-quasiconformal, we deduce from Lemma \[lem1.1\] that $$\rho_W(w_1,w_2)\leq M_K(\rho_{{U}}(v_1,v_2))
=M_K(\rho_{{U}}(v_2,v_3)).$$ Combining the last two estimates we obtain $$\rho_W(w_1,w_2)\leq M_K(M_K(2\arctanh (r_1/r_2))),$$ which contradicts . Thus $f:{U}\rightarrow {U}$ is not univalent.
It remains to show that $U$ is a simply parabolic Baker domain. In order to do so we recall that $V$ is a simply parabolic Baker domain of $h$. We also note that it follows from the construction of $V$ that there exists an absorbing domain $V_0$ of $h$ in $V$ satisfying $V_0 \subset \{z\in\mathbb{C}:\re z>2\pi\}$. Clearly, $V_0$ is also an absorbing domain of $F$ in $W$. Hence $U_0=\psi(V_0)$ is an absorbing domain of $f$ in $U$.
Since $\psi$ is analytic in $V_0$, we find for $v\in V$ and $w=\psi(v)\in \psi(V)=U$ and large $n$ that $$\rho_{U_0}(f^{n+1}(w),f^{n}(w))
=\rho_{\psi(V_0)}(\psi(h^{n}(v)),\psi(h^{n+1}(v)))
=\rho_{V_0}(h^{n+1}(v),h^{n}(v)).$$ Since $V$ is a simply parabolic Baker domain of $h$, Lemma \[lemma2.5\] now yields that $U$ is simply parabolic. This completes the proof of Theorem \[thm1\].
Proof of Theorem \[thm2\] {#proof2}
=========================
Let $\alpha$, $g$, $h$, $\psi$ and $f$ be as in the proof of Theorem \[thm1\]. Baker and Weinreich [@BakerWeinreich Theorem 3] used results of Ghys [@Ghys] and Herman [@Herman] to show that for suitably chosen $\alpha$ the boundary of the Siegel disk $S$ of $h$ is a Jordan curve. Actually, by a recent result of Zakeri [@Zakeri], this is the case if $\alpha$ has bounded type. Thus in this case the Baker domain $V$ of $h$ is bounded by a Jordan curve on the sphere. Let $\gamma$ be a Jordan arc in $\partial V\cap\{z:\im z>2\pi\}$. It follows from the construction that $\gamma\subset\partial W$ and that the points of $\gamma$ are accessible from $W$. Thus $\partial U$ contains the Jordan arc $\psi(\gamma)$ consisting of points accessible from $U$. This implies that $\Xi \not=\partial\D$. Thus $f_1=f$ and $U_1=U$ have the desired property.
The construction of $f_2$ and $U_2$ is similar. Here our starting point is the pair of functions $g(z)=\frac{1}{2}z^2e^{2-z}$ and $h(z)=2-\log 2+2z-e^z$ considered in [@Bergweiler1995]. The function $g$ has a superattracting basin $B$ at the origin which is bounded by a Jordan curve and $V=\log B$ is a hyperbolic Baker domain of $h$ where $h$ is univalent. By a similar reasoning as in the proof of Theorem \[thm1\] we will now use quasiconformal surgery to construct an entire function $f_2$ with a Baker domain $U_2$ where $f_2$ is not univalent.
Here we put, for a large positive integer $M$, $$S(M)=\{z\in\mathbb{C}:| \re z+ 2\pi M|<2\pi, \;
\im z< -2\pi\}\cup D(-2\pi M-2\pi i,2\pi)$$ and $$T(M)=\{z\in\mathbb{C}:
| \re z+ 2\pi M|<\pi, \; \im z< -2\pi\}\cup D(-2\pi M-2\pi i,\pi).$$ Next we put $k(z)=2-\log 2+2z +\exp(e^{-iz}-L)$ for a large constant $L$ and define $F:\C\to\C$ by $F(z)=h(z)$ for $z\in \C\setminus S(M)$, by $F(z)=k(z)$ for $z\in \overline{T(M)}$ and by interpolation in $S(M)\setminus \overline{T(M)}$. Similarly as in the proof of Theorem \[thm1\] we see that $F$ is quasiregular if $M$ and $L=L(M)$ are large enough and that there exists a quasiconformal map $\psi$ such that $f_2=\psi\circ F\circ \psi^{-1}$ is a transcendental entire function. Noting that $V\cap\H$ is invariant under $h$ and thus under $F$ we see that $F$ has a Baker domain $W$ containing $V\cap\H$. Thus $U_2=\psi(W)$ is a Baker domain of $f_2$, and the construction shows that $\partial U_2$ contains a Jordan arc consisting of points accessible from $U_2$.
To show that $U_2$ is hyperbolic we use again Lemma \[lemma2.5\], noting that the domain $V_0=\{z\in\C: \re z< 3\pi M\}$ is absorbing for $h$ and $F$ in $V$ and that $\psi$ is analytic in $V_0$. Finally, the reasoning that $f_2$ is not univalent in $U_2$ is similar to that in the proof of Theorem \[thm1\].
[20]{}
I. N. Baker, The domains of normality of an entire function. [Ann. Acad. Sci. Fenn. Ser. A I Math.]{} 1 (1975), 277–283.
I. N. Baker and P. Domínguez, Boundaries of unbounded Fatou components of entire functions. Ann. Acad. Sci. Fenn. Math. 24 (1999), 437–464.
I. N. Baker and Ch. Pommerenke, On iteration of analytic functions in a halfplane. II. J. London Math. Soc. (2) 20 (1979), 255–258.
I. N. Baker and J. Weinreich, Boundaries which arise in the iteration of entire functions. [Rev. Roumaine Math. Pures Appl.]{} 36 (1991), 413–420.
K. Baranski and N. Fagella, Univalent Baker domains. Nonlinearity 14 (2001), 411–429.
D. Bargmann, Iteration of inner functions and boundaries of components of the Fatou set. In “Transcendental Dynamics and Complex Analysis”. London Math. Soc. Lect. Note Ser. 348. Edited by P. J. Rippon and G. M. Stallard, Cambridge Univ. Press, Cambridge, 2008, 1–36.
W. Bergweiler, Iteration of meromorphic functions. [Bull. Amer. Math. Soc. (N. S.)]{} 29 (1993), 151–188.
W. Bergweiler, Invariant domains and singularities. Math. Proc. Camb. Phil. Soc. 117 (1995), 525–536.
W. Bergweiler, On the Julia set of analytic self–maps of the punctured plane. Analysis 15 (1995), 251–256.
W. Bergweiler, Singularities in Baker domains. Comput. Methods Funct. Theory 1 (2001), 41–49.
W. Bergweiler, D. Drasin and J. K. Langley, Baker domains for Newton’s method. Ann. Inst. Fourier 57 (2007), 803–814.
P. Bonfert, On iteration in planar domains. Michigan Math. J. 44 (1997), 47–68.
X. Buff and J. Rückert, Virtual immediate basins of Newton maps and asymptotic values. Int. Math. Res. Not. 2006, 65498, 1–18.
C. C. Cowen, Iteration and the solution of functional equations for functions analytic in the unit disk. Trans. Amer. Math. Soc. 265 (1981), 69–95.
R. L. Devaney and L. Goldberg, Uniformization of attracting basins for exponential maps. Duke Math. J. 55 (1987), 253–266.
A. E. Eremenko and M. Yu. Lyubich, Examples of entire functions with pathological dynamics. [J. London Math. Soc.]{} (2) 36 (1987), 458–468.
N. Fagella and C. Henriksen, Deformation of entire functions with Baker domains. Discrete Cont. Dynam. Systems 15 (2006), 379–394.
P. Fatou, Sur l’itération des fonctions transcendantes entières. [Acta Math.]{} 47 (1926), 337–360.
E. Ghys, Transformations holomorphes au voisinage d’une courbe de Jordan. C. R. Acad. Sci. Paris Sér. I Math. 298 (1984), 385–388.
M. Herman, Conjugaison quasi symétrique des difféomorphismes du cercle à des rotations et applications aux disques singuliers de Siegel, I. Unpublished manuscript, available at http://www.math.kyoto-u.ac.jp/$\sim$mitsu/Herman/index.html
M. Herman, Are there critical points on the boundary of singular domains? [Comm. Math. Phys.]{} 99 (1985), 593–612.
M. Kisaka, On the connectivity of Julia sets of transcendental entire functions. Sci. Bull. Josai Univ. 1997, Special issue no. 1, 77–87.
M. Kisaka, On the connectivity of Julia sets of transcendental entire functions. Ergodic Theory Dynam. Systems 18 (1998), 189–205.
M. Kisaka and M. Shishikura, On multiply connected wandering domains of entire functions. In “[Transcendental dynamics and complex analysis]{}”, edited by P. J. Rippon and G. M. Stallard, LMS Lecture Note Series 348, Cambridge University Press, 2008, 217–250.
H. König, Conformal conjugacies in Baker domains. J. London Math. Soc. (2) 59 (1999), 153–170.
A. Lauber, Bifurcations of Baker domains. Nonlinearity 20 (2007), 1535–1545.
O. Lehto and K. Virtanen, Quasiconformal mappings in the plane. Springer-Verlag, Berlin, 1973.
Ch. Pommerenke, On iteration of analytic functions in a halfplane. J. London Math. Soc. (2) 19 (1979), 439–447.
S. L. Qiu, M. K. Vamanamurthy and M. Vuorinen, Bounds for quasiconformal distortion functions. J. Math. Anal. Appl. 205 (1997), 43–64.
P. J. Rippon, Baker domains of meromorphic functions. Ergodic Theory Dynam. Systems 26 (2006), 1225–1233.
P. J. Rippon, Baker domains. In “Transcendental Dynamics and Complex Analysis”. London Math. Soc. Lect. Note Ser. 348. Edited by P. J. Rippon and G. M. Stallard, Cambridge Univ. Press, Cambridge, 2008, 371–395.
P. J. Rippon and G. M. Stallard, Families of Baker domains II. Conform. Geom. Dyn. 3 (1999), 67–78.
M. Shishikura, On the quasi-conformal surgery of rational functions. [Ann. Sci. École Norm. Sup.]{} (4) 20 (1987), 1–29.
N. Steinmetz, [Rational iteration]{}. Walter de Gruyter, Berlin 1993.
M. Vuorinen, Conformal geometry and quasiregular mappings. Lecture Notes in Mathematics 1319, Springer-Verlag, Berlin, 1988.
S. Zakeri, On Siegel disks of a class of entire functions. Duke Math. J. 152 (2010), 481–532.
J. H. Zheng, Iteration of functions meromorphic outside a small set. Tohoku Math. J. 57 (2005), 23–43.
J. H. Zheng, Dynamics of transcendental meromorphic functions (Chinese). Monograph of Tsinghua University, Tsinghua University Press, Beijing, 2006.
|
---
abstract: |
We say that a finite metric space $X$ can be embedded almost isometrically into a class of metric spaces $C$, if for every $\epsilon > 0$ there exists an embedding of $X$ into one of the elements of $C$ with the bi-Lipschitz distortion less than $1 + \epsilon$. We show that the almost isometric embeddability conditions are equivalent for the following classes of spaces
\(a) Quotients of Euclidean spaces by isometric actions of finite groups,
\(b) $L_2$-Wasserstein spaces over Euclidean spaces,
\(c) Compact flat manifolds,
\(d) Compact flat orbifolds,
\(e) Quotients of connected compact bi-invariant Lie groups by isometric actions of compact Lie groups. (This one is the most surprising.)
We call spaces which satisfy these conditions finite flat spaces. Since Markov type constants depend only on finite subsets we can conclude that connected compact bi-invariant Lie groups and their quotients have Markov type $2$ with constant $1$.
address: 'Steklov Institute of Mathematics, Russian Academy of Sciences, 27 Fontanka, 191023 St.Petersburg, Russia, University of Cologne, Albertus-Magnus-Platz, 50923 Köln, Germany and Mathematics and Mechanics Faculty, St. Petersburg State University, Universitetsky pr., 28, Stary Peterhof, 198504, Russia.'
author:
- Vladimir Zolotov
bibliography:
- 'circle.bib'
title: Finite flat spaces
---
Introduction
============
Motivation and the main result.
-------------------------------
Let $X,Y$ be a pair of metric spaces. For $f:X \rightarrow Y$ the bi-Lipschitz constant of $f$ is the infimum of $c \ge 1$ s.t., $$\frac{1}{c}d_Y(f(x_1),f(x_2)) \le d_X(x_1,x_2) \le c d_Y(f(x_1),f(x_2)), \text{for every $x_1,x_2 \in X$}.$$
We say that a finite metric space $X$ can be embedded almost isometrically into a class of metric spaces $C$ if for every $\epsilon > 0$ there exists an embedding of $X$ into one of the elements of $C$ with the bi-Lipschitz distortion less than $1 + \epsilon$.
The study of conditions for isometric embeddability of finite metric spaces into $L_p$ spaces has a long history. For an overview of results see [@DL]. In recent years the study of isometric embeddability conditions for Alexandrov spaces of non-negative curvature has started to develop, see [@AKP], [@ANN], [@LPZ]. Despite certain progress in understanding necessary conditions, we lack embeddability results, which motivates the study of isometric embeddability conditions for subclasses of Alexandrov spaces with even more restricted geometry.
One can consider the following subclasses: compact flat manifolds, quotients of Euclidean spaces by isometric actions of finite groups, quotients of connected compact bi-invariant Lie groups by isometric actions of compact Lie groups, $2$-Wasserstein spaces over Euclidean spaces. In the present work we prove that all those classes contain the same finite subspaces. More precisely we have the following theorem.
\[MainThm\] Let $X$ be a finite metric space. Suppose that $X$ can be almost isometrically embedded into one of the following classes of spaces.
1. [$L_2$-Wasserstein spaces over Euclidean spaces, ]{}\[FFDWass\]
2. [Quotients of Euclidean spaces by isometric actions of finite groups,]{}\[FFDQFG\]
3. [Compact flat orbifolds,]{}\[FFDFO\]
4. [Compact flat manifolds,]{}\[FFDFM\]
5. [Quotients of connected compact bi-invariant Lie groups by isometric actions of compact Lie groups. \[FFDLie\] (Here the group acting by isometries doesn’t have to be a subgroup of the original group. We also do not assume that the group acting by isometries is connected. In particular finite groups are fine.)]{}
6. [$L_2$-Wasserstein spaces over connected compact bi-invariant Lie groups.]{}\[FFDLieWass\]
Then $X$ can be almost isometrically embedded into all of those classes.
We call spaces which could be almost isometrically embedded into all classes from Theorem \[MainThm\] *finite flat spaces*.
The main claim of Theorem \[MainThm\] is $(\ref{FFDLie}) \Rightarrow (\ref{FFDQFG})$. In particular, every finite subspace of an $n$-dimensional sphere with its standard intrinsic metric can be almost isometrically embedded into quotients of Euclidean spaces by isometric actions of finite groups.
Previously it was known that a bi-quotient of a compact connected Lie group with a bi-invariant metric could be presented as a quotient of Hilbert space by an isometric action of a certain group, see [@Devil] Problem “Quotient of Hilbert space”, [@LPZ] proof of Proposition 1.5 and [@TT] Section 4.
Question about a synthetic definition.
--------------------------------------
Let $T$ be a non-oriented tree with $n$ vertices; we denote by $V(T)$ the set of its vertices and by $E(T)$ the set of its edges. For a metric space $X$ we say that $T$-comparison holds in $X$, if for every map $f:V(T) \rightarrow X$ there exists a map into Hilbert space $\wt f:V(T) \rightarrow \H$ s.t.,
1. [$d_X(f(v_1),f(v_2)) \le \vert \wt f(v_1) - \wt f(v_2)\vert $, for every $v_1, v_2 \in V(T)$.]{}
2. [$d_X(f(v_1),f(v_2)) = \vert \wt f(v_1) - \wt f(v_2)\vert $, for every $\{v_1,v_2\} \in E(T)$.]{}
For the theory related to $T$-comparison see [@LPZ]. The following question is a stronger version of [@LPZ Question 10.2].
Let $X$ be a finite metric space such that $T$-comparison holds in $X$ for every finite tree $T$. Is it true that $X$ has to be a finite flat space?
Application to the theory of Markov type.
-----------------------------------------
Recall that a Markov chain $\{Z_t\}^\infty_{t=0}$ with transition probabilities $a_{ij} := Pr[Z_{t+1} = j\vert Z_t = i]$ on the state space $\{1,\dots,n\}$ is stationary if $\pi_i = Pr[Z_t = i]$ does not depend on $t$ and it is reversible if $\pi_i a_{ij} = \pi_j a_{ji}$ for every $i, j \in \{1, . . . , n\}$.
Given a metric space $(X, d)$, we say that $X$ has Markov type $2$ if there exists a constant $K > 0$ such that for every stationary reversible Markov chain $\{Z_t\}^\infty_{t=0}$ on $\{1,\dots,n\}$, every mapping $f : \{1, . . . , n\} \rightarrow X$ and every time $t \in \N$, $$\E d^2(f(Z_t), f(Z_0)) \le K^2 t \E d^2(f(Z_1), f(Z_0)).$$ The least such $K$ is called the Markov type $2$ constant of $X$, and is denoted $M_2(X)$.
The notion of Markov type was introduced by K. Ball in his study of the Lipschitz extension problem [@Ball]. Major results in this direction were obtained later by Naor, Peres, Schramm and Sheffield [@NPSS]. The notion of Markov type has also found applications in the theory of bi-Lipschitz embeddings [@BLMN; @LMN]. For more applications of the notion of Markov type and its place in a bigger picture see the survey [@RibeIntro]. S.-I. Ohta and M. Pichot discovered that the notion of Markov type $2$ is related to non-negative curvature in the sense of Alexandrov. In [@OP] they showed that if a geodesic metric space has Markov type $2$ with constant $1$, then it is an Alexandrov space of non-negative curvature. In [@Ohta] it was shown that every Alexandrov space has Markov type $2$ with constant $1 + \sqrt{2}$. The constant was later improved to $\sqrt{1+\sqrt{2} + \sqrt{4\sqrt{2}-1} } = 2.08\dots$ by A. Andoni, A. Naor and O. Neiman, see [@ANN].
It is even possible that all Alexandrov spaces have Markov type $2$ with constant $1$. In [@ZMT] it was shown that some Alexandrov spaces such as compact flat manifolds, quotients of Euclidean spaces by isometric actions of finite groups and $2$-Wasserstein spaces over Euclidean spaces do have Markov type $2$ with constant $1$. The following corollary extends this list.
Let $M$ be a quotient of a connected compact bi-invariant Lie group by an isometric action of a compact Lie group. Then $M$ has Markov type $2$ with constant $1$. In particular standard spheres with their intrinsic metrics have Markov type $2$ with constant $1$.
For a Markov chain $\{Z_t\}^\infty_{t=0}$ and a map $f : \{1, . . . , n\} \rightarrow M$ we have to show that $$\E d^2(f(Z_t), f(Z_0)) \le t \E d^2(f(Z_1), f(Z_0)),\textit{ for every $t \in \N$.}$$ Fix $\ee > 0$. By Theorem \[MainThm\] there exists a compact flat manifold $M_\ee$ and a map $f_\ee:\{1, . . . , n\} \rightarrow M_\ee$ such that for every $i,j \in \{1, . . . , n\}$ we have $$\label{FMeq}
\frac{1}{(1+\ee)}d_{M_\ee}(f_\ee(i),f_\ee(j)) \le d_M(f(i),f(j)) \le (1 + \ee) d_{M_\ee}(f_\ee(i),f_\ee(j)).$$ Since compact flat manifolds have Markov type $2$ with constant $1$ we have $$\E d^2(f_\ee(Z_t), f_\ee(Z_0)) \le t \E d^2(f_\ee(Z_1), f_\ee(Z_0)),\textit{ for every $t \in \N$.}$$ Combining the last inequality with (\[FMeq\]) we obtain $$\E d^2(f(Z_t), f(Z_0)) \le (1+\ee)^2t \E d^2(f(Z_1), f(Z_0)),\textit{ for every $t \in \N$.}$$ Since $\ee$ is arbitrary we conclude that $$\E d^2(f(Z_t), f(Z_0)) \le t \E d^2(f(Z_1), f(Z_0)),\textit{ for every $t \in \N$.}$$
Preliminaries and notation
==========================
Let $X$ be a metric space, $n \in \N$ and $\ll > 0$. We denote by $\ll X$ a metric space with a scaled metric $$d_{\ll X}(x,y) = \ll d_{X}(x,y),$$ by $X \times X$ a metric space on a Cartesian product given by $$d_{X \times X}((x_1,x_2), (y_1,y_2))^2 = d_{X}(x_1,y_1)^2 + d_{X}(x_2,y_2)^2,$$ and by $X^n$ the corresponding power of $X$, $$X^n = X \times \dots \times X,\text{ ($n$ times)}.$$ We denote by $S_n$ the symmetric group on $n$ elements. The group $S_n$ acts by permutations of coordinates on the metric space $X^n$; we denote the corresponding metric quotient by $X^n\overset{perm}{/}S_n$.
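It may help to unwind this definition (the formula below is our own reformulation, not a statement from the text): since $S_n$ is a finite group acting by isometries, the distance between the orbits of $x=(x_1,\dots,x_n)$ and $y=(y_1,\dots,y_n)$ in $X^n\overset{perm}{/}S_n$ is $$d\big([x],[y]\big)=\min_{\sigma\in S_n}\Big(\sum_{i=1}^n d_X\big(x_i,y_{\sigma(i)}\big)^2\Big)^{\frac{1}{2}}.$$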
For a map $f:X \rightarrow Y$ between metric spaces $X$ and $Y$ we denote by $\vert f \vert_{Lip}$ the Lipschitz constant of $f$ i.e., $$\vert f \vert_{Lip} = \sup_{x_1,x_2 \in X}\frac{d_Y(f(x_1),f(x_2))}{d_X(x_1,x_2)} \in [0,\infty].$$
Now we are going to recall the definition of $2$-Wasserstein spaces. For a metric space $X$ we denote by $\PPP(X)$ the set of Borel probability measures with finite $2$-nd moment which means that $$\exists o \in X: \int_Xd^2(x,o)d\mu(x) < \infty.$$ Let $\mu,\nu \in \PPP(X)$. We say that a measure $q$ on $X \times X$ is a coupling of $\mu$ and $\nu$ iff its marginals are $\mu$ and $\nu$, that is $$q(A \times X) = \mu(A), q(X \times A) = \nu(A),$$ for all Borel measurable subsets $A \subset X$. The $2$-Wasserstein distance $d_{W_2}$ between $\mu$ and $\nu$ is defined by $$d_{W_2}(\mu,\nu) = \inf\Big\{\Big(\int_{X \times X}d^2(x,y)dq(x,y)\Big)^\frac{1}{2}: \text{$q$ is a coupling of $\mu$ and $\nu$ }\Big\}.$$ The $2$-Wasserstein space $\PPP(X)$ is the set of Borel probability measures with finite $2$-nd moment on $X$ equipped with the $2$-Wasserstein distance.
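As a simple illustration (ours, not from the text): the only coupling of two Dirac masses $\delta_x$ and $\delta_y$ is $\delta_{(x,y)}$, so $$d_{W_2}(\delta_x,\delta_y)=d(x,y),$$ and hence the map $x \mapsto \delta_x$ embeds $X$ isometrically into $\PPP(X)$.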
Proof of Theorem \[MainThm\]
============================
The scheme of proof is cyclic $(\ref{FFDWass}) \Rightarrow (\ref{FFDQFG}) \Rightarrow (\ref{FFDFO}) \Rightarrow (\ref{FFDFM}) \Rightarrow (\ref{FFDLie}) \Rightarrow (\ref{FFDLieWass}) \Rightarrow (\ref{FFDWass})$ and the only really new arrow is $(\ref{FFDLieWass}) \Rightarrow (\ref{FFDWass})$.
The arrow $(\ref{FFDWass}) \Rightarrow (\ref{FFDQFG})$ is a direct implication of the following observation which is due to Sergey V. Ivanov and appears in [@ZMT], see Lemma 6.1 and the proof of Proposition 6.2.
For a metric space $X$ there exists a sequence of isometric embeddings $${\Phi_n:X^{2^n}\overset{perm}{/}S_{2^n} \rightarrow \PPP(X)}$$ such that the images $I_n := \Phi_n\big(X^{2^n}\overset{perm}{/}S_{2^n}\big)$ satisfy,
1. [ $I_n \subset I_{n+1},\text{ for every $n \in \N$},$ ]{}
2. [ $\cup_{n = 1}^{\infty}I_n$ is dense in $\PPP(X).$ ]{}
Now we are going to provide $(\ref{FFDQFG}) \Rightarrow (\ref{FFDFO})$. Let $X$ be a finite subspace of a quotient space $\R^n/G$, where $G$ is a finite group acting by isometries. There exists an Euclidean space $\R^m$ and an action $\rho_1$ of $G$ on $\R^m$ by permutation of coordinates such that $X$ can be embedded isometrically into $\R^m/\rho_1$, see for example [@ZS Corollary 1]. For $M > 0$ we denote by $\rho_2^{(M)}$ an action of $\Z^m$ on $\R^m$ by shifts scaled by $M$, i.e. $$(\rho_2^{(M)}(a_1,\dots,a_m))(x_1,\dots,x_m) = (x_1 + Ma_1,\dots, x_m + Ma_m).$$ Note that the product action $\rho_1 \times \rho_2^{(M)}$ is discrete and the fundamental domain is bounded. Thus $\R^m/(\rho_1 \times \rho_2^{(M)})$ is a compact flat orbifold. If $M$ is big enough then $\R^m/(\rho_1 \times \rho_2^{(M)})$ contains an isometric copy of $X$.
An implication $ (\ref{FFDFO}) \Rightarrow (\ref{FFDFM})$ follows from the next proposition, see [@LytchaksFriends Proposition 3.3].
Any flat orbifold is the Gromov-Hausdorff limit of a sequence of closed flat manifolds.
The next arrow is $(\ref{FFDFM}) \Rightarrow (\ref{FFDLie})$. By Bieberbach’s Theorem [@Bie1; @Bie2] a flat manifold can be presented as a quotient of a flat torus by an isometric action of a finite group. Thus, we have $(\ref{FFDFM}) \Rightarrow (\ref{FFDLie})$.
To provide $ (\ref{FFDLie}) \Rightarrow (\ref{FFDLieWass})$ we simply apply the following proposition, see [@plift Theorem 3.2].
\[LiftLem\] Let $M$ be a compact Riemannian manifold and $\rho:G \rightarrow Iso(M)$ be an action by isometries of a compact Lie group $G$ on $M$. Let $N$ denote the corresponding quotient space. There exists a lifting map $$\LL: \PP(N) \rightarrow \PP(M),$$ such that for every $\mu, \nu \in \PP(N)$ we have $$d_{W_2}(\LL(\mu),\LL(\nu)) = d_{W_2}(\mu, \nu),$$ $$\rho_{\sharp}(\LL(\mu)) = \mu,$$ where $\rho_{\sharp}:M \rightarrow N$ is the projection.
Finally we are going to provide the proof of $(\ref{FFDLieWass}) \Rightarrow (\ref{FFDWass})$. The following observation and its relation to our study were shown to the author by Alexander Lytchak.
\[LytchaksTrick\] Let $G$ be a connected bi-invariant Lie group. Consider an action $${\rho:G \rightarrow Iso(\sqrt{2} G \times \sqrt{2} G)},$$ given by $$\rho(g)(g_1,g_2) = (gg_1,gg_2).$$ Then the corresponding quotient space $(\sqrt{2} G \times \sqrt{2} G)/G$ is isometric to $G$.
Proof of Lemma \[LytchaksTrick\] is relatively straightforward and we omit it.
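As a sanity check in the simplest case (this computation is ours), take $G=\R/2\pi\Z$ with its standard metric. The orbit of $(0,a)$ under $\rho$ is $\{(g,g+a): g\in G\}$, so for $a,b$ close enough that the computation can be carried out in $\R$ we get $$d\big([(0,a)],[(0,b)]\big)=\min_{g}\sqrt{2\,d_G(0,g)^2+2\,d_G(b,g+a)^2}=\min_{g\in\R}\sqrt{2g^2+2(b-a-g)^2}=|b-a|=d_G(a,b),$$ the minimum being attained at the midpoint $g=(b-a)/2$.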
We denote by $E^n$ the Euclidean space of dimension $n$. We are going to construct a sequence of maps $\wt \LL_m : \PPP(G) \rightarrow \PPP(E^{n(m)})$ indexed by positive integers $m = 1,2,3\dots$, such that the bi-Lipschitz distortions of $\wt \LL_m$ tend to $1$.
*Step 1: Construction of maps $\wt \LL_m$.* For $m \in \N$ we denote $M = M(m) = 2^{\frac{m}{2}}$. Lemma \[LytchaksTrick\] provides us a tower of groups $$\dots \overset{p_{m+1}}{\rightarrow} (MG)^{M^2} \overset{p_m}{\rightarrow} \dots \overset{p_3}\rightarrow 2G \times 2G \times 2G \times 2G \overset{p_2}{\rightarrow} \sqrt{2}G \times \sqrt 2 G \overset{p_1}{\rightarrow} G.$$ By Proposition \[LiftLem\] we can construct a tower of lifting maps, $$\dots \overset{\LL_{m+1}}{\leftarrow} \PPP((MG)^{M^2}) \overset{\LL_m}{\leftarrow} \dots \overset{\LL_3}{\leftarrow} \PPP(2G \times 2G \times 2G \times 2G) \overset{\LL_2}{\leftarrow} \PPP(\sqrt{2}G \times \sqrt 2 G) \overset{\LL_1}{\leftarrow} \PPP(G).$$
For a metric space $X$, some Euclidean space $E$, a map $A:X\rightarrow E$, and $C > 0$ we denote by $A^{(C)}: CX \rightarrow E$ a map given by $A^{(C)}(x) = CA(x)$. By the Nash embedding theorem (see [@Nash]) there exist $k \in \N$ and a bijective Riemannian isometric $C^1$-map $f:G \rightarrow E^k$. We define a map $F_m:(MG)^{M^2}\rightarrow E^{kM^2}$ by $$F_m(g_1,\dots,g_{M^2}) = (f^{(M)}(g_1),\dots,f^{(M)}(g_{M^2})).$$
The required map $\wt \LL_m:\PPP(G) \rightarrow \PPP(E^{kM^2})$ is defined by $$\wt \LL_m = {(F_m)}_\sharp \circ \LL_m \circ \dots \circ \LL_1.$$
*Step 2: Proving that bi-Lipschitz distortions of $\wt \LL_m$ tend to $1$.* Note that $ \LL_1, \dots , \LL_m$ and ${(F_m)}_\sharp$ are $1$-Lipschitz. Thus, $\wt \LL_m$ is also $1$-Lipschitz.
We introduce a map $\wt p_m : {\operatorname{Im}}(F_m) \rightarrow G$ given by $\wt p_m = p_1 \circ \dots \circ p_m \circ F_m^{-1}.$ Note that $ (\wt p_m)_\sharp \circ \wt \LL_m = {\operatorname{id}}$. Thus to show that the bi-Lipschitz distortions of $\wt \LL_m$ tend to $1$ it suffices to show that $\lim_{m \rightarrow \infty}\vert \wt p_m\vert_{Lip} \le 1$. We denote by $D$ the diameter of $G$ and by $\vert \cdot \vert_{E^n}$ the standard norm in $E^n$. For a pair of points $x,y \in {\operatorname{Im}}(F_m)$ s.t., $\vert x - y\vert_{E^{kM^2}} > D$ we clearly have $$\vert x - y \vert_{E^{kM^2}} > D \ge d_G(\wt p_m(x), \wt p_m(y)).$$
Note that since $f$ is a Riemannian isometric $C^1$-map and $G$ is compact there exists $L > 0$, s.t $\vert f^{-1}\vert_{Lip} < L$. From the construction of $F_m$ we obtain that $\vert {F_m}^{-1}\vert_{Lip} < L$ for every $m \in \N$.
Thus, for every pair of points $x,y \in {\operatorname{Im}}(F_m)$ s.t, $\vert x - y\vert_{E^{kM^2}} \le D$ we have $$d_G({F_m}^{-1}(x),{F_m}^{-1}(y)) < L D.$$
Thus, $$\begin{aligned}
\vert x - y\vert_{E^{kM^2}} &\ge d_{(MG)^{M^2}}({F_m}^{-1}(x),{F_m}^{-1}(y))
\inf_{\wt x, \wt y \in (MG)^{M^2},\, d_{(MG)^{M^2}}(\wt x,\wt y) < LD}
\frac{\vert F_m(\wt x) - F_m(\wt y)\vert_{E^{kM^2}}}{d_{(MG)^{M^2}}(\wt x, \wt y)} \\
&\ge d_G({\wt p_m}(x),{\wt p_m}(y))
\inf_{\wt x, \wt y \in (MG)^{M^2},\, d_{(MG)^{M^2}}(\wt x,\wt y) < LD}
\frac{\vert F_m(\wt x) - F_m(\wt y)\vert_{E^{kM^2}}}{d_{(MG)^{M^2}}(\wt x, \wt y)} \\
&\ge d_G({\wt p_m}(x),{\wt p_m}(y))
\inf_{\wt x, \wt y \in MG,\, d_{MG}(\wt x,\wt y) < LD}
\frac{\vert f^{(M)}(\wt x) - f^{(M)}(\wt y)\vert_{E^{k}}}{d_{MG}(\wt x, \wt y)} \\
&= d_G({\wt p_m}(x),{\wt p_m}(y))
\inf_{\wt x, \wt y \in G,\, d_G(\wt x,\wt y) < \frac{LD}{M}}
\frac{\vert f(\wt x) - f(\wt y)\vert_{E^{k}}}{d_{G}(\wt x, \wt y)}.\end{aligned}$$
Since $f$ is a Riemannian isometric $C^1$-map and $G$ is compact, $$\inf_{\wt x, \wt y \in G, d_G(\wt x,\wt y) < \frac{LD}{M}}\frac{\vert f(\wt x) - f(\wt y)\vert_{E^{k}}}{d_{G}(\wt x, \wt y)} \underset{m \rightarrow \infty}{\rightarrow} 1.$$ Hence $\limsup_{m \rightarrow \infty}\vert \wt p_m\vert_{Lip} \le 1$, which completes the proof.
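The limit above can be checked numerically in the simplest compact example: take $G = S^1$ with its arc-length metric and $f$ the standard embedding into $E^2$ (an illustrative choice on our part; the values of $L$ and $D$ below are likewise only placeholders). The chord-to-arc ratio $2\sin(d/2)/d$ is decreasing in $d$, so the relevant infimum is governed by the largest admissible scale $LD/M$, which shrinks as $m$ grows.

```python
import numpy as np

# Minimal numerical sketch (assumption: G = S^1 with arc-length metric, f the
# standard embedding into E^2).  For pairs at intrinsic distance d the Euclidean
# distance is 2*sin(d/2), so the infimum of |f(x)-f(y)| / d_G(x,y) over pairs
# with d_G(x,y) < delta approaches 2*sin(delta/2)/delta, which tends to 1 as
# the scale delta = L*D/M shrinks, i.e. as m grows.
L, D = 2.0, np.pi           # placeholder Lipschitz bound and the diameter of S^1
for m in range(1, 9):
    M = 2.0 ** (m / 2.0)    # the rescaling factor M(m) from Step 1
    delta = min(L * D / M, np.pi)   # distances on S^1 never exceed the diameter
    ratio = 2.0 * np.sin(delta / 2.0) / delta
    print(f"m = {m}: scale = {delta:.4f}, inf ratio ~ {ratio:.6f}")
```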
Acknowledgements {#acknowledgements .unnumbered}
----------------
I thank Sergey V. Ivanov and Alexander Lytchak for advising me during this work. I am grateful to Nina Lebedeva for fruitful discussions. I thank the anonymous referee for numerous corrections. This work was supported by the Russian Science Foundation under grant 16-11-10039.
---
abstract: 'Let $G=(V,E)$ be a simple undirected graph with $n$ vertices. A set partition $\pi=\{V_1, \ldots, V_k\}$ of the vertex set of $G$ is a connected set partition if each subgraph $G[V_j]$ induced by a block $V_j$ of $\pi$ is connected for $1\le j\le k$. Define $q_{i}(G)$ as the number of connected set partitions in $G$ with $i$ blocks. The partition polynomial is then $Q(G, x)=\sum_{i=0}^n q_{i}(G)x^i$. This paper presents a splitting approach to the partition polynomial on a separating vertex set $X$ in $G$ and summarizes some properties of the bond lattice. Furthermore, the bivariate partition polynomial $Q(G,x,y)=\sum_{i=1}^n \sum_{j=1}^m q_{ij}(G)x^iy^j$ is briefly discussed, where $q_{ij}(G)$ counts the number of connected set partitions with $i$ blocks and $j$ intra-block edges. Finally, computing the bivariate partition polynomial is proven to be $\sharp P$-hard.'
author:
- 'Frank Simon[^1]'
- 'Peter Tittmann[^2]'
- 'Martin Trinks[^3]'
nocite:
- '[@stanley:ASymmetricFunctionGeneralizationOfTheChromaticPolynomialOfAGraph]'
- '[@averbouchMakowskyTittmann:GraphPolynomial]'
title: Counting Connected Set Partitions of Graphs
---
The authors Frank Simon and Martin Trinks receive a grant from the European Union.
![image](esf_logo.eps){width="3cm"} ![image](euemblem_eps){width="2cm"}
[**Keywords:**]{} graph theory, bond lattice, chromatic polynomial, splitting formula, bounded treewidth, $\sharp P$-hard
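As a concrete illustration of the coefficients $q_i(G)$ defined in the abstract, the following brute-force sketch (our illustration; all function names are hypothetical and do not come from the paper) enumerates every set partition of the vertex set and keeps those whose blocks induce connected subgraphs. Since the number of set partitions grows as the Bell numbers, this is only feasible for very small graphs; for the path on three vertices it yields $Q(P_3,x) = x + 2x^2 + x^3$.

```python
def set_partitions(elements):
    """Yield all set partitions of a list of elements (blocks as frozensets)."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for partition in set_partitions(rest):
        # put `first` into one of the existing blocks ...
        for i, block in enumerate(partition):
            yield partition[:i] + [block | {first}] + partition[i + 1:]
        # ... or into a new block of its own
        yield partition + [frozenset({first})]

def is_connected(block, edges):
    """Check whether the subgraph induced by `block` is connected (DFS)."""
    block = set(block)
    start = next(iter(block))
    seen, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for a, b in edges:
            for u, w in ((a, b), (b, a)):
                if u == v and w in block and w not in seen:
                    seen.add(w)
                    stack.append(w)
    return seen == block

def partition_polynomial(vertices, edges):
    """Return the coefficients q_i(G) as a dict {number of blocks i: q_i(G)}."""
    q = {}
    for partition in set_partitions(list(vertices)):
        if all(is_connected(block, edges) for block in partition):
            q[len(partition)] = q.get(len(partition), 0) + 1
    return q

# Path P_3 on vertices 1-2-3: Q(P_3, x) = x + 2x^2 + x^3
print(partition_polynomial([1, 2, 3], [(1, 2), (2, 3)]))   # {1: 1, 2: 2, 3: 1}
```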
[10]{} Ilya Averbouch, Johann A. Makowsky, and Peter Tittmann. A graph polynomial arising from community structure (extended abstract). In [*WG*]{}, pages 33–43, 2009.
G. D. Birkhoff and D. C. Lewis. Chromatic polynomials. [*Transactions of the American Mathematical Society*]{}, 60:335–351, 1946.
G.-C. Rota. On the foundations of combinatorial theory I. Theory of Möbius functions. [*Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete*]{}, pages 340–368, 1964.
Olivier Goldschmidt and Dorit S. Hochbaum. A polynomial algorithm for the k-cut problem for fixed k. [*Mathematics of Operations Research*]{}, 19(1):24–37, 1994.
. . , [1990]{}.
B. D. McKay. Combinatorial data. Private homepage, Sep 2009. <http://cs.anu.edu.au/~bdm/data/graphs.html>.
R. P. Stanley. A symmetric function generalization of the chromatic polynomial of a graph. [*Advances in Mathematics*]{}, 111(1):166–194, 1995.
R. C. Read. An introduction to chromatic polynomials. [*Journal of Combinatorial Theory*]{}, 4:52–71, 1968.
R. P. Stanley. Acyclic orientations of graphs. [*Discrete Mathematics*]{}, 5:171–178, 1973.
. . , [47]{}:[504–512]{}, [Feb]{} [1975]{}.
W. T. Tutte. Chromials. , 411:243–266, 1974.
W. T. Tutte. On chromatic polynomials and the golden ratio. [*Journal of Combinatorial Theory*]{}, 9:289–296, 1970.
[^1]: E-mail: [simon@hs-mittweida.de]{}, Hochschule Mittweida (FH) University of Applied Sciences, Fakultät Mathematik/Naturwissenschaften/Informatik, Technikumplatz 17, D-09648 Mittweida, Germany
[^2]: E-mail: [peter@hs-mittweida.de]{}, Hochschule Mittweida (FH) University of Applied Sciences, Fakultät Mathematik/Naturwissenschaften/Informatik, Technikumplatz 17, D-09648 Mittweida, Germany
[^3]: E-mail: [trinks@hs-mittweida.de]{}, Hochschule Mittweida (FH) University of Applied Sciences, Fakultät Mathematik/Naturwissenschaften/Informatik, Technikumplatz 17, D-09648 Mittweida, Germany
---
abstract: 'We discuss a few new characteristic features of the loop-induced MSSM Higgs-sector CP violation at the LHC based on two scenarios: $(i)$ CPX and $(ii)$ Trimixing.'
address: |
Center for Theoretical Physics, School of Physics and Astronomy,\
Seoul National University, Seoul 151-747, Korea\
jslee@muon.kaist.ac.kr
author:
- JAE SIK LEE
title: 'LHC Signatures of MSSM Higgs-sector CP Violation '
---
Introduction
============
Supersymmetric models contain many possible sources of CP violation beyond the SM CKM phase. In the Minimal Supersymmetric extension of the Standard Model (MSSM), for example, we have 8 CP phases even when we consider only the third generation, that is, stops, sbottoms, and staus:
- $\Phi_\mu\,[1]$: $~~~~W\,\supset\, \mu\, \hat{H}_2\cdot\hat{H}_1$
- $\Phi_i\,[3]$: $~~
-{\cal L}_{\rm soft}\, \supset\, \frac{1}{2}
( M_3 \, \widetilde{g}\widetilde{g}
+ M_2 \, \widetilde{W}\widetilde{W}
+ M_1 \, \widetilde{B}\widetilde{B}+{\rm h.c.})$
- $\Phi_{A_f}\,[3]$ with $f=t,b,\tau$:\
$~~~~~~~~~~~~
-{\cal L}_{\rm soft}\, \supset\,
A_t \, \widetilde{t}_R^*\,\widetilde{Q}_3\cdot H_2
-A_b \, \widetilde{b}_R^*\,\widetilde{Q}_3\cdot H_1
-A_\tau \, \widetilde{\tau}_R^*\,\widetilde{L}_3\cdot H_1+{\rm h.c.}$
- $\Phi_{m_{12}^2}\,[1]$: $-{\cal L}_{\rm soft}\, \supset\, -(m_{12}^2\,H_1\cdot H_2 + {\rm h.c.})$
The numbers of relevant CP phases are given in the brackets. These 8 CP phases are not all independent, and physical observables depend on the combinations ${\rm Arg}\left(M_i\mu (m_{12}^2)^*\right)$ and ${\rm Arg}\left(A_f\mu (m_{12}^2)^*\right)$.[@Dugan:1984qf; @Dimopoulos:1995kn] In the convention ${\rm Arg}(m_{12}^2)=0$, we have 6 rephasing-invariant CP phases: $${\rm Arg}(M_1\,\mu)\,, \ \
{\rm Arg}(M_2\,\mu)\,, \ \
{\rm Arg}(M_3\,\mu)\,; \ \
{\rm Arg}(A_t\,\mu)\,, \ \
{\rm Arg}(A_b\,\mu)\,, \ \
{\rm Arg}(A_\tau\,\mu)\,.$$
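For illustration, these combinations can be evaluated directly from complex input parameters; a minimal sketch follows, in which all numerical values are hypothetical and do not correspond to any benchmark point discussed in the text.

```python
import numpy as np

# Hypothetical complex soft SUSY-breaking parameters (illustrative values only).
mu    = 2000.0 * np.exp(1j * 0.10)
m12sq = 1.0e4  * np.exp(1j * 0.30)
M     = {1: 100.0 * np.exp(1j * 0.50),           # M_1, M_2, M_3
         2: 200.0 * np.exp(1j * 0.20),
         3: 1.0e3 * np.exp(1j * np.pi / 2)}
A     = {'t': 1.0e3 * np.exp(1j * np.pi / 2),    # A_t, A_b, A_tau
         'b': 1.0e3 * np.exp(1j * np.pi / 2),
         'tau': 1.0e3 * np.exp(1j * np.pi / 2)}

# Rephasing-invariant combinations quoted in the text; in the convention
# Arg(m_12^2) = 0 they reduce to Arg(M_i mu) and Arg(A_f mu).
for i, Mi in M.items():
    print(f"Arg(M_{i} mu (m_12^2)*) = {np.angle(Mi * mu * np.conj(m12sq)):+.3f} rad")
for f, Af in A.items():
    print(f"Arg(A_{f} mu (m_12^2)*) = {np.angle(Af * mu * np.conj(m12sq)):+.3f} rad")
```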
These non-vanishing CP phases can induce a significant CP-violating mixing between CP-even and CP-odd Higgs states via radiative corrections.[@Pilaftsis:1998pe; @Pilaftsis:1998dd; @Pilaftsis:1999qt; @Demir:1999hj; @Choi:2000wz; @Ibrahim:2000qj; @Ibrahim:2002zk] There are two approaches to calculating this CP-violating mixing. Here we use the calculation based on the renormalization-group-improved effective potential method, including the Higgs-boson pole mass shift.[@Carena:2000yi; @Carena:2001fw] For the Feynman-diagrammatic approach, we refer to Ref. and references therein.
In this contribution, we discuss a few characteristic features of Higgs-sector CP violation at the LHC which have been observed recently, after the appearance of the [**CPNSH**]{} Report.[@Accomando:2006ga] We put emphasis on the importance of the $\tau$-lepton polarization measurement for constructing a genuine CP-odd signal at the LHC. [@Ellis:2004fs; @Ellis:2006eh] For the numerical analysis, two scenarios are considered: $(i)$ CPX[@Carena:2000ks] (Sec. 2) and $(ii)$ Trimixing[@Ellis:2004fs] (Sec. 3). See, for example, Ref. for a detailed description and comparison of the two scenarios with some numerical results. The code [CPsuperH]{}[@Lee:2003nt] is used to generate the numerical outputs.
CPX Scenario
============
First, we consider the constraint on the CPX scenario coming from the non-observation of an EDM in the Thallium atom. The contributions of the first- and second-generation phases, e.g. $\Phi_{A_{e,\mu}}$, $\Phi_{A_{d,s}}$, etc., to EDMs can be drastically reduced either by making these phases sufficiently small or by making the first- and second-generation squarks and sleptons sufficiently heavy. In this case, the dominant contribution to EDMs occurs at the two-loop level.[@Chang:1998uc; @Pilaftsis:1999td; @Chang:1999zw] We refer to Ref. for the explicit expression of the two-loop Higgs-mediated Thallium EDM in the [CPsuperH]{} conventions and notations.
![The rescaled Thallium EDM $\hat{d}_{\rm Tl}\equiv d_{\rm Tl}\times10^{24}$ in units of $e\,cm$ for the CPX scenario with $\Phi_{A_{t,b,\tau}}=\Phi_3=90^\circ$ on the $\tan\beta-M_{H_1}$ plane. We take $\Phi_\mu=0$ convention. In the right frame the CUSB bound from the decay $\Upsilon(1S)\rightarrow \gamma H_1$ is also shown as a thick solid line. See Ref. for details.[]{data-label="fig:dtl"}](dtl_cpx.eps "fig:"){width="6.2cm"} ![The rescaled Thallium EDM $\hat{d}_{\rm Tl}\equiv d_{\rm Tl}\times10^{24}$ in units of $e\,cm$ for the CPX scenario with $\Phi_{A_{t,b,\tau}}=\Phi_3=90^\circ$ on the $\tan\beta-M_{H_1}$ plane. We take $\Phi_\mu=0$ convention. In the right frame the CUSB bound from the decay $\Upsilon(1S)\rightarrow \gamma H_1$ is also shown as a thick solid line. See Ref. for details.[]{data-label="fig:dtl"}](dtl_cusb.eps "fig:"){width="6.2cm"}
In the left frame of Fig. \[fig:dtl\], the rescaled Thallium EDM $\hat{d}_{\rm Tl}\equiv d_{\rm Tl}\times10^{24}$ is shown on the $\tan\beta-M_{H_1}$ plane in units of $e\,cm$. The current upper limit is $|\hat{d}_{\rm Tl}|\lsim 1.3$.[@Regan:2002ta] We divide the plane into 4 regions depending on the size of $|\hat{d}_{\rm Tl}|$. The unshaded region is not allowed theoretically. We have $|\hat{d}_{\rm Tl}|< 1$ only in the narrow region filled with black squares when $\tan\beta\lsim 5$ and $M_{H_1}\lsim 8$ GeV. However, if we allow 10 %-level cancellation between the two-loop contributions and possible one-loop contributions not considered here, the (green) region with $1 \leq |\hat{d}_{\rm Tl}|< 10$ is allowed. Furthermore, if very strong 1 %-level cancellation is possible, most of the region can be made consistent with the Thallium EDM constraint if the lightest Higgs boson is not so light. In the right frame of Fig. \[fig:dtl\], we magnify the region with $3 \lsim \tan\beta \lsim 10$ and $M_{H_1} \lsim 15~{\rm GeV}$. This region is of particular interest since $H_1$ lighter than about 10 GeV has not been excluded by the LEP experiments for the given range of $\tan\beta$.[@Schael:2006cr; @bechtle] The bound on this light Higgs boson comes from low-energy experiments. We find that the region $M_{H_1}\lsim 8$ GeV (the region below the thick solid CUSB line) is excluded by data on $\Upsilon(1S)$ decay.[@Franzini:1987pv] For details, see Ref. .
![The differential cross sections in units of fb/GeV at $\Phi_{A\mu} = 100^{\circ}$ (left frames) and $\Phi_{A\mu} = 105^{\circ}$ (right frames), versus the invariant mass $\sqrt{\hat{s}}$ of two muons (upper frames) or two photons (lower frames). The charged Higgs-boson pole mass is solved to give $M_{H_1}=115$ GeV for $\tan \beta = 10$ and ${\rm Arg}(M_3\,\mu) = 180^{\circ}$ in the CPX scenario. See Ref. for details. []{data-label="fig:diff"}](diffsmu100.eps "fig:"){width="6.2cm"} ![The differential cross sections in units of fb/GeV at $\Phi_{A\mu} = 100^{\circ}$ (left frames) and $\Phi_{A\mu} = 105^{\circ}$ (right frames), versus the invariant mass $\sqrt{\hat{s}}$ of two muons (upper frames) or two photons (lower frames). The charged Higgs-boson pole mass is solved to give $M_{H_1}=115$ GeV for $\tan \beta = 10$ and ${\rm Arg}(M_3\,\mu) = 180^{\circ}$ in the CPX scenario. See Ref. for details. []{data-label="fig:diff"}](diffsmu105.eps "fig:"){width="6.2cm"} ![The differential cross sections in units of fb/GeV at $\Phi_{A\mu} = 100^{\circ}$ (left frames) and $\Phi_{A\mu} = 105^{\circ}$ (right frames), versus the invariant mass $\sqrt{\hat{s}}$ of two muons (upper frames) or two photons (lower frames). The charged Higgs-boson pole mass is solved to give $M_{H_1}=115$ GeV for $\tan \beta = 10$ and ${\rm Arg}(M_3\,\mu) = 180^{\circ}$ in the CPX scenario. See Ref. for details. []{data-label="fig:diff"}](diffsph100.eps "fig:"){width="6.2cm"} ![The differential cross sections in units of fb/GeV at $\Phi_{A\mu} = 100^{\circ}$ (left frames) and $\Phi_{A\mu} = 105^{\circ}$ (right frames), versus the invariant mass $\sqrt{\hat{s}}$ of two muons (upper frames) or two photons (lower frames). The charged Higgs-boson pole mass is solved to give $M_{H_1}=115$ GeV for $\tan \beta = 10$ and ${\rm Arg}(M_3\,\mu) = 180^{\circ}$ in the CPX scenario. See Ref. for details. []{data-label="fig:diff"}](diffsph105.eps "fig:"){width="6.2cm"}
For the scenario with large $|\mu|$ and $|M_3|$ such as CPX, the threshold corrections to the bottom-quark Yukawa coupling should not be neglected, especially for intermediate and large values of $\tan\beta$. In this case, the production cross sections of the three neutral Higgs bosons through $b\bar{b}$ fusion can deviate substantially from those obtained in CP-conserving scenarios, thanks to the nontrivial role played by the threshold corrections combined with the CP-violating mixing in the neutral-Higgs-boson sector.[@Borzumati:2004rd] The largest deviations in the case of $H_1$ and $H_2$ are for values of $\Phi_{A\mu}\equiv {\rm Arg}(A_{t,b}\,\mu)$ around $100^\circ$, with a large enhancement for the production cross section of $H_1$ and a large suppression for that of $H_2$. To detect this large enhancement and/or suppression, we need to know whether it is possible to disentangle the two corresponding peaks in the invariant-mass distributions of the $H_1$- and $H_2$-decay products at the LHC. To address this issue, we consider the Higgs-boson decays into muon and photon pairs. For these two decay modes, the invariant-mass resolutions are $\delta M_{\gamma\gamma}\sim 1\,$GeV and $\delta M_{\mu\mu} \sim 3\,$GeV for a Higgs mass of $\sim 100\,$GeV.[@ATLASTDR:1999fr] In Fig. \[fig:diff\], we show the differential cross sections in units of fb/GeV taking two values of $\Phi_{A\mu}$. The upper two frames are for $H_{1,2}\rightarrow \mu^+\mu^-$ and the lower frames for $H_{1,2}\rightarrow \gamma\gamma$. For $\Phi_{A\mu}=100^{\circ}$ (left frames), by combining the muon-decay mode with the photon-decay mode, $H_2$ can be located more precisely and disentangled from $H_1$. For $\Phi_{A\mu}=105^\circ$ (right frames), two well-separated peaks may actually be observed. For details, we refer to Ref. .
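The quoted resolutions already suggest why the photon channel helps: two peaks roughly $3$ GeV apart are resolvable after a $\sim 1$ GeV smearing but merge after a $\sim 3$ GeV smearing. A minimal sketch of this effect follows; the second peak position and the equal peak weights are our illustrative assumptions, only $M_{H_1}=115$ GeV and the two resolutions are taken from the text.

```python
import numpy as np

# Two hypothetical Higgs peaks about 3 GeV apart, smeared with the quoted
# invariant-mass resolutions (~1 GeV for gamma gamma, ~3 GeV for mu mu).
m1, m2 = 115.0, 118.0                    # H_1 from the text; H_2 placement is assumed
grid = np.linspace(105.0, 128.0, 2301)   # invariant-mass grid in GeV

def smeared_spectrum(resolution):
    """Sum of two unit-weight Gaussian peaks smeared with the given resolution."""
    peak = lambda m0: np.exp(-0.5 * ((grid - m0) / resolution) ** 2)
    return peak(m1) + peak(m2)

def count_local_maxima(y):
    """Count strict interior local maxima of a sampled curve."""
    return int(np.sum((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:])))

for channel, res in [("gamma gamma", 1.0), ("mu mu", 3.0)]:
    n = count_local_maxima(smeared_spectrum(res))
    print(f"{channel}: resolution {res} GeV -> {n} visible peak(s)")
```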
Trimixing Scenario
==================
![The CP asymmetry ${\cal A}^{WW}_{\rm CP}$ as functions of $\Phi_A\equiv\Phi_{A_t}=\Phi_{A_b}=\Phi_{A_\tau}$ for $\Phi_3=-10^\circ$ (left frame) and $\Phi_3=-90^\circ$ (right frame) in the Trimixing scenario. We take $\Phi_\mu=0$. See Ref. for details. []{data-label="fig:acpww"}](acpwwm10m90_tri.eps){width="12cm"}
To construct a CP asymmetry at the LHC, we consider the production of CP-violating MSSM $H_{1,2,3}$ bosons via $W^+ W^-$ collisions and their subsequent decays into $\tau^+ \tau^-$ pairs, assuming that the longitudinal polarization of the $\tau$ leptons can be measured.[@Ellis:2004fs] In this case, one can define the integrated CP asymmetry: $$\label{CPasym}
{\cal A}^{WW}_{\rm CP} \ \equiv \
\frac{ \sigma^{WW}_{\rm RR}-\sigma^{WW}_{\rm LL} }{\sigma^{WW}_{\rm RR}+\sigma^{WW}_{\rm LL}}\ ,$$ where $$\begin{aligned}
\sigma_{RR}\ &=&\ \sigma(pp (WW)\ \to\ H\ \to\ \tau^+_R\tau^-_R X) \,, \nonumber \\
\sigma_{LL}\ &=&\ \sigma(pp (WW)\ \to\ H\ \to\ \tau^+_L\tau^-_L X) \, .\end{aligned}$$
In Fig. \[fig:acpww\], we show the CP asymmetry ${\cal A}^{WW}_{\rm CP}$ as a function of $\Phi_A\equiv\Phi_{A_t}=\Phi_{A_b}=\Phi_{A_\tau}$ for $\Phi_3=-10^\circ$ (left frame) and $\Phi_3=-90^\circ$ (right frame), taking $\Phi_\mu=0^\circ$. We observe that the CP asymmetry is large over the whole range of $\Phi_A$, independently of $\Phi_3$. For a more detailed discussion, see Ref. .
Conclusions
===========
We obtain the constraint $M_{H_1}\gsim 8$ GeV from the decay $\Upsilon(1S)\rightarrow \gamma H_1$. By combining the Higgs-boson decay mode into two muons with that into two photons, it is possible to disentangle two adjacent peaks with a mass difference larger than $\sim 3$ GeV at the LHC. The process $W^+ W^- \to H_{1,2,3} \to \tau^+ \tau^-$ is promising for probing CP violation through the CP asymmetry based on longitudinal $\tau$-lepton polarization.
Acknowledgments {#acknowledgments .unnumbered}
===============
I wish to thank F. Borzumati, J. Ellis, S. Scopel, and A. Pilaftsis for valuable collaborations. This work was supported in part by Korea Research Foundation and the Korean Federation of Science and Technology Societies Grant funded by the Korea Government (MOEHRD, Basic Research Promotion Fund).
References
==========
[00]{} M. Dugan, B. Grinstein and L. J. Hall, Nucl. Phys. B [**255**]{} (1985) 413. S. Dimopoulos and S. D. Thomas, Nucl. Phys. B [**465**]{} (1996) 23 \[arXiv:hep-ph/9510220\]. A. Pilaftsis, Phys. Rev. D [**58**]{} (1998) 096010 \[arXiv:hep-ph/9803297\]. A. Pilaftsis, Phys. Lett. B [**435**]{} (1998) 88 \[arXiv:hep-ph/9805373\]. A. Pilaftsis and C. E. M. Wagner, Nucl. Phys. B [**553**]{} (1999) 3 \[arXiv:hep-ph/9902371\]. D. A. Demir, Phys. Rev. D [**60**]{} (1999) 055006 \[arXiv:hep-ph/9901389\]. S. Y. Choi, M. Drees and J. S. Lee, Phys. Lett. B [**481**]{} (2000) 57 \[arXiv:hep-ph/0002287\]. T. Ibrahim and P. Nath, Phys. Rev. D [**63**]{} (2001) 035009 \[arXiv:hep-ph/0008237\]. T. Ibrahim and P. Nath, Phys. Rev. D [**66**]{} (2002) 015005 \[arXiv:hep-ph/0204092\]. M. Carena, J. R. Ellis, A. Pilaftsis and C. E. M. Wagner, Nucl. Phys. B [**586**]{} (2000) 92 \[arXiv:hep-ph/0003180\]. M. Carena, J. R. Ellis, A. Pilaftsis and C. E. M. Wagner, Nucl. Phys. B [**625**]{} (2002) 345 \[arXiv:hep-ph/0111245\]. S. Heinemeyer, W. Hollik, H. Rzehak and G. Weiglein, AIP Conf. Proc. [**903**]{} (2007) 149 \[arXiv:0705.0746 \[hep-ph\]\]. E. Accomando [*et al.*]{}, arXiv:hep-ph/0608079. J. R. Ellis, J. S. Lee and A. Pilaftsis, Phys. Rev. D [**70**]{} (2004) 075010 \[arXiv:hep-ph/0404167\]. J. R. Ellis, J. S. Lee and A. Pilaftsis, Mod. Phys. Lett. A [**21**]{} (2006) 1405 \[arXiv:hep-ph/0605288\]. M. Carena, J. R. Ellis, A. Pilaftsis and C. E. M. Wagner, Phys. Lett. B [**495**]{} (2000) 155 \[arXiv:hep-ph/0009212\]. J. S. Lee, arXiv:0705.1089 \[hep-ph\]. J. S. Lee, A. Pilaftsis, M. Carena, S. Y. Choi, M. Drees, J. R. Ellis and C. E. M. Wagner, Comput. Phys. Commun. [**156**]{} (2004) 283 \[arXiv:hep-ph/0307377\]. D. Chang, W. Y. Keung and A. Pilaftsis, Phys. Rev. Lett. [**82**]{} (1999) 900 \[Erratum-ibid. [**83**]{} (1999) 3972\] \[arXiv:hep-ph/9811202\]. A. Pilaftsis, Phys. Lett. B [**471**]{} (1999) 174 \[arXiv:hep-ph/9909485\]. D. Chang, W. F. Chang and W. Y. Keung, Phys. Lett. B [**478**]{} (2000) 239 \[arXiv:hep-ph/9910465\]. J. R. Ellis, J. S. Lee and A. Pilaftsis, Phys. Rev. D [**72**]{} (2005) 095006 \[arXiv:hep-ph/0507046\]. B. C. Regan, E. D. Commins, C. J. Schmidt and D. DeMille, Phys. Rev. Lett. [**88**]{} (2002) 071805. S. Schael [*et al.*]{} \[ALEPH Collaboration\], Eur. Phys. J. C [**47**]{} (2006) 547 \[arXiv:hep-ex/0602042\]. See P. Bechtle in Ref. .
P. Franzini [*et al.*]{}, Phys. Rev. D [**35**]{} (1987) 2883. J. S. Lee and S. Scopel, Phys. Rev. D [**75**]{} (2007) 075001 \[arXiv:hep-ph/0701221\].
F. Borzumati, J. S. Lee and W. Y. Song, Phys. Lett. B [**595**]{} (2004) 347 \[arXiv:hep-ph/0401024\]. The Atlas Collaboration, Atlas: detector and physics performance technical design report, vol 2, CERN-LHCC-99-15, ATLAS-TDR-15 (1999).
F. Borzumati and J. S. Lee, Phys. Lett. B [**641**]{} (2006) 486 \[arXiv:hep-ph/0605273\].
---
abstract: 'Recent applications of Convolutional Neural Networks (ConvNets) for human action recognition in videos have proposed different solutions for incorporating the appearance and motion information. We study a number of ways of fusing ConvNet towers both spatially and temporally in order to best take advantage of this spatio-temporal information. We make the following findings: (i) that rather than fusing at the softmax layer, a spatial and temporal network can be fused at a convolution layer without loss of performance, but with a substantial saving in parameters; (ii) that it is better to fuse such networks spatially at the last convolutional layer than earlier, and that additionally fusing at the class prediction layer can boost accuracy; finally (iii) that pooling of abstract convolutional features over spatiotemporal neighbourhoods further boosts performance. Based on these studies we propose a new ConvNet architecture for spatiotemporal fusion of video snippets, and evaluate its performance on standard benchmarks where this architecture achieves state-of-the-art results. Our code and models are available at [http://www.robots.ox.ac.uk/~vgg/software/two\_stream\_action/](http://www.robots.ox.ac.uk/~vgg/software/two_stream_action/)'
author:
- |
Christoph Feichtenhofer\
Graz University of Technology\
[[feichtenhofer@tugraz.at](mailto:feichtenhofer@tugraz.at)]{}
- |
Axel Pinz\
Graz University of Technology\
[[axel.pinz@tugraz.at](mailto:axel.pinz@tugraz.at)]{}
- |
Andrew Zisserman\
University of Oxford\
[[az@robots.ox.ac.uk](mailto:az@robots.ox.ac.uk)]{}
bibliography:
- 'shortstrings.bib'
- 'vgg\_local.bib'
- 'vgg\_other.bib'
- 'deep\_actions.bib'
title: 'Convolutional Two-Stream Network Fusion for Video Action Recognition'
---
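The abstract argues that the spatial and temporal streams can be fused at a convolutional layer rather than at the softmax layer. Below is a minimal numpy sketch of one plausible realization of such a fusion step, channel-wise concatenation of the two towers' feature maps followed by a learned $1\times1$ convolution; the tensor shapes (512 channels on $14\times14$ maps) and this particular fusion form are illustrative assumptions, not a statement of the paper's exact layer configuration.

```python
import numpy as np

def conv_fusion(x_spatial, x_temporal, weights, bias):
    """Fuse two feature maps of shape (C, H, W): concatenate along channels,
    then apply a 1x1 convolution given by `weights` of shape (C_out, 2*C)."""
    stacked = np.concatenate([x_spatial, x_temporal], axis=0)   # (2C, H, W)
    c2, h, w = stacked.shape
    flat = stacked.reshape(c2, h * w)                           # (2C, H*W)
    fused = weights @ flat + bias[:, None]                      # (C_out, H*W)
    return fused.reshape(-1, h, w)                              # (C_out, H, W)

# Hypothetical shapes: two conv towers with 512 channels on 14x14 feature maps.
rng = np.random.default_rng(0)
xs = rng.standard_normal((512, 14, 14))        # spatial-stream features
xt = rng.standard_normal((512, 14, 14))        # temporal-stream features
W  = rng.standard_normal((512, 1024)) * 0.01   # would be learned in practice
b  = np.zeros(512)
print(conv_fusion(xs, xt, W, b).shape)         # (512, 14, 14)
```

A fusion step of this kind lets a single tower carry both streams from that layer onward, which is consistent with the parameter saving mentioned in the abstract.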
#### Acknowledgments.
We are grateful for discussions with Karen Simonyan. Christoph Feichtenhofer is a recipient of a DOC Fellowship of the Austrian Academy of Sciences. This work was supported by the Austrian Science Fund (FWF) under project P27076, and also by EPSRC Programme Grant Seebibyte EP/M013774/1. The GPUs used for this research were donated by NVIDIA.